
Studies in Computational Intelligence 952

Theodor Borangiu · Damien Trentesaux ·
Paulo Leitão · Olivier Cardin ·
Samir Lamouri   Editors

Service Oriented,
Holonic and
Multi-Agent
Manufacturing
Systems for Industry
of the Future
Proceedings of SOHOMA 2020
Studies in Computational Intelligence

Volume 952

Series Editor
Janusz Kacprzyk, Polish Academy of Sciences, Warsaw, Poland
The series "Studies in Computational Intelligence" (SCI) publishes new developments and advances in the various areas of computational intelligence, quickly and with a high quality. The intent is to cover the theory, applications, and design methods of computational intelligence, as embedded in the fields of engineering, computer science, physics and life sciences, as well as the methodologies behind them. The series contains monographs, lecture notes and edited volumes in computational intelligence spanning the areas of neural networks, connectionist systems, genetic algorithms, evolutionary computation, artificial intelligence, cellular automata, self-organizing systems, soft computing, fuzzy systems, and hybrid intelligent systems. Of particular value to both the contributors and the readership are the short publication timeframe and the world-wide distribution, which enable both wide and rapid dissemination of research output.
Indexed by SCOPUS, DBLP, WTI Frankfurt eG, zbMATH, SCImago.
All books published in the series are submitted for consideration in Web of
Science.

More information about this series at http://www.springer.com/series/7092


Theodor Borangiu · Damien Trentesaux · Paulo Leitão · Olivier Cardin · Samir Lamouri
Editors

Service Oriented, Holonic and Multi-Agent Manufacturing Systems for Industry of the Future

Proceedings of SOHOMA 2020
Editors

Theodor Borangiu
Faculty of Automatic Control and Computer Science
University Politehnica of Bucharest
Bucharest, Romania

Damien Trentesaux
Université Polytechnique Hauts-de-France, Le Mont Houy
Valenciennes, France

Paulo Leitão
Research Centre in Digitalization and Intelligent Robotics (CeDRI)
Instituto Politécnico de Bragança, Campus de Santa Apolónia
Bragança, Portugal

Olivier Cardin
Department of Génie Mécanique et Productique
Université de Nantes
Carquefou, France

Samir Lamouri
LAMIH, Arts et Métiers ParisTech
Paris, France

ISSN 1860-949X    ISSN 1860-9503 (electronic)
Studies in Computational Intelligence
ISBN 978-3-030-69372-5    ISBN 978-3-030-69373-2 (eBook)
https://doi.org/10.1007/978-3-030-69373-2
© The Editor(s) (if applicable) and The Author(s), under exclusive license
to Springer Nature Switzerland AG 2021
This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether
the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of
illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and
transmission or information storage and retrieval, electronic adaptation, computer software, or by similar
or dissimilar methodology now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this
publication does not imply, even in the absence of a specific statement, that such names are exempt from
the relevant protective laws and regulations and therefore free for general use.
The publisher, the authors and the editors are safe to assume that the advice and information in this
book are believed to be true and accurate at the date of publication. Neither the publisher nor the
authors or the editors give a warranty, expressed or implied, with respect to the material contained
herein or for any errors or omissions that may have been made. The publisher remains neutral with regard
to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Foreword

I would like to thank the SOHOMA Steering Committee for offering me the opportunity to share my views and ideas with the SOHOMA community and with manufacturing control and systems researchers at large. I have worked in manufacturing control for more than twenty years and have witnessed the research progress of the domain: from the first distributed architectures, including the first holonic manufacturing systems models, to the myriad models for digital transformation of manufacturing through service orientation, and on to the distributed intelligence models employed in what has recently been called "manufacturing as a service", or MaaS for short. The SOHOMA workshop, now at its tenth anniversary edition, has always kept pace with the times and has even gone through a few name changes to better capture the evolving nature of our work as a community. Throughout, it has welcomed submissions from around the world covering cutting-edge manufacturing control modelling, promoted transformative research and pushed forward the knowledge frontier. As confirmation, the latest SOHOMA workshop, held in October, featured the overarching theme "manufacturing as a service—virtualizing and encapsulating manufacturing resources and controls into cloud networked services" and included, to name a few, articles covering MaaS aspects such as cloud-based manufacturing control, digital twins in manufacturing, holonic and multi-agent process control, ethics and social automation, human factors integration, and the physical Internet and logistics.
This foreword attempts to capture the reader's attention by highlighting current MaaS developments and by outlining potential areas of research for the SOHOMA community and beyond. Manufacturing as a service comprises local and potentially geographically distributed, service-oriented, knowledge-based smart manufacturing models that provide customized design and product solutions to individual or group-based customer types. It leverages technologies such as big data analytics; cloud, edge and fog computing; digital twins; artificial intelligence/machine learning (AI/ML), including deep learning; 3D printing; 5G broadband and SDN networks; and the Internet of things, all within constraints such as high efficiency, safety of operations, cybersecurity of digital transactions, ethics, human–machine interaction, low energy consumption and a reduced logistics footprint.

Including synergistic processes such as design as a service (DaaS), predict as a service (PraaS) and maintain as a service (MAaaS) brings into being a robust, responsive, secure, value chain-based and customer-oriented MaaS ecosystem that goes beyond separate "smart factory" or digital manufacturing solutions. The quintessential research objective for accomplishing this vision is to investigate the seamless integration of the above technologies into the MaaS ecosystem. SOHOMA researchers have already started the heavy lifting for this daring task by building on the previous editions of the workshop and especially on the work presented at the current anniversary edition.
While I cannot address here all of the above-named MaaS technologies and their constraint-based ecosystem integration, I will make the case for MaaS AI/ML-based control software and for the cybersecurity assessment of future manufacturing systems. In the future, tailored MaaS computational software systems will be enabled by sensor-equipped resources, scalable data infrastructure, fast and reliable secure communications, cloud, edge and fog computing, real-time operating systems and predictive analytics. AI/ML adaptive control systems will increase the generality and scale of the search space and include even remote combinations of inputs, controls and environmental conditions. Hence, AI/ML systems will be able to select among many feasible system responses, including some that might otherwise have been overlooked. Algorithm training will eliminate responses that do not provide the optimal control action, as well as responses for which the control action is provided too late, too early or out of sequence, or is applied too long or stopped too soon. AI/ML systems will have provisions for outlier detection and will work with datasets having some or all of the known big data "V" characteristics: volume, variety, velocity, volatility, value and variability. The big step forward towards what is called the artificial general intelligence (AGI) domain should not bypass manufacturing control, and initial characteristics of AGI, such as those below, are expected to be implemented in the decades to come:
• Avoiding negative side effects: ensure adverse actions towards the actors and the
environment are not introduced through objective function optimization.
• Avoiding reward hacking: ensure the cumulative reward is not maximized at the
cost of exposing the system to attacks.
• Scalable oversight: ensure the objective function can be optimized when fre-
quent evaluation of constraints may not be cost effective.
• Safe exploration: ensure the probabilistic activities taken to avoid local optimum
solutions do not result in increased system vulnerabilities.
• Robustness to distributional shift: ensure no unsafe actions are performed in an
environment having distinct characteristics than the training environment.
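As a toy illustration of the first characteristic, avoiding negative side effects, consider an objective function that explicitly penalizes adverse actions rather than optimizing throughput alone. This is only a hedged sketch; the action names, scores and penalty weight are all hypothetical, not taken from any cited system:

```python
# Hypothetical sketch: a constrained objective for an ML-based controller that
# penalizes negative side effects instead of maximizing throughput alone.

def constrained_objective(action, throughput, side_effect_cost, penalty_weight=10.0):
    """Score an action: reward minus a weighted penalty for its side effects."""
    return throughput[action] - penalty_weight * side_effect_cost[action]

def choose_action(actions, throughput, side_effect_cost):
    """Pick the action with the best penalized score."""
    return max(actions, key=lambda a: constrained_objective(a, throughput, side_effect_cost))

# A fast but damaging action loses to a slightly slower, safe one.
throughput = {"fast_risky": 1.0, "slow_safe": 0.8}
side_effect = {"fast_risky": 0.1, "slow_safe": 0.0}
best = choose_action(["fast_risky", "slow_safe"], throughput, side_effect)
print(best)  # slow_safe
```

With the penalty weight set high enough, the optimizer prefers the action that leaves the environment unharmed even at some cost to throughput, which is the essence of the side-effect constraint.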
I look forward to SOHOMA researchers delving into the building of MaaS AI/ML control software that addresses one or more of the above characteristics and, in ten years, at the twentieth anniversary workshop, reporting on their transformative AI/ML manufacturing research.
On the second topic I mentioned above, the cybersecurity assessment of future manufacturing systems, there is much to talk about, which is both good and not so good. In contrast to other domains of the global economy, manufacturing has not yet been the subject of sustained and successful cybersecurity attacks, and it seems that manufacturing organizations do not necessarily consider themselves at risk. The discussion here does not cover the IT systems of manufacturing organizations, which carry the same risk as systems in other IT domains, but rather the systems specific to manufacturing operations. The variety of cyber-physical systems, industrial control systems, supervisory control and data acquisition (SCADA) systems, and networked machines, sensors, data and control software are all at risk, with the manufacturing supply chain at an elevated risk. As can be inferred, MaaS includes all these types of systems and more. Bringing the MaaS ecosystem into being is not possible without strong cybersecurity measures in place. The probability of an unprecedented cyber-attack with serious implications for the manufacturing domain is increasing by the day, so preparation must start now.
I talked above about MaaS control software using AI/ML, but the fact is that AI/ML systems can also be used in the future for cybersecurity preparation. They can be used to test and prepare for worst-case scenarios and to analyse potential cyber-threats to MaaS systems using sensor data and defensive system responses. They can be used to enhance human–machine integration and to minimize the consequences of any human error in the MaaS ecosystem. They can be used to discover patterns and anomalies in different customer or manufacturer datasets and thus assess the likelihood of an attack. Finally, they can be used to securely develop, deploy and operate MaaS software systems, and to protect against low-level attack vectors or logic errors. Some potential research topics for SOHOMA researchers in the years to come include:
• Predictions related to potential attackers’ intent against the MaaS ecosystem.
• Uncertainty planning for cooperative and non-cooperative environments, both
being equally possible in the competitive MaaS ecosystem.
• Predictions related to human erroneous decisions that inadvertently increase
MaaS system vulnerability.
• Verification and validation of datasets for identification of detrimental flaws and
vulnerabilities that can be externally exploited.
• Implementation of best practices for secure system operations that reduce the
internal threat in the MaaS ecosystem.
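As a minimal illustration of the pattern-and-anomaly idea mentioned above (emphatically not a production intrusion detector), a simple statistical baseline can flag sensor readings that deviate strongly from historical behaviour as a rough proxy for attack likelihood. The threshold value and the single-feature model are assumptions for the sketch:

```python
# Illustrative sketch: score how anomalous a new reading is relative to a
# historical baseline using an absolute z-score.
import statistics

def anomaly_score(history, value):
    """Absolute z-score of `value` against the historical readings."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history) or 1e-9  # guard against zero variance
    return abs(value - mean) / stdev

def likely_attack(history, value, threshold=3.0):
    """Flag readings more than `threshold` standard deviations from the mean."""
    return anomaly_score(history, value) > threshold

baseline = [10.0, 10.2, 9.9, 10.1, 10.0, 9.8, 10.3]
print(likely_attack(baseline, 10.1))  # False: within the normal range
print(likely_attack(baseline, 25.0))  # True: far outside the baseline
```

Real systems would combine many features, defensive-response telemetry and learned models, but the shape of the decision (deviation from an expected profile) is the same.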
Cybersecurity protection of the manufacturing environment is at a crossroads, and the only way forward is to stop treating it as a purely IT issue and to make it an integral part of MaaS ecosystem research. Again, I look forward to SOHOMA researchers addressing the cybersecurity assessment of future manufacturing systems and reporting their work at the coming SOHOMA workshops.

I hope our readers will enjoy and find value in the high-quality articles included in this anniversary volume. I am confident that previous SOHOMA authors will continue to contribute to the advancement of manufacturing research, and I invite other academic and industry practitioners reading this volume to submit their work to future editions of the SOHOMA workshop. Together we will build the manufacturing systems for the industry of the future.

November 2020 Radu Babiceanu


Preface

This volume gathers the peer-reviewed papers presented at the tenth edition of the
international workshop on Service Oriented, Holonic and Multi-Agent
Manufacturing Systems for Industry of the Future—SOHOMA’20 organized on
1–2 October 2020 by Arts et Métiers ParisTech in collaboration with University
Politehnica of Bucharest (the CIMR Research Centre in Computer Integrated
Manufacturing and Robotics), Université Polytechnique Hauts-de-France (the
LAMIH Laboratory of Industrial and Human Automation Control, Mechanical
Engineering and Computer Science) and Polytechnic Institute of Bragança (the
CeDRI Research Centre in Digitalization and Intelligent Robotics).
The main objective of SOHOMA workshops is to foster innovation in smart and
sustainable manufacturing and logistics systems by promoting concepts, methods
and solutions addressing trends in service orientation of agent-based control tech-
nologies with distributed intelligence.
The book is structured in eight parts that correspond to the technical sessions
of the workshop’s program and include papers describing results of the research
addressing the development and application of key enabling technologies (KET:
production-, digital- and cyber-physical technologies) for the industry of the future.
In line with this vision of future manufacturing, the eight sections of the book address control and organization problems in the manufacturing value chain and offer smart solutions for smart factories networked in the cloud, implemented in cyber-physical systems with all resources integrated: sharing information and infrastructures, collaborating, adapting to reality and self-configuring at runtime for efficiency, agility and safety.
These subjects are treated in the book’s Part 1: Cloud Networked Models of
Knowledge-based Intelligent Control; Part 2: Digital Twins in Manufacturing and
Beyond; Part 3: Holonic and Multi-Agent Process Control; Part 4: Ethics and
Social Automation in Industry 4.0; Part 5: New Organizations based on Human
Factors Integration in Industry 4.0; Part 6: Intelligent Products and Smart
Processes; Part 7: Physical Internet and Logistics; Part 8: Optimal Production and
Supply Chain Planning.


Over the nine previous annual workshop editions, the SOHOMA scientific community introduced and developed new concepts, methods and solutions aligned with the worldwide effort of modernizing and digitalizing manufacturing in the twenty-first century's context of highly dynamic market globalization, product-centric control, direct digital manufacturing, customer- and service-oriented manufacturing, enterprise networking and cloud-based infrastructure sharing. The ten-year anniversary edition in 2020 hosted presentations of the most important research contributions of SOHOMA groups, which are also included in this book. These papers reflect continuity in approach and demonstrate the impact of the community's research on specific evolution lines in manufacturing systems towards the 'industry of the future' (IoF).
SOHOMA research has identified and consistently addressed the technological and computational enablers that potentiate the IoF characteristics: global optimization and intelligence distribution in manufacturing execution systems (MES);
decoupling supervision from control; extended digital modelling of processes,
products and resources; pervasive instrumenting of shop floor entities and edge
computing in the industrial Internet of things (IIoT) framework; strongly coupling
systems of systems of autonomous and cooperative elements in cyber-physical
systems (CPS) according to the holonic paradigm; on-demand sharing of technol-
ogy and computing resources through cloud-type services; predictive production
control and resource maintenance based on artificial intelligence (AI, machine
learning-ML) techniques. Concerning these enablers, orchestrating technologies
are essential to coordinate and synchronize the two classes of IoF enablers towards
implementation and deployment. There are three such technologies that have been
systematically developed, applied and improved during the last decade; they rep-
resent the triple brand of the SOHOMA scientific community: service orientation,
holonic manufacturing and multi-agent systems (MAS) in the industrial
environment.
Service orientation in the manufacturing domain was not limited to Web services or to technology and technical infrastructure; instead, the service-oriented architectures (SOA) that were developed reflect a new way of thinking about processes, resources, orders and their information counterparts (the service-oriented agents), reinforcing the value of commoditization, reuse, semantics and information, and creating business value for the factory. A complete manufacturing service (MService) theory and implementation model have been established.
The holonic paradigm has been used to develop smart, distributed manufacturing
control architectures (for mixed batch planning and scheduling, resource allocation,
material flow and environment conditioning), based on the definition of a set of
abstract entities: resources (technology, humans—reflecting the producer’s profile,
capabilities, skills), orders (reflecting the business solutions) and products
(reflecting the customers’ needs, value propositions). These entities are represented
by autonomous holons communicating and collaborating in holarchies to reach a
common production-related goal. The holonic paradigm provides the attributes of
flexibility, agility and optimality by means of a completely decentralized manu-
facturing control architecture based on a social organization of intelligent entities
Preface xi

called holons with specific behaviours and goals. From the control perspective, in
the dynamic organizations of holons (the holarchies), decision-making character-
istics (e.g. scheduling, negotiating, allocating, reconfiguring) are combined with
reality-reflecting features (robustness, fault-tolerance, agility) provided by holons.
Holarchies allow for object-oriented aggregation, while the specialization incorporated in control architectures provides support for abstraction; in this way, the holonic control paradigm has been increasingly transposed into control models of diverse types of industrial processes.
The shop floor control scheme is scalable and decoupled from the decision-making (supervision) MES layer; this assures the adaptability and reconfigurability of the global production control, which is thus kept free of induced constraints and limitations such as myopia or the inability to react to unexpected events.
In the context of holonic manufacturing, strongly coupled networks of software agents, the information counterparts of the holons' physical parts, cooperate to solve global production problems. These multi-agent systems constitute the implementing frameworks for holonic manufacturing control and for the reengineering of shop floor resource coalitions. MAS allow intelligence to be distributed in the MES and are able to control production systems in a decentralized (heterarchical) mode. Mixed approaches were developed; e.g. patterns of delegate MAS (D-MAS) are mandated by the holons representing structural production elements to undertake tasks that reconfigure operation scheduling and resource allocation in case of disturbances such as resource breakdowns. Bio-inspired MAS for manufacturing control, with social behaviour and short-term forecasting of resource availability through ant colony engineering or recurrent neural networks, are AI-based techniques for heterarchical control with MAS.
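A stdlib-only sketch of the ant-colony-engineering idea mentioned above: order agents select resources in proportion to a pheromone trail that is reinforced by successful processing and evaporates over time, so stale availability information is gradually forgotten. The class, rates and success model are illustrative assumptions, not a reproduction of any specific SOHOMA architecture:

```python
# Pheromone-based resource selection: reinforcement on success, evaporation
# everywhere, probabilistic choice proportional to trail strength.
import random

class PheromoneBoard:
    def __init__(self, machines, evaporation=0.1, deposit=1.0):
        self.levels = {m: 1.0 for m in machines}  # initial, uniform trail
        self.evaporation = evaporation
        self.deposit = deposit

    def choose(self, rng):
        """Pick a machine with probability proportional to its pheromone level."""
        machines = list(self.levels)
        weights = [self.levels[m] for m in machines]
        return rng.choices(machines, weights=weights, k=1)[0]

    def update(self, machine, success):
        """Evaporate all trails, then reinforce a machine that served well."""
        for m in self.levels:
            self.levels[m] *= (1.0 - self.evaporation)
        if success:
            self.levels[machine] += self.deposit

board = PheromoneBoard(["M1", "M2"])
for _ in range(50):
    board.update("M1", success=True)    # M1 keeps delivering
    board.update("M2", success=False)   # M2 is unavailable

rng = random.Random(7)
picks = [board.choose(rng) for _ in range(10)]
print(board.levels["M1"] > board.levels["M2"])  # True: the trail favours M1
```

Evaporation is what gives the colony its short-term forecasting flavour: if M1 later starts failing, its trail decays within a few cycles and agents drift back towards alternatives.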
Because reality awareness and robustness of control systems are priorities for industry, semi-heterarchical models of holonic manufacturing control were developed to offer a dual behaviour that combines optimized system scheduling with agile, reactive scheduling performed in real time by D-MAS. The semi-heterarchical manufacturing control architecture deals rapidly with unexpected events affecting orders currently in execution, while computing in parallel (possibly in the cloud) new optimized schedules for the remaining orders waiting to be processed; this operating mode reduces the system's myopia at the global batch level and preserves the system's agility.
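The dual behaviour described above can be sketched as a controller that follows a precomputed optimized schedule and switches to reactive allocation only for orders whose planned resource has broken down; in a real architecture the replacement schedule would be recomputed in parallel, possibly in the cloud. All names and the simple fallback rule are simplifications for illustration:

```python
# Sketch of semi-heterarchical dispatching: plan-following by default,
# reactive (heterarchical) reallocation on resource breakdown.

class SemiHeterarchicalController:
    def __init__(self, schedule):
        self.schedule = dict(schedule)          # order -> planned resource
        self.down = set()                       # resources currently unavailable
        self.pool = set(schedule.values())      # all known resources

    def report_breakdown(self, resource):
        self.down.add(resource)

    def dispatch(self, order):
        """Follow the optimized plan; fall back to reactive allocation."""
        planned = self.schedule[order]
        if planned not in self.down:
            return planned                      # optimized, plan-following mode
        alternatives = sorted(self.pool - self.down)
        return alternatives[0]                  # reactive fallback to a healthy resource

ctrl = SemiHeterarchicalController({"O1": "M1", "O2": "M2", "O3": "M1"})
ctrl.report_breakdown("M1")
print(ctrl.dispatch("O2"))  # M2: the plan is still valid for this order
print(ctrl.dispatch("O1"))  # M2: reactive reallocation after the breakdown
```

Only the disturbed orders leave the optimized plan, which is exactly how the dual mode limits myopia while keeping reactions fast.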
In SOHOMA research, MAS were often used as the implementing framework for holonic semi-heterarchical control in SOA. The three orchestrating technologies represent the basis of manufacturing CPS design and implementation; book chapters in Parts 3 and 8 describe research carried out on these three topics.
During the past ten years, a great deal of research has been done by the SOHOMA community in the domain of intelligent products. Intelligent products (IP) are created temporarily in the production stage by embedding intelligence on the physical order or product, which is linked to information and rules governing the way it is intended to be made (with recipe and resources), routed, inspected and stored; this enables the product to support and/or influence these operations. IP virtualization moves processing from the intelligence embedded in the product to a virtual machine in the cloud, using a thin hypervisor on the product carrier and a Wi-Fi connection, in either a dedicated or a shared workload, to make decisions relevant to the product's own destiny. The research contributions can be grouped into three areas: 1) product-driven systems, 2) product lifecycle information systems and 3) the physical Internet.
Product-driven systems (PDS) were defined as a way to optimize the whole product lifecycle by dealing with products whose informational content is permanently bound to their virtual or material content and which are thus able to influence decisions made about them, participating actively in the different control processes in which they are involved throughout their lifecycle. Designing a PDS is a challenge that involves three fundamental aspects: functions, architecture and interactions. Several bio-inspired approaches have been proposed by SOHOMA authors, such as ant colony optimization, the firefly algorithm and a mechanism inspired by stigmergy using the notion of volatile knowledge.
An important facet of the intelligent product is related to data. Two levels of product intelligence (PI) have been defined: 1) Level 1 (information-oriented): PI is related to the (customer) needs linked to the production order, e.g. goods required, quality, timing and agreed cost; PI allows communication with the local organization (and with the customer for the order); PI monitors/tracks the progress of the order through the industrial supply chain. 2) Level 2 (decision-oriented): PI influences the choice between different options affecting the order when such a choice needs to be made; PI adapts the order management depending on real production conditions. The management of product information along the product's lifecycle was treated in the community's research by means of distributed Product Lifecycle Information Management (PLIM) systems. Different PLIM architectures, messaging protocols and formats have been proposed. The EPCIS architecture is one such distributed data management architecture, specially adapted to product tracking in the supply chain [32]. DIALOG is another architecture proposed by SOHOMA members, based on a multi-agent system distributed in every actor of a given supply chain. In this architecture, a specific messaging protocol, initially called product messaging interface (PMI) and later named quantum lifecycle management (QLM), is used.
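The two product-intelligence levels can be illustrated with a toy agent: Level 1 records and reports order progress, while Level 2 chooses between routing options against the order's due time. The field names and the decision rule are assumptions for this sketch, not part of EPCIS, DIALOG or any PLIM standard:

```python
# Toy intelligent product: Level 1 = information-oriented tracking,
# Level 2 = decision-oriented choice between routing options.

class IntelligentProduct:
    def __init__(self, order_id, due_time):
        self.order_id = order_id
        self.due_time = due_time
        self.events = []                  # Level 1: progress history

    def record(self, station, timestamp):
        """Level 1: monitor/track the order's progress through the chain."""
        self.events.append((station, timestamp))

    def choose_route(self, options):
        """Level 2: prefer the earliest-finishing option that meets the due
        time; if none does, fall back to the earliest finish overall."""
        feasible = [o for o in options if o["finish"] <= self.due_time]
        pool = feasible or options
        return min(pool, key=lambda o: o["finish"])["route"]

ip = IntelligentProduct("ORD-7", due_time=100)
ip.record("milling", 10)                  # Level 1 activity
ip.record("assembly", 40)
route = ip.choose_route([{"route": "A", "finish": 120},
                         {"route": "B", "finish": 90}])
print(route)  # B: the only option that meets the due time
```

Level 1 alone already supports supply-chain tracking; adding the `choose_route` behaviour is what promotes the product to Level 2.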
The physical Internet (PI) concept has been proposed and formally defined as an open global logistics system leveraging interconnected supply networks through a standard set of modular containers, collaborative protocols and interfaces for increased efficiency and sustainability. The concepts of the physical Internet and the intelligent product were merged in SOHOMA works, with the main idea of realizing the notion of the PI-container (the smart container used in the physical Internet paradigm) by applying the activeness concept to a normal container. Concepts from the PDS area have also been applied to the physical Internet; e.g. the PROSIS architecture was first applied in an intra-logistics context that uses wireless holon networks constituted by mobile holons (shuttles, moving products) and fixed holons (workstations).
Parts 6 and 7 of the book include descriptions of SOHOMA research in the areas
of intelligent product and physical Internet.
Introduced first as the "Conceptual Ideal for Product Lifecycle Management" (PLM) centre, the digital twin concept was developed by the SOHOMA scientific community with all its currently accepted elements: real space, virtual space and a connection with data/information flow between the virtual and real spaces. Within digital twin research at SOHOMA, a set of envisioned functions of DTs has been formulated: it was assumed that the primary function of DTs should be to reflect the reality of their physical counterpart as a single source of truth. This is what gives DTs their value, because an accurate and updated mirroring of reality is extremely valuable and plays a critical role in effective control, scheduling and planning. In order to perform this primary function, DT architectures and implementations must provide key supporting functions, such as:
• Support data and information exchange between physical and digital worlds.
• Gather and aggregate data from the physical world, from multiple sources.
• Couple the virtual representation to their physical counterpart.
• Store historical data of the physical twin over its entire lifespan.
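A minimal sketch of the supporting functions listed above: a twin that ingests readings from multiple sources, keeps the latest mirrored state as the single source of truth and stores the full history of its physical counterpart. The interface and field names are illustrative, not a standard DT API:

```python
# Minimal digital twin: aggregate multi-source data, mirror current state,
# keep the full history over the physical twin's lifespan.

class DigitalTwin:
    def __init__(self, asset_id):
        self.asset_id = asset_id
        self.state = {}       # latest mirrored value per variable
        self.history = []     # lifespan record: (time, source, variable, value)

    def ingest(self, timestamp, source, variable, value):
        """Aggregate a reading from any sensor/controller into the twin."""
        self.state[variable] = value
        self.history.append((timestamp, source, variable, value))

    def mirror(self):
        """Current single-source-of-truth view of the physical twin."""
        return dict(self.state)

twin = DigitalTwin("press-01")
twin.ingest(0, "plc", "temperature", 61.5)
twin.ingest(1, "scada", "pressure", 2.1)
twin.ingest(2, "plc", "temperature", 63.0)   # newer reading overwrites the state
print(twin.mirror())      # {'temperature': 63.0, 'pressure': 2.1}
print(len(twin.history))  # 3: the history keeps every reading
```

The separation between `state` (emulation of the present) and `history` (input for simulation and prediction) loosely mirrors the regimes discussed below.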
Building on these supporting functions, the reported research works frequently
refer to the high-level functions, or roles, that DTs are envisioned to fulfil. These
roles can be summarized as follows: remote monitoring, predictive maintenance,
simulation of “what-if” scenarios, planning and optimization. Three regimes can be
distinguished in the above four roles: Firstly, some roles require an emulation of the
physical twin (i.e. remote monitoring that reflects the current operation). Secondly,
some roles rely on a simulation model of the physical twin to predict its future
behaviour, either using historical information (e.g. predictive maintenance) or a
combination of historical information and chosen scenarios (e.g. planning and
“what-if” simulations). The third regime, control, is also focused on the future but is
aimed at affecting the physical twin’s behaviour (e.g. planning and optimization).
The simulation regime contains the roles that most significantly distinguish a DT
from a supervisory control and data acquisition system.
The DT architectures developed by SOHOMA groups aim to encapsulate the functionality of the DT in multiple layers (usually six); this principle results from the holonic systems influence on DT design. At the lowest level of these architectures are the interfaces to the physical twin, where data is gathered through smart sensors, embedded devices, device controllers and data acquisition systems. The open platform communications unified architecture (OPC UA) is proposed for the transfer of the collected data to the cyber-levels of the architecture, in essence to layers 2 and 3 (data sources, data transmission and data stream processing, which are equivalent in their functionality). All DT architectures emphasize the need for data aggregation, time alignment and map-reducing, achieved in a dedicated layer. This function aims to convert raw sensed data into contextualized information and to reduce the amount of (streamed) data that must be processed and analysed within the DT.
Database storage of historical information is used to support the highest levels of functionality in DT architectures, referred to either as 'analysis and decision-making' or as 'machine learning'; these functions are delivered by providing decision-makers with access to emulation and simulation functions that build on the DT data.
It is considered that, in the future, digital twins will become middleware interfacing many applications in manufacturing systems, as well as reality models embedded in control that use not only currently measured process data, but also historical information about components, behaviours and events, and forecasts of these data; this will assure accurate reality awareness, realistic optimization and the prediction of unexpected events during the manufacturing cycle. However, the high variability of technologies and the large amount of data to be collected from shop floor devices and processed in real time tend to increase the complexity of software development, which might become the main barrier to achieving actual DT implementations at industrial scale, unless big data streaming techniques and HPC analytics are made available for real-time contexts.
DT research contributions of SOHOMA authors and a historical overview are described in Part 2 of this book.
Cloud manufacturing (CMfg) is a research topic constantly addressed by the SOHOMA scientific community, in accordance with its general evolution and its relationship with advances in information, communication and control technologies (IC2T), for all three KET domains, applied to the manufacturing industry. CMfg moved the vision of dynamic mass customization a step further by providing service-oriented networked product development models in which clients may configure, select and use customized product-making resources and services, ranging from computer-aided design and engineering software to reconfigurable manufacturing systems.
In the early SOHOMA research stage (2010–2012), contributions related to CMfg addressed the vertical integration of manufacturing enterprises that had already adopted cloud computing on the higher layers of business and operations management processes (supply, ERP and digital marketing), but had not yet integrated it with the production and logistics layers. Integration along the vertical enterprise axis (business management, ERP, high-level production control (MES), distributed shop floor control) is based on the SOA concept and marks a shift from the agent-centric architecture to SOA. The application of SOA principles in the factory automation domain consisted of encapsulating the functionality and business logic of components in the production environment (legacy software and devices) by means of Web services. Cloud-based enterprise networking in the manufacturing value chain (MVC) was also part of the 'enterprise integration' theme in this early CMfg research stage.
The design of cloud models and infrastructures for manufacturing was an
objective of the SOHOMA scientific community, which considered that cloud
services for MES are based on the virtualization of shop floor devices and on a new
control and computing mode that operates in the global cloud manufacturing model
(CC-CMfg), with progressive solutions towards real time. In the vision of the
SOHOMA community, CC-CMfg services orchestrate a dual OT (operation
technology control) and IT (computing) model that:
a. Transposes pools of shop floor resources (robots, CNC machines), products
(recipes, client specifications) and orders (work plans, task sequences) into
on-demand making services;
b. Enables pervasive, on-demand network access to a shared pool of configurable
HPC resources (servers, storage, applications) that can be rapidly provisioned
and released as services to various high-level MES tasks with minimal
management effort or interaction. Hence, CC-CMfg may use cloud computing
facilities.
CC models (public and private) and techniques were proposed for the integration
of an infrastructure-as-a-service (IaaS) cloud system with a manufacturing system
based on the virtualization of multiple shop floor resources (robots, machines).
Major contributions addressed the virtualization in the cloud of shop floor
entities (resources, intelligent products) and of MES workloads. The solutions
combine virtual machines (VM) deployed in the cloud before production start
(offering the static part of services) with containers executed on the VMs; the
containers run the dynamic part of services because they are deployed much
faster than VMs. High availability (HA) methods and software-defined networking
(SDN) mechanisms for interoperability, resilience and cybersecurity in the dual
CC-CMfg architecture, considered major achievements, were also developed.
The full dual CC-CMfg model was adopted for production planning and control
of manufacturing systems with multiple resources and products with embedded
intelligence. CC features were taken over by operational technologies (control,
supervision, dynamic reconfiguring): i) the product-making services are provi-
sioned automatically by MES optimal resource allocation programs; ii) the CC
component offers network access to HPC services through distributed message
platforms such as the manufacturing service bus (MSB); iii) the shop floor
resources are placed in clusters with known location relative to the material flow
and dynamically assigned at batch run time; this location is one input parameter
weighting the optimal resource allocation; iv) the CC services can scale rapidly in
order to sustain the variable real-time computing demand for order rescheduling
and anomaly detection, the resources being assigned or released elastically;
v) the assigned CMfg resources are monitored and controlled and both the MES
(service consumer) and the cloud (service provider) are notified about the usage
within the smart control application; the cost model ‘pay as you go’ is used to
establish the cost offers for client orders in service-level agreements. Such cloud
models and services were developed for optimized, energy-aware production at
batch level with resource sharing in semi-heterarchical control topology.
The SOHOMA community worked out ML-based approaches for reality
awareness and efficiency in cloud manufacturing and proposed applications of
machine learning algorithms for global optimization of manufacturing at batch
level, robust behaviour under disturbances and safe utilization of manufacturing
resources. The focus was put on the prediction of key performance indicators (KPI)
like instant power consumption of resources, energy consumption and/or execution
time for product operations, to provide more accurate input for the cloud-based
system scheduler (SS)—optimization engine for mixed product and operation
scheduling plus resource allocation: in addition to history (stored records) or current
(last measured) energy consumption values, short-term forecasted values are used
as input for SS optimization at batch execution horizon. Three new technologies
were used for this type of smart cloud-based manufacturing control:
• Big data (BD) streaming for shop floor data processing: aggregating at the right
logical levels when data originates from multiple sources; aligning data streams
in normalized time intervals; extracting insights from real-time data streams.
• Digital twins (DT) of production assets (resources, products and orders) and
system actions (control, maintenance and tracking) to record and maintain a
complete view of past behaviours and KPIs of resources, processes and out-
comes, and to forecast their future evolutions. Recording historical data is
needed to train ML patterns.
• ML workload virtualization in cloud using HPC and fast deployment techniques
for: i) updating DTs: deep learning patterns and measurement variations as basis
for predictions; classification, when the DT finds classes for feature vectors;
clustering, which searches for and identifies similarities in non-tagged,
multi-dimensional data and suitably tags each feature vector; ii) embedding DTs:
making intelligent decisions for smart production control in two roles: smooth
reconfiguring of CMfg resources for global batch optimization based on the
predicted cost of their usage; resource health management and predictive
maintenance.
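The stream-alignment and short-term forecasting steps above can be sketched as follows: raw timestamped power readings from several resources are aligned into normalized time windows, and a simple moving average over the last windows serves as the short-term forecast handed to the system scheduler (SS). The window size, data layout and averaging forecaster are illustrative assumptions, not the techniques published by the SOHOMA groups.

```python
# Sketch: align raw (resource, timestamp, watts) readings into fixed time
# windows, then forecast the next value as the mean of the last k windows.
from collections import defaultdict

def align(readings, window=10):
    """Average readings per resource in normalized time windows."""
    buckets = defaultdict(list)
    for res, ts, watts in readings:
        buckets[(res, ts // window)].append(watts)
    return {k: sum(v) / len(v) for k, v in buckets.items()}

def forecast(series, k=3):
    """Naive short-term forecast: mean of the last k aligned values."""
    tail = series[-k:]
    return sum(tail) / len(tail)

readings = [("robot-1", 0, 100), ("robot-1", 4, 110),
            ("robot-1", 12, 130), ("robot-1", 23, 120),
            ("cnc-1", 3, 300), ("cnc-1", 15, 310)]

aligned = align(readings)                      # per-resource window means
r1 = [aligned[("robot-1", w)] for w in (0, 1, 2)]
print(forecast(r1))                            # forecasted input for the SS
```

In a real deployment the aligned windows would be produced by a streaming engine and the naive average would be replaced by a trained ML pattern; the sketch only shows how normalization turns irregular multi-source data into scheduler-ready values.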
The SOHOMA research focuses on the use of AI methods in the cloud-based
smart manufacturing control vision of the ‘factory of the future’ (FoF), based on the
concepts of digitalization and interconnection of distributed manufacturing entities
in a ‘system of systems’ approach: i) new types of production resources are strongly
coupled and self-organizing in the entire value chain while products will decide
upon their own making systems; ii) new types of decision-making support will be
available from real-time production data collected from resources and products.
The vision on the future of cloud manufacturing research relies on the concept of
‘cloud anything’ extended beyond the production phases and facilities. This vision
integrates the technologies and tools available for industry such as: PLM, PLC,
MES, ERP and the frameworks under development (CPS for production and
industrial IoT-IIoT) with the dual cloud model CC-CMfg on top of these infras-
tructures to create a product–service-centric closed-loop collaboration.
From the product lifecycle perspective, both the virtual part of the product
(design and engineering) and its physical part (making) are assisted and tracked,
respectively, in the cloud. In fact, products conceived and designed to be embedded
with intelligence and so to be “smart” both in production (product-driven automation) and
utilization phases (after sales) are able to exchange information both within and
beyond the limit of the factory. These smart objects are connected in the cloud with
assets and enterprises in the supply networks and can provide a new type of
cooperation, enabling collaborative demand and supply planning, traceability and
execution.
Part 1 of the book includes papers describing research results in CMfg.
Research at SOHOMA also considers a human-centred approach to the design
of intelligent manufacturing systems, pointing out that modern manufacturing
systems must have human awareness in the IoF vision, while keeping human
decision-making in the loop at different levels of automation. The research is focused on the
integration of human factors in system design, the optimization of human resource
organization, the improvement of working conditions, the reduction of musculoskeletal
disorder risks, etc. Considering the integration of humans into Industry 4.0 environments,
the operators’ roles become more dynamic and decision-oriented. Operators in
factories of the future require freedom from laborious tasks, flexibility in com-
munication, and personalized and optimized information delivery.
Research works proposed ambient intelligence environments comprising intelligent
interfaces supported by embedded computing and networking
technology; such architectures manage human–machine communication through
available interfacing services as part of a digital administration shell for integrating
human workers into the Industry 4.0 environment. For autonomous cyber-physical
systems as future production systems, a variety of technologies and mechanisms
integrating humans and machines have been analysed, among them benchmarking
platforms.
Based on the development of artificial intelligence models and methods, Industry
4.0 fosters the development of more autonomous, intelligent systems interacting or
cooperating with humans; SOHOMA authors consider that these developments
should pay strong attention to ethical and societal issues involved in the design,
development, operation and maintenance of such systems and their automation
beyond classical key performance indicators expressed in terms of effectiveness or
efficiency. Ethics and social automation must get the necessary and sufficient
consideration with the emergence of autonomous intelligent systems (AIS) not only
in controlled environments but in society at large and with tangible applications that
affect people.
Aspects relevant to ethics of the artificial and social automation have been
addressed by SOHOMA groups: ethical behaviour of researchers (ethical design of
systems, techno-ethics), the study of the ethical behaviour of artificial systems
designed (design of ethical systems, machine ethics), the impact of automation on
society, the ethical risks relevant to the over-integration of humans with artificial
systems (e.g. operator 4.0), algorithmic bias and transparency in autonomous
intelligent systems and their applications. Interdisciplinary research on AIS and
studies on the applicability of different ethical and societal frameworks in Industry
4.0, including legal and economic aspects, have also been undertaken, with
complementary analyses from philosophy, sci-fi literature and other fields relevant to the
humanities.
The theme of the SOHOMA’20 workshop is ‘manufacturing as a service—
virtualizing and encapsulating manufacturing resources and controls into cloud
networked services’.
Manufacturing as a service or shortly MaaS stands for new models of
service-oriented, knowledge-based smart manufacturing systems optimized and reality-
aware, with high efficiency and low energy consumption that deliver value to
customer and manufacturer via big data analytics, edge computing and industrial
IoT communications, cognitive robotics, digital twins and machine learning—
components of cyber-physical systems. From product design to after-sales services,
MaaS relies on the servitization of manufacturing operations that can be integrated
into different manufacturing cloud service models, such as design as a service
(DaaS), predict as a service (PraaS) or maintain as a service (MNaaS).
MaaS relies on a layered cloud networked manufacturing perspective, from the
factory low-level CMfg shop floor resource sharing model to the virtual enterprise
high-level, by distributing the cost of manufacturing infrastructure (equipment,
software, maintenance and networking) across customers. MaaS is based on real-
time insights into the status of manufacturing equipment, retrieved through ML
techniques; big data streaming technology will transfer this essential information
about production and shop floor to the cloud computing platform. This information
will accurately represent the manufacturing context with the help of digital twin software.
All these aspects are presented in the book. We think that students, researchers
and engineers will find this volume useful for the study of digital manufacturing
control.

October 2020

Theodor Borangiu
Damien Trentesaux
Paulo Leitão
Olivier Cardin
Samir Lamouri
Contents

Cloud-Based Manufacturing Control


Cloud Networked Models of Knowledge-Based Intelligent Control
Towards Manufacturing as a Service . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
Theodor Borangiu, Radu F. Babiceanu, Silviu Răileanu, Octavian Morariu,
Florin Anton, Cristina Morariu, and Silvia Anton
About the Applicability of IoT Concept for Classical
Manufacturing Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
Carlos Pascal, Doru Pănescu, and Cătălin Dosoftei
An Open-Source Machine Vision Framework for Smart
Manufacturing Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
Silviu Răileanu, Theodor Borangiu, and Florin Anton
Using Cognitive Technologies as Cloud Services for Product Quality
Control. A Case Study for Greenhouse Vegetables . . . . . . . . . . . . . . . . 66
Florin Anton, Theodor Borangiu, Silvia Anton, and Silviu Răileanu

Digital Twins in Manufacturing and Beyond


Past and Future Perspectives on Digital Twin Research at SOHOMA . . . 81
K. Kruger, A. J. H. Redelinghuys, A. H. Basson, and O. Cardin
Decision Support Based on Digital Twin Simulation: A Case Study . . . 99
Flávia Pires, Matheus Souza, Bilal Ahmad, and Paulo Leitão
Digital Twin Data Pipeline Using MQTT in SLADTA . . . . . . . . . . . . . . 111
C. Human, A. H. Basson, and K. Kruger
Toward Digital Twin for Cyber Physical Production Systems
Maintenance: Observation Framework Based on Artificial Intelligence
Techniques . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
Farah Abdoune, Maroua Nouiri, Pierre Castagna, and Olivier Cardin


An Aggregated Digital Twin Solution for Human-Robot Collaboration
in Industry 4.0 Environments . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
A. J. Joseph, K. Kruger, and A. H. Basson

Holonic and Multi-agent Process Control


Ten years of SOHOMA Workshop Proceedings: A Bibliometric
Analysis and Leading Trends . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
Jose-Fernando Jimenez, Eliana Gonzalez-Neira,
Gloria Juliana Arias-Paredes, Jorge Andrés Alvarado-Valencia,
Olivier Cardin, and Damien Trentesaux
Proposition of an Enrichment for Holon Internal Structure:
Introduction of Model and KPI Layers . . . . . . . . . . . . . . . . . . . . . . . . . 169
Erica Capawa Fotsoh, Pierre Castagna, Olivier Cardin, and Karel Kruger
Holonic Architecture for a Table Grape Production
Management System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 181
Johan J. Rossouw, Karel Kruger, and Anton H. Basson
Learning Distributed Control for Job Shops - A Comparative
Simulation Study . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 193
Oliver Antons and Julia C. Arlinghaus
A Reactive Approach for Reducing the Myopic and Nervous
Behaviour of Manufacturing Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . 203
Sebastian-Mateo Meza, Jose-Fernando Jimenez,
and Carlos Rodrigo Ruiz-Cruz
Multi-agent Approach for Smart Resilient City . . . . . . . . . . . . . . . . . . . 215
Sergey Kozhevnikov, Miroslav Svitek, and Petr Skobelev

Ethics and Social Automation in Industry 4.0


Decision-Making in Future Industrial Systems: Is Ethics a New
Performance Indicator? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 231
Lamia Berrah and Damien Trentesaux
Ethics of Autonomous Intelligent Systems in the Human Society:
Cross Views from Science, Law and Science-Fiction . . . . . . . . . . . . . . . 246
Damien Trentesaux, Raphaël Rault, Emmanuel Caillaud,
and Arnaud Huftier
Analysis of New Job Profiles for the Factory of the Future . . . . . . . . . . 262
Lucas Sakurada, Carla A. S. Geraldes, Florbela P. Fernandes,
Joseane Pontes, and Paulo Leitão
Evaluation Methods of Ergonomics Constraints in Manufacturing
Operations for a Sustainable Job Balancing in Industry 4.0 . . . . . . . . . 274
Nicolas Murcia, Abdelmoula Mohafid, and Olivier Cardin

Toward a Social Holonic Manufacturing Systems Architecture Based
on Industry 4.0 Assets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 286
Etienne Valette, Hind Bril El-Haouzi, and Guillaume Demesure

New Organizations Based on Human Factors Integration
in Industry 4.0
Interfacing with Humans in Factories of the Future: Holonic Interface
Services for Ambient Intelligence Environments . . . . . . . . . . . . . . . . . . 299
Dale Sparrow, Nicole Taylor, Karel Kruger, Anton Basson,
and Anriëtte Bekker
A Benchmarking Platform for Human-Machine Cooperation
in Cyber-Physical Manufacturing Systems . . . . . . . . . . . . . . . . . . . . . . . 313
Quentin Berdal, Marie-Pierre Pacaux-Lemoine, Thérèse Bonte,
Damien Trentesaux, and Christine Chauvin
Human-Machine Cooperation with Autonomous CPS in the Context
of Industry 4.0: A Literature Review . . . . . . . . . . . . . . . . . . . . . . . . . . . 327
Corentin Gely, Damien Trentesaux, Marie-Pierre Pacaux-Lemoine,
and Olivier Sénéchal
Simulation on RFID Interactive Tabletop of Working Conditions
in Industry 4.0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 343
Nicolas Vispi, Yoann Lebrun, Sophie Lepreux, Sondès Chaabane,
and Christophe Kolski
Multi-agent Simulation of Occupant Behaviour Impact on Building
Energy Consumption . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 355
Habtamu Tkubet Ebuy, Hind Bril El Haouzi, Rémi Pannequin,
and Riad Benelmir

Intelligent Products and Smart Processes


Intelligent Products through SOHOMA Prism . . . . . . . . . . . . . . . . . . . . 367
William Derigent, Duncan McFarlane, and Hind Bril El-Haouzi
Multi-protocol Communication Tool for Virtualized Cyber
Manufacturing Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 385
Pascal André, Olivier Cardin, and Fawzi Azzi
Is Communicating Material an Intelligent Product Instantiation?
Application to the McBIM Project . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 398
H. Wan, M. David, and W. Derigent
The Concept of Smart Hydraulic Press . . . . . . . . . . . . . . . . . . . . . . . . . 409
Denis Jankovič, Marko Šimic, and Niko Herakovič

Distributed Dynamic Measures of Criticality for Telecommunication
Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 421
Yaniv Proselkov, Manuel Herrera, Ajith Kumar Parlikad,
and Alexandra Brintrup

Physical Internet and Logistics


A Multi-agent Model for the Multi-plant Multi-product Physical
Internet Supply Chain Network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 435
Maroua Nouiri, Abdelghani Bekrar, Adriana Giret, Olivier Cardin,
and Damien Trentesaux
Survey on a Set of Features for New Urban Warehouse Management
Inspired by Industry 4.0 and the Physical Internet . . . . . . . . . . . . . . . . 449
Aurélie Edouard, Yves Sallez, Virginie Fortineau, Samir Lamouri,
and Alexandre Berger
Multi-objective Cross-Docking in Physical Internet Hubs Under
Arrival Time Uncertainty . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 460
Tarik Chargui, Fatma Essghaier, Abdelghani Bekrar, Hamid Allaoui,
Damien Trentesaux, and Gilles Goncalves
A Hybrid Simulation Model to Analyse and Assess Industrial
Sustainability Business Models: The Use of Industrial
Symbiosis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 473
Melissa Demartini, Filippo Bertani, Gianluca Passano, and Flavio Tonelli

Optimal Production and Supply Chain Planning


Realization of an Optimal Production Plan in a Smart Factory
with On-line Simulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 485
Hugo Zupan, Marko Šimic, and Niko Herakovič
Dynamic Scheduling of Robotic Mildew Treatment by UV-c
in Horticulture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 496
Merouane Mazar, Belgacem Bettayeb, Nathalie Klement,
M’hammed Sahnoun, and Anne Louis
Understanding Data-Related Concepts in Smart Manufacturing
and Supply Chain Through Text Mining . . . . . . . . . . . . . . . . . . . . . . . . 508
Angie Nguyen, Juan Pablo Usuga-Cadavid, Samir Lamouri,
Bernard Grabot, and Robert Pellerin
Benchmarking Simulation Software Capabilities Against Distributed
Control Requirements: FlexSim vs AnyLogic . . . . . . . . . . . . . . . . . . . . . 520
Ali Attajer, Saber Darmoul, Sondes Chaabane, Fouad Riane,
and Yves Sallez

A Proposal to Model the Monitoring Architecture of a Complex
Transportation System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 532
Issam Mallouk, Thierry Berger, Badr Abou El Majd, and Yves Sallez

Author Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 543


Cloud-Based Manufacturing Control
Cloud Networked Models of Knowledge-Based
Intelligent Control Towards Manufacturing
as a Service

Theodor Borangiu1(B) , Radu F. Babiceanu2 , Silviu Răileanu1 , Octavian Morariu1 ,


Florin Anton1 , Cristina Morariu1 , and Silvia Anton1
1 Department of Automation and Applied Informatics, University Politehnica of Bucharest,
Bucharest, Romania
{theodor.borangiu,silviu.raileanu,octavian.morariu,florin.anton,
cristina.morariu,silvia.anton}@cimr.pub.ro
2 Embry-Riddle Aeronautical University, Daytona Beach, FL 32114, USA
babicear@erau.edu

Abstract. This paper describes a 10-year scientific journey in the area of Cloud-
based manufacturing in the SOHOMA research community. The tour started in
Paris on June 20, 2011 at École Nationale Supérieure d’Arts et Métiers, Paris and
returns here on 1st October 2020 after annual stops in Bucharest, Valenciennes,
Nancy, Cambridge, Lisbon, Nantes, Bergamo and Valencia. Several stages in the
evolution of Cloud manufacturing research are recalled in their historical order:
vertical enterprise integration and networking; resource and product virtualization
and cloud infrastructure design; batch optimization with cloud services; real time
big shop floor data streaming, machine learning in the cloud for predictive pro-
duction control, resource health monitoring and predictive maintenance. Major
contributions of SOHOMA authors are evoked: extending the cloud computing
model to on demand shop floor resource sharing, infrastructure sharing in cloud
networked enterprises, MES workload virtualization, deploying cloud services
in real time with virtual machine and containers, high availability solutions and
software defined networking, machine learning for predictive manufacturing.

Keywords: Service orientation · Enterprise integration · Resource
virtualization · CMfg infrastructure · Cyber-security · Software Defined
Networking · MES virtualization · Optimization · Big Data · Machine learning ·
Digital Twin · MaaS

1 Introduction. Stages in the Evolution of Cloud Manufacturing
Addressed by the SOHOMA Community
The 5th and last of the manufacturing paradigms that succeeded one another, always
seeking smaller volumes and costs while raising product variety, is Cloud Manufacturing
(in short CMfg), which was first referred to around the year 2000. CMfg moved
this vision of dynamic mass customization a step further by providing service-oriented

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021
T. Borangiu et al. (Eds.): SOHOMA 2020, SCI 952, pp. 3–40, 2021.
https://doi.org/10.1007/978-3-030-69373-2_1
networked product development models in which clients are enabled to configure, select
and use customized product making resources and services, ranging from computer-
aided design and engineering software to reconfigurable manufacturing systems [1].
Several generic applications using Cloud platforms have been reported in the first years
after 2000, for hosting and exposing services related to manufacturing such as customer
order management, adaptive capacity planning, collaborative product design, networked
supplier relationship management, etc.
Historically, the relationship between computer science and manufacturing control
started in the ’70s with the initial idea of “digital manufacturing” and “numeric interpo-
lation” for CNC machines. Since then, advances in computer science have given birth to
the Cloud Computing (CC) paradigm, where computing resources are seen as a service
offered to end-users. CC has been used to improve first the IT infrastructure of the enter-
prise’s business management and capacity planning (ERP) layers and its connectivity
in the Manufacturing Value Chain (MVC), and then to increase the High Performance
Computing (HPC) of the manufacturing control infrastructure; its principles have also
inspired the new CMfg paradigm with the perspective of benefits for both manufacturers
and their customers.
The benefits of Cloud for manufacturing enterprises are numerous; Cloud as a pro-
curement model delivers undisputed cost efficiencies and flexibility, while increasing
reliability, elasticity, usability, scalability and disaster recovery. The key difference
between Cloud Computing and CMfg is that resources involved in CC are primarily
computational (e.g., server, storage, network, software), while in CMfg resources and
abilities involved in the whole life cycle of product making are virtualized and encapsu-
lated in different service models [2] where different product stakeholders can search and
invoke the qualified services according to their needs, and assemble them into a virtual
environment or solution to complete their orders and manufacturing tasks [15].
Cloud manufacturing is a research topic permanently addressed by the SOHOMA
scientific community in accordance with its general evolution and relationship with
advances in Information, Communication and Control Technologies (IC2T) applied to
the manufacturing industry.
The first contributions related to CMfg addressed the vertical integration of
manufacturing enterprises that had already adopted cloud computing on the higher
layers of business and operations management processes for supply, ERP and digital
marketing, though not yet integrated with the production and logistics layers [3]. The integration
along the vertical enterprise axis: business management, ERP, high level production con-
trol (Manufacturing Execution System - MES), shop floor distributed control is based
on the Service Oriented Architecture (SOA) concept and marks a shift from the agent-
centric architecture to SOA. The application of SOA principles in the factory automation
domain consisted in encapsulating the functionality and business logic of components
in the production environment (legacy software and devices) by means of Web services
[4]. Cloud-based enterprise networking in the MVC was also part of the ‘enterprise
integration’ theme in this early CMfg stage.
The design of cloud models and infrastructures for manufacturing was a topic present
since the 2014 edition. Cloud services in MES are based on the virtualization of shop
floor devices and a new control and computing mode that operates in the global Cloud
Manufacturing model (CC-CMfg), with progressive solutions towards real time. In the
vision of the SOHOMA community, CC-CMfg services orchestrate a dual OT (operation
technology control) and IT (computing) model that:

a. Transposes pools of shop floor resources (robots, CNC machines), products (recipes,
client specifications) and orders (work plans, task sequences) into on-demand making services;
b. Enables pervasive, on-demand network access to a shared pool of configurable HPC
resources (servers, storage, applications) that can be rapidly provisioned and released
as services to various high level MES tasks with minimal management effort or
interaction [5]. Hence, CC-CMfg may use cloud computing facilities.
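A minimal sketch of item a. is given below: shop floor resources are virtualized as named services in a pool that a consumer can provision and release on demand. The class, method names and capability strings are illustrative assumptions, not an interface defined in the cited works.

```python
# Sketch: shop floor resources exposed as on-demand product-making services
# through a registry that provisions and releases them. Names are hypothetical.
class ServicePool:
    def __init__(self):
        self._free = {}      # service name -> capability description
        self._in_use = {}

    def register(self, name, capability):
        """Virtualize a shop floor resource as a named service."""
        self._free[name] = capability

    def provision(self, capability):
        """Hand out any free service offering the requested capability."""
        for name, cap in self._free.items():
            if cap == capability:
                self._in_use[name] = self._free.pop(name)
                return name
        return None          # no matching service currently available

    def release(self, name):
        """Return a service to the shared pool after use."""
        self._free[name] = self._in_use.pop(name)

pool = ServicePool()
pool.register("robot-1", "assembly")
pool.register("cnc-1", "milling")
svc = pool.provision("milling")   # service granted for the client order
pool.release(svc)                 # resource shared again after the task
```

The same register/provision/release pattern underlies item b. as well, with HPC resources taking the place of robots and machines.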

CC models (public and private) and techniques were proposed for integrating an
Infrastructure as a Service (IaaS) cloud system with a manufacturing system based on
the virtualization of multiple shop floor resources (robots, machines). Major
contributions addressed the virtualization in the cloud of shop floor entities
(resources, intelligent products) and MES workloads. One solution combines virtual
machines (VM) deployed in the cloud before production start (offering the static part
of services) with containers executed on the VMs; the containers run the dynamic part
of services because they are deployed much faster than VMs [6]. High availability (HA)
methods and Software-Defined Networking (SDN) mechanisms for interoperability, resilience
and cyber-security in the interconnected CC-CMfg architecture were developed [7].
However, despite the usefulness of CC for CMfg (option b. of the Cloud manufactur-
ing model), the SOHOMA'17 authors of [8] advocate that considering CC as a core enabling
technology for Cloud manufacturing - as is often put forth in the literature - reflects only
the early stage of CMfg history and should be reconsidered. A new core-enabling vision
toward Cloud manufacturing, called Cloud Anything (CA) is exemplified by option a.
of the CMfg model previously defined. CA is based on the idea of abstracting low-level
resources, beyond computing resources, into a set of core control building blocks pro-
viding the grounds on top of which any domain could be “cloudified”. This vision leads
finally to the more general sharing concept of Manufacturing as a Service (MaaS) which
is based on the “cloud networked manufacturing” paradigm.
The full dual CC-CMfg model was adopted by the SOHOMA research for pro-
duction planning and control of manufacturing systems with multiple resources and
products with embedded intelligence. CC features were taken over by operational tech-
nologies (control, supervision, dynamic reconfiguring): i) the product-making services
are provisioned automatically by a MES optimal resource allocation program; ii) the
cloud computing component offers network access to HPC services through distributed
message platform / manufacturing service bus (MSB); iii) the shop floor resources of
the CMfg component are placed in clusters with known location relative to the material
flow, and dynamically assigned at batch run time; this location is one input parameter
weighting the optimal resource allocation; iv) the CC services can scale rapidly in order
to sustain the variable real-time computing demand for order rescheduling and
anomaly detection, the resources being assigned or released elastically; v) the assigned
CMfg resources are monitored and controlled and both the MES (service consumer)
and the Cloud (service provider) are notified about the usage within the smart control
application; the cost model ‘pay as you go’ is used to establish the cost offers for client
orders in the service level agreements. Such Cloud models and services were devel-
oped for optimized, energy-aware production at batch level with resource sharing in
semi-heterarchical control topology [9, 10].
In the context of the last years' economic environment changes, manufacturing firms
need to shift their focus from linearly improving efficiency towards real-time learning
from big data and contextual decision making. This approach reduces the uncertainty
by allowing accurate predictions of relevant key performance indicators based on his-
torical production data. Data and the way it can be processed in real time become thus
differencing success factors in these companies. The digitalization processes of large
manufacturing enterprises and the integration of increasingly smart shop floor devices
and control software caused an explosion in the data points available at shop floor and
MES layers. The degree to which enterprises can capture value from processing these
data and extract useful insights from them represents a differentiating factor in the short-
and medium-term development of the processes that optimize production.
Machine learning (ML) and Big Data technologies have gained increased traction
by being adopted, as more computation power became available, in some critical areas
of planning and control. Cloud manufacturing provides a robust platform for developing
these solutions, lowering the cost of experimentation and solution implementation.
In this context, the SOHOMA community worked out ML-based approaches for
reality awareness and efficiency in cloud manufacturing and proposed applications of
machine learning algorithms for global optimization of manufacturing at batch level,
robust behaviour at disturbances and safe utilisation of manufacturing resources. The
focus was put on the prediction of Key Performance Indicators (KPI), like instant power
consumption of resources or execution time of product operations, to provide more
accurate input for the Cloud-based System Scheduler (SS) - an optimization engine
for mixed product and operation scheduling plus resource allocation: instead of static
(history records) or current (last measured) energy consumption values, short-term fore-
casted values are used as input for SS optimization at batch execution horizon. Three
new technologies were used for this type of smart Cloud-based manufacturing control:

• Big Data (BD) streaming for shop floor data processing: aggregating at the right
logical levels when data originates from multiple sources; aligning data streams in
normalized time intervals; extracting insights from real time data streams.
• Digital Twins (DT) of production assets (resources, products, orders) and system
(control, maintenance and tracking) to record and maintain a complete view of past
behaviours and KPIs of resources, processes and outcomes, and to forecast their future
evolutions. Recording historical data is needed to train ML patterns.
• ML workload virtualization in Cloud using HPC and fast deployment techniques
for: i) updating DTs: deep learning patterns and measurement variations as basis
for predictions; classification, when DT finds classes for feature vectors; clustering,
which searches and identifies similarities in non-tagged, multiple-dimension data and
tags suitably each feature vector; ii) embedding DTs: making intelligent decisions for
smart production control in two roles: smooth reconfiguring of CMfg resources for
global batch optimization based on the predicted cost of their usage; resource health
management and predictive maintenance [11].
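The stream alignment and aggregation steps above can be sketched minimally as follows; the bucket size, resource names and readings are illustrative assumptions, not data from the SOHOMA platforms:

```python
from collections import defaultdict

def align_streams(readings, bucket_s=10):
    """Align (source, timestamp, value) readings from multiple shop floor
    sources into normalized time buckets and aggregate per source
    (mean value per bucket)."""
    buckets = defaultdict(list)  # (bucket_start, source) -> values
    for source, ts, value in readings:
        bucket = (ts // bucket_s) * bucket_s  # normalize the timestamp
        buckets[(bucket, source)].append(value)
    # one aggregated value per (bucket, source)
    return {k: sum(v) / len(v) for k, v in sorted(buckets.items())}

# Example: power readings (W) from two resources, irregular timestamps
readings = [("robot1", 3, 120.0), ("robot1", 7, 140.0),
            ("cnc1", 5, 800.0), ("robot1", 12, 130.0)]
aligned = align_streams(readings, bucket_s=10)
print(aligned)  # {(0, 'cnc1'): 800.0, (0, 'robot1'): 130.0, (10, 'robot1'): 130.0}
```

A real implementation would run over an unbounded stream with windowed state rather than a finished list, but the normalization and aggregation logic is the same.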
Cloud Networked Models of Knowledge-Based Intelligent Control 7

This research topic focuses on the use of Artificial Intelligence (AI) methods in the
Cloud-based smart manufacturing control vision of the ‘Factory of the Future’ (FoF),
based on the concepts of digitalization and interconnection of distributed manufactur-
ing entities in a ‘system of systems’ approach: i) new types of production resources will
be strongly coupled and self-organizing in the entire value chain, while products will
decide upon their own making systems; ii) new types of decision-making support will be
available from real time production data collected from resources and products [5, 12].
Local initiatives like Industry 4.0 (Germany) and Advanced Manufacturing (US)
address common FoF topics, among which is CMfg. Thus, Industry 4.0 focuses on Cyber-
Physical Systems (CPS) for manufacturing which will provide intelligent services and
interoperable interfaces in order to support flexible and networked production environ-
ments. Smart embedded devices will work together seamlessly via the Industrial IoT, and
the centralized system controls will be transferred to networks of distributed intelligence
from machine-to-machine (M2M) to factory-to-factory (F2F) connectivity.
The vision on the future of Cloud manufacturing research has been expressed since the
7th edition of the SOHOMA event, held in Nantes, France, and has been developed since then. It
relies on the concept of ‘Cloud Anything’ extended beyond the production phases and
facilities. This vision integrates the technologies and tools available for industry such as:
PLM, PLC, MES, ERP and the frameworks under development (CPS for production -
CPPS and Industrial IoT - IIoT) with the dual Cloud model CC-CMfg on top of these
infrastructures to create a product-service centric closed loop collaboration.
From the product lifecycle perspective, both the virtual part of the product (Design and
Engineering) and its physical part (Making) are assisted and tracked, respectively, in the Cloud.
In fact, products conceived and designed to be embedded with computational power and
intelligence and so to be “smart” both in production (product-driven automation) and
utilization phases are able to exchange information both within and beyond the limit of
the factory. These smart objects are connected in the Cloud with assets and enterprises in
the supply networks and can provide a new type of cooperation, enabling collaborative
demand and supply planning, traceability, and execution [13].
CPS take advantage of the integration of Cloud-based and Service-Oriented Architectures
to deploy end-to-end support along both the product lifecycle (including after sales
services, etc.) and the factory lifecycle. From a factory lifecycle perspective, CPS are able to
interact with all the hierarchical layers of the automation pyramid - from field level to
ERP - and to empower the exchange of information across all the process and service
stages, resulting in a better product-service development. This will foster the value net-
work alignment with its customers’ changing needs and optimization against different
perspectives (quality, time to market, costs, sustainability goals, etc.).
The architecture of Cloud-based CPS in manufacturing is organized on 7 hierarchical
layers: Physical Product; Sensors/Actuators; IoT node; Fog; Middleware; Cloud; Cloud
analytics. Information is transferred to and from the digital world through sensors and actuators
connected to IoT gateways and embedded aggregation nodes, able to pre-process and
store data. A first level of data analysis, cleaning and decision making is the Fog computing layer,
which aligns data in time, aggregates it and issues ad-hoc, urgent decisions, reducing the
amount of data transmitted to and managed by the cloud. Data is streamed via middleware to
the cloud, where insights are extracted, time series are updated and patterns are created
to allow ML operations: prediction, classification, clustering. The Cloud analytics layer
makes intelligent decisions for: collaborative product design, capacity planning, supply
control, optimizing production, managing rush orders, resource maintenance.
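As an illustration of the Fog layer's role described above (time aggregation plus ad-hoc urgent decisions that reduce the data sent to the cloud), the following sketch uses hypothetical window sizes and alarm thresholds:

```python
def fog_process(samples, window=5, alarm_limit=95.0):
    """Fog-layer sketch: aggregate raw samples into per-window means
    (forwarded to the cloud) and raise urgent local decisions immediately
    when a sample breaches a limit, without waiting for the cloud."""
    to_cloud, urgent = [], []
    for i in range(0, len(samples), window):
        chunk = samples[i:i + window]
        to_cloud.append(round(sum(chunk) / len(chunk), 2))  # aggregate
        urgent += [s for s in chunk if s > alarm_limit]     # ad-hoc decision
    return to_cloud, urgent

# 10 raw temperature samples -> only 2 aggregates reach the cloud,
# while the out-of-range sample triggers an immediate local alarm
agg, alarms = fog_process([70, 71, 72, 96, 70, 69, 70, 71, 70, 72])
print(agg, alarms)  # [75.8, 70.4] [96]
```

The data-reduction effect is visible in the ratio between raw samples and forwarded aggregates; a production Fog node would additionally time-align streams from several devices, as in the seven-layer description above.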
This vision is that of the future cloud networked manufacturing on which the con-
cept of Manufacturing as a Service (MaaS) is based - the shared use of a networked
manufacturing infrastructure to produce goods; manufacturers use the Internet and the
Cloud to share manufacturing equipment, capacity and services in order to be agile to
market demand, collaborate with the customer for product design, reduce costs, make
better products and properly manage assets [14].
CMfg is one of the main research topics approached by the SOHOMA community
since 2011, in three stages: integration, infrastructure and smart control. Forty-eight
papers on Cloud-based manufacturing control were published in the annual workshops
and the post-event special journal issues, representing 17.5% of the workgroup’s global
scientific production. Table 1 specifies these research themes and the number of
papers in each stage, which shows a significant and steady increase.

Table 1. SOHOMA research lines and scientific production on Cloud manufacturing (cell values
are numbers of SOHOMA papers)

SOHOMA research line | CMfg domain | 2011–2013 | 2014–2016 | 2017–2019 | Global
Analysis | Adapting CC to manufacturing; CMfg evolution | - | 1 | 1 | 2
Enterprise integration | Service orientation for vertical enterprise integration; ESB-MSB connectivity | 4 | - | - | 4
Enterprise integration | Cloud-based enterprise networking in the MVC | 2 | 1 | 2 | 5
Infrastructure design | Resource and MES virtualization; security and HA design of cloud infrastructures | - | 4 | 4 | 8
Infrastructure design | Software-Defined Networking mechanisms for inter-operability, resilience and cyber-security in CMfg | - | 2 | 1 | 3
Batch optimization | Cloud models and services for optimized, energy-aware production | 1 | 4 | 3 | 8
Predictive control and maintenance | Intelligent decision making in CMfg through Big Data streaming and machine learning | - | 1 | 3 | 4
Predictive control and maintenance | Strongly coupled controls through cloud services and Digital Twins embedded in manufacturing CPS with distributed data processing | - | - | 10 | 10
MaaS | MaaS for lifecycle product management and sharing infrastructures: new added value services in the cloud | - | 1 | 3 | 4
All lines | Global SOHOMA research in CMfg | 7 | 14 | 27 | 48

The next chapters present details of the scientific work performed in the Cloud
manufacturing domain and some of the main contributions of SOHOMA research groups.

2 Enterprise Integration and Networking in the Manufacturing Value Chain

2.1 Service Orientation for Vertical Enterprise Integration ESB-MSB Link

This topic represents the early SOHOMA research line in Cloud-based manufacturing.
Morariu et al. analyse in [15] the integration of job shop activities in business processes at
enterprise level; they report the design and implementation of a web service abstraction
layer for holonic manufacturing systems that allows business process orchestration of
the Customer Order Management (COM) module. The COM module interacts with the
MES layer using real time events handled by the BPEL process implementation in the
execution stage.
Closely related to the IT infrastructure of Web Services (WS), the Service Oriented
Architecture was considered a technical architecture and a source of enterprise integration based
on defining production processes as workflows, decomposing them into production tasks
carried out by proper invocations of WSs and coordinated through orchestration and
choreography mechanisms. In this context, SOA was also accepted as a natural
technology to create collaborative environments linking levels 3 and 4 of ISA 95-type
manufacturing enterprises and as an implementation means for Multi-Agent frameworks (MAS) used
to distribute intelligence across hierarchical management and control levels. Business
and process information systems integration and interoperability at enterprise level are
feasible by considering the customized product as “active controller” of the enterprise
resources, thus providing consistency between the material and informational flows.
Gerber et al. describe in [16] a flexible communication architecture approach for the
vertical integration of production process-relevant data, for closing the gap between the
business (strategic) and technical (operations) levels. The approach enables the transfer
of information in form of key performance indicators which support decision-making
processes in manufacturing companies.
In the first SOHOMA edition, a study reported in [16] focused on an active search
mechanism that creates a bridge between the enterprise use cases and intelligent man-
ufacturing systems. The research presents a framework for intelligent search of web
services that expose the offer request management functionality for intelligent
manufacturing systems. A novel concept of volunteer-based search was introduced, in which
the search criteria are passed to the manufacturing system for self-assessment. The
integration of Holonic Manufacturing Systems (HMS) into SOA/BPEL processes presents a
great advantage to enterprises, allowing simple process modification and reconfiguration
using standard tools. Also, integrated enterprise architectures allow a better tracking and
auditing of business process executions, providing valuable information based on which
the processes can be optimised and improved.
An early SOHOMA research work about the vertical enterprise integration of the
Manufacturing Operations Management layer (MES production control) with the Busi-
ness Logistics layer is described in [17]; the authors propose a conceptual model for Man-
ufacturing Systems Performance Monitoring (MSPM) derived from Gartner’s Research
Application Performance Monitoring (APM) conceptual framework [18]. The shop floor
monitoring solution is based on a distributed MAS architecture capable of real time
resource-, product- and service-monitoring, and analytics/reporting. For each metric
collected by the target monitoring agents, the data is stored in Cloud database tables
using two strategies: short-term storage (staging) consists of a rolling table containing
the “last N time intervals” of that particular metric and is used to display real time data
in the web application; long-term storage consists of a table containing averaged data
for each metric and is used for system tuning and long-term reporting. Thus, SOA
governance assures the capability for dynamic composition of services at runtime with-
out human intervention, allowing the manufacturing system to automatically align itself
to the business drivers.
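The two storage strategies can be illustrated with a minimal in-memory sketch (a rolling buffer standing in for the staging table and running averages for the long-term table; metric names and sizes are made up):

```python
from collections import defaultdict, deque

class MetricStore:
    """Sketch of the two strategies: a rolling staging buffer keeping only
    the 'last N time intervals' per metric (real time display) and a
    long-term store of running averages (tuning and reporting)."""
    def __init__(self, n_intervals=3):
        self.staging = defaultdict(lambda: deque(maxlen=n_intervals))
        self.long_term = defaultdict(lambda: [0.0, 0])  # metric -> [sum, count]

    def record(self, metric, value):
        self.staging[metric].append(value)  # old intervals roll out automatically
        acc = self.long_term[metric]
        acc[0] += value; acc[1] += 1        # inputs for the long-term average

    def recent(self, metric):
        return list(self.staging[metric])   # what the web application displays

    def average(self, metric):
        s, c = self.long_term[metric]
        return s / c                        # what long-term reporting uses

store = MetricStore(n_intervals=3)
for v in [10, 20, 30, 40]:
    store.record("cpu", v)
print(store.recent("cpu"), store.average("cpu"))  # [20, 30, 40] 25.0
```

In the reported system both stores are Cloud database tables rather than process memory; the sketch only shows the retention and aggregation semantics.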
The early stage of SOHOMA applied research for vertical enterprise integration from
shop floor layer up to ERP layer has been strongly influenced by IBM’s Manufactur-
ing Integration Framework (MIF) [19], initially developed together with manufacturing
enterprises from the automotive domain in order to assure vertical integration from MES
up to the business layer and external partners. MIF is a solution enablement workbench
built on open standards and on SOA technology and should be understood as a framework
rather than a complete application. Figure 1 illustrates the MIF architecture, consisting
of a workbench application and the actual MIF runtime.

Fig. 1. IBM Manufacturing Integration Framework (C. Morariu, UT Brasov [19])

The Production Process Workbench application is an integrated development
environment based on the Eclipse platform that facilitates the MIF implementation. The MIF
runtime consists of four applications: Enterprise Service Bus with IBM WebSphere
MQ ESB implementation that provides mediation services (protocol conversion and
data transformation) and assures temporal decoupling between the PLC systems, con-
figuration database and MIF runtime by message queuing; Event Sequencer that allows
sequencing of PLC inbound events based on specific rules; Event Action Manager that
performs the event wrapping and the invocation of the corresponding BPEL process
using the BPEL engine APIs; BPEL Engine is the IBM WebSphere Process Server that
implements a BPEL runtime platform [20].
Inspired by this technology, the SOHOMA research reported by C. Morariu et al. in
[21] concerns a framework for manufacturing integration which matches plant floor solu-
tions with business systems and suppliers. This solution focuses on achieving flexibility
by enabling a low coupling design of the entire enterprise system through leveraging
SOA and Manufacturing Service Bus (MSB) as best practices. The article presents the
integration of an upper layer Enterprise Service Bus (ESB)-based business
system with a distributed Holonic MES (HMES) system based on an MSB built using the JADE
multi agent platform, event triggered communication and dynamic business rules. At
architectural level, ESB provides a uniform and centralized information flow across all
business components; at technical level, the ESB assures message mediation and data
transformation, offering a uniform messaging platform.
At shop floor layer the horizontal data flow is enabled by using a manufacturing
adaptation of the ESB concept – the Manufacturing Service Bus. The MSB integration
model developed by the authors is an adaptation of ESB for manufacturing enterprises
and introduces the concept of bus communication for the manufacturing systems. The
MSB acts as an intermediary for the data flows, assuring loose coupling between modules
at shop floor level. The proposed MIF Integration framework with MSB-based HMES
is illustrated in Fig. 2.

Fig. 2. MIF - MSB integration with Mediation Agent


The lower level MSB integrates the shop floor components, while MIF is used to
integrate the business level components of the manufacturing enterprise. The two busses
are linked together by a Mediation Agent which is plugged in both busses and contains a
set of rules for message passing between them. The authors demonstrate experimentally
that this architecture is superior to a single-bus architecture for two reasons: first, it
provides a loosely coupled architecture at both the MIF and HMES layers, based on open
standards, which assures flexibility and scalability of the whole system; second, the
MSB implementation shields the enterprise-wide ESB from the large volume of messages
produced and consumed at the HMES layer.
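The Mediation Agent's rule-based message passing between the two busses can be sketched as follows; the message types and routing rules are hypothetical, not the actual MIF/HMES rule set:

```python
# Sketch of the Mediation Agent idea: plugged into both busses, it
# forwards only the messages matching its rules, so the enterprise ESB
# is shielded from high-volume shop floor (MSB) traffic.
def make_mediator(rules):
    def mediate(message):
        """Return the target bus for a message, or None to keep it local."""
        for predicate, target in rules:
            if predicate(message):
                return target
        return None
    return mediate

rules = [
    (lambda m: m["type"] == "order_completed", "ESB"),    # business-relevant
    (lambda m: m["type"] == "kpi_report", "ESB"),
    (lambda m: m["type"].startswith("resource_"), "MSB"), # stays on the shop floor
]
mediate = make_mediator(rules)
print(mediate({"type": "order_completed", "id": 7}))  # ESB
print(mediate({"type": "resource_status", "id": 3}))  # MSB
print(mediate({"type": "heartbeat", "id": 1}))        # None (dropped)
```

The filtering effect is the point: only the first kind of message ever crosses into the enterprise-wide bus.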
The developments for service-oriented vertical integration of manufacturing enter-
prises with hierarchical ISA 95 organization have been validated in industry scenarios.

2.2 Cloud-Based Interconnection in the Manufacturing Value Chain


Providing cloud virtualized abstractions of manufacturing resources, process
control and integration activities throughout the MVC became an objective of SOHOMA
researchers early on, during the advent of the cloud revolution. This resulted in developing
generic and instantiated architecture core processes and technologies for the digital trans-
formation of manufacturing. It included research related to all manufacturing value chain
actors such as material processing, component and system developers, manufacturing
organizations, technology and service providers and assembly and system integrating
actors.
Babiceanu points out in [22] the need for developing a framework to be used in
modelling, analysing, and integrating systems that operate in uncertain environments
in which characteristics such as adaptation, self-organization and evolution need to be
addressed. The proposed complex enterprise systems framework combines knowledge
from complex systems science and systems engineering domains, and uses compu-
tational intelligence and agent-based systems simulation methodologies to allow for
behaviour prediction of large-scale, contextual enterprises with cloud systems.
The research reported in [23] introduces a new messaging protocol named Quantum
Lifecycle Management (QLM) - a messaging standard that aims to enable all kinds of
intelligent entities in and between organizations to exchange IoT information in ad hoc,
loosely coupled ways that increase the SOA scope as well as the data exchange enterprise
interoperability in the IoT. Kubler et al. consider in this article that messaging protocols
such as QLM or similar ones (JMS, oBIX) are an essential step to design future SOA
services in the cloud and to enhance product lifecycle management.
Collaborative manufacturing environments integrating diverse information systems
enable the creation of “virtual enterprises” with competencies to effectively and effi-
ciently share their knowledge and collaborate with each other using Internet Web-based
technologies in order to compete in a global market. On the other hand, MAS rep-
resents one of the most promising technological paradigms for the development of
open, distributed, cooperative, and intelligent software systems; the areas of Service
Oriented Computing and Multi-Agent Systems are getting closer; this duality led Giret
and Botti to develop a design methodology and framework allowing the development of
cloud service-oriented manufacturing systems [24]. The framework uses ANEMONA,
a MAS methodology for HMS analysis and design based on the Abstract Agent notion -
embedded in THOMAS - an open-agent environment that uses a service-based approach


to create customizable platforms for intelligent agents grouped in virtual organizations.
Cloud manufacturing positively affected the traditional manufacturing processing
by moving the bulk of order planning, sequencing and scheduling and resource alloca-
tion out of shop-floor computational units through virtualization of the MES. Moving
manufacturing computations into the cloud also provided other benefits such as real-
time monitoring and corrective measures, reduced defects, and capabilities to predict
system degradation. In the research described in [25], Babiceanu and Seker propose
a novel model for the distribution of manufacturing operations in an IIoT environment
where cyber-physical resources participate in a decentralized holonic scheduling process.
Holonic scheduling is employed for shop-floor based resources at the enterprise level
and across shop-floors of remote enterprises. Inter-enterprise data networks are mod-
elled with Software-Defined Networking (SDN), such that manufacturing data packets
are under a centralized logical control reducing the likelihood of data theft, and network
delay or failure.
A new approach for the integration of distributed manufacturing networks in the
Smart Factory is proposed by Pipan et al. in [26] based on the principle of unified node
structure that uses ‘manufacturing nodes’ for its core plug-and-produce (PNP) modules.
The communication layer based on OPC UA is integrated in the proposed distributed
structure within local Ethernet network that assures the entire global communication.
This distributed model uses unified building blocks and connects them in a manufacturing
node having its own decision-making capability, communication protocols, data servers
and connections to manufacturing processes, human-machine interfaces and cloud plat-
forms. Such manufacturing node structures can interconnect enterprises for networked
execution of production orders.
During the 8th SOHOMA edition the manufacturing value chain was discussed in
several articles in the context of enabling technologies for Industry 4.0 that provide the
foundations for cloud-based smart manufacturing.

3 Virtualization and Cyber-Security in CMfg Infrastructure Design

The second SOHOMA research stage in Cloud manufacturing is represented by the
design of CMfg infrastructures, which was addressed from three ascending perspectives:
shop floor resource and product virtualization; MES virtualization; secure, highly avail-
able (HA) data transfer and processing in cloud-based dual CC-CMfg control models
and enterprise networking.

3.1 Resource, Product and MES Virtualization. Security and HA of CMfg Infrastructures

As mentioned in the introductory part, the dual CC-CMfg model was adopted as a generic
solution for high performance computing (CC) and shop floor resource sharing in large
scale, cloud-oriented manufacturing infrastructures and applications.
Several MES specifications for the migration of workloads in the cloud have been
proposed starting with the SOHOMA 2014 edition. Strategies for cloud adoption were
defined, resulting in robust, highly available architectures in which the information
flow can be synchronized with the material flow, and which are flexible enough to
cope with dynamic reconfigurations of shop floor devices through exposed APIs
(Application Program Interfaces) and SOA choreography [27]. Cloud adoption in
manufacturing enterprises with ISA-95 organization gained from using a 2-layer public-private
cloud-based software architecture, with MES workload virtualization in a private cloud
platform delivering services in the IaaS model and having connectivity with external
organizations and clients for high level business processes, as shown in Fig. 3.

Fig. 3. Dual cloud adoption strategy for manufacturing enterprises and MES virtualization with
programmable infrastructure

The private cloud platform implements in the IaaS model the centralized part of
the MES layer by provisioning computing resources (CPU, storage, I/O) and global
applications. One of these applications, the System Scheduler (SS), uses the HPC capa-
bilities of the cloud for: resource teams configuring, batch planning, product scheduling,
resource allocation, cell and production monitoring. The cloud MES communicates with
its decentralized part in which intelligence is distributed among agentified and virtualized
shop floor resources and intelligent products [28]; the delegate MAS pattern (dMAS)
was used for this decentralized part. The emerging concept of programmable infrastruc-
ture (PI) strongly impacted the virtualized MES design; PI provides a series of APIs to
the cloud software stack, including hypervisor, operating system and application layers
for accurate identification, monitoring, real time (re)configuration and control. At sys-
tem level, redundancy mechanisms that detect and correct failures were implemented.
Morariu et al. describe in [29] a mechanism based on workload monitoring that is able to
detect failures and unexpected events in real time and to process them based on rules in
order to assure smooth execution of the manufacturing operations. The implementation
of such a mechanism requires prior definition of the metadata documents: workload
redundancy profile, event definitions and recovery rules; this redundancy was evaluated
for the virtualized CoBASA-type resource team reconfiguration in the vMES implementation.
Virtualization of shop floor devices, i.e., the creation of a virtualized layer (the
vMES), involves the migration of MES workloads that were traditionally executed on
physical machines to the private cloud infrastructure as virtual workloads. The idea is
to run all the control software in a virtualized environment and keep only the physical
resources (robots, machines, conveyor, etc.) with their dedicated real time controllers on
the shop. From a virtualization perspective, two types of workloads have been considered
in the SOHOMA CMfg developments:

• Shop floor resources: robots, CNC machines, conveyors etc. Their control architec-
ture varies depending on the manufacturer and technology used, but in general the
resource is controlled by a PC-based workstation. The communication between the
control workstation and the physical resource can be either standard TCP/IP based
(the workload is directly virtualized and a virtual network interface, used to control
the resource, is mapped to it) or a proprietary wire protocol (the virtualization process
needs a local controller on the shop floor that provides the physical interface). This
physical interface is virtualized and mapped through a specific driver to the virtualized
workload over the network.
• Intelligent products (IP) that are created temporarily in the production stage by embed-
ding intelligence on the physical order or product carrier (pallet) that is linked to
information and rules governing the way the product will be made (order of oper-
ations, resources assigned to operations, transfer routes, storages). IP virtualization
moves the processing from the intelligence embedded in the product to the virtual
machine (VM) in the cloud using a thin hypervisor on the product carrier and WI-FI
connection, either in a dedicated workload or in a shared workload to make decisions
relevant to its own destiny (Fig. 4).

Fig. 4. Intelligent Product virtualization. Left: a) Intelligence embedded on product carrier, i.e.
on the product during its making; b) IP virtualization mechanism. Right: IP based on mobile
technology and standardized OS (e.g. Arduino, Raspberry PI).

The binding between workload templates and virtualized resources is done using
shop floor profiles, which, in the authors’ view are XML files and contain a partial or
complete definition of the manufacturing system’s virtual layout and mappings [5, 30].
Shop floor profiles are workload centric and contain a list of workload definitions. The
workload definition references a specific revision of a VM published in the service catalogue,
the number of mapped virtual CPU cores, the amount of RAM memory allocated to the VM
and the amount of disk space; the workload also contains references to a list of mapped
resources, together with the parameters passed.
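A shop floor profile in this spirit might be parsed as sketched below; the element and attribute names (`workload`, `vmRevision`, `mappedResource`, etc.) are assumptions for illustration, since the actual XML schema is not given here:

```python
import xml.etree.ElementTree as ET

# Hypothetical shop floor profile fragment: one workload binding a VM
# revision (with vCPU/RAM/disk sizing) to a list of mapped resources,
# each with a connection parameter. All names and values are invented.
PROFILE = """
<shopFloorProfile>
  <workload vmRevision="robot-ctrl-1.4" vcpu="2" ramMB="4096" diskGB="20">
    <mappedResource name="robot1" param="tcp://10.0.0.21:9001"/>
    <mappedResource name="conveyor" param="opc.tcp://10.0.0.30:4840"/>
  </workload>
</shopFloorProfile>
"""

root = ET.fromstring(PROFILE)
wl = root.find("workload")
print(wl.get("vmRevision"), wl.get("vcpu"), wl.get("ramMB"))
for r in wl.findall("mappedResource"):
    print(r.get("name"), "->", r.get("param"))
```

A provisioning service would use the `vmRevision` reference to fetch the workload template from the service catalogue and the `mappedResource` entries to wire the virtual network interfaces or drivers described above.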
The adoption of cloud computing in platforms for MES implementation raises new
challenges for securing the data retrieved from shop floor devices and sent to resources.
Four main issues have been defined concerning the cloud integration of shop
floor devices (resources and intelligent products) relative to security requirements: a)
unauthorized access to information, b) theft of proprietary information, c) denial of
service, and d) impersonation. To address these requirements, a policy-based mecha-
nism is proposed in [31] to handle transport security by introducing a real time Public
Key Infrastructure (PKI) platform using certification authorities to generate certificates
on-the-fly and secure socket communication. Also, a document level encryption and
signing mechanism was introduced as a component of the policy for all MES messages
exchanged between intelligent products, shop floor resources and sensors.
The document layer governs the mechanisms used to secure the payload of the
communication, while the transport layer specifies if the socket communication should
be encrypted using SSL/TLS and if so, what kind of ciphers should be used. The principal
advantage of this policy-based approach is that a centralized policy allocation can be
used to automatically enforce security behaviour across the ISA 95 enterprise layers:
business, Cloud MES, and dMAS, instead of individual configuration of each end point
at the edge of the distributed environment [32].
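The document-level signing component of such a policy can be illustrated with a minimal sketch; note that the reported mechanism uses on-the-fly PKI certificates, whereas this self-contained example substitutes a shared-key HMAC:

```python
import hmac, hashlib, json

def sign_message(payload: dict, key: bytes) -> dict:
    """Wrap an MES message with a document-level signature so the
    receiver can detect tampering or impersonation."""
    body = json.dumps(payload, sort_keys=True).encode()
    sig = hmac.new(key, body, hashlib.sha256).hexdigest()
    return {"body": payload, "sig": sig}

def verify_message(msg: dict, key: bytes) -> bool:
    body = json.dumps(msg["body"], sort_keys=True).encode()
    expected = hmac.new(key, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, msg["sig"])

key = b"cell-42-policy-key"  # in the PKI variant this is a certificate keypair
msg = sign_message({"product": "P7", "op": "drill", "res": "cnc1"}, key)
assert verify_message(msg, key)
msg["body"]["res"] = "cnc2"        # tampering with the payload...
assert not verify_message(msg, key)  # ...is detected at verification
```

The centrally allocated policy would decide, per message class, whether such signing (and additionally encryption and SSL/TLS transport) must be applied.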
The increasing use in manufacturing control systems of the latest developments in
IC2T, such as embedded systems (local resource control, product intelligence), wireless
communication (eliminating cables, easy access to remote resources), cloud (HPC,
resource sharing), online identification (simplified product routing, in line quality con-
trol) and holonic manufacturing (product-driven automation, heterarchical control) led
to a major change represented by the large scale usage of distributed control systems in
MAS frameworks.
The research reported in [33] by Răileanu et al. describes in this context a
methodology for replicating in the cloud the software agents associated with the decentralized control
of manufacturing resources. This survivalist approach preserving functionality through
replication of the agents represents an extension of the generic agentification process
which consists in associating software agents to physical entities in order to simplify
the access to the resources’ operations managed as services through standard mes-
sages in multi-agent control frameworks. The developed methodology is validated
using the JADE framework. A JADE agent acts as intermediary between the MAS
framework based on the exchange of standardized FIPA messages and direct resource
communication which is based on exchanging information over a TCP connection.
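The intermediary role of such an agent (translating between FIPA-style ACL messages and the raw resource protocol) can be sketched as below; the real implementation is a JADE/Java agent, and the command grammar here is invented for illustration:

```python
# Sketch (in Python rather than JADE/Java) of the bridging role: turn a
# FIPA-ACL-style REQUEST into the raw command string a resource
# controller expects over its TCP connection, and wrap the raw reply
# back into an ACL-style INFORM. The command grammar is hypothetical.
def acl_to_raw(acl: dict) -> str:
    assert acl["performative"] == "REQUEST"
    op = acl["content"]
    return f"{op['action'].upper()} {op['target']}\n"

def raw_to_acl(raw: str, sender: str) -> dict:
    return {"performative": "INFORM", "sender": sender,
            "content": {"status": raw.strip()}}

req = {"performative": "REQUEST", "sender": "order-agent-7",
       "content": {"action": "pick", "target": "pallet3"}}
print(acl_to_raw(req))                      # "PICK pallet3\n" on the TCP side
print(raw_to_acl("DONE", "robot1-agent"))   # INFORM message back into the MAS
```

The actual agent additionally manages the TCP session itself and exposes the resource's operations as services to the rest of the multi-agent framework.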
The HA facility can be implemented at operating system (OS) level or at application
level (at the container level). The problem with running applications that exchange
messages in HA clusters is that when a node becomes unavailable and the service is
migrated, the session is lost. The approach taken is to offer HA with session
preservation. The proposed solution is based on Docker (https://www.docker.com): JADE agents
are running in separate Docker containers and on separate VMs running in the cloud.
18 T. Borangiu et al.

JADE agents are implemented using Spring Session (https://spring.io/), which offers an
easy way to replace an HTTP session in an application container and also supports clustered
sessions. Docker containers are clustered and managed in Swarm mode, with load
balancing. Further research on CMfg infrastructure design built on this setup.
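The session-preservation idea can be illustrated with a toy model (the real solution relies on Spring Session and Docker Swarm; the store and node classes below are hypothetical stand-ins): session state lives in a store shared by all nodes, so a request routed to a surviving node resumes the same session.

```python
class ReplicatedSessionStore:
    """Toy stand-in for a clustered session store shared by all nodes."""
    def __init__(self):
        self._data = {}
    def save(self, sid, state):
        self._data[sid] = dict(state)
    def load(self, sid):
        return dict(self._data.get(sid, {}))

class AgentNode:
    """A node hosting an agent; all nodes see the same session store."""
    def __init__(self, name, store):
        self.name, self.store, self.alive = name, store, True
    def handle(self, sid, message):
        if not self.alive:
            raise ConnectionError(f"{self.name} is down")
        state = self.store.load(sid)
        state.setdefault("messages", []).append(message)
        self.store.save(sid, state)
        return len(state["messages"])

def dispatch(nodes, sid, message):
    """Load balancer: route to the first healthy node (session is node-independent)."""
    for node in nodes:
        if node.alive:
            return node.handle(sid, message)
    raise RuntimeError("no healthy node")
```

Because no session state is kept on the node itself, a node failure costs only the re-routing, not the conversation history.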
Based on this principle, Răileanu et al. developed a 2-layer CC-CMfg semi-
heterarchical cloud architecture for manufacturing control tasks: i) resource team
configuring, mixed-batch planning and product scheduling, resource allocation, and cell and
production monitoring on the upper SS level running in the cloud IaaS platform, and ii)
execution and rescheduling of orders on the lower, decentralized, agent-based dMES
level. The choice of having centralized batch optimization and decentralized production
control (which may override the centralized optimization) is justified by the HPC availability
to run online optimization programs at batch horizon in the cloud while reacting
quickly to unexpected events; this implies decentralizing the control structure on the
shop floor layer, distributing intelligence to products, and agentification [34] (Fig. 5).

[Fig. 5 shows the cloud layer (database, optimization engine, and a supervisor agent providing the interface for the optimization engine, a web GUI for production monitoring, the control strategy, and order dispatching) connected over TCP to the shop floor layer, where JADE resource agents (on IEDs attached to resources, with PLC access through a TCP/OPC bridge) and JADE order agents (on IEDs attached to pallets) synchronize through ACL messages and report operation execution times, order status and resource status to the database.]
Fig. 5. The 2-layer architecture for Cloud-based semi-heterarchical manufacturing control

In order to solve the HA problem extended to real-time CMfg monitoring and control
applications with real-time update of the cloud database fed with shop floor data, a
HA cluster is foreseen for database and application availability. The proposed solution
involves the following VMs: 1) two load balancers (VMs) running in a cluster; the Load
Balancer VM is publicly available and is accessed from the Internet to receive requests for two
types of services - secure HTTPS requests and JAVA agent communication - and to forward
these requests to the HA cluster in the internal network; 2) the HA cluster composed
of two nodes (VMs), offering availability for three services: i) web interface access; ii)
Cloud Networked Models of Knowledge-Based Intelligent Control 19

JAVA agent communication; iii) database access; 3) the MySQL cluster uses four VMs
grouped in two Node Groups, amongst which the 10-table Cloud database is distributed.
Recent SOHOMA research is devoted to data collection and aggregation from shop
floor resources and products with embedded intelligence on the Edge Computing layer
of Industrial IoT (IIoT) frameworks, and to real-time data transfer to a database located
in the private Cloud. In order to support multiple communication protocols and adapt the
information generated by sensors/devices to centralized CC tasks, the IoT gateway
principle was extended to the aggregation node concept (Fig. 6), as defined in [35].

Fig. 6. Edge computing architecture for: (a) continuous; (b) operation-based data collection for
database storage and centralised MES tasks in the Cloud
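The aggregation-node idea can be sketched as follows (an illustrative Python model; the record schema and batch policy are assumptions): protocol-specific readings are normalized to one schema and forwarded to the cloud in micro-batches.

```python
class AggregationNode:
    """Illustrative edge aggregation node: normalizes readings arriving from
    protocol-specific adapters and forwards micro-batches to a cloud sink."""
    def __init__(self, cloud_sink, batch_size=3):
        self.sink, self.batch_size = cloud_sink, batch_size
        self.buffer = []

    def ingest(self, source, protocol, value, ts):
        # Normalize heterogeneous device messages to one record schema.
        self.buffer.append({"source": source, "protocol": protocol,
                            "value": float(value), "ts": ts})
        if len(self.buffer) >= self.batch_size:
            self.flush()

    def flush(self):
        # Send the accumulated micro-batch to the cloud database in one call.
        if self.buffer:
            self.sink(list(self.buffer))
            self.buffer.clear()
```

A continuous-collection node would flush on a timer instead of a batch count; an operation-based node would flush at operation completion events.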

One problem for which solutions were obtained in SOHOMA research is the integration
of an IaaS cloud system (the CC component) with the manufacturing system
sharing multiple resources (the CMfg component) in the dual CC-CMfg control topology.
The cloud services offered in the IaaS infrastructure can be accessed in two ways:
a) deploying the services inside the cloud system before production is started, or b)
deploying the services on demand whenever they are needed.
The first approach offers the services just before production starts, so the manufacturing
system can begin production without any delay. The problem is that the
resources must be pre-provisioned and the load of the services predicted; online
reconfiguration or deployment of the services adds in some cases downtime due to
the restart of the virtual machines involved in the process. This approach also uses
more resources than needed and is less flexible. The second approach
consumes resources efficiently; however, in an IaaS system, service
provisioning takes time, which translates into production delays.
The solution proposed by Anton et al. in [36] is to use a combination of virtual
machines deployed in the cloud before the start of production, and to run the services
in containers executed on the virtual machines. Thus, the virtual machines accessed as
services in the IaaS cloud offer the static part of the services, while the containers,
which are deployed much faster than virtual machines, cover the dynamic services.
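A toy model of this hybrid strategy (the startup costs and service names are illustrative assumptions, not measurements from [36]): static services are pre-provisioned on VMs before production, while anything requested later starts in a container, whose startup cost is assumed much lower than VM provisioning.

```python
class Deployer:
    """Toy model of the hybrid VM/container strategy: static services are
    pre-provisioned in VMs before production starts; dynamic services are
    launched on demand in containers (startup times are illustrative)."""
    VM_STARTUP, CONTAINER_STARTUP = 120.0, 2.0  # assumed seconds

    def __init__(self, static_services):
        # Static services incur no delay during production: they already run.
        self.running = {s: "vm" for s in static_services}
        self.delay = 0.0  # accumulated production delay from on-demand starts

    def ensure(self, service):
        """Return where the service runs, starting a container if needed."""
        if service not in self.running:
            self.running[service] = "container"
            self.delay += self.CONTAINER_STARTUP
        return self.running[service]
```

The design point is visible in the accumulated delay: only the dynamic, on-demand services contribute, and at container rather than VM cost.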
In order to integrate the manufacturing system with the cloud services, a middleware
application was created; it acts as a communication bridge and protocol translator
between the shop floor devices shared in the CMfg system and the CC management
platform. In order to have an elastic solution which can scale rapidly, a special template
was added into the cloud offerings. The VM template is based on Red Hat Enterprise
Linux, and is configured to use OpenShift container platform and Ansible automation
for managing the containers.

3.2 Software-Defined Networking Mechanisms for Interoperability, Resilience and Cyber-Security of Cloud-Based Manufacturing Systems
The 5th SOHOMA edition held in 2015 in Cambridge U.K. introduced for the first time
the concepts of Software-Defined Manufacturing (SDM) and Manufacturing Software-
Defined Networks (Manufacturing SDN). A relatively new concept at that time in cloud
and computer networking domains, SDN was developed to address the limitations of
traditional IP networks such as network congestion and dropped packets. In addition,
SDN implementations offer network scalability and security solutions, which other-
wise would need to be separately implemented. The adoption of SDN in manufacturing
was first proposed as a solution for manufacturing product design and operational flow
by Babiceanu and Seker [37]. The logical-only centralization of shop-floor operations
control within the manufacturing shared-cloud gave rise to clusters of virtualized man-
ufacturing networks that could be better controlled given the global view of linked SDN
servers. While not considering the SDN separation of data and control planes of the IP
network architecture, SDM was introduced by Kubler et al. as part of their defined cloud
manufacturing taxonomy [2].
Research on the Manufacturing SDN was subsequently followed up during the 6th
SOHOMA workshop by complementing network application layer virtualization with
additional services such as security and resilience. Both these network characteristics
enhance the Manufacturing SDN through their added value of building a more resilient
and secure network in the event of network vulnerability exploitation by malicious actors
(Babiceanu and Seker [38]). Like all other digital networks, digital manufacturing networks
are susceptible to cyberattacks and vulnerabilities, and risk models need to be in place as
a response to potential cyber events.
The need for and expected benefits of interoperability platforms for Manufacturing SDN
were covered at the 7th SOHOMA workshop, which identified the characteristics of the current
manufacturing environment that call for a paradigm change in manufacturing
networking: remote, distributed manufacturing locations; different legacy
manufacturing systems with limited scalability and flexibility; and proprietary communication
and network management protocols that make security framework adoption
almost impossible. Expected benefits of employing Manufacturing SDN include interoperability
across the network, data integrity and timely communication, improved network
scalability, and capabilities to implement security measures and updates on a continuous
basis. Building on the framework proposed in the previous SOHOMA editions,
Babiceanu and Seker further detailed the data, control, and application layers of the
Manufacturing SDN [39]. A two-manufacturing-enterprise case study was considered,
and the network performance was simulated from the perspective of manufacturing data
packets transmitted between the two entities. Under heavy network congestion scenarios,
the simulated SDN-based routing profile shows significantly better performance
compared to the output of traditional IP networks.
SDN-based manufacturing models were considered by Babiceanu and Seker [40] at
SOHOMA'18, with the SDN controller as the backbone for a multi-manufacturing enterprise
network in the same way MES is at the centre of a single manufacturing enterprise.
A case study with six manufacturing resources geographically distributed in different
locations and susceptible to failures was considered. Cyberattacks against the resources,
data packets, and SDN network were simulated and state-machine models were proposed
to build real-time production schedules.
Following up on their previous work, Babiceanu and Seker employed the SDN-based
manufacturing concept as the backbone for their Product Design and Manufacturing
as a Service model presented at SOHOMA’19. The proposed system virtualizes every
resource, process, or activity in the product design and manufacturing ecosystem, and
coordinates requests from the customers database with product types database selections
and the product types manufacturing process plans with capabilities of manufacturers in
the entire ecosystem. In addition, the SDN network enables running deep learning models
and data analytics for creating new product design solutions, which are subsequently
added to the product types database [14].
Software-defined networking was also surveyed at SOHOMA'19 by Crăciunescu
et al. as a potential technology for managing distributed manufacturing clouds; it has
the benefit of a simplified implementation for distributed architectures and provides
software abstraction for the network control plane [41].

4 Cloud Models and Services for Optimized, Energy-Aware Production

This section describes the research and scientific contributions of SOHOMA members in
the development of manufacturing applications that use the dual CC-CMfg infrastructure
and control model. This research refers principally to Cloud-based optimization of
production planning and control, and to energy awareness in allocating shop floor resources
(CNC machines, robots, AGVs, conveyors) to product-making tasks.
The CMfg on-demand resource sharing concept was referred to throughout all nine
SOHOMA editions, which demonstrates the interest in applying Cloud Computing principles
in the operation technology domain; naturally, large-scale manufacturing systems
benefit most from transposing CC principles to the industrial field. The Cloud
Computing component showed up after the initial research stage of vertical enterprise
integration adopted for ISA 95 hierarchical organizations, the main reason being the
utilisation of its HPC capability for intensive computational tasks such as: optimization
of mixed product planning, operation scheduling and resource allocation at the farthest
(batch) horizon, balancing resource usage, minimization of energy consumption at shop
floor level, product traceability, etc. All these tasks are typical for the high-level, centralized
MES of production planning and control in semi-heterarchical topology and,
being assigned to the CC platform, are executed better, faster and in larger numbers.
Because the semi-heterarchical control topology was correlated permanently with the
holonic manufacturing paradigm and reference architectures (PROSA-1998, CoBASA-
2006, ADACOR-2006, HAPBA-2012, ARTI-2018), the main role of the IaaS CC system
was to implement the System Scheduler with its complex processing tasks: optimization
engine, resource state and quality of service (QoS) updates, multiple correlation of
signals measured from resources (consumed energy, temperature of motors, vibrations
of mechanical elements, etc.), and decision support for control mode switching.
Progressively, the efforts of SOHOMA authors were directed towards the real-time
integration of high level MES workloads in private Cloud IaaS platforms for:

• Analysis of Big Data streams collected at run time from multiple shop floor entry
points (resources, processes, products), aligned in time, grouped using covariance
rules and map-reduced at the edge of the distributed MES layer to be transferred in
the Cloud; CC tasks update the state and QoS of resources to evaluate the necessity
of reconfiguring production plans and resource teams.
• Real-time update of optimized production schedules (operation sequencing and
resource allocation) at pre-planned timing moments - operation or product completion -
and dynamic reconfiguring of controls and resource teams based on process and
resource data collected during batch production run time.
• Extending CC tasks to predictive control of manufacturing systems.

During the first years of SOHOMA-reported research, ontology-based architectures
for Cloud manufacturing with distributed control were developed. Răileanu et al.
present in [42] an ontology for the manufacturing domain that standardizes the resource
scheduling, operation execution, traceability and system monitoring processes. This
ontology, modelled in Protégé, standardizes both the structure of the communicative
language (syntax/grammar), through concepts, and the meaning of the messages,
through agent actions implemented in the JADE software framework. In the same
context, Talhi et al. present in [43] top level concepts of a Cloud manufacturing ontol-
ogy integrated in a platform that allows providers and users of manufacturing systems
to collaborate in the Product Lifecycle Management (PLM) perspective.
Răileanu et al. established in [44] a Cloud-based strategy that uses CC resources to
perform in real time (Fig. 7):

[Fig. 7 shows the optimization engine for centralized scheduling: minor changes update the model and the resource-usage cost, major changes trigger full model updates; expected cost, change/variability and QoS parameters trigger rescheduling only once, just before a new order is inserted, producing an updated schedule for the orders to be inserted.]
Fig. 7. Production re-scheduling when dealing with soft changes


• Update of resource execution time and energy consumption: the resource agents
receive a request for the execution of an operation. This request is forwarded to
the resource controller, which starts a counter and reads from a sensor the current
energy consumption of the resource; at operation completion, the controller reads the
execution time from the counter and obtains the energy consumption by subtracting the
initial energy value from the final one. These values update the Cloud database through
Java Database Connectivity. This information is stored for the centralized optimization
model and for operations traceability, building up the product execution history.
• Adjusting production planning and resource scheduling: by collecting the resources'
state and usage cost at operation/product completion time, the current QoS and cost
of resources are used to update their weights for participation in the optimized task
assignment process. The events that can alter the optimally computed schedule fall into
two categories: hard changes and soft changes. Hard changes are events which alter
production in a major way (resource/operation failures which cannot be overcome
without batch rescheduling to eliminate unavailable resources). Soft changes
consist of small variations of the cost parameters which alter the computed schedule
but do not make it unfeasible. Only when the increase of operation execution time or
energy consumption exceeds a predefined threshold is the schedule optimization
run in the Cloud, and the newly computed schedule replaces the current one only if it
produces significantly better results (e.g., lower energy consumption).
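The soft-change policy described above can be sketched as follows (the threshold values and the optimizer interface are illustrative assumptions):

```python
def maybe_reschedule(current_cost, expected_params, measured_params,
                     optimizer, deviation_threshold=0.10,
                     improvement_threshold=0.05):
    """Soft-change handling: re-optimize only when a cost parameter has
    drifted more than `deviation_threshold` from its expected value, and
    adopt the new schedule only when it improves the cost by more than
    `improvement_threshold` (both thresholds are illustrative)."""
    # Largest relative drift across the monitored cost parameters.
    drift = max(abs(m - e) / e for e, m in zip(expected_params, measured_params))
    if drift <= deviation_threshold:
        return current_cost, False            # keep the current schedule
    new_cost = optimizer(measured_params)     # e.g. an optimization run in the cloud
    if new_cost < current_cost * (1 - improvement_threshold):
        return new_cost, True                 # adopt the significantly better schedule
    return current_cost, False                # improvement too small: keep schedule
```

Hard changes (resource failures) bypass this gate entirely and force a batch rescheduling.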

Optimization engines such as ILOG CPLEX were used in the Cloud SS to construct a
solution (allocation of operations to resources) sequentially and continuously improve
it. The advantage of this operating mode is that the time interval in which a feasible
solution for a given search space (shop floor layout, batch size, order complexity) is
computed can be evaluated. This time interval is then used in real time, in conjunction
with the moment when an order is completed, to determine when the optimization model
will be verified against the cost parameters updated from the resources.
The conclusion about the real-time capability of the cloud infrastructure and software
to acquire and process multiple data sources for global MES tasks is that data can be
processed both locally on embedded devices (instantaneous power monitoring) and in
the Cloud (energy consumption per operation), since the latency permits this option. New
MES designs with private cloud platforms can act in real time on predicted data.
In their complex analysis of emerging IC2T for smart, safe and sustainable industrial
systems, Trentesaux, Borangiu and Thomas argue that CMfg and MES virtualization will
be included in future "Smart" developments as a networked and service-oriented manufacturing
model, focusing on new opportunities in the field of networked manufacturing
(NM) enabled by the emergence of hybrid CC platforms [45].

5 Cloud - A Cyber-Physical System Layer for Production in the Factory of the Future
A new research chapter was opened at the 7th SOHOMA edition in Nantes: Machine
learning (ML) techniques have the potential of unlocking a new layer of optimizations
in Cloud-based manufacturing systems. Morariu et al. state in [46] that the effectiveness
of ML depends on the real time aspect of the data used in training. The results of these
algorithms are often accurate in a small time horizon in the future, so only a real time
scheduling engine can benefit from these approaches. Two use cases for Cloud MES
were defined: optimizing scheduling from predicted KPIs, and outlier detection.

5.1 Intelligent Decision Making in Cloud Manufacturing Through Big Data Streaming and Machine Learning

A first example of predictive manufacturing operations was presented by Babiceanu and
Seker at SOHOMA'14 [47]; it provided insights into future resource and facility performance
and estimation of the main performance measures used in reliability engineering:
the mean time to failure (MTTF) and mean time between failures (MTBF) of shop floor
resources. Using the predicted failure data, changes of the operational status can be made
before the actual detrimental event occurs, thus protecting the resources. The predictive
decision-making process uses Big Data analytics and algorithms implemented in the
R high-level programming language, which provides flexible statistical data analysis and
visualization capabilities [48].
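An equivalent computation can be sketched in Python (the event-log schema is an assumption; the original work used R and Big Data analytics rather than this toy estimator):

```python
def reliability_measures(events):
    """Estimate MTTF and MTBF from a resource's event log.
    `events` is a time-ordered list of (timestamp, kind) tuples with kind in
    {"up", "fail"}; timestamps in hours (an illustrative schema)."""
    uptimes, failures, last_up = [], 0, None
    for ts, kind in events:
        if kind == "up":
            last_up = ts
        elif kind == "fail" and last_up is not None:
            uptimes.append(ts - last_up)   # operating time before this failure
            failures += 1
            last_up = None
    mttf = sum(uptimes) / failures if failures else float("inf")
    # MTBF: mean spacing between successive failures (operating + repair time).
    fail_times = [ts for ts, k in events if k == "fail"]
    mtbf = ((fail_times[-1] - fail_times[0]) / (failures - 1)
            if failures > 1 else float("inf"))
    return mttf, mtbf
```

In the predictive setting of [47] these measures are computed on forecast failure data rather than on the observed log alone.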
The integration of Cloud services in Cyber-Physical Systems was referred to in
early SOHOMA research; Babiceanu and Seker describe in [49] the framework for
the development of a manufacturing cyber-physical system that includes capabilities
for complex event processing and Big Data analytics, which is intended to move the
manufacturing domain closer to the cloud manufacturing system goal. The authors
assert that the cloud manufacturing system forms the cyber world and works as the
digital mirror image of the actual system that comprises the physical world. The cyber
manufacturing system operates in the cloud continuously monitoring the facility and
resource status from shop floor sensor data and edge computing algorithms. Thus, the
CMfg layer provides real-time operational and resource condition status on demand:
production monitoring; complex event processing; operational predictions and recom-
mendations to be followed in collaboration by the physical manufacturing components
to enhance production performance and thwart potential threats directed toward shop
floor resources.
Morariu et al. report in [50] and [51] a 3-year research effort on Machine Learning
for predictive scheduling and resource allocation, and predictive resource maintenance
in large-scale manufacturing systems. The main contribution of this research consists
in developing a hybrid, AI-based control solution that uses Big Data techniques and
ML algorithms to process in real time information streams in large scale manufacturing
systems, focusing on energy consumptions that are aggregated at various MES layers.
The manufacturing system considered uses the CMfg model to share multiple inter-
changeable resources and CC services and allow for dynamic reconfiguring of operation
scheduling and resource allocation based on real time aggregation and processing of
large amounts of shop floor data from which short-term predictions are derived.
The research offers an ML-based approach to reality awareness and efficiency in
cloud manufacturing and proposes two practical applications of machine learning algo-
rithms for global optimization of manufacturing at batch level, robust behaviour at
disturbances and safe utilisation of manufacturing resources:
• Batch planning (operations scheduling and resource allocation) based on the predic-
tion of energy consumptions for the optimization of global energy cost, and
• Safe resource usage (predictive maintenance and team reconfiguration) based on
resource health monitoring, detecting anomalies and predicting unexpected faults.

The ability to predict accurately in real time the instant power consumed by a resource
in any operation is the core functionality used as a building block for predictive planning
and maintenance, and for real-time fault detection. The prediction functionality is designed
at the time horizon of individual operations for each shop floor resource.
Processing shop floor data for these two applications involves three tasks: i) aggre-
gating at the right logical levels when data originates from multiple sources; ii) aligning
the data streams in normalized time intervals; iii) extracting insights from real time data
streams. The designed messaging system architecture allows handling large amounts
of real time data originating from various sources and having different formats. The
messaging platform was divided into two separate parameter domains (or topics): for shop
floor resource messages (including instant energy consumption data) and for intelligent
product messages (including energy consumption and execution time per operation).
The overall layout of the information flow for the proposed solution is shown in Fig. 8.
The messages are ordered in time, guaranteed by the messaging system for each topic.
The initial resource stream and intelligent product stream are considered as raw streams
of information, being then joined in application specific streams. Once the information
required for a given application is merged in a joined stream, the next operation is a
map-reduce type operation, also working at micro-batch level and processing message
sets logically to create a reduced information stream aligned in time.
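The join and map-reduce alignment step can be sketched as follows (illustrative only; the field names, the reductions chosen, and the interval length are assumptions):

```python
from collections import defaultdict

def align_and_reduce(resource_stream, product_stream, interval=1.0):
    """Micro-batch style join: bucket both raw topic streams into normalized
    time intervals, then reduce each bucket to one feature record."""
    buckets = defaultdict(lambda: {"energy": [], "op_times": []})
    for ts, watts in resource_stream:      # (timestamp, instant power)
        buckets[int(ts // interval)]["energy"].append(watts)
    for ts, op_time in product_stream:     # (timestamp, operation exec time)
        buckets[int(ts // interval)]["op_times"].append(op_time)
    reduced = []
    for k in sorted(buckets):
        b = buckets[k]
        reduced.append({
            "t": k * interval,
            "mean_power": (sum(b["energy"]) / len(b["energy"])
                           if b["energy"] else None),
            "ops": len(b["op_times"]),
        })
    return reduced
```

Each output record is one time-aligned feature vector; in the reported architecture this runs per micro-batch over the resource and intelligent-product topics.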
The aggregated data streams can be considered as an endless stream of feature vectors
that can be used to extract insights in real time. The research developed three types of
machine learning applications that can be used for intelligent manufacturing control:

1. Prediction: the insights obtained can be used directly in the business layers for
decision making (optimal planning) or in fault detection systems. Specifically, the
prediction problem is interpreted as the possibility to use a deep learning neural
network to learn patterns and variations in numerical measurements, i.e. energy con-
sumption, and then use the neural network to make forecasts for that measurement.
This is especially useful when the data has temporal patterns that can be learned.
2. Classification: the system tries to determine a class for every feature vector received.
3. Clustering: a set of algorithms try to find similarities in non-labelled, multidimen-
sional data and tag each feature vector accordingly. Examples of applications of the
latter are quality assurance, pattern recognition for part picking, etc.

The prediction problem is a time-based problem, where the numerical measurement
changes value at each time interval. This makes it a logical candidate for big data stream
processing, where the micro-batch interval is the main driver. A sub-class of Recurrent
Neural Network, called long short-term memory (LSTM), was used for prediction for
its good performance in detecting and modelling energy patterns occurring in a time
sequence.

Fig. 8. Information flow in real time Big Data-driven prediction scheme with machine learning

The LSTM model was used to learn the pattern of a single parameter - consumed
energy - and predict in time its values one or more steps ahead. Two machine
learning techniques were used in the pilot implementation:

• Resource-based predictors: bound to each shop floor resource and distinct for each
operation the resource performs. They are implemented using sklearn LSTM
and run centrally in clusters of cloud VMs;
• Product-based classifiers: bound to each (distinct) product type, the classifiers rate
the overall efficiency of each product completed using a multivariate feature vector
against a pre-trained model.
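The per-resource/operation fit-and-predict workflow can be illustrated with a simple stand-in (a linear autoregressive model instead of the LSTM used in the pilot; the window size and ridge term are assumptions):

```python
class EnergyPredictor:
    """Stand-in for the per-resource/operation predictor: a sliding-window
    autoregressive model fitted by regularized least squares. The pilot used
    an LSTM; this linear model only illustrates the fit/predict workflow."""
    def __init__(self, window=3, ridge=0.1):
        self.window, self.ridge, self.coeffs = window, ridge, None

    def fit(self, series):
        n = self.window
        # Build (window -> next value) training pairs from the time series.
        X = [series[i:i + n] for i in range(len(series) - n)]
        y = series[n:]
        # Normal equations with a small ridge term to keep the system well-posed.
        G = [[sum(x[i] * x[j] for x in X) + (self.ridge if i == j else 0.0)
              for j in range(n)] for i in range(n)]
        b = [sum(x[i] * t for x, t in zip(X, y)) for i in range(n)]
        self.coeffs = _solve(G, b)

    def predict_next(self, recent):
        """One-step-ahead forecast from the last `window` measurements."""
        return sum(c * v for c, v in zip(self.coeffs, recent[-self.window:]))

def _solve(A, b):
    """Gaussian elimination with partial pivoting for small dense systems."""
    n = len(A)
    M = [row[:] + [bv] for row, bv in zip(A, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            M[r] = [rv - f * cv for rv, cv in zip(M[r], M[c])]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][j] * x[j] for j in range(r + 1, n))) / M[r][r]
    return x
```

One such predictor instance would be trained per resource/operation pair, exactly as the LSTM predictors are bound in the pilot.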

The scheduling algorithm was implemented in two stages. In the first stage, scheduling
and allocation are computed based on historical data and initial estimations; in the
second stage, an LSTM is trained on each resource/operation pair during live execution of
scheduled operations. The LSTM models are then used as input for real-time rescheduling
and/or reallocation whenever the initial estimation starts to differ significantly from
the actual predictions learned on each resource. In this architecture, the ML prediction
module and the optimization module work in parallel, interconnected by a multi-dimensional
buffer which is updated by the prediction module each time the execution
of an operation is recorded, and read by the optimization module each time a new
schedule needs to be computed. The multi-dimensional buffer represents the forecast of
parameters over a set of time intervals, operations and resources (Fig. 9).
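The decoupling through the multi-dimensional buffer can be sketched as follows (the key layout and method names are illustrative assumptions):

```python
import threading

class ForecastBuffer:
    """Shared forecast store between the ML prediction module (writer) and
    the optimization module (reader). Keys are (resource, operation, step)
    tuples, mirroring the max(ni) x m x k buffer of the architecture."""
    def __init__(self):
        self._data, self._lock = {}, threading.Lock()

    def update(self, resource, operation, step, value):
        # Called by the prediction module after each recorded execution.
        with self._lock:
            self._data[(resource, operation, step)] = value

    def snapshot(self, horizon):
        # Called by the optimization module when a new schedule is computed;
        # only forecasts within the requested horizon are returned.
        with self._lock:
            return {k: v for k, v in self._data.items() if k[2] < horizon}
```

The lock makes the writer/reader decoupling safe when prediction and optimization run in parallel threads or processes.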
The prediction-based solution described provides better results than the traditional
production planning based either on static, historical data or on current measurements
from resources, because it offers a look-ahead global view at batch horizon, continuously
improved by predictions of enhanced accuracy.

[Fig. 9 shows the ML-based prediction module, fed with offline and online recorded operation data and coupled with an anomaly detection module and a training data set, writing forecasted parameters into a buffer of dimension max(ni) x m x k (forecasts per operation type, m operation types, k resources); an event E triggers the optimization module, which reads the buffer, computes the detailed operation schedule dispatched to the shop floor resources, and the recorded operation parameters update the prediction horizon (n).]
Fig. 9. Cloud architecture interconnecting the ML-based prediction and optimization modules

The solution uses a private cloud infrastructure that offers high processing power
(needed for big shop floor data streaming, machine learning and optimization in real
time), scalability and fault tolerance. The execution time for predictive production
rescheduling ranges from 0.38 s for a 100-product batch to 4.1 s for a 1000-product batch,
on a 2-core cloud machine running at 2.6 GHz with 4 GB of RAM.
Another important benefit derived from applying AI techniques to manufacturing
control is the system’s increased reality awareness, which is obtained by: a) mirroring
the shop floor reality through the prediction of behaviours and properties of highly
abstracted entities (operations, products, resources) based on realistic ML models, and
b) predicting unexpected events through classification, clustering and anomaly detection.

5.2 Strongly Coupled Controls Embedding Digital Twins in Cloud-Based Manufacturing CPS with Distributed Data Processing

The last three years of SOHOMA research acknowledged the digital twin (DT) concept
– i.e., the cyber representation of the physical twin – as a key enabler for the advances
promised by manufacturing CPS. In their paper [52], Redelinghuys et al. reiterate
Oracle’s point of view that a virtual (or digital) twin is a representation in the cloud of a
physical asset or a device, whose presence in the cloud persists even when its physical
counterpart is not always connected. It is important for backend software to be able to
interrogate the last known status or to control the operating parameters even when the
physical twin is not online/connected. The cloud also provides a convenient mechanism
for sharing a structured database with other devices of the shop floor in a global context.
Further, many apps are becoming available to extract value from data in the cloud [53].
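This persistence behaviour can be illustrated with a minimal device-shadow-style sketch (the class, field names and schema are assumptions, not a specific vendor API): the cloud record keeps the last reported state and buffers desired parameter changes while the physical twin is offline.

```python
class CloudTwin:
    """Minimal device-shadow-style digital twin: the cloud record persists
    the last reported state and queues desired parameter changes while the
    physical counterpart is disconnected (illustrative model only)."""
    def __init__(self, asset_id):
        self.asset_id = asset_id
        self.reported, self.desired = {}, {}
        self.last_seen = None

    def report(self, state, ts):
        """Device reconnects and reports; pending desired changes are returned."""
        self.reported.update(state)
        self.last_seen = ts
        pending, self.desired = self.desired, {}
        return pending

    def set_desired(self, **params):
        """Backend call: works even while the device is offline."""
        self.desired.update(params)

    def last_known(self):
        """Backend interrogation of the last known status."""
        return dict(self.reported), self.last_seen
```

The backend can thus query status or set operating parameters at any time; the device simply synchronizes on its next connection.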
In the manufacturing context, a digital twin is a set of computer models that provide
the means to design, validate and optimize a part, a product, a shop floor resource, the
production facility (the shop floor) or the batch production system in the cyber space.
The authors of [52] present a 6-layer architecture for a manufacturing cell; the DT
allows exchanging data and information between a remote simulation and the cell, for:
validation and optimization of the system’s operation; batch control; remote monitoring
of the cell; simulating future behaviour and predictive analytics (Fig. 10).

Fig. 10. 6-layer architecture of the cloud digital twin of a manufacturing cell (Redelinghuys, [52])

The same authors extend the Six-Layer Architecture for Digital Twins (SLADT)
described in [52] to a reference architecture with aggregation (SLADTA) [54]; this
architecture makes maximum use of vendor-neutral off-the-shelf software, as well as
secure and open protocols for twin-to-twin communication (Fig. 11). The idea of a
digital twin of twins through the aggregation of DTs is therefore considered.
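The aggregation idea can be illustrated with a minimal sketch (class names and the derived health indicator are assumptions, not the SLADTA specification): an aggregate twin collects the statuses of its child twins and exposes a combined, cell-level view.

```python
class DigitalTwin:
    """Leaf twin exposing its latest status (toy model of a single SLADT twin)."""
    def __init__(self, name):
        self.name, self.status = name, {}

    def update(self, **status):
        self.status.update(status)

class AggregateTwin(DigitalTwin):
    """A 'twin of twins': aggregates the statuses of its children, in the
    spirit of SLADTA twin-to-twin aggregation (structure illustrative)."""
    def __init__(self, name, children):
        super().__init__(name)
        self.children = children

    def refresh(self):
        # Collect each child's status under its name...
        self.status = {c.name: dict(c.status) for c in self.children}
        # ...and derive a cell-level indicator: all children must report 'ok'.
        self.status["healthy"] = all(c.status.get("state") == "ok"
                                     for c in self.children)
        return self.status
```

In SLADTA the twin-to-twin exchange goes through secure, vendor-neutral protocols; here the direct object references only stand in for that communication.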
The essential role of Digital Twins in the intelligent manufacturing control of Indus-
try of the Future (IoF) was emphasized at the SOHOMA’18 edition in Bergamo, Italy
by Paul Valckenaers in his paper [55] and presentation about the evolution from the
reference architecture PROSA for Holonic Manufacturing Systems (HMS) to the new
ARTI reference architecture extended to industry and service applications beyond man-
ufacturing. The PROSA+dMAS design is now divided into a reality-reflecting part and
a decision-making part. The importance of the former is maximised, and so the concept
Cloud Networked Models of Knowledge-Based Intelligent Control 29

Fig. 11. Connection architecture of SLADTA aggregating digital twins of SLADT type [54]

of an intelligent being was introduced. In the light of the theory of flexibility, intelligent
beings are protected by their corresponding reality; this property makes intelligent
beings suitable candidates for building an IC2T infrastructure that is knowledgeable about
an application domain, without adding restrictions to this real world.
The digital twins constitute the blue cubes of the smart ARTI control model. They are
connected to their physical counterparts (resources, processes, products) in a direct, one-
to-one relation. The digital twins - the intelligent beings - have to become the partners of
their corresponding reality. The move to ARTI assures in-depth interoperability, access
to the world-of-interest through digital twins that safeguard real-world interoperability
when connecting to their control systems. ARTI concepts could represent a basis for the
Enterprise Integration and Interoperability strategy [56].
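The ARTI separation between the reality-reflecting part and the decision-making part can be sketched as two decoupled classes (a hypothetical illustration of the principle, not the ARTI reference implementation; all names and the example policy are assumptions):

```python
class IntelligentBeing:
    """Reality-reflecting part: mirrors its physical counterpart and only
    answers questions about it; it imposes no control policy."""
    def __init__(self, asset_id):
        self.asset_id = asset_id
        self.state = {}

    def mirror(self, observation):
        # Keeps the digital twin synchronized with its physical counterpart.
        self.state.update(observation)

    def answer(self, query):
        return self.state.get(query)


class IntelligentAgent:
    """Decision-making part: kept separate so that policies can change
    without touching the model of reality."""
    def __init__(self, being):
        self.being = being

    def decide(self):
        # Example policy (an assumption), queried from the being.
        return "maintain" if self.being.answer("vibration") > 0.8 else "run"


being = IntelligentBeing("mill-3")
being.mirror({"vibration": 0.9})
action = IntelligentAgent(being).decide()
```

Keeping the being free of decision logic is what lets the same reality-reflecting infrastructure serve applications beyond manufacturing.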
Inspired by the new ARTI vision, several research projects have addressed Cloud-
based DT models for particular manufacturing applications.
In this context, Cardin et al. introduce in [57] a generic framework of an energy-
aware digital twin of an industrial asset. This framework enables the coupling between
multi-physical and behavioural models for both real-time virtualization of the asset and
look-ahead behaviour forecasts for decision making. The energy-awareness relies on
the evaluation of the energetic performance of the resource, which is related both to
the physical process running and to the parameters describing the environment of the
resource. The behavioural model integrates the actual data sensed from the physical twin
and the data of the control system in order to synchronize the activity of the DT and
the physical resource. Several multi-physical models can be used all along the operation
of the asset, because the behaviour of the physics can be completely different from
one product handled by the asset to another (e.g., due to the environment variability,
influencing the asset’s parameters). The models must be switched dynamically in the
DT. These ideas are exemplified in a case study on injection moulding machines.
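The dynamic model switching can be sketched as follows (a minimal illustration of the idea in [57]; the per-product-family energy models and their coefficients are invented for the example):

```python
class EnergyAwareTwin:
    """Selects among several physical/behavioural models at run time,
    because the physics can differ per product handled by the asset."""
    def __init__(self, models):
        self.models = models   # product family -> energy model (callable)
        self.active = None

    def switch(self, product_family):
        # Models must be switched dynamically when the handled product changes.
        self.active = self.models[product_family]

    def forecast_energy(self, cycle_time_s):
        # Look-ahead estimate used for energy-aware decision making.
        return self.active(cycle_time_s)


# Hypothetical per-family models: kWh as a function of cycle time [s].
twin = EnergyAwareTwin({
    "thin_wall": lambda t: 0.021 * t + 0.4,
    "thick_wall": lambda t: 0.035 * t + 0.9,
})
twin.switch("thick_wall")
kwh = twin.forecast_energy(30)
```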
Anton et al. [58] developed a framework of a reality-aware digital twin for industrial
robots integrated in intelligent manufacturing. The DT framework uses cloud services
performed on two layers: i) a layer distributed among fog computing elements, and
ii) a centralized cloud IaaS platform. It enables robot virtualization, health monitoring
and anomaly detection, and the coupling between behavioural robot models and multi-
physical processes for real time predictive robot maintenance and optimal allocation
in production tasks. The main functionalities of this digital twin are: monitoring the
current status and quality of services performed by robots working in the shop floor, early
detecting anomalies and unexpected events to prevent robot breakdowns and production
stops, and forecasting robot performances and energy consumption. Machine learning
techniques are applied in the cloud layer of the virtual twin for predictive, customized
maintenance and optimized robot allocation in production tasks. Figure 12 shows the
proposed data stream processing and analysis layer of the robot's digital twin.
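A minimal sketch of such stream-based anomaly detection over robot cycle times (rolling statistics over a recent window; the window size and threshold are illustrative assumptions, not the authors' implementation in [58]):

```python
from collections import deque
import statistics

class AnomalyDetector:
    """Flags cycle times that deviate strongly from recent history --
    an early warning before a breakdown stops production."""
    def __init__(self, window=20, threshold=3.0):
        self.history = deque(maxlen=window)
        self.threshold = threshold

    def observe(self, cycle_time):
        anomalous = False
        if len(self.history) >= 5:  # need some history before judging
            mean = statistics.mean(self.history)
            std = statistics.pstdev(self.history) or 1e-9
            anomalous = abs(cycle_time - mean) / std > self.threshold
        self.history.append(cycle_time)
        return anomalous


det = AnomalyDetector()
normal = [det.observe(t) for t in [5.0, 5.1, 4.9, 5.0, 5.2, 5.0, 4.8]]
spike = det.observe(9.5)   # far outside recent behaviour
```

In the cloud layer of the twin this simple statistic would be replaced or complemented by the machine learning models mentioned above.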
The architecture design and implementing solution for the digital twin of a shop floor
transportation system embedded in the global manufacturing scheduling and control
system are presented by Răileanu et al. in [59].

[Figure: smart sensors (energy, voltage, current, temperature, collision, weight, vibrations), IoT gateways with video camera and the resource controller feed data-collection channels into an aggregation node kernel (PC/SBC/NUC), which publishes over MQTT to the cloud digital twin for knowledge extraction, data analytics, machine learning and intelligent decision making]

Fig. 12. Interconnections of the robot DT aggregation node for continuous and operation-based
monitoring of QoS and anomaly detection

The main functionalities of the DT are: mirroring the current stage of the physical
pallet transportation process and the state of the physical conveyor components, pre-
dicting the values of the pallet’s transportation times (tt) along the conveyor’s segments
between any two workstations, applying these values for optimized product scheduling
and resource allocation, and detecting anomalies in the conveyor equipment behaviour
(Fig. 13). Two processes are depicted: the actualization of the transportation times and
the monitoring of the current pallet, each of them being computed for each conveyor
section.
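The per-section prediction of transportation times can be sketched with exponential smoothing (one plausible forecasting scheme; the smoothing factor and the exact estimator used in [59] are not specified in the text, so the choice below is an assumption):

```python
class SectionForecaster:
    """Keeps a per-section forecast of pallet transportation time (tt),
    updated at every traversal and used for product scheduling."""
    def __init__(self, alpha=0.3):
        self.alpha = alpha
        self.tt = {}  # conveyor section id -> forecasted tt [s]

    def update(self, section, measured_tt):
        # Exponentially weighted update of the section forecast.
        old = self.tt.get(section, measured_tt)
        self.tt[section] = (1 - self.alpha) * old + self.alpha * measured_tt

    def route_estimate(self, sections):
        # Estimated travel time between two workstations.
        return sum(self.tt[s] for s in sections)


f = SectionForecaster()
for tt in [10.0, 10.4, 9.8]:
    f.update("S1", tt)
f.update("S2", 6.0)
est = f.route_estimate(["S1", "S2"])
```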

Fig. 13. Real-time conveyor monitoring, prediction of pallet travel time (tt) and anomaly detection
at conveyor section level

AI techniques (prediction, classification, clustering, detecting major deviations relative
to the standard one - STDEV) are applied in the cloud layer of the virtual twin to
optimally schedule products and detect conveyor anomalies early, in the context of
predictive maintenance. The informational model of the DT is implemented in software as
a set of tables in a Cloud database that stores the history of transportation times, the
current location of pallets on the transportation system, and data related to the
communication between the physical world and the virtual twin (Fig. 14).

Fig. 14. Infrastructure for production data transfer from the conveyor monitored with DT

The data is collected by means of OPC shared variables by an application running
on a cloud system, and updates in the production database are made each time an event
takes place. The monitored events are ‘entrance of a new pallet on a conveyor section’
and ‘transportation time on a conveyor section exceeds a forecasted value’.
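The event-driven update can be sketched as a small dispatcher (the event names mirror the two monitored events above; the dictionary "database" and the callback wiring are illustrative assumptions standing in for the OPC/cloud infrastructure):

```python
def make_monitor(db):
    """Returns an event callback that updates the production database,
    and a log of threshold-violation alarms."""
    alarms = []

    def on_event(name, payload):
        if name == "pallet_entered_section":
            # Record where and when the pallet entered the section.
            db[(payload["pallet"], payload["section"])] = payload["time"]
        elif name == "tt_exceeded_forecast":
            # Candidate input for predictive-maintenance decisions.
            alarms.append(payload)

    return on_event, alarms


db = {}
on_event, alarms = make_monitor(db)
on_event("pallet_entered_section",
         {"pallet": "P7", "section": "S1", "time": 120.0})
on_event("tt_exceeded_forecast",
         {"section": "S1", "tt": 14.2, "forecast": 10.1})
```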
In the perspective of future AI-based manufacturing control, Borangiu et al. describe
in [60] their research for smart manufacturing control with cloud-embedded digital twins.
AI is implemented in production planning and control by four technologies:

1. Big data: smart sensors pervasively instrument resources, products, processes and
orders as ‘plug-and-produce’ modules;
2. Platform: hardware aggregation nodes and middleware aligning data streams in
normalized time intervals and transferring map/reduce data in the cloud;
3. Analytics: the application of statistics and machine learning to detect deviations in
process and resource parameters, state and behaviour patterns, and to predict QoS;
4. Operations: batch production control, global cost optimization, energy saving,
predictive resource maintenance using analytics tools; product-service extension;
product redesign from lifecycle feedback [61].
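The stream alignment mentioned in technology 2 can be sketched as bucketing heterogeneous sensor samples into normalized time intervals before transferring the reduced data to the cloud (a minimal illustration; interval length, averaging as the reduction, and the data layout are assumptions):

```python
from collections import defaultdict

def align(streams, interval):
    """Buckets (timestamp, value) samples from several sensors into
    normalized time intervals and averages within each bucket, so that
    heterogeneous streams can be compared and reduced in the cloud."""
    buckets = defaultdict(lambda: defaultdict(list))
    for sensor, samples in streams.items():
        for t, v in samples:
            buckets[int(t // interval)][sensor].append(v)
    # Reduce each bucket to one value per sensor (here: the mean).
    return {b: {s: sum(vs) / len(vs) for s, vs in by_sensor.items()}
            for b, by_sensor in sorted(buckets.items())}


aligned = align({
    "temp":    [(0.2, 40.0), (0.8, 42.0), (1.1, 43.0)],
    "current": [(0.5, 3.1), (1.4, 3.5)],
}, interval=1.0)
```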

Figure 15 shows the architecture of the proposed 6-layer digital twin embedded in
the smart control tasks - batch optimization and resource health maintenance. The tasks
are performed using HPC cloud services.
Raising reality awareness for shop floor resources requires collecting online data
from their physical parts and the served process (layer I), aggregating and analysing the data
streams (layer II), and classifying states and predicting QoS and usage costs per product
operation (layer III). Forecasting unexpected events and operating anomalies, and deciding
on customized predictive maintenance, are performed in layer IV. These layers are replicated
for all active resources; the forecasted KPIs are transferred to layer V for predictions on
the product topic, used for real-time batch optimization computed in layer VI.

Fig. 15. 6-layer digital twin embedded in the decision-making process of context-aware,
predictive resource allocation in batch orders and maintenance

Recent SOHOMA’19 papers [41, 62] address the new Fog, Mist and Edge technologies
and the related distributed hardware/software devices acting as smart gateways
that interface the IIoT network with the Cloud in large-scale industrial applications. A
Mist-Edge Gateway architecture is proposed in [41]. It brings the computing capabilities
of a system with distributed intelligence (Industrial IoT, industrial CPS) as close
as possible to the IoT network, and is able to perform data streaming, time alignment and
aggregation, on which a small amount of analytics is computed and ad-hoc urgent decisions
may be taken. The rest of the big data is transferred to the Cloud, where it is stored for
future long-term thorough analyses and AI-based predictive decisions.
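The split between local urgent decisions and cloud-side long-term analysis can be sketched as a single gateway step (names, the vibration metric and the threshold policy are illustrative assumptions, not the architecture of [41]):

```python
def gateway_step(sample, local_threshold, cloud_buffer):
    """Mist/edge gateway logic: compute a small amount of analytics
    locally and take the urgent decision at the edge; forward the raw
    sample to the cloud for long-term analysis."""
    cloud_buffer.append(sample)                # all data goes to cloud storage
    if sample["vibration"] > local_threshold:  # ad-hoc urgent decision
        return "stop_machine"
    return "ok"


cloud = []
decisions = [gateway_step(s, 0.8, cloud)
             for s in [{"vibration": 0.2}, {"vibration": 0.95}]]
```

The point of the pattern is latency: the stop decision never waits for a cloud round trip, while the cloud still receives the full data stream.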

6 Conclusions. Manufacturing as a Service: New Added Value in the Cloud Networked Manufacturing

In the perspective of Industry 4.0, the cloud-based service delivery model for the manu-
facturing industry includes product design, batch planning, product scheduling, real-time
manufacturing control, testing, management and all other stages of a product life cycle.
Nowadays, manufacturing processes go beyond the production phases and factory limits.
A real impact can be achieved only via the integration of process and product lifecycles.
In the EU vision of the ‘Industry of the Future’, Cyber-Physical Systems are a breakthrough
research area for IC2T in manufacturing and represent the new innovation frontier
for accomplishing the EU2020 “smart everywhere” vision. Cloud and Cloud analytics
are defined as the highest-level layers of CPS in manufacturing, being referred to in 9 of
the 14 research priorities for cyber-physical manufacturing [13].
SOHOMA research aligns to the CPS orientation in manufacturing by addressing the
following key challenges: (i) increasing autonomy and intelligence of existing machin-
ery and robots providing them with sensing and reasoning capabilities to recognize their
environment, identify components of material flows, detect unforeseen events and gain
flexibility in their assigned tasks; (ii) adaptation through context awareness and reason-
ing, aiming at making machines and robots aware of their workplace environment so
that they can perceive and obtain information on the unexpected and not programmed
conditions and events, and adapt their behaviour in order to better handle them while
taking into account safety aspects.
The past and present of SOHOMA research include examples of developing
multi-layered, decentralized manufacturing control architectures enabling shop floor
assets (intelligent products, orders and smart resources) to take autonomous decisions. Various
applications involving big data analysis and cloud computing have been studied and
proposed, including real-time monitoring, decentralized intelligence and smart object
networking with the interaction of the real and virtual worlds, representing a crucial new
aspect of manufacturing production processes.
Time to market is a key success factor for competing at the global level, while multiple
stakeholder participation in the design, engineering and distribution process is a reality
to cope with, associated with the need to adopt new business models for selling and
utilizing products (e.g. servitisation). The availability of flexible production technologies
(e.g. additive manufacturing, robotics, 2D and 3D artificial vision, nanotechnologies) provides new
opportunities for engineering and manufacturing products and designing innovative processes.
IoT technologies providing full knowledge of the status and behaviour of assets and products,
as well as strong asset and activity coupling in manufacturing CPS, open a new
possibility to monitor and control the reality inside and outside the plant environment.
The adoption of IoT and CPS as enablers of product servitisation allows tracking
products and services along the whole lifecycle and consequently enhances customers’
experiences and satisfaction.
This landscape has more recently required a new approach to defining the very concept
of manufacturing; we consider that this approach is represented by ‘Manufacturing as
a Service’ (MaaS). MaaS stands for new models of service-oriented, knowledge-based
smart manufacturing systems, optimized and reality-aware, with high efficiency and low
energy consumption, that deliver value to customers and manufacturers via Big data analytics,
Internet of Things communications, Machine learning and Digital Twins embedded
in Cyber-Physical System frameworks. From product design to after-sales services,
MaaS relies on the servitisation of manufacturing operations that can be integrated into
different manufacturing cloud services such as Design as a Service (DaaS), Predict as a
Service (PraaS) or Maintain as a service (MNaaS). A schematic representation of these
services is given in Fig. 16.
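The service-oriented composition behind MaaS can be sketched as a registry where manufacturing operations are published "as a service" and chained per customer order (a conceptual illustration only; the service names follow the text, but the registry API and the composition-by-pipeline mechanism are assumptions):

```python
class ManufacturingCloud:
    """Registry where manufacturing operations are published 'as a
    service' (DaaS, PraaS, MNaaS, ...) and composed per customer order."""
    def __init__(self):
        self.services = {}

    def publish(self, name, fn):
        self.services[name] = fn

    def compose(self, order, pipeline):
        # Pass the order through the requested services in sequence.
        result = order
        for name in pipeline:
            result = self.services[name](result)
        return result


cloud = ManufacturingCloud()
cloud.publish("DaaS", lambda o: {**o, "design": "v1"})
cloud.publish("PraaS", lambda o: {**o, "predicted_cost": 120})
out = cloud.compose({"product": "gear"}, ["DaaS", "PraaS"])
```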
MaaS relies on a layered cloud networked manufacturing perspective, from the fac-
tory low level of CMfg shop floor resource sharing model to the virtual enterprise high
level, by distributing the cost of the manufacturing infrastructure - equipment, software,
maintenance, networking, a.o. - across all customers. Manufacturing as a Service relies
on real-time insight into the status of manufacturing equipment; new sensor technology
will boost the amount of data about their statuses provided to the manufacturing cloud.
This may include data on lifetime manufacturing history, error rates, service histories,
upcoming reservations, manufacturing environmental conditions and more.

Fig. 16. Service initiation and MaaS within the services complex

Manufacturing as a Service was first introduced to the SOHOMA community during the
7th workshop in Nantes, France. Addressing the limitations of cloud computing as an enabler
for cloud manufacturing, Coullon and Noyé define the concept of Cloud Anything [8].
Cloud Anything abstracts the resource specificities from the common control building
blocks responsible for low-level resource management and defines them as the lowest
resource-management-centric layer for cloud manufacturing. Adding functionalities to
this layer of non-IT resources gives rise to a manufacturing infrastructure similar to the
known cloud-based IaaS, which the authors identify as Manufacturing as a Service.
SOHOMA’19 came with one of the first MaaS models in the literature. Babiceanu
and Seker proposed in [14] a combined Product Design and Manufacturing as a Service
model (PDMaaS) in which customers, new or existing, with a product need in mind
are offered the opportunity either to select an existing product type from a database or
to be assisted by a deep learning engine in designing their own product. This represents the
PDaaS part of the combined model. Geographically distributed manufacturers, having
similar and/or different production capabilities, are linked to each other and to logistics
providers through a Software-Defined Network service infrastructure. Once an order is
placed, the MaaS part of the model is employed and the selected part type is produced
and delivered to the customer.
The recent literature provides insights for the SOHOMA community on how to
further develop the MaaS concept. Kusiak proposed the concept of Service Manufacturing
[63], which includes design, manufacturing, supply, distribution, maintenance and
optimization activities, all of them being offered in the “as a service” option. Wang et al.
provided a MaaS framework in the context of manufacturing service allocation in the
cloud, which features users with preferences generating the task demand and manufac-
turing providers supplying the tasks with their services, all within a Cloud Manufac-
turing platform [64]. Lastly, Liu et al. proposed a PLM framework [65] which, besides
manufacturing services, includes product design, logistics, maintenance, and end-of-life
recycling services, all integrated into a blockchain-based model.
This year, the SOHOMA’20 event returns to Paris. The workshop theme is “Man-
ufacturing as a Service - Virtualizing and encapsulating manufacturing resources and
controls into Cloud Networked Services”. It is expected that the participants will bring
a convergence of innovations in Cloud-based factory and product lifecycle management
with cyber-physical organisation and applied AI. Together, these advances will offer
new sustainable business models in the manufacturing value chain.

References
1. Wu, D., Rosen, D.W., Wang, L., Schaefer, D.: Cloud-based design and manufacturing: a new
paradigm in digital manufacturing and design innovation. Comput. Aided Des. 59, 1–4 (2014)
2. Kubler, S., Holmström, J., Främling, K., Turkama, P.: Technological theory of cloud manu-
facturing, service orientation in Holonic and multi-agent manufacturing. In: Proceedings of
the SOHOMA 2015, Studies in Computational Intelligence, vol. 640, pp. 267–276. Springer
(2016)
3. Thomas, A., Trentesaux, D., Valckenaers, P.: Intelligent distributed production control. J.
Intell. Manuf. 23(6), 2507–2512 (2011)
4. Morariu, C., Morariu, O., Borangiu. T.: Volunteer-based search engine for holonic manufac-
turing systems, service orientation in Holonic and multi-agent manufacturing. In: Proceedings
of the SOHOMA 2011, Studies in Computational Intelligence, vol. 402, pp. 293–306. Springer
(2012)
5. Borangiu, T., Trentesaux, D., Thomas, A., Leitão, P., Barata, J.: Digital transformation of
manufacturing through cloud services and resource virtualization. Comput. Ind. 108(2019),
150–162 (2019)
6. Anton, F.D., Borangiu, T., Anton, S., Răileanu, S.: Deploying on demand cloud services
to support processes in robotic applications and manufacturing control systems. In: Pro-
ceedings of the 23rd International Conference on System Theory, Control and Computing
(ICSTCC 2019), 9–11 October 2019 (2019). https://doi.org/10.1109/ICSTCC.2019.8885712,
IEEE Xplore Digital Library
7. Babiceanu, R.F., Seker, R.: Software-defined networking-based models for secure interoper-
ability of manufacturing operations, service orientation in holonic and multi-agent manufac-
turing. In: Proceedings of the SOHOMA 2017, Studies in Computational Intelligence, vol.
762, pp. 243–251. Springer (2018)
8. Coullon, H., Noyé, J.: Reconsidering the relationship between cloud computing and cloud
manufacturing, service orientation in holonic and multi-agent manufacturing. In: Proceedings
of the SOHOMA 2017, Studies in Computational Intelligence, vol. 762, pp. 217–228. Springer
(2018)
9. Răileanu, S., Anton, F.D., Borangiu, T., Anton, S., Nicolae, M.: A cloud-based manufac-
turing control system with data integration from multiple autonomous agents. Comput. Ind.
102(2018), 50–61 (2018). https://doi.org/10.1016/j.compind.2018.08.004
10. Răileanu, S., Anton, F.D., Borangiu, T.: High availability cloud manufacturing system inte-
grating distributed MES agents, service orientation in holonic and multi-agent manufactur-
ing. In: Proceedings of the SOHOMA 2016, Studies in Computational Intelligence, vol. 694,
pp. 11–23. Springer (2017)
11. Morariu, C., Morariu, O., Răileanu, S., Borangiu, T.: Machine learning for predictive schedul-
ing and resource allocation in large scale manufacturing systems. Comput. Ind. 120, 103244
(2020). https://doi.org/10.1016/j.compind.2020.103244
12. International Electrotechnical Commission: Factory of the future, White paper, ISBN 978-2-8322-2811-1, Geneva (2018). https://www.iec.ch/whitepaper/pdf/iecWP-futurefactory-LR-en.pdf
13. sCorPiuS: Future trends and Research Priorities for CPS in Manufacturing, White Paper,
EuroCPS Project (2017). https://www.eurocps.org/wp-content/uploads/2017/01/sCorPiuS_
Final-roadmap_whitepaper_v1.0.pdf
14. Babiceanu, R.F., Seker, R.: Cloud-enabled product design selection and manufacturing as a
service, service oriented, holonic and multi agent manufacturing systems for the industry of
the future. In: Proceedings of the SOHOMA 2019, Studies in Computational Intelligence,
vol. 853, pp. 210–219. Springer (2020)
15. Morariu, C., Morariu, O., Borangiu, T.: Customer order management in service oriented
holonic manufacturing. J. Comput. Ind. 64(8), 1061–1072 (2013)
16. Gerber, T., Bosch, H.-C., Johnsson, C.: Vertical integration of decision-relevant production
information into IT systems of manufacturing companies, service orientation in holonic and
multi-agent manufacturing. In: Proceedings of the SOHOMA 2012, Studies in Computational
Intelligence, vol. 472, pp. 263–278. Springer (2013)
17. Morariu, O, Morariu, C., Borangiu, T.: Resource, service and product: real-time monitoring
solution for service oriented holonic manufacturing systems, service orientation in holonic and
multi-agent manufacturing. In: Proceedings of the SOHOMA 2013, Studies in Computational
Intelligence, vol. 544, pp. 47–62 (2014)
18. Gartner: Keep the Five Functional Dimensions of APM Distinct, Gartner Research ID
Number: G00206101), September 16, 2010 (2010)
19. Morariu, C.: © 2009 IBM Manufacturing Integration Framework, Presentation UT Brasov
(2012). https://slideplayer.com/slide/9975956/
20. Moore, W., Collier, J., Mount, J., Spiteri, C., Whyatt, D.: Using BPEL Processes in Web-
Sphere Business Integration Server Foundation Business Process Integration and Supply
Chain Solutions. IBM Redbooks, IBM Press (2004)
21. Morariu, C., Morariu, O., Borangiu, T., Răileanu, S.: Manufacturing service bus integration
model for highly flexible and scalable manufacturing systems, service orientation in holonic
and multi-agent manufacturing. In: Proceedings of the SOHOMA 2012, vol. 472, pp. 19–40.
Springer (2013)
22. Babiceanu, R.F.: Complex manufacturing and service enterprise systems: modeling and com-
putational framework. In: Proceedings of the SOHOMA 2012, Service Orientation in Holonic
and Multi-Agent Manufacturing, vol. 472, pp. 197–212. Springer (2013)
23. Kubler, S., Madhikermi, M., Buda, A., Främling, K.: QLM messaging standards: introduc-
tion and comparison with existing messaging protocols, service orientation in holonic and
multi-agent manufacturing and robotics. In: Proceedings of the SOHOMA 2013, Studies in
Computational Intelligence, vol. 544, pp. 237–256. Springer (2014)
24. Giret, A., Botti, V.: ANEMONA-S + Thomas: a framework for developing service-oriented
intelligent manufacturing systems, service orientation in Holonic and multi-agent manufac-
turing, In: Proceedings of the SOHOMA 2014, Studies in Computational Intelligence vol.
594, pp. 61–69. Springer (2015)
25. Babiceanu, R.F., Seker, R.: Cyber-physical resource scheduling in the context of industrial
internet of things operations, service orientation in holonic and multi-agent manufacturing.
In: Proc. SOHOMA 2018, Studies in Computational Intelligence, vol. 803, pp. 399–411.
Springer (2019)
26. Pipan, M., Protner, J., Herakovič, N.: Integration of distributed manufacturing nodes in smart
factory, service orientation in holonic and multi-agent manufacturing. In: Proceedings of the
SOHOMA 2018, Studies in Computational Intelligence, vol. 803, pp. 424–435. Springer
(2019)
27. Morariu, O., Morariu, C., Borangiu, Th.: vMES: virtualization aware manufacturing execution
system. Comput. Ind. (67), 27–37 (2015)
28. Morariu, O., Borangiu, Th., Răileanu, S.: Redundancy mechanisms for virtualized MES work-
loads in private cloud, service orientation in holonic and multi-agent manufacturing. In: Pro-
ceedings of SOHOMA 2014, Studies in Computational Intelligence, vol. 594, pp. 147–156.
Springer (2015)
29. Morariu, O., Morariu, C., Borangiu, T.: Shop-floor resource virtualization layer with private
cloud support. J. Intell. Manuf. 16 (2014). https://doi.org/10.1007/s10845-014-0878-7
30. Morariu, O., Morariu, C., Borangiu, Th.: Adopting virtualization technologies in robotized
manufacturing. In: Proceedings of the 22nd International Workshop on Robotics in Alpe-
Adria-Danube Region RAAD 2013, September 11–13, vol. 22, No. 1, Portorož, Slovenia
(2013)
31. Morariu, O., Morariu, C., Borangiu, T.: Policy-based security for distributed manufacturing
execution systems. Int. J. Comput. Integr. Manuf. 31(3), 306–317 (2018)
32. Morariu, O., Morariu, C., Borangiu, Th.: Security issues in service oriented manufacturing
architectures with distributed intelligence, service orientation in holonic and multi-agent man-
ufacturing. In: Proceedings of SOHOMA 2015, Studies in Computational Intelligence, vol.
640, pp. 243–263. Springer (2016)
33. Răileanu, S., Anton, F.D., Borangiu, T., Anton, S.: Design of high availability manufacturing
resource agents using JADE framework and cloud replication, service orientation in holonic
and multi-agent manufacturing. In: Proceedings of SOHOMA 2017, Studies in Computational
Intelligence, vol. 762, pp. 201–215. Springer (2018)
34. Răileanu, S., Anton, F., Borangiu, Th.: High availability cloud manufacturing system inte-
grating distributed mes agents, service orientation in holonic and multi-agent manufacturing.
Proc. of SOHOMA 2016, Studies in Computational Intelligence, vol. 694, pp. 11–23. Springer
(2017)
35. Răileanu, S., Anton, F., Borangiu, T., Morariu, O., Iacob, I.: An experimental study on the
integration of embedded devices into private manufacturing cloud infrastructures, service
orientation in holonic and multi-agent manufacturing. In: Proceedings of SOHOMA 2018,
Studies in Computational Intelligence, vol. 803, pp. 171–182. Springer (2018)
36. Anton, F., Borangiu, T., Anton, S., Răileanu, S.: Deploying on demand cloud services to
support processes in robotic applications and manufacturing control systems. In: Proceedings
of the 23rd International Conference on System Theory, Control and Computing (ICSTCC
2019), 9–11 October 2019, pp. 537–542 (2019). https://doi.org/10.1109/ICSTCC.2019.8885712, IEEE Xplore Digital Library
37. Babiceanu, R.F., Seker, R.: Secure and resilient manufacturing operations inspired by
software-defined networking, service orientation in holonic and multi agent manufactur-
ing. In: Proceedings of SOHOMA 2015, Studies in Computational Intelligence, vol. 640,
pp. 285–294. Springer (2016)
38. Babiceanu, R.F., Seker, R.: Cybersecurity and resilience modelling for software-defined
networks-based manufacturing applications, service orientation in holonic and multi agent
manufacturing. In: Proceedings of SOHOMA 2016, Studies in Computational Intelligence,
vol. 694, pp. 167–176. Springer (2017)
39. Babiceanu, R.F., Seker, R.: Software-defined networking-based models for secure interoper-
ability of manufacturing operations, service orientation in holonic and multi- agent manufac-
turing. In: Proceedings of SOHOMA 2017, Studies in Computational Intelligence, vol. 762,
pp. 243–252. Springer (2018)
40. Babiceanu, R.F., Seker, R.: Cyber-physical resource scheduling in the context of industrial
internet of things operations, service orientation in holonic and multi agent manufacturing. In:
Proceedings of SOHOMA 2018, Studies in Computational Intelligence, vol. 803, pp. 399–411.
Springer (2019)
41. Crăciunescu, M., Chenaru, I., Dobrescu, R., Florea, G., Mocanu, S.: IIoT Gateway for Edge
Computing Applications, Service Oriented, Holonic and Multi Agent Manufacturing Systems
for the Industry of the Future, Studies in Computational Intelligence, vol. 853, pp. 53–66.
Springer (2020)
42. Răileanu, S., Borangiu, T., Rădulescu, S.: Towards an ontology for distributed manufacturing
control, service orientation in holonic and multiagent manufacturing and robotics. In: Pro-
ceedings of the SOHOMA 2013, Studies in Computational Intelligent, vol. 544, pp. 97–109.
Springer (2014)
43. Talhi, A., Huet, J.-C., Fortineau, V., Lamouri, S.: Toward an ontology-based architecture
for cloud manufacturing, service orientation in holonic and multi agent manufacturing. In:
Proceedings of SOHOMA 2014, Studies in Computational Intelligence, vol. 594, pp. 187–195.
Springer (2015)
44. Răileanu, S., Anton, F., Borangiu, T., Anton, S., Nicolae, M.: A cloud-based manufacturing
control system with data integration from multiple autonomous agents. Comput. Ind. 102,
50–61 (2018)
45. Trentesaux, D., Borangiu, T., Thomas, A.: Emerging ICT concepts for smart, safe and
sustainable industrial systems. J. Comput. Industry 81(2016), 1–10 (2016)
46. Morariu, O., Morariu, C., Borangiu, T., Răileanu, S.: Manufacturing systems at scale with big
data streaming and online machine learning, service orientation in holonic and multi-agent
manufacturing. In: Proceedings of SOHOMA 2017, Studies in Computational Intelligent, vol.
762, pp. 253–264. Springer (2018)
47. Babiceanu, R.F., Seker, R.: Manufacturing operations, internet of things, and big data: towards
predictive manufacturing systems. In: Proceedings SOHOMA 2014, Studies in Computational
Intelligence, vol. 594, pp. 157–164. Springer (2015)
48. Adler, J.: R in a Nutshell: A Desktop Quick Reference, 2nd edn. O’Reilly Media Inc.,
Sebastopol (2012)
49. Babiceanu, R.F., Seker, R.: Manufacturing cyber-physical systems enabled by complex event
processing and big data environments: a framework for development. In: Proceedings of the
SOHOMA 2014, Studies in Computational Intelligence, vol. 594, pp. 165–173. Springer
(2015)
About the Applicability of IoT Concept
for Classical Manufacturing Systems

Carlos Pascal, Doru Pănescu, and Cătălin Dosoftei

Department of Automatic Control and Applied Informatics, “Gheorghe Asachi” Technical
University of Iasi, D. Mangeron 27, 700050 Iasi, Romania
{cpascal,dorup,cdosoftei}@ac.tuiasi.ro

Abstract. This paper discusses the introduction of IoT principles into manufacturing systems so that different devices expose their state and talk to each other. The IoT device protocol, including published events and received commands, was adapted for various manufacturing equipment. Thus, through a cloud application, the event published by a device is converted into a command toward other devices, and the receiver is able to interpret the message as a command or an event. The proposed experiment involves disabling the local I/O interaction among devices and using an IoT messaging solution through the cloud. One concern of IoT applicability in production systems regards the round-trip message delay. Our experiments revealed that such delays are mainly produced by the external interaction protocols, as these are provided by the manufacturers of the devices. Though this weakness exists, the advantage of the IoT concept can be underlined: collaboration among IoT devices is allowed without the need to extend the hardware configuration. This approach is suitable for smoothly enlarging classical manufacturing systems with new equipment and functionalities.

Keywords: Manufacturing system · IoT · Cloud computing · Round-trip delay

1 Introduction

Many areas of Artificial Intelligence have made significant progress lately; methods and techniques of this field are applicable in industry and participate in the so-called Industry 4.0. On the information technology side, this trend is accelerated in areas such as cyber-physical systems (CPS), cloud computing (CC) and the Internet of Things (IoT). A complete review [1] over the period 2014–2017 on the three approaches and their influence on the manufacturing sector indicates: (1) a continuous doubling of the number of publications in ISI journals, (2) a large number of conceptual methods and few case studies, simulations and experiments, (3) a lack of research on the human-machine interface and the interaction between industrial equipment, and (4) an absence of studies presenting the cost/effort of implementation versus the advantages claimed in the literature. Adoption of these methods and techniques has a slow pace in robotic production systems because this technology penetrates with difficulty such heterogeneous, complex environments, which are highly dependent on the manufacturers of industrial solutions.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021


T. Borangiu et al. (Eds.): SOHOMA 2020, SCI 952, pp. 41–52, 2021.
https://doi.org/10.1007/978-3-030-69373-2_2
By applying the IoT principles and solving some aspects of integration and resource interaction, today’s industry can overcome certain challenges. Among them are: real-time data collection from existing equipment, expanding sensory systems, smooth integration of new devices and functionalities, and improving coordination and collaboration methods by applying techniques from distributed artificial intelligence. From a technological and socio-economic point of view, manufacturing systems will benefit, through simple integration into cloud computing, from optimization and prediction mechanisms at the production, maintenance, energy consumption and human staff levels.
As highlighted in [1], the lack of case studies or experiments in the literature makes it hard to anticipate the implementation effort. Some elements of difficulty are: current physical resources offer limited support for the IoT-specific communication protocol; a cloud-based production system introduces overloading of the communication network and delays in data transmission (fog and edge computing systems can be considered to address this); moreover, coordination and collaboration between resources under IoT, as well as detecting false or inaccurate states, are difficult too.
The paper is organized as follows. Related work is presented in Sect. 2. Then, in Sect. 3 the considered manufacturing cell is described. The architecture of the IoT solution and the developed Node-RED application facilitating communication between IoT devices are detailed in Sect. 4. A way of analyzing communication delays is presented in Sect. 5. The paper ends with results and conclusions.

2 Related Work
A preliminary study on the IoT reference architecture and its major components appears in a European FP7 project [2], which has influenced the analysis and application in many areas (transport and logistics, medicine, social activities and so-called intelligent environments). In manufacturing, the IoT concept takes a distinct form as the Industrial IoT (IIoT). The overview presented in [3] highlights the issues regarding connectivity in IIoT and the challenges for this solution, too. Security and privacy requirements are usually brought into discussion, since a breach may have a negative impact on production ecosystems. On the other hand, IIoT can be the glue that enables many technologies to extend and improve even closed manufacturing systems at the cell level.
The authors of [4] present a complex IoT system consisting of two micro-assembly manufacturing cells, a full suite of sensors and cameras, software modules and monitoring applications, a cloud platform and a virtual assembly environment (a virtual reality simulator). Inter-connectivity between IoT and cloud manufacturing is demonstrated in [5], where the dynamics of a production system can be determined in real time and analyzed in the cloud to achieve flexible resource management. More specifically, [6] presents an IoT architecture with five levels (resources, perception, network, services and applications) that exposes in real time the states of interconnected resources, similar to a SCADA solution. In [7], the highlighted advantages of IIoT regard the way precision machining can be achieved using embedded sensors and intelligent algorithms that detect in real time deviations from nominal values during the manufacturing processes. A method to integrate an industrial robot into an IoT system is detailed in [8].
Authors show that the integration process gets simpler when the manufacturing system
is developed as an IoT one.
The most common connectivity frameworks considered in IoT applications are the Data Distribution Service (DDS), OPC Unified Architecture (OPC UA), Robot Operating System (ROS), and Message Queuing Telemetry Transport (MQTT). An overview of these protocols and their performance is given in [9]. The last one, MQTT, was designed for low-bandwidth environments and low power consumption, particularly for sensors and mobile devices in unreliable networks. System latency depends on data transfer over the network and on computing at the edge device and in the cloud application [10]. Cloud solutions may have latency issues when an increased volume of data is transferred through the cloud. Fog and edge computing obtain an advantage by moving computation as close as possible to devices and sensors. These techniques facilitate IoT communication without the cloud, and only preprocessed information is sent towards cloud services. Several benefits of fog computing for manufacturing are presented in [11]. The study described in [12] highlights the strong points of using edge computing at the robot level. Lately, the digital twin concept has been employed to mirror a physical process with one or more simulated environments, leading to a strong event-based connectivity with sensors and actuators. The development of a digital twin for a shop floor conveyor is presented in [13].
In all the mentioned projects [4–7] the basic mechanism of IoT systems is adopted to facilitate the transfer of information from sensors and embedded devices to cloud areas (for analysis and decision making) and the transfer of commands from cloud manufacturing to resources. Furthermore, the design of IoT concepts considers collaborative work among resources and between workers and resources, too. To the authors’ knowledge, the issue of collaboration in a robotic cell under the IoT principles has not yet been addressed.

3 The Considered Classical Manufacturing System


To come closer to the needed transition of present manufacturing systems towards the IIoT paradigm, research on a current production environment including industrial devices is mandatory. The aim is to preserve the functionality of an existing manufacturing cell when disabling the local I/O communication among devices, where possible, and to integrate the devices through an IoT platform. This approach approximates a fully disconnected system that is interconnected only through several communication protocols via cloud services.
Figure 1 shows the experimental environment; the considered laboratory system contains two industrial robots (6 DoF), a machine tool, a computer vision system, a conveyor and several storage devices with inductive and optical sensors. This system was integrated to solve manufacturing goals using two working areas for producing, transferring, inspecting, and assembling a customer product with several types of parts. Through a centralized application, a human operator adds manufacturing commands and monitors the production. Note that these devices are integrated through local I/O connections, which impose several strong constraints. For example, the conveyor is directly linked only to robot R1, meaning that the second robot, namely R2, has to communicate with robot R1 to work with the conveyor. Moreover, the communication protocol is limited by the I/O interface; namely, one digital output of a robot is linked to a digital input of the other robot. States of sensors/devices are directly exposed to the digital/analog inputs of dependent devices; for instance, this is the case for the interface between robot R2 and the CNC machine (5). In other cases, states are obtained via serial, TCP/IP, and other protocols; the vision inspection system (marked 4 in Fig. 1) gives the state of the working table for robot R1 through a serial communication, meaning the operator application (10 in Fig. 1) gets this vision state via robot R1 - no direct link is supported. Pneumatic actuators of the robot grippers (labeled 8 and 9) are directly commanded by the robots through local connections. Extending the sensory system and/or the actuators of a robot involves first solving the local integration constraints. Commands are also sent by means of digital outputs and communication protocols. These kinds of dependencies limit fast adaptability and define a rigid system.

Fig. 1. Manufacturing cell to be enhanced according to the IoT concept

Some other remarks are important. Common industrial devices have specialized and limited programming capabilities which do not allow developing an IoT protocol. As the authors pointed out in [8] and [12], an industrial robot has its own private protocol provided by the manufacturer to facilitate the communication with a computer through a custom API. Other equipment, such as the vision system and the CNC mill, exposes services only through serial ports to a limited set of clients. The diversity of protocols requires the support of an integrator, and usually the number of possible new devices and sensors that can be used in the system is limited by the old equipment; for example, the cost of integrating a new camera is higher than the cost of the device itself.
Some parts of the above-presented environment can be simulated in RobotStudio (both robots are ABB). In this case, no interconnectivity is possible with the physical process. Our IoT-based solution enables a new way to develop the digital twin concept.

4 IoT-Based Solution
IoT is based on the publish/subscribe pattern, where the sender and receiver are loosely coupled through a broker server. A message published on a specific topic by a client is routed by the server to all clients subscribed to that topic. MQTT is the most used IoT messaging solution over the TCP/IP protocol. IoT clients are separated into devices and applications. IoT devices are able to publish their states/events and receive commands. IoT applications are developed to receive states/events and publish commands toward devices. Under this scheme, direct interaction among devices is not supported. Without changing the IoT approach, the proposed solution includes an IoT application with the role of changing the type of a message from event to command. Thus, a published event can also be a command, and a received command can be an event, too; the difference is made when the content of the message is interpreted.
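The publish/subscribe decoupling described above can be illustrated with a minimal in-process sketch (plain Python; the topic name is a hypothetical convention, and a real deployment would use an MQTT broker rather than this toy class):

```python
from collections import defaultdict

class MiniBroker:
    """Toy publish/subscribe broker: a message published on a topic is
    routed to every callback subscribed to that topic, so sender and
    receiver never reference each other directly."""

    def __init__(self):
        self.subscriptions = defaultdict(list)   # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self.subscriptions[topic].append(callback)

    def publish(self, topic, message):
        for callback in self.subscriptions[topic]:
            callback(topic, message)

broker = MiniBroker()
received = []

# An IoT application subscribes to a device's events...
broker.subscribe("iot/device/R1/evt/state", lambda topic, msg: received.append(msg))
# ...and the device publishes its state without knowing who listens.
broker.publish("iot/device/R1/evt/state", {"gripper": "open"})
```

The key property shown here is the loose coupling: adding a new subscriber requires no change to the publishing device.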
Classical manufacturing systems already have some means of external communication. None of our manufacturing devices supports the implementation of an MQTT client. Thus, we have to develop IoT adapters where possible, considering the external interaction supported by the manufacturer. Note that a general architecture for an industrial IoT adapter follows the scheme in Fig. 2. An industrial device enables external communication through services and/or digital/analog channels and has an application program interface (API) or protocol to interact with. Implementation of an adapter is restricted by the client interface: the operating system and the API supported by the programming language. For those devices that have only digital/analog interactions, embedded devices with WiFi options can be used. An IoT adapter for an ABB industrial robot is detailed in [8]. Other implemented adapters for real/virtual controllers, the vision system and the CNC machine are sketched in Fig. 3.
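The adapter scheme of Fig. 2 can be sketched as a small bridge class; `device` and `mqtt_client` below are hypothetical stand-ins for a vendor API (e.g., a RAP client) and an MQTT library, not actual interfaces from the paper:

```python
import json

class IoTAdapter:
    """Sketch of the adapter role in Fig. 2: reads the device state through
    its native client interface and republishes changes as IoT events;
    incoming IoT commands are forwarded to the device."""

    def __init__(self, device, mqtt_client, device_id):
        self.device = device
        self.mqtt = mqtt_client
        self.topic_evt = "iot/device/%s/evt/state" % device_id
        self.last_state = None

    def poll_once(self):
        # Publish an event only when the device state has changed
        # (mirrors the polling-based notification of the adapters above).
        state = self.device.read_state()
        if state != self.last_state:
            self.mqtt.publish(self.topic_evt, json.dumps(state))
            self.last_state = state

    def on_command(self, payload):
        # Forward an incoming IoT command to the device.
        self.device.execute(json.loads(payload))

# Hypothetical fakes standing in for a real controller API and MQTT client.
class FakeDevice:
    def __init__(self):
        self.state, self.commands = {"busy": False}, []
    def read_state(self):
        return dict(self.state)
    def execute(self, cmd):
        self.commands.append(cmd)

class FakeMqtt:
    def __init__(self):
        self.published = []
    def publish(self, topic, payload):
        self.published.append((topic, payload))

device, mqtt = FakeDevice(), FakeMqtt()
adapter = IoTAdapter(device, mqtt, "R1")
adapter.poll_once()          # state change (None -> value): one event published
adapter.poll_once()          # unchanged state: no duplicate event
adapter.on_command('{"op": "start"}')
```

Publishing only on state changes keeps the event traffic low, which matters once many devices share the same broker.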

Fig. 2. General scheme of an Industrial IoT Device

Figure 4 illustrates the resulting IoT architecture for the considered production system (see Fig. 1), based on an IBM Watson IoT broker [14]. In this stage, some sensors and equipment are indirectly connected to the cloud through the device they depend on; for instance, the conveyor (marked with 3) is controlled by the robot.
Fig. 3. Developed adapters for robots, vision system, and CNC device

As mentioned earlier, two or more IoT devices can talk to one another through a developed Node-RED [15] cloud application (see Fig. 5). The received events are converted into commands by the EventToCmd function node. This mechanism broadcasts every event to all devices, except the sender. A limitation can be made by filtering an event according to its content. In this case, two rules are applied: (1) if the event contains the attribute “to” equal to *, then all devices receive the command; (2) when a device name is specified in the attribute, only the indicated device receives the information. Not all events must be shared among devices; in such a case, they are ignored by not using the attribute “to”.

Fig. 4. IoT architecture for a classical manufacturing system
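The two routing rules applied by the EventToCmd node can be expressed as a pure function; the message shape and the device registry below are illustrative assumptions, not the actual Node-RED implementation (which is written in JavaScript):

```python
def route_event(event, all_devices, sender):
    """Apply the two EventToCmd routing rules: (1) an event whose "to"
    attribute equals "*" is broadcast to every device except the sender;
    (2) an event naming a device is delivered only to that device.
    Events carrying no "to" attribute are not shared at all."""
    target = event.get("to")
    if target is None:
        return []                                        # not shared
    if target == "*":
        return [d for d in all_devices if d != sender]   # rule (1)
    return [target] if target in all_devices else []     # rule (2)

devices = ["Robot1", "Robot2", "CNC", "Vision"]
print(route_event({"to": "*", "event": "partReady"}, devices, "Robot1"))
print(route_event({"to": "CNC", "event": "load"}, devices, "Robot2"))
print(route_event({"event": "heartbeat"}, devices, "Robot1"))
```

Keeping the routing logic as a single pure function makes the broadcast/unicast/ignore behavior easy to test independently of the broker.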
Monitoring or debugging the manufacturing process at the cloud level is also possible. Each device’s task can be individually tested by inserting commands from Node-RED, linking inject, function, and ibmiot nodes. Collecting, analyzing, and visualizing data can easily be done by taking information from the published events. For example, measuring the time between sending a command and receiving the result is carried out by the two function nodes start and stop in Fig. 5.
Two issues can be observed: (1) the manufacturing system produces a lot of events and data that are transported to the cloud; (2) the delay of messages can be a drawback in some manufacturing scenarios.

Fig. 5. Node-RED IoT application for supporting device-to-device communication

5 Measurement of IoT-Based Communication


Sharing data and/or states among manufacturing devices through an IoT system will expand collaboration inside the production system. The time-related performance of communication is the key aspect in the adoption of this technology, and it depends on the private protocols provided by the manufacturers, as shown in this section.
The proposed experimental test bench includes two IoT devices, with the first one publishing a number that will be incremented by the second device. Figure 6 illustrates the entire interaction protocol between the devices, considering the Node-RED cloud application, too. Incrementing a number is the cheapest operation and is easy to achieve on any manufacturing device. Measurement of the round-trip delay (RTD) involves developing a program, at the device level, to measure and save the time between publishing an event and receiving the command. Another option is to measure, at the cloud level, the time from receiving an event from the IoT device until sending a command to it (see the double-headed arrow labeled time). This method does not include the cost of publishing an event, receiving a command, or the measuring instructions. A count-based method is also used to check for the loss of messages in the network or in the IoT system.
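The device-side measurement loop can be sketched as follows; the broker hop is replaced by a direct call, so the timings only illustrate the mechanism, not the network delays reported below:

```python
import time

def increment(value):
    """Device2 role: increment the received number (the cheapest
    operation achievable on any manufacturing device)."""
    return value + 1

def measure_rtd(n_samples=5):
    """Device1 role in the Fig. 6 protocol: publish a counter, wait for
    the incremented value, and record the elapsed time. The cloud hop
    (event -> EventToCmd -> command) is simulated by a direct call."""
    samples_s, counter = [], 0
    for _ in range(n_samples):
        t0 = time.perf_counter()
        counter = increment(counter)              # simulated round trip
        samples_s.append(time.perf_counter() - t0)
    # Count-based check for lost messages: every sample must have
    # advanced the counter by exactly one.
    assert counter == n_samples
    return counter, samples_s

counter, samples = measure_rtd()
```

In a real setup, the `increment` call would be replaced by publishing the counter as an event and blocking until the corresponding command arrives, so the samples would then include the two broker round trips.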

Fig. 6. Interaction protocol to measure delays

Several RTD measurements have been made by considering pairs of IoT devices developed according to Fig. 3. Table 1 summarizes the results obtained with different types of IoT devices: a simple software application (labeled software in Table 1), a robot controller, and a small embedded device with a WiFi connection (Wemos D1 WiFi mini). For the robot IoT device, we had the possibility to use several types of ABB controllers: S4Cplus (Robot 1), S4C (Robot 2), IRC5 (Robot 3), and virtual IRC5 controllers from RobotStudio (Robot 4). The network latency to the IoT platform, located in another country, was between 52–55 ms, measured over the TCP/IP protocol; the Quality of Service (QoS) for MQTT was set to 0, meaning fast delivery without guarantees of message reachability. The QoS level and the network latency give a good baseline for interpreting the RTD.
The first measurement in Table 1, for two applications developed in Python with the module ibmiotf, shows an average time around 111 ms. In this case, device1 measures the time in accordance with the protocol presented in Fig. 6. One can observe that the RTD is near the latency of the network (2 round trips * 52 ms: device1 → cloud → device2 → cloud → device1). In the second experiment, device1 only sends messages and the measurement is made at the cloud level. We expected it to be around 55 ms, as in the first experiment, but the average is a bit higher, namely by 10 ms. A simple explanation is that the workload of the two devices is different. Another experiment used a 4G network (see line 3 in Table 1). As anticipated, the average time increased due to the latency of the network. So far, the delay of the IoT platform was directly dependent on the network, meaning that if one brings the IoT system closer to the manufacturing network, the RTD can become acceptable.
Table 1. Time measurement – round trip delay

No  Device1    Device2     Min (ms)  Max (ms)  Average (ms)  Median (ms)
1   software   software    108       132       111           110
2   -          software    63        96        66            64
3   software*  software*   158       283       196           195
4   software   Robot 1     344       425       352           350
5   software   Robot 1**   351       1130      689           685
6   software   Robot 1***  352       627       475           556
7   software   Robot 2     1199      1295      1213          1211
8   Robot 1    Robot 2     1200      1610      1400          1400
9   software   Robot 3     64        353       99            86
10  software   Robot 4     60        450       140           80
11  software   D1 mini     139       374       198           167
12  -          D1 mini     88        582       139           103

* using 4G network, **/*** aggressive/relaxing waiting time, - cloud-based measure
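The summary columns of Table 1 (Min/Max/Average/Median) can be reproduced from raw RTD samples with Python's standard statistics module; the sample values below are illustrative, not the measured data:

```python
import statistics

def summarize(samples_ms):
    """Compute the Table 1 summary columns for a list of round-trip
    delay samples expressed in milliseconds."""
    return {
        "min": min(samples_ms),
        "max": max(samples_ms),
        "average": round(statistics.mean(samples_ms)),
        "median": round(statistics.median(samples_ms)),
    }

rtd_samples = [108, 110, 109, 132, 111, 110, 110]   # illustrative values only
print(summarize(rtd_samples))
```

Reporting the median alongside the average is useful here because a few slow outliers (such as the 1130 ms maximum in row 5) pull the mean upward while leaving the median stable.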

Knowing the average time delay between two simple software applications without physical parts, the developed IoT adapters for the ABB controllers can be analyzed. Experiments 4–6 use an S4Cplus controller as device2. The polling rate of the adapter is set to 200 ms, the minimum value offered by the manufacturer for the RAP protocol (see Fig. 3). This parameter influences the notification time when something has changed on the controller (e.g., when the controller updates the received counter – see Fig. 6). Another aspect of the experiment is that the running program on the controller must be kept in a big loop containing a waiting time. This temporizing instruction allows external connections to modify some variables of the program through the RAP protocol. In our case, the adapter updates the received counter in one persistent variable and waits for the updated value in another variable (see the Appendix). Thus, in experiment 4, the waiting time is set equal to the polling rate (200 ms); with a waiting time of 0, the controller limits the external interaction and the delay gets bigger (see case 5); when increasing the waiting time to 300 ms, the difference is found in the round-trip delay (case 6). Thus, one can see that the best waiting time is equal to the polling rate. Comparing experiment 1 with 4, the average RTD is increased in the latter case by 241 ms, which can be explained by the limitation of the RAP solution. Another comparison, for the S4C controller (Robot 2), which is older than the S4Cplus, shows that the round-trip delay is even higher (see rows 4 and 7 in Table 1).
In the presented manufacturing process, robots 1 and 2 interact through a digital I/O interface. Switching the communication to IoT, the average delay is around 1400 ms, which is more than we expected. Thus, we must admit that a 20-year-old controller is not designed for external interactions. By contrast, the newer version of the controller, namely the IRC5 (Robot 3), was used in experiment 9. The entire architecture for external interaction was changed (no polling rate is used), and the results are better than in the first experiment. Similar results were obtained using a virtual controller from RobotStudio (see row 10 in Table 1).
The last two lines of Table 1 contain the results for a small device (Wemos D1 WiFi mini) supporting an MQTT client. The two measures, at the device1 and cloud levels, are shown in Fig. 7, according to the interaction protocol presented in Fig. 6. Similar plots were obtained with different devices. One can observe two aspects: the delays are mainly produced by the communication supported by the industrial equipment, and this limit can be overcome only with local integration. In conclusion, the proposed approach opens an easy way to develop interaction among old and new devices without a high implementation cost.

Fig. 7. Comparison of cloud-based (blue) and device-based (red) measures using Wemos D1 as device2

6 Conclusions

The presented approach strengthens the vision of the IoT concept by allowing device-to-device information sharing. This improves coordination and collaboration by endorsing solutions from distributed artificial intelligence. The time-related performance of round-trip messaging depends on the external communication supported by each manufacturer. However, this issue can be overcome by locally connecting external IoT devices with I/O capabilities and by using the fast MQTT protocol for devices with considerable delays; for example, a robot can publish events through its I/O system.
Without changing the current manufacturing equipment, a production system can move closer to Industry 4.0 by adopting/developing IoT adapters. Generally speaking, there is no need to entirely remove the local I/O interaction in order to obtain a full IoT system, as we did in this research. The local connections existing in classical manufacturing systems can be kept, but when there is a need to try new hardware and software solutions without disrupting the system configuration, the proposed IoT approach can be of great help. Preventive maintenance, context-aware systems, the digital twin and the digital product are challenges that can be integrated into classical manufacturing systems. Our experiments revealed that a digital twin is achievable by introducing IoT digital twin devices into the system.
Moving the entire communication through the cloud is a complication of the proposed solution. To address this, fog computing and IoT gateways have to be further considered.

Acknowledgement. This work was supported by a research grant of “Gheorghe Asachi” Technical University of Iasi (TUIASI), project number GnaC2018_186/2019.

Appendix

Program for ABB robots to measure RTD


MODULE Robot1
    PERS num counter := -100;     ! changed by the IoT adapter
    PERS bool event := FALSE;     ! set by the IoT adapter when it finishes updating all information
    PERS num feedback := -100;    ! changed by the robot
    PROC main()
        WHILE TRUE DO
            IF event THEN         ! an IoT message was received
                event := FALSE;
                feedback := counter + 1;
            ENDIF
            WaitTime 0.2;         ! waiting time for the loop
        ENDWHILE
    ENDPROC
ENDMODULE

References
1. Kamble, S.S., Gunasekaran, A., Gawankar, S.A.: Sustainable Industry 4.0 framework: a sys-
tematic literature review identifying the current trends and future perspectives. Process Safety
Environ. Protect. 117, 408–425 (2018)
2. Atzori, L., Iera, A., Morabito, G.: The internet of things: a survey. Comput. Netw. 54(15),
2787–2805 (2010)
3. Mumtaz, S., Alsohaily, A., Pang, Z., Rayes, A., Tsang, K.F., Rodriguez, J.: Massive Internet
of Things for industrial applications: addressing wireless IIoT connectivity challenges and
ecosystem fragmentation. IEEE Ind. Electron. Mag. 11(1), 28–33 (2017)
4. Lu, Y.J., Cecil, J.: An Internet of Things (IoT)-based collaborative framework for advanced
manufacturing. Int. J. Adv. Manuf. Technol. 84(5–8), 1141–1152 (2016)
5. Qu, T., Lei, S.P., Wang, Z.Z., Nie, D.X., Chen, X., Huang, G.Q.: IoT-based real-time produc-
tion logistics synchronization system under smart cloud manufacturing. Int. J. Adv. Manuf.
Technol. 84(1–4), 147–164 (2016)
52 C. Pascal et al.

An Open-Source Machine Vision Framework for Smart Manufacturing Control

Silviu Răileanu, Theodor Borangiu, and Florin Anton

Department of Automation and Applied Informatics, University Politehnica of Bucharest, Bucharest, Romania
{silviu.raileanu,theodor.borangiu,florin.anton}@cimr.pub.ro
Abstract. The paper describes the design, implementation, testing and validation of an open-source machine vision framework based on the OpenCV (Open Source Computer Vision) library. The framework was developed for smart manufacturing control; material conditioning and handling processes involving industrial robots are the ones that benefit most from the proposed solution. The solution offers
the following functionalities: acquisition of video streams from multiple sources,
image analysis, object recognition, localization and interaction with industrial
equipment using standard, open communication protocols. The paper covers sev-
eral design aspects: system architecture, data acquisition and standardization of
the image representation to be used by the analysis algorithms and object recogni-
tion module, input/output interaction protocols, camera-robot calibration. Results
are reported for an implementation of the framework using a commercial image
acquisition device and an industrial robot.

Keywords: Machine vision · Interaction protocol · Industrial robot · Open-source · OpenCV · TCP

1 Introduction
Given the importance of robotized solutions in manufacturing [1], the research and devel-
opment of vision systems used both in robot guidance and workplace monitoring have
continuously intensified during the last decade [2]. Such systems work in unstructured
shop floor areas containing parts randomly located usually in 2D environments, and
recently in 3D environments, where machine vision assists industrial robots to handle
components of the material flows [3]. There are also applications in which vision systems
are used to perform quality control based on part shape analysis and/or routing products
according to their type [4].
In the current economic context, a high number of manufacturing enterprises needing
automation cite high investment and operating costs as a major obstacle while digitaliza-
tion is perceived as inaccessible (expensive and complex) by many companies [5, 6]. The
scope of this research is to demonstrate that an open-source machine vision framework
for manufacturing with industrial robots can be easily developed from available open-
source general vision processing libraries and commercial image acquisition devices. It

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021


T. Borangiu et al. (Eds.): SOHOMA 2020, SCI 952, pp. 53–65, 2021.
https://doi.org/10.1007/978-3-030-69373-2_3
54 S. Răileanu et al.

will be demonstrated that this solution has characteristics similar to a commercial one: accuracy, speed and connectivity with a wide range of sensors and computing resources. The realization of this machine vision framework aims at lowering the total cost of robot-vision projects by reducing the cost of the machine vision application to zero and by allowing the use of alternative, possibly non-industrial image acquisition devices which are much cheaper than their industrial counterparts. There are currently commercial applications offering similar functionalities (part detection, recognition and location), such as Cognex VisionPro (cognex.com), MVTec Halcon (mvtec.com) and Omron Adept SmartVision MX (omron247.com), but their drawbacks are the high price and the fact that they accept only specific video inputs from a limited number of devices. Concerning the open-
source solutions, to the best of our knowledge there are only applications dedicated to
image handling and processing like ImageJ (https://imagej.nih.gov/) or Micro-Manager
(https://micro-manager.org/) for laboratory equipment (e.g., control of microscopes)
and generic libraries (OpenCV, Accord) or applications (Matlab) that perform standard
computations on images. In this context the developed solution will offer: the needed
functionality for part recognition and location based on its body / contour shape and
features (area, perimeter, number of holes, blob moments), extension and standardization
of input sources and a standardized protocol for interaction with industrial equipment
(e.g., robot controller, PLC).
Some external preliminary results consist of existing image processing frameworks
(OpenCV [7], Accord.NET [8] as an extension of AForge.NET) which facilitate the
operation with different image and video formats, are able to apply standard filters and
offer a set of useful functionalities like operation with blobs and shapes (polylines).
These preliminary results represented the starting point of the solution considered in
order to improve its design and accelerate its implementation. Concerning the novelty,
this consists in the open-source nature of the project with all the associated advantages,
and in the adopted object modelling technique that uses a combination of features (e.g.,
of moments invariant to translation, rotation and scaling) and contour shape.
The article is structured as follows: Sect. 2 details the components of the machine
vision framework, their interconnection and the location of each component. Section 3
presents the interaction protocol that controls the remote operation of the vision framework with different types of industrial equipment. Section 4 describes the image acquisition and calibration processes. Section 5 presents the recognition, localization and orientation detection of objects. Section 6 reports a set of experiments on object handling using vision and a comparison with a commercial software package. Section 7 formulates conclusions and defines future research and development directions.

2 Structure of the Machine Vision Framework


The proposed machine vision framework performs the following tasks: image acquisi-
tion from multiple sources (ranging from widely available inexpensive WiFi cameras to
industrial cameras which offer information in different formats), basic image processing
(e.g., region of interest (ROI) selection, parameter selection, a.o.), advanced image pro-
cessing (training and identifying object models and features, object locating), and inter-
connection with other control applications using standardized industrial communication
protocols. The framework was designed to work together with any type of industrial con-
trol device to which it provides information concerning the handled object’s dimension
(measurement), type (detection and recognition), position and orientation (locating).


The industrial control devices considered are robot controllers, Programmable Logic
Controllers (PLCs), embedded control devices and industrial PCs. The research will
be validated in conjunction with an industrial robot for material handling tasks. The
structure of the generic machine vision framework is shown in Fig. 1. It consists of an
artificial vision application (for image processing), a video server providing the input
to the artificial vision application and a network communication module for integration
with control equipment in manufacturing applications.

Fig. 1. Machine vision framework structure and information exchange between its components

The main characteristics of the video server are: i) streaming: consists in the conver-
sion of generic streams and image formats into a standardized and open format accepted
by the artificial vision application, and ii) standardization of camera interface: identifica-
tion, open/close stream, adjust image size, trigger image acquisition. Industrial cameras
have dedicated drivers which limit the number of concurrent clients to one, guarantee-
ing thus a high frame rate acquisition. By inserting this middle module (vision server),
it will be possible to connect multiple clients (e.g., multiple artificial vision guidance
applications in order to compare image processing performance, and/or multiple video
surveillance and monitoring applications). The stream standardization will not restrict
the resolution of the original stream / image, which can be modified through the camera
interface protocol. Besides the above characteristics it will be possible to adjust at video
server level the region of interest (ROI) from the original input image. Thus, by limiting
the dimension of the input data structure (a matrix), the image processing time decreases;
a lower processing time is desired in real-time applications such as visual servoing of
robot manipulators.
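The camera-interface behaviour described above (open/close stream, ROI adjustment, triggered acquisition in a standardized format) can be sketched as a thin wrapper class. This is only a minimal illustration: the `backend` object and its `read()` method are assumptions, standing in for any driver able to deliver frames as NumPy arrays (a USB driver, a GigE SDK, an MJPEG network client).

```python
import numpy as np

class VideoServer:
    """Sketch of the standardized camera interface described above: a single
    connection to the camera driver is shared, frames are delivered in one
    open format (a NumPy array) and an optional ROI limits the matrix size,
    which is where the processing-time reduction comes from."""

    def __init__(self, backend):
        self.backend = backend   # hypothetical driver exposing read()
        self.roi = None          # (x, y, width, height) or None
        self.is_open = False

    def open_stream(self):
        self.is_open = True

    def close_stream(self):
        self.is_open = False

    def set_roi(self, x, y, w, h):
        # Restricting the input matrix lowers per-frame processing time
        self.roi = (x, y, w, h)

    def trigger(self):
        """Acquire one frame, cropped to the ROI if one was configured."""
        if not self.is_open:
            raise RuntimeError("stream not open")
        frame = np.asarray(self.backend.read())
        if self.roi is not None:
            x, y, w, h = self.roi
            frame = frame[y:y + h, x:x + w]
        return frame
```

Because the wrapper owns the single driver connection, several client applications can request frames through it without competing for the camera itself.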
The advantage of being able to operate on a variety of video streams is that it
will be possible to compare the performance of the part detection, identification and
locating functions and to recommend significant cheaper hardware for the same process
automation project. Another advantage of this architecture is that the vision application
can be located on a cloud infrastructure offering vision services (based on standardized
interaction protocols) to manufacturing resources eliminating the cost with dedicated
hardware and making it easy to replicate the framework for additional vision projects.
3 Machine Vision Framework-Resource Interaction Protocol


The interaction protocol [9] provides a standardized set of primitives (Table 1), each
one with a well-defined semantics. It is implemented as a socket-based synchronous
communication between the vision application and the resource controllers needing
information from the visualised scene. In this configuration the vision application is the
server, waiting for incoming connections, and the resources are the clients. Depending upon
the received command a specified process is executed, and the information is returned to
the client in the form of a response. The commands and responses exchanged between the
vision system and a robot controller are defined in Table 1. The commands were chosen
for a workstation containing an industrial robot using vision to identify and locate parts
in order to handle them with an impactive gripper.
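The server side of this exchange can be sketched with standard sockets. This is a hedged illustration: the newline-terminated ASCII framing and the comma-separated response fields follow the spirit of Table 1, while `locate_parts` is a hypothetical callback into the vision application, not a function of the actual implementation.

```python
import socket
import threading

def handle_client(conn, locate_parts):
    """Serve one resource controller over a synchronous socket connection.
    'locate_parts' is a hypothetical callback returning a list of
    (x, y, theta) tuples for a given trained part type."""
    with conn:
        buf = b""
        while True:
            data = conn.recv(1024)
            if not data:
                break
            buf += data
            while b"\n" in buf:
                line, buf = buf.split(b"\n", 1)
                cmd = line.decode().strip().split()
                if cmd and cmd[0].lower() == "locate":
                    parts = locate_parts(int(cmd[1]))
                    # Response: number of parts, then x, y, theta per part
                    fields = [str(len(parts))]
                    for x, y, theta in parts:
                        fields += [f"{x:.2f}", f"{y:.2f}", f"{theta:.2f}"]
                    conn.sendall((",".join(fields) + "\n").encode())

def serve(port, locate_parts):
    """The vision application is the server, waiting for connections
    from robot controllers and PLCs."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", port))
    srv.listen()
    while True:
        conn, _addr = srv.accept()
        threading.Thread(target=handle_client,
                         args=(conn, locate_parts), daemon=True).start()
```

A robot controller acting as client would send `Locate 1\n` and parse the returned count and (x, y, theta) triples before picking the parts.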

4 Camera and Robot-Camera Calibration


In order to pass from the camera measurement system (pixels) to the robot measure-
ment system (millimetres) and further from the robot coordinate system to the camera
coordinate system a set of two calibrations must be computed. It must be mentioned
that:

a) Vision algorithms were designed to work with black and white images obtained from
grayscale images after applying an adaptive threshold;
b) Objects are seen as dark spots (blobs) on a light background;
c) The image plane is parallel with the XOY plane of the robot;
d) The camera plane (physical mounting) is also parallel with the XOY plane of the
robot and
e) The end effector is perfectly aligned with the robot’s tool control point so that the
robot points to the same object location and orientation as the vision system.

The two calibration procedures are described below.

4.1 Camera Calibration


Under the assumption that lens distortions do not greatly influence the system accuracy
and considering that a pixel is rectangular, the outputs of the camera calibration are
the dimensions of the two sides of the pixel’s rectangle. In order to keep it simple a
round calibration target with a known radius was used. After the target identification
and computation of its bounding rectangle the target radius was divided by the number
of pixels on X and Y obtaining the two components of the camera calibration.
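A minimal sketch of this step is shown below. It departs from the described implementation in two hedged ways: a fixed threshold stands in for the framework's adaptive one, and the bounding box is found in pure NumPy rather than with OpenCV blob routines; the function name is illustrative. The known diameter of the round target is divided by the bounding-box sides in pixels to obtain the two pixel-size components.

```python
import numpy as np

def pixel_size_from_round_target(gray, radius_mm, threshold=128):
    """Estimate the pixel's physical side lengths (mm/pixel on X and Y)
    from one round calibration target of known radius. Assumptions: dark
    target on a light background, lens distortion neglected."""
    mask = gray < threshold                  # binarize: target -> True
    ys, xs = np.nonzero(mask)
    width_px = xs.max() - xs.min() + 1       # bounding box spans one diameter
    height_px = ys.max() - ys.min() + 1
    return 2.0 * radius_mm / width_px, 2.0 * radius_mm / height_px
```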

4.2 Robot-Camera Calibration


Robot-camera (or hand-eye) calibration is the problem of computing the transformation
between the robot base frame and the image frame. This transformation is computed
offline and used online for determining an object location. Under the assumptions that
Table 1. Vision application network interface

Functionality Type Description


Locate part Command The Locate Parts command is issued by the robot
controller in order to start an image acquisition,
followed by the identification and localization of a
specified part type which was learnt offline (see Sect. 5)
Example:
Locate 1 // Locate the parts which belong to type 1
Response This message is sent by the vision application to the
robot controller. It consists of a sequence of characters
representing the number of parts located followed by
their handling location and orientation. This sequence
will be analyzed by the robot controller in order to pick
the parts
Example:
number of recognized objects, for each
recognized object: location on X,
location on Y, rotation about the Z axis
Add calibration point Command This command is issued for each point that goes into the
calibration process. The command is followed by the
location of the point in the robot coordinate system.
Following this command, the vision system searches for
the predefined calibration target and adds the tuple
consisting of the coordinates of the calibration point in
the robot frame and in the vision frame to the set of
calibration points. It is assumed that a single calibration
target is in the image plane
Example:
Store, target position on X in vision
frame, target position on Y in vision
frame
Response This message is used to inform the robot controller
whether the calibration tuple could be added
successfully. If no target or more than one target are
found an error is issued
Examples:
OK // The command was executed successfully
ERROR 1 // No target found
ERROR 2 // More than one target found
Delete calibration point Command This command requests the elimination of a specified
calibration tuple
Example:
Delete 1 //Delete tuple with ID 1
(continued)
Table 1. (continued)

Functionality Type Description


Response The message is sent in response to a delete command
confirming the deletion of the specified tuple
Compute calibration Command This command is issued by the robot controller for the
vision system to compute the transformation linking the
robot coordinate system to the vision coordinate system
(see Sect. 4)
Response This message is sent by the vision application to the
robot controller. It contains the 3 elements of calibration
transformation, namely a translation from the robot
plane to the vision plane (distance along X, distance
along Y) followed by a rotation about the Z axis. The Z
component will be added by the robot by overriding it
with the value of the calibration points in the robot
coordinate system
Example:
Relative location on X, relative
location on Y, relative rotation about
the Z axis
List calibration points Command This command requests the listing of all the calibration
tuples
Example:
List
Response The message is sent in response to a list command
Example
For each tuple in the calibration set list
target position on X in vision
coordinate system, target position on Y
in vision coordinate system, target
position on X in robot coordinate
system, target position on Y in robot
coordinate system

the robot XOY plane, the image plane and the camera plane are parallel, and using a
scenario where the camera is fixed relative to the robot base, the transformation has only
4 elements: a translation of the robot frame (dx, dy, dz) followed by a rotation about the
Z axis. By knowing the location of the reference point (represented by the calibration
target) in both coordinate systems, finding the transformation is similar to solving a
2D puzzle, thus avoiding complex matrix computations [10]. The robot-camera
calibration can be performed in a two-step process using simple geometrical formulas:
first the location of the vision plane relative to the robot frame is computed (Fig. 2 left)
and then the rotation about the Z axis of the robot is determined (Fig. 2 right).
Fig. 2. Finding the location (left) and rotation (right) of the vision plane relative to the robot base

Knowing the location of the reference point in both robot and vision coordinate
systems we can plot a set of possible locations for the origin of the vision plane (Fig. 2
left). These possible locations are located on circles whose centres are the locations of
the associated calibration target in the robot coordinate system and the radius is equal
to the distance from the origin of the vision system to the location of the target in the
vision frame. Each two circles have two intersection points and theoretically all the
circles should have a common intersection point. In practice this is not true because
the information comes from measurements affected by errors (e.g., caused by different
lighting) resulting thus a zone in space where it is most likely that the vision system’s
origin is located (see Fig. 5). For a set of N calibration points, (N-1)*N intersection
points are computed. From these computed intersections, those that are grouped are
chosen and their average location on X and Y will be the origin of the vision coordinate
system. After establishing the origin of the vision frame, we can apply the cosine law in
the triangle from Fig. 2 right and compute the Z rotation correction. The Z offset will
be the Z location of the calibration target.
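The geometric construction above can be sketched directly from the circle equations. The function names and the clustering tolerance below are illustrative assumptions, not values from the implementation.

```python
import math
from itertools import combinations

def circle_intersections(c1, r1, c2, r2):
    """Intersection points of two circles. Each circle is centred on a
    calibration-target position in the robot frame, with radius equal to
    that target's distance from the vision-frame origin."""
    (x1, y1), (x2, y2) = c1, c2
    d = math.hypot(x2 - x1, y2 - y1)
    if d == 0 or d > r1 + r2 or d < abs(r1 - r2):
        return []                              # disjoint or coincident circles
    a = (r1 ** 2 - r2 ** 2 + d ** 2) / (2 * d) # distance to the common chord
    h = math.sqrt(max(r1 ** 2 - a ** 2, 0.0))
    mx, my = x1 + a * (x2 - x1) / d, y1 + a * (y2 - y1) / d
    ox, oy = -h * (y2 - y1) / d, h * (x2 - x1) / d
    return [(mx + ox, my + oy), (mx - ox, my - oy)]

def estimate_origin(targets_robot, dists_to_origin, tol=5.0):
    """Average the grouped pairwise intersections; keeping the points within
    'tol' mm of the densest point is a simple stand-in for the grouping
    step described above."""
    pts = []
    for i, j in combinations(range(len(targets_robot)), 2):
        pts += circle_intersections(targets_robot[i], dists_to_origin[i],
                                    targets_robot[j], dists_to_origin[j])
    best = max(pts, key=lambda p: sum(
        math.hypot(p[0] - q[0], p[1] - q[1]) < tol for q in pts))
    cluster = [q for q in pts
               if math.hypot(best[0] - q[0], best[1] - q[1]) < tol]
    return (sum(p[0] for p in cluster) / len(cluster),
            sum(p[1] for p in cluster) / len(cluster))
```

With exact measurements every pair of circles meets at the true origin, so the cluster average reproduces it; with noisy measurements the average is taken over the grouped zone described above.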
The minimum set of points (learnt in robot and vision coordinate system) is three
in order to obtain a system of three equations with three unknowns (offset X, offset Y,
rotation Z). To improve the accuracy of the system this minimum number is increased, choosing the points as far away from each other as possible. Due to lighting issues, four vision samples rotated by 90 degrees are taken for each robot calibration point, and an average location in the vision coordinate system is computed as the average of the intersections on X and Y.
The output of the vision system is a unique transformation for each object type
and for each grasping style (an object can be grasped in several ways). This calibration
translates and rotates the coordinate system attached to the object into the coordinate
system attached to the gripping position. Also, for a given object there can be defined
different grasping styles in order to avoid collisions if several objects are close to each
other and there isn’t enough space to access them with the current robot gripper.

5 Recognition, Localization and Detecting Orientation of Objects

In industrial applications the shape of the object is of interest for its recognition [4].
In this respect there are two recognition techniques: a) matching contours operating
on greyscale images, and b) matching blobs operating on binary - black and white -
images obtained from greyscale images after thresholding/binarization [11]. A common
mathematical approach when working with binary images is the use of moment-based
analysis [12]. This method offers information about the area, centre of gravity and
orientation of a blob, and the derived image Hu moments [13]. Since these features
are invariant at image translation, scaling and rotation, they can be used to describe the
shape which is used for object recognition. These characteristics are of interest for object
identification (what type of object), location (where it is located) and association to a
fixed coordinate system (how is it rotated) allowing to compute a unique transformation
needed by the robot to handle correctly the object.
For a discrete binary image, the moment of order “p + q” (1) and the central moment
(2) are defined as follows:

M_{pq} = \sum_{x}\sum_{y} x^{p} y^{q} I(x, y)   (1)

\mu_{pq} = \sum_{x}\sum_{y} (x - \bar{x})^{p} (y - \bar{y})^{q} I(x, y)   (2)

where I (x, y) is the pixel intensity at coordinates x, y.


Orientation is defined as the angle relative to the X-axis of an axis through the centre
of gravity of the object that gives the lowest moment of inertia (AIM). This value is
computed using the formula (3) based on the second order central moments:
\theta = \frac{1}{2} \tan^{-1} \frac{2\mu_{11}}{\mu_{20} - \mu_{02}}   (3)
This formula gives the object’s direction, which will be taken as X axis. In order to
associate a direction to the X axis, we adopt the convention that X is along the longest
path of the AIM through the object as depicted in Fig. 3.

Fig. 3. Object-attached frame for general use


Derivatives of the image moments which have invariant features with respect to
translation and rotation (as is the case of objects situated on a vision plane) can be
constructed. Examples of such expressions are given in the work of Hu [14], where
7 such invariants are proposed which can also detect if an object is mirrored.
From the above formulas, of interest are: M00 (4) representing the blob’s area, the
first order moments that are used to calculate the coordinates of the blob’s centre of
gravity (5), the second order central moments which are used to calculate the major and
minor axes of the blob and thus its orientation (6), and the Hu invariants which together
with the blob area are used for the definition of an object prototype.

Area = M_{00}   (4)

X_{CG} = M_{10}/M_{00}, \quad Y_{CG} = M_{01}/M_{00}   (5)

Orientation = \theta \ \text{if } \mu_{20} > \mu_{02}, \ \theta + 90^{\circ} \ \text{otherwise}   (6)
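These quantities map directly onto a few array operations. The sketch below computes them in pure NumPy (the actual framework relies on OpenCV's moment routines) and uses the two-argument arctangent, which keeps (3) defined when μ20 = μ02 and folds the axis correction of (6) into a single step.

```python
import numpy as np

def blob_features(binary):
    """Area (4), centre of gravity (5) and orientation (3)/(6) of a blob.
    'binary' is a boolean or 0/1 image with the blob as foreground."""
    ys, xs = np.nonzero(binary)
    m00 = float(xs.size)                          # Area = M00
    xcg, ycg = xs.mean(), ys.mean()               # (M10/M00, M01/M00)
    dx, dy = xs - xcg, ys - ycg
    mu11 = (dx * dy).sum()                        # second-order central moments
    mu20, mu02 = (dx ** 2).sum(), (dy ** 2).sum()
    # arctan2 applies the major/minor-axis correction of (6) implicitly
    theta_deg = 0.5 * np.degrees(np.arctan2(2 * mu11, mu20 - mu02))
    return m00, (xcg, ycg), theta_deg
```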

The object's prototype is defined by the distance (7) between the Hu invariants representing the desired shape, combined with the 7th Hu invariant, which indicates whether the object is mirrored (essential in robot applications), and the object area.
Tolerances are defined for each class in order to discriminate between existing objects.

D(shape1, shape2) = \sum_{i=1}^{7} \left| Hu\_invariant_{i}^{shape1} - Hu\_invariant_{i}^{shape2} \right|   (7)

The computed Hu invariants have a large dispersion and are not comparable in magnitude; for the experiments they were brought into the same range using Eq. (8) [14].

H_i = -\mathrm{sign}(Hu\_invariant_i) \cdot \log_{10} |Hu\_invariant_i|   (8)
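Equations (7) and (8) transcribe into a few lines (standard library only); the guard for a zero invariant is an added assumption, since the logarithm is undefined there.

```python
import math

def log_normalize(hu):
    """Bring the Hu invariants into comparable magnitude, Eq. (8)."""
    return [-math.copysign(1.0, h) * math.log10(abs(h)) if h != 0 else 0.0
            for h in hu]

def shape_distance(hu1, hu2):
    """Distance between two shapes over the 7 Hu invariants, Eq. (7),
    applied after the log normalization of (8)."""
    a, b = log_normalize(hu1), log_normalize(hu2)
    return sum(abs(x - y) for x, y in zip(a, b))
```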

The membership of an object (seen in real time) to a trained prototype is established by computing conformity and verify percentages for each component of the model. The object's position and orientation are computed by analysing the image's spatial moments. Location and orientation are of utmost importance because they influence the repeatability of the robot-vision system, since the grasping point is computed starting from the object's location (centre of gravity) along the directions of its attached coordinate system.
As stated in the introduction, to the best of our knowledge there is no complete
framework which provides all of the functionalities presented above which are needed
in order to handle objects using industrial robots, but there are computer-vision libraries
which implement a wide set of basic functions for handling binary images. Thus, we
started from the open-source non-commercial OpenCV library which offers methods to
rapidly compute the image moments (1–7) in order to accelerate the development of the
framework and implement the architecture depicted in Fig. 1.
6 Experiments
Experiments have been done using an industrial robot from Adept (Cobra s850) working
with the proposed machine vision framework and with the Adept Sight commercial vision
software in order to compare the two vision solutions. The proposed vision system was
fed with images from a smartphone while the commercial application took images from
an industrial camera, both devices observing the same scene. We evaluated the accuracy
of the calibration processes and the robot-vision systems, and the runtimes of the vision
application.
A circular target has been used for camera calibration in the proposed machine vision framework, producing a pixel (width, length) = (0.8, 0.8), while the dedicated camera calibration of the commercial application produced a pixel (width, length) = (0.6, 0.6).
The second experiment concerned the robot-camera calibration described in Sect. 4.
A set of 4 points from the vision plane have been located 4 times (each time with the object rotated by 90°). Table 2 left gives the calibration points in the robot and vision frames, while Table 2 right presents the possible locations of the vision frame origin; this information is plotted in Fig. 4. The computed deviation for the robot-camera calibration process is 0.42 mm (on the X axis), 0.82 mm (on the Y axis) and 0.531° (about the Z axis).

Fig. 4. Robot-camera calibration points in robot coordinate system

The third experiment concerned the accuracy in the process of determining the loca-
tion of a known point in robot coordinate system using the location of the object in the
vision system and the robot-camera transformation determined in the previous experi-
ment. Using a set of 100 randomly generated points in the vision plane the robot placed
the testing object at the designated position and its location was computed using the
vision system. The average deviation is 0.71 mm (on the X axis) and 0.33 mm (on the Y axis). In order to correctly handle the objects this deviation/positioning error should be less than the gripper opening, which was the case for the testing scenario (10 mm gripper opening).
In the fourth experiment we tested how the components of the prototype model
(as presented in Sect. 5) vary between different model classes, different locations and
different positions. The results of an offline measurement are given in Table 3. These
Table 2. Set of calibration points (left) and candidate vision coordinate system points (right)

Robot coordinate system | Vision coordinate system | Candidate vision-frame origins
X (mm)  Y (mm) | X (mm)  Y (mm) | X (mm)  Y (mm)
79 -586.545 248 -33 328.9 -613
79 -586.545 250 -33 328.9 -560
79 -586.545 249 -32 328.6 -615.4
79 -586.545 250 -32 -170.5 -615.4
79 -586.545 249 -32 328.8 -613.7
279 -586.545 48 -29 207.2 -370.3
279 -586.545 49 -29 328.5 -613.8
279 -586.545 48 -29 229.5 -613.8
279 -586.545 49 -29 329.7 -613.3
79 -486.545 247 -132 329.7 -359.7
79 -486.545 247 -132
79 -486.545 249 -132
79 -486.545 249 -132
279 -486.545 46 -129
279 -486.545 46 -128
279 -486.545 46 -128
279 -486.545 47 -129

measurements (area in pixels and Hu_i representing the ith Hu invariant) have been made
for the image presented in Fig. 5 which contains both objects from different classes and
objects from the same class.
By analysing the results in Table 3 it can be concluded that the membership of an
object to a trained prototype can be established by computing the conformity and verify
percentage values for each component of the model.
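As a hedged illustration of such a membership test (the tolerance values and field names below are invented for the example, not the ones tuned in the experiments), a measured blob from Table 3 can be checked against a trained prototype like this:

```python
def matches_prototype(measured, prototype, area_tol=0.15, hu_tol=2.0):
    """Illustrative membership test: the area must lie within a relative
    tolerance of the prototype's area and every log-normalized Hu invariant
    within an absolute tolerance (both tolerance values are examples only).
    The sign of the 7th invariant additionally flags mirrored objects."""
    area_ok = (abs(measured["area"] - prototype["area"])
               <= area_tol * prototype["area"])
    hu_ok = all(abs(m - p) <= hu_tol
                for m, p in zip(measured["hu"], prototype["hu"]))
    return area_ok and hu_ok
```

With the Table 3 values, the second "I" blob matches the "I" prototype, while the circle blob is rejected already on its area.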
Concerning the runtime, the average processing time for an image containing several objects of interest is 70 ms for the open-source machine vision framework. The same image plane foreground was processed by the commercial application in 50 ms.
Fig. 5. Test image for computing prototype model components

Table 3. Blob characteristics for objects in Fig. 5

blob ID type area Hu0 Hu1 Hu2 Hu3 Hu4 Hu5 Hu6
13 I 126787 0.52 1.18 4.35 4.78 9.35 5.40 -10.20
17 r 124837 0.67 2.05 2.41 3.61 -6.71 -4.69 -6.87
18 r 113107 0.65 1.98 2.37 3.56 -6.55 -4.56 -7.01
263 circle 171358 0.80 4.79 5.78 9.34 -16.91 -11.74 -17.70
286 T 221521 0.56 2.26 1.74 3.23 5.72 4.37 6.39
291 nonL 150324 0.49 1.32 1.93 2.78 5.81 4.21 5.14
392 I 117118 0.51 1.16 5.12 5.54 10.88 6.14 -11.71
396 T 202846 0.55 2.28 1.68 3.12 5.52 4.26 -6.40
406 L 167974 0.50 1.34 1.96 2.79 5.56 3.94 -5.20
452 rest 131626 -0.82 -1.64 -0.94 -0.94 -1.88 -1.76 -2.94

7 Conclusions and Future Research

The objective of the paper was to propose an open-source machine vision framework
which operates with off-the-shelf equipment in manufacturing tasks and to demonstrate
that the accuracy of the system is similar to that of a commercial vision application. Mathematical formulas for image processing - object recognition and locating - along with the calibration principle were presented, and their results were detailed in the experiments
section. The advantage of this open-source framework is that it reduces the implementa-
tion costs of industrial applications by eliminating the cost of the software and offers the
possibility to use cheaper cameras without affecting accuracy. The framework can be
customized for different types of control devices (like robot controllers, PLCs and indus-
trial computers) that build up the automation layer of manufacturing systems by using
an interaction protocol dedicated to each class of these partner resources, embedded in
the TCP standard communication protocol.
Future work will be oriented towards extending the machine vision framework from
2D to 3D object recognition and locating, and defining new interaction protocols with
PLCs controlling the material- and operations flows in manufacturing work places.

References
1. International Federation of Robotics. https://ifr.org/. Accessed Apr 2020
2. Ford, M.: Rise of the Robots: Technology and the Threat of a Jobless Future, ISBN 978–
0465059997 (2015)
3. Kang, S., Kim, K., Lee, J., Kim, J.: Robotic vision system for random bin picking with dual-arm robots. MATEC Web of Conferences 75, 07003, ICMIE 2016 (2016). https://doi.org/10.1051/matecconf/20167507003
4. Borangiu, T.: Intelligent Image Processing in Robotics and Manufacturing, pp. 1–650.
Romanian Academy Press, Bucharest (2004). ISBN 973–27–1103–5
5. Saam, M., Viete, S., Schiel, et al.: Digitalisierung im Mittelstand: Status Quo, aktuelle
Entwicklungen und Herausforderungen (‘Digitalisation in SMEs: status quo, current trends
and challenges’ - our translation, in German only), research project of KfW Group (2016)
6. McFarlane, D., Ratchev, S., Thorne, A., Parlikad, A.K., Silva, L., Schonfuss, B., Hawkridge,
G., Tlegenov, Y.: Digital manufacturing on a shoestring: low cost digital solutions for SMEs,
Service Oriented, Holonic and Multi-agent Manufacturing Systems for Industry of the Future.
SOHOMA 2019, Studies in Computational Intelligence, Vol. 853, Springer (2020)
7. Open source computer vision – OpenCV. https://opencv.org/. Accessed Apr 2020
8. The Accord.NET Image Processing and Machine Learning Framework. https://accord-fra
mework.net/index.html. Accessed Apr 2020
9. Bellifemine, F., Caire, G., Greenwood, D.: Developing Multi-Agent Systems with JADE. Wiley (2007). ISBN 978-0-470-05747-6
10. Sharifzadeh, S., Biro, I., Kinnell, P.: Robust hand-eye calibration of 2D laser sensors using a
single-plane calibration artefact. Robot. Comput. Integrat. Manuf. 61, 101823 (2020). https://
doi.org/10.1016/j.rcim.2019.101823
11. Anton, F.D., Borangiu, T., Anton, S., Raileanu, S.: Using cloud computing to extend robot
capabilities. In: 26th International Conference on Robotics in Alpe-Adria-Danube Region,
RAAD 2017, Turin, Italy, 21–23 June 2017 (2017)
12. Korta, J., Kohut, P., Uhl, T.: OpenCV based vision system for industrial robot-based assembly
station: calibration and testing. PAK 60 (1/2014) (2014)
13. Huang, Z., Leng, J.: Analysis of Hu moment invariants on image scaling and rotation. Pro-
ceedings of 2nd International IEEE Conference on Computer Engineering and Technology
(ICCET’10), pp. 476–480, Chengdu, China (2010)
14. Mallick, S., Bapat, K.: Shape Matching using Hu Moments (C++/Python) (2018). https://
www.learnopencv.com/shape-matching-using-hu-moments-c-python/. Accessed Apr 2020
Using Cognitive Technologies as Cloud Services
for Product Quality Control. A Case Study
for Greenhouse Vegetables

Florin Anton(B) , Theodor Borangiu, Silvia Anton, and Silviu Răileanu

Department of Automation and Applied Informatics, University Politehnica of Bucharest,


Bucharest, Romania
{florin.anton,theodor.borangiu,silvia.anton,
silviu.raileanu}@cimr.pub.ro

Abstract. Recent years have shown a trend of integrating smart technologies in agriculture, ranging from irrigation control based on weather forecasts to automated greenhouse control over the entire plant lifecycle using robots. The
paper presents a solution for quality control and monitoring vegetables in green-
houses. During the lifecycle of a plant in automated greenhouses, the control and
monitoring of the environment (soil humidity, temperature, ventilation, etc.) is
not sufficient; the plants’ condition is very important and in most cases it can give
more valuable feedback than the environment. This paper presents a solution for monitoring the health state of tomato plants in greenhouses, which allows detecting diseases in order to prevent their spread, as well as removing or isolating affected tomatoes during harvest. The paper describes the technologies used and the system architecture; experimental results are reported and potential extensions of the proposed solution are outlined.

Keywords: Cognitive engine · Machine vision · Cloud services · Quality


control · Vegetables · Greenhouse automation

1 Introduction
Quality control of manufactured parts, agriculture products (vegetables, fruits), food
products (meat, eggs) and prepared food represents an important stage in the production
value chain because it allows certifying the desired characteristics of products (e.g. the
shape, degree of finishing, surface texture, correct and complete mounting of parts, the
size and aspect of food products and the composition of prepared food), assuring the
safety of consumers, validating the timing of various processing stages and authorizing
the start of new operations (e.g., termination of assembly, of finishing, establishing
the proper time of harvesting), and classifying products according to quality indicators
expressed through size, shape and colour.
The quality control of vegetables is repeated at short time periods (days, hours) due
to their short growing cycles. Tomatoes are among the most popular vegetables grown
in greenhouses. If they receive proper temperature and sufficient light, they can be even

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021


T. Borangiu et al. (Eds.): SOHOMA 2020, SCI 952, pp. 66–77, 2021.
https://doi.org/10.1007/978-3-030-69373-2_4

harvested twice a year; however indoor conditions require in general more rigorous
control to assure successful pollination of flowers and prevent the spread of diseases.
Tomatoes are also among the most consumed vegetables. According to statistics
from the Food and Agriculture Organization (FAO), around 170 million tons of fresh
and processed tomatoes were produced worldwide in 2014 [1]. The area covered with
tomato cultivation was 5 million hectares. Global tomato production has grown steadily
since 2000, by more than 54% from 2000 to 2014. China is the largest tomato producer
followed by the United States and India. Other major producers in the market are the
European Union and Turkey. Together these major tomato market producers account
for around 70% of global production. Mexico is the largest exporter of tomatoes in the
world followed by the Netherlands and Spain [2]. In 2016, Mexico, Netherlands and
Spain held respectively 25.1% ($ 2.1 billion), 19% ($ 1.6 billion), and 12.6% ($ 1.1
billion) of total tomato exports [2].
In the United States, approximately 16 million tons of tomatoes were produced in
2015; about 8% of total production was represented by fresh tomatoes that have much
higher prices than processed tomatoes. In 2015, the total values of fresh and processed
tomatoes in the United States were respectively $ 1.22 billion and $ 1.39 billion [3].
Florida and California have about two-thirds of U.S. fresh tomato production [4], while
California has about 95% of processed tomato production [5].
The United States is one of the leaders in the production of fresh tomatoes. In
2015, 1.35 billion kilograms of fresh tomatoes were produced here. Domestic produc-
tion accounted for about 40% of total domestic demand for fresh tomatoes; the rest of
the demand was met by imports, mostly from Mexico and Canada. Since 2000, the production of fresh tomatoes in the United States has followed a downward trend. Total
production of fresh tomatoes decreased from 1.6 billion kilograms in 2000 to 1.35 billion
kg in 2015 (Fig. 1) [6]. One main reason was the increase in competition from Mexico.

Fig. 1. Tomato production in US between 2000 and 2015

As far as the European Union is concerned, the production of fresh tomatoes in


the period 2006–2019 reached around 6 million t per year, while the production of tomatoes for

processing varied from an average of 9 million t to a peak of 11 million t. The main producers of fresh tomatoes are Spain, with over two million tons, followed by the Netherlands and Italy, which produce on average around 750,000 tons of tomatoes [7].
As for imports of fresh tomatoes in Romania, they come mainly from Morocco, with between 300,000 and 450,000 tons imported annually. The rest of the tomato imports, somewhat under 200,000 t, come from other countries, the main supplier being Turkey [8]. At the end of 2019, tomatoes were grown in the European
Union on an area of approximately 235,000 ha (Fig. 2) [9].

Fig. 2. Cultivation area for main EU tomato producers (1000 ha)

Regarding the production of tomatoes in greenhouses (production which is intended


especially for fresh tomatoes), the planting area in 2019 worldwide was less than 500,000
ha. Even so, there is a special interest in the automation of greenhouses for unassisted
production.
Several research areas can be identified in the scientific literature. In the field of greenhouse automation, the following research directions are noted:

• Energy management
• IoT systems for monitoring and control
• Image processing in agriculture
• Robotics in greenhouses

The articles in the field of energy management focus on making the systems that serve greenhouses more efficient in order to reduce energy consumption [10], or on the use of renewable energy sources such as photovoltaic panels [11].
The field of IoT solutions for greenhouses is the most active with a large number
of works dealing with these issues [12]. The main areas of interest are: the monitoring

of the parameters inside the greenhouse [13, 14], the control of these parameters with a
low energy consumption [15–18], and the automatic irrigation of plants [19–22].
Artificial vision in agriculture is used mainly to monitor plant growth [23–26]; this monitoring is usually done on satellite images or images acquired with drones, and does not involve accurate identification of individual plants but rather identification of planted areas. Artificial vision is also integrated with robotics in automatic harvesting solutions [27], and robots are used to transport materials to predetermined locations inside greenhouses [28].
From the point of view of complete greenhouse automation, an interdisciplinary approach [29] is required to integrate management, control, forecasting, artificial intelligence and other technologies, so that the resulting solution allows the creation of independent greenhouses.
This paper proposes a solution for monitoring and quality control of tomatoes in
greenhouses using advanced cognitive technologies accessed through Cloud services.
The monitoring system is mounted on a mobile platform that allows the navigation
inside the greenhouse, locating its position in the greenhouse and acquiring images that
are then processed by a cognitive image recognition system. This system has the ability
to differentiate the fruit from the plant in order to classify the quality of the fruit and to
identify the diseases that affect the plant. The system can be used both to monitor the
development of plants inside the greenhouse and to check the quality of the fruits when
they are harvested.
The paper is structured in five sections: introduction and previous research in the field of greenhouse automation; cognitive computing and Watson services; development of vision-based recognition models and their system integration; the proposed system architecture; experimental results, conclusions and possible further developments.

2 Cognitive Computing and Watson Services


IBM Watson is IBM’s artificial intelligence platform that enables a new partnership
between humans and computers. Watson combines six basic capabilities:

• Accelerates research and discovery by conducting rigorous, field-specific, faster


research and discovering new opportunities by extracting information from various
data sources.
• Improves interaction by understanding and communicating with human beings
responding to their needs through personalized dialogue and adaptive experiences.
• Anticipates critical situations by monitoring the condition of systems and equipment,
thus allowing the detection of potential defects before they lead to larger problems
that are solved with significant costs.
• Makes recommendations with a high level of confidence, obtained by processing a wide set of information and by understanding the nuances of the current context.
• Scales expertise and learning by collecting know-how from various data sources, creating a broad knowledge base that can be used later.
• Detects weaknesses and minimizes risk by understanding written language from
reports, rules, laws, and new discoveries.

Watson can understand all forms of data, interact naturally with people, and learn and reason at scale (Fig. 3). Data, information and expertise create the foundation for working with Watson. Figure 3 shows examples of data that Watson can analyse and learn from in order to generate new conclusions and observations that have not been stated before.

Fig. 3. Watson is based on data and information collections

Watson is available as a set of application programming interfaces (APIs) and Software-as-a-Service (SaaS) solutions in IBM Cloud, a system that provides a platform where application vendors can use resources to develop applications or services that can then be made available to customers. Developers may combine Watson services with other services that are available in the IBM Cloud or that are provided by other vendors. These services can be combined into applications that provide additional logic to create AI applications (Fig. 4).

Fig. 4. Creating Artificial Intelligence (AI) solutions using Watson services in IBM Cloud

3 Watson Visual Recognition


The Watson visual recognition service enables the creation of solutions that can quickly tag, categorize and accurately analyse visual content through machine learning-based classification. It uses deep learning algorithms to analyse scenes, objects, faces,
and other content in images. The results include keywords that provide content informa-
tion. There is a set of predefined models that provide results with high accuracy without
the need for training; it is also allowed to train custom models to create specialized
classes. By creating a custom classifier, one can use the visual recognition service to
recognize images that are not available using the pre-trained classification.
Watson’s visual recognition service can learn from example images that can be
uploaded to create a new classifier. Each example file is trained by comparison with all
the other uploaded files when the classifier is created, and positive examples are stored
as classes. These classes are grouped to define a single classifier while returning their
own scores. Figure 5 shows the process of Watson visual recognition using a custom
classifier.

Fig. 5. The visual recognition process using a custom classifier

A new custom classifier can be trained using several archived files, including files
that contain examples of positive or negative images. Images can be in jpg or png format.
At least two compressed files must be used, either two files with positive examples or
one file with positive examples and one with negative examples.
Compressed files that contain positive examples are used to create classes that define
the new classifier. The prefix specified for each positive example is used as a class name
in the new classifier. There is no limit to the number of positive example files that can
be uploaded in a single session. The compressed file containing negative examples is
not used to create a class inside the classifier but specifies what the new classifier is
not. Negative examples should contain images that do not depict the subject of any of
the positive examples. Only one file with negative examples can be specified in a work
session.
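The packing of example images into class-named archives can be scripted. The sketch below is a minimal stdlib-only helper, assuming a hypothetical on-disk layout with one folder per class (and a folder named `negative` holding the negative examples); the archive file-name prefix is what becomes the class name.

```python
import os
import zipfile

def pack_examples(image_root, out_dir):
    """Pack each class folder under image_root into a Watson-style example archive.

    Hypothetical layout: 'healthy/' -> 'healthy_positive_examples.zip' (the
    file-name prefix becomes the class name); a folder named 'negative/' is
    packed as 'negative_examples.zip' instead of defining a class.
    """
    os.makedirs(out_dir, exist_ok=True)
    archives = []
    for name in sorted(os.listdir(image_root)):
        folder = os.path.join(image_root, name)
        if not os.path.isdir(folder):
            continue
        zip_name = ("negative_examples.zip" if name == "negative"
                    else f"{name}_positive_examples.zip")
        zip_path = os.path.join(out_dir, zip_name)
        with zipfile.ZipFile(zip_path, "w") as zf:
            for image in sorted(os.listdir(folder)):
                if image.lower().endswith((".jpg", ".png")):
                    # store files flat: only the archive prefix carries meaning
                    zf.write(os.path.join(folder, image), arcname=image)
        archives.append(zip_path)
    return archives
```

The resulting archives would then be uploaded when the classifier is created, either through the Watson Studio interface or through the service API.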
The use of compressed files is recommended when using the Watson Studio interface to define the classifier, as this method allows faster file upload. However, the system is not limited to compressed files: it also allows uploading individual image files, which enables the creation or modification of classifiers through an external program.
Figure 6 shows the steps for creating and learning a specialized classifier for visual
recognition.

Fig. 6. Creating a classifier using Watson machine learning-based visual recognition

Figure 6 presents the following steps:

1. Preparing data for training. Uploaded image files are used as positive and negative
examples for the training process.
2. Creating and training the classifier. In order to create the custom classifier the location
of training images must be specified and the visual recognition API is called.
3. Testing the custom classifier. In this step image classification is performed using the
new custom classifier and the classifier’s performance is measured.

4 The Proposed Quality Control and Monitoring System


Architecture
The developed system is based on a mobile platform on which an Aptina camera [30] is mounted. The camera is connected to a Raspberry Pi system that acquires images, determines the position of the platform and transmits this information via Wi-Fi to a server running an application connected to the Watson service.
The mobile platform can navigate independently in the greenhouse and the camera
can be oriented using a pan/tilt mechanism so that it can take multiple images to cover the
entire length of the plants. When an image is acquired, the Raspberry Pi system sends to the server application the acquired image, the position of the mobile platform and a timestamp identifying the time of acquisition.
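The paper does not specify the wire format of this message; one plausible encoding, with field names that are our assumption rather than the authors', is a JSON payload carrying the JPEG bytes as Base64 together with the platform position and an ISO-8601 timestamp:

```python
import base64
import json
from datetime import datetime, timezone

def build_acquisition_message(image_bytes, x, y, pan, tilt, when=None):
    """Encode one acquisition as a JSON string (field names are illustrative)."""
    when = when or datetime.now(timezone.utc)
    return json.dumps({
        "image_jpeg_b64": base64.b64encode(image_bytes).decode("ascii"),
        "position": {"x": x, "y": y, "pan": pan, "tilt": tilt},
        "timestamp": when.isoformat(),
    })

def parse_acquisition_message(message):
    """Decode the JSON payload back into image bytes, position and timestamp."""
    data = json.loads(message)
    image = base64.b64decode(data["image_jpeg_b64"])
    return image, data["position"], data["timestamp"]
```

JSON keeps the payload language-neutral between the Python-capable Raspberry Pi and the C++ Builder server application, at the cost of the ~33% Base64 size overhead per image.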
The server application (written in C++ Builder) then contacts the Watson visual recognition service to detect and classify the objects in the image. For this purpose, two classifiers were trained in Watson Visual Recognition: a classifier for fruit recognition
and a classifier for plant recognition. Each of the two classifiers has classes for the
healthy plant as well as for the plant affected by various diseases.
When it is detected that there is a sick plant or fruit in the image, the acquired image,
the position of the platform and the timestamp are saved by the application on the server
in a database containing alarm events. This allows an operator to inspect the image and
validate whether the recognition was successful or not. If the recognition was successful,
human intervention in the greenhouse is required to remove or treat the affected plant. If
it is decided that the plant will be treated (there may be a larger area of affected plants)
the mobile platform system can be programmed so that for a specified period of days
it does not take pictures from that area; this will avoid multiple reporting of the same
problem. If the recognition was not successful and the visual recognition system therefore reported false information, the falsely classified image can be added back to the classifier, which is then retrained to improve its performance using the new image. Figure 7 shows how the system works.

Fig. 7. The data exchange between system components
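The alarm-handling rules described above (store image, position and timestamp on detection; mute an area for a number of days while it is treated) can be sketched as a small event store. Class and field names are illustrative, not taken from the authors' implementation:

```python
from datetime import timedelta

class AlarmLog:
    """Record disease detections and suppress re-reporting of treated areas."""

    def __init__(self):
        self.events = []       # validated alarm events
        self.suppressed = {}   # area id -> datetime until which it is muted

    def suppress_area(self, area, days, now):
        """Mute an area for the given number of days (e.g. while it is treated)."""
        self.suppressed[area] = now + timedelta(days=days)

    def report(self, area, image, position, now):
        """Store a detection unless the area is currently suppressed.

        Returns True when a new alarm event was recorded."""
        until = self.suppressed.get(area)
        if until is not None and now < until:
            return False       # avoid multiple reporting of the same problem
        self.events.append({"area": area, "image": image,
                            "position": position, "timestamp": now})
        return True
```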

The flow chart in Fig. 8 summarizes the main activities of the image processing
service. The image is sent to the Watson visual recognition service and processed as
follows:

1. First step: the image is received.



Fig. 8. Flow diagram for the image processing operations

2. The second step has two activities:

(a) Call the Watson service in order to classify the objects.


(b) Call the Watson service in order to detect the position of the object in the image.

3. Step three has two activities:

(a) VisualClassification contains the JSON (JavaScript Object Notation


– used to interchange data) representation of the classified objects.
(b) DetectedTomato contains the JSON representation of the tomatoes detected
in the image.

4. Generate keywords and the associated score.
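Step 4 reduces the classification JSON to keyword/score pairs. The sketch below assumes a response laid out like the Watson Visual Recognition V3 JSON (images → classifiers → classes with confidence scores); the concrete class names in the sample are our illustration:

```python
import json

def keywords_with_scores(response_json, threshold=0.5):
    """Flatten a Watson-style classification response into (keyword, score) pairs.

    Keeps only classes whose confidence reaches `threshold`; the nesting mirrors
    the Visual Recognition V3 response (our assumption about the exact fields).
    """
    result = []
    data = json.loads(response_json)
    for image in data.get("images", []):
        for classifier in image.get("classifiers", []):
            for cls in classifier.get("classes", []):
                if cls["score"] >= threshold:
                    result.append((cls["class"], cls["score"]))
    return sorted(result, key=lambda pair: -pair[1])

# Illustrative response for one image scored by the fruit classifier.
sample = json.dumps({
    "images": [{"classifiers": [{"name": "tomato_fruit", "classes": [
        {"class": "healthy", "score": 0.12},
        {"class": "blossom_end_rot", "score": 0.91},
    ]}]}]
})
```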

5 Experimental Results, Conclusion and Future Work


The tests were performed using a single tomato variety, namely round tomatoes. The image database was created using public images from the Internet, containing 296 different images (single tomatoes or clusters), of which about 192 images represent diseases.
Two classifiers were built, one for fruit and one for plants. Each classifier was created
based on positive images for the healthy plant, images with negative examples, and also
images that represented different diseases for both fruits and the plant as follows.
The following diseases were considered for the fruit; the numbers in parentheses represent the number of images used for training:

• Blossom end rot (77)



• Fruit cracks
• Sunscald
• Catfacing
• Bacterial canker
• Anthracnose
• Viral diseases

The following diseases were considered for the plant:

• Leaf roll (85)


• Early blight
• Septoria leaf spot
• Fusarium wilt
• Verticillium wilt
• Powdery mildew

For each disease, 15 test images were used. The tests showed that healthy plants or fruits are recognized with a score higher than 75% if at least 50 positive images are used; however, this score is relative and depends on the quality of the images used both to train the classifier and during recognition (resolution, the position of the affected part of the plant in the image). For the set of images used for training (77 and 85 for diseases, and 50 and 50 for good fruits/plants) the score was over 90%.
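The reported percentages correspond to a per-class recognition rate over the test images. As a stand-in for the evaluation procedure (which the paper does not detail), the sketch below computes that rate from (expected, predicted) label pairs:

```python
from collections import defaultdict

def recognition_rates(results):
    """Per-class recognition rate from (expected_label, predicted_label) pairs."""
    totals = defaultdict(int)
    hits = defaultdict(int)
    for expected, predicted in results:
        totals[expected] += 1
        if predicted == expected:
            hits[expected] += 1
    return {label: hits[label] / totals[label] for label in totals}
```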
Figure 9 below shows an example of recognition for a healthy tomato and a tomato
affected by blossom end rot.

Fig. 9. The results generated by Watson Visual Recognition service

We can conclude that the system provides accurate results if at least 50 good-quality positive images are used to train the classifier, both for defining the healthy plant and for each disease to be detected.

As for further development, the system could be extended to detect a greater number of disease types (currently only one disease type each for the fruit and the plant has been integrated, due to the lack of images). Future research will also use the AI-based vision system to classify and locate fruits for automatic harvesting.

References
1. FAO (Food and Agriculture Organization of the United Nations): FAOSTAT Database (2017).
http://faostat3.fao.org/
2. CIA (Central Intelligence Agency): The World Factbook, Field Listing: Exports – Com-
modities, CIA, Washington, DC (2017). https://www.cia.gov/library/publications/the-world-
factbook/fields/2049.html
3. USDA-AMS (United States Department of Agriculture, Agricultural Marketing Service):
Tomatoes, USDA-AMS, Washington, DC (2017). http://www.agmrc.org/commodities-pro
ducts/vegetables/tomatoes
4. Wu, F., Guan, Z., Suh, D.H.: The effects of tomato suspension agreements on market price dynamics and farm revenue. Applied Economic Perspectives and Policy, forthcoming (2018). https://doi.org/10.1093/aepp/ppx029
5. USDA-ERS (United States Department of Agriculture, Economic Research Service): Toma-
toes, USDA-ERS, Washington, DC (2017). https://www.ers.usda.gov/topics/crops/vegeta
bles-pulses/tomatoes.aspx
6. USDA-NASS (United States Department of Agriculture, National Agricultural Statistics Ser-
vice): Data and Statistics, USDA-NASS, Washington, DC (2016). https://www.nass.usda.
gov/Data_and_Statistics/index.php
7. European Commission, https://ec.europa.eu/info/sites/info/files/food-farming-fisheries/far
ming/documents/tomato-dashboard_en.pdf
8. European Commission, https://ec.europa.eu/info/sites/info/files/food-farming-fisheries/far
ming/documents/tomatoes-trade_en.pdf
9. European Commission, https://ec.europa.eu/info/sites/info/files/food-farming-fisheries/farming/documents/tomatoes-production_en.pdf
10. Iddio, E., Wang, L., Thomas, Y., McMorrow, G., Denzer, A.: Energy efficient operation and modeling for greenhouses: a literature review. Renew. Sustain. Energy Rev. 117, 109480 (2020)
11. Yano, A., Cossu, M.: Energy sustainable greenhouse crop cultivation using photovoltaic technologies. Renew. Sustain. Energy Rev. 109, 116–137 (2019)
12. Jha, K., Doshi, A., Patel, P., Shah, M.: A comprehensive review on automation in agriculture using artificial intelligence. Artif. Intell. Agric. 2, 1–12 (2019)
13. Alper Akkaş, M., Sokullu, R.: An IoT-based greenhouse monitoring system with Micaz motes.
Procedia Comput. Sci. 113(2017), 603–608 (2017)
14. Postolache, O., Pereira, J.M., Girão, P.S., Monteiro, A.A.: Greenhouse environment: air and
water monitoring. In: Mukhopadhyay, S. (ed) Smart Sensing Technology for Agriculture and
Environmental Monitoring. Lecture Notes in Electrical Engineering, vol. 146, pp. 81–102.
Springer, Berlin, Heidelberg (2012)
15. Drakulić, U., Mujčić, E.: Remote monitoring and control system for greenhouse based on
IoT. In: Avdaković, S., Mujčić, A., Mujezinović, A., Uzunović, T., Volić, I. (eds) Advanced
Technologies, Systems, and Applications IV, Proceedings of the International Symposium on
Innovative and Interdisciplinary Applications of Advanced Technologies (IAT 2019), Lecture
Notes in Networks and Systems, vol 83, pp. 481–495. Springer, Cham (2020)
Using Cognitive Technologies as Cloud Services for Product Quality Control 77

16. Wu, Y., Li, L., Li, M., Zhang, M., Sun, H., Sygrimis, N., Lai, W.: Remote-control system for
greenhouse based on open source hardware. IFAC-PapersOnLine 52(30), 178–183 (2019)
17. Suryawanshi, S., Ramasamy, S., Umashankar, S., Sanjeevikumar, P.: Design and implemen-
tation of solar-powered low-cost model for greenhouse system. In: SenGupta, S., Zobaa, A.,
Sherpa, K., Bhoi, A. (eds.) Advances in Smart Grid and Renewable Energy. Lecture Notes in
Electrical Engineering, vol. 435. Springer, Singapore (2018)
18. Reka, S.S., Chezian, S.S., Chandra, B.: A novel approach of IoT-based smart greenhouse
farming system. In: Drück, H., Pillai, R., Tharian, M., Majeed, A. (eds.) Green Buildings
and Sustainable Engineering, pp. 227–235. Springer Transactions in Civil and Environmental
Engineering book series, Springer, Singapore (2019)
19. Carvajal-Arango, R., Zuluaga-Holguín, D., Mejía-Gutiérrez, R.: A systems-engineering app-
roach for virtual/real analysis and validation of an automated greenhouse irrigation system.
Int. J. Interact. Des. Manuf. 10, 355–367 (2016). https://doi.org/10.1007/s12008-014-0243-2
20. Sivagami, A., Hareeshvare, U., Maheshwar, S., et al.: Automated irrigation system for greenhouse monitoring. J. Inst. Eng. (India): Series A 99, 183–191 (2018). https://doi.org/10.1007/s40030-018-0264-0
21. Mohandas, P., Sangaiah, A.K., Abraham, A., Anni, J.S.: An automated irrigation system based
on a low-cost microcontroller for tomato production in South India. In: Abraham A., Falcon
R., Koeppen M. (eds) Computational Intelligence in Wireless Sensor Networks. Studies in
Computational Intelligence, vol. 676, pp. 49–71. Springer, Cham (2017)
22. Joshi, V.B., Goudar, R.H.: IoT-based automated solution to irrigation: an approach to control
electric motors through android phones. In: Sa, P., Bakshi, S., Hatzilygeroudis, I., Sahoo, M.
(eds.) Recent Findings in Intelligent Computing Techniques, Advances in Intelligent Systems
and Computing, vol. 707, pp. 323–330. Springer, Singapore (2019)
23. Hatou, K., Sugiyama, T., Hashimoto, Y., Matsuura, H.: Range image analysis for the greenhouse automation in intelligent plant factory. IFAC Proceedings Volumes 29(1), 962–967 (1996)
24. Tian, H., Wang, T., Liu, Y., Qiao, X., Li, Y.: Computer vision technology in agricultural automation - a review. Inf. Process. Agric. 7(1), 1–19 (2020)
25. Yang, I.C., Hsieh, K.-W., Tsai, C.-Y., Huang, Y.-I., Chen, Y.-L., Chen, S.: Development of an automation system for greenhouse seedling production management using radio-frequency-identification and local remote sensing techniques. Eng. Agric. Environ. Food 7(1), 52–58 (2014)
26. McCarthy, C.L., Hancock, N.H., Raine, S.R.: Applied machine vision of plants: a review
with implications for field deployment in automated farming operations, Intell. Serv. Robot.
3, 209–217 (2010). https://doi.org/10.1007/s11370-010-0075-2
27. Tejada, V.F., Stoelen, M.F., Kusnierek, K., et al.: Proof-of-concept robot platform for exploring
automated harvesting of sugar snap peas. Precision Agric. 18, 952–972 (2017). https://doi.
org/10.1007/s11119-017-9538-1
28. Li, X.Y., Chiu, Y.J., Mu, H.: Design and analysis of greenhouse automated guided vehicle. In:
Krömer, P., Zhang, H., Liang, Y., Pan, J.S. (eds) Proceedings of the 5th Euro-China Conference
on Intelligent Data Analysis and Applications (ECC 2018), Advances in Intelligent Systems
and Computing, vol. 891, pp. 256–263. Springer Cham (2019)
29. Koleva, K., Toteva-Lyutova, P.: Greenhouses automation as an illustration of interdisciplinar-
ity in the creation of technical innovations. Procedia Manuf. 22, 923–930 (2018)
30. On Semiconductors: Image Sensors and Processors (2020). https://www.onsemi.com/pro
ducts/sensors/image-sensors-processors. Accessed May 2020
Digital Twins in Manufacturing
and Beyond
Past and Future Perspectives on Digital Twin
Research at SOHOMA

K. Kruger1(B) , A. J. H. Redelinghuys1 , A. H. Basson1 , and O. Cardin2


1 Department of Mechanical and Mechatronic Engineering, Stellenbosch University,
Stellenbosch 7600, South Africa
kkruger@sun.ac.za
2 LS2N, UMR CNRS 6004, Université de Nantes, IUT de Nantes, 44 470 Carquefou, France

Abstract. The concept of the Digital Twin has attracted notable research atten-
tion in recent years and has emerged as a prominent theme at recent editions of the
SOHOMA workshop. This paper aims to provide perspectives on past and future
Digital Twin research within the SOHOMA context. The paper describes the evo-
lution of the Digital Twin concept over the past decade of SOHOMA workshops
and reviews the contributions in terms of functions, architectures and implemen-
tation technologies. Considering the future of Digital Twin research within the
SOHOMA context, the paper identifies key enabling factors and challenges, and
proposes a strategic research focus to promote future impact.

Keywords: Digital twin · Industry 4.0 · Holonic manufacturing systems

1 Introduction
One of the first introductions to the elements of a Digital Twin was by Dr. Michael
Grieves in 2002 in a University of Michigan presentation to industry for the formation
of a Product Lifecycle Management (PLM) centre [1]. The term Digital Twin was first
introduced as the “Conceptual Ideal for PLM” and consisted of all the currently accepted
elements of a DT – real space, virtual space and a connection with data/information flow
between the virtual and real space. Around 2011, the term Digital Twin was introduced
in Virtually Perfect: Driving Innovative and Lean Products through Product Lifecycle
Management and was attributed to John Vickers of NASA [2]. However, some believe that DT technology has its roots in a concept practiced since the 1960s, when NASA applied basic twinning ideas by building physically duplicated systems at ground level to match the systems in space; a well-known example was Apollo 13 in 1970. Since those early beginnings, the DT became one of the top strategic technology trends in 2017 [1] and has been named one of Gartner's Top 10 Strategic Technology Trends for several consecutive years. The Digital Twin has been identified as a key enabler for
Industry 4.0, as it constitutes a cornerstone for the development and effective integration
of cyber-physical production systems.
The most widely-accepted definition of a DT, as introduced by NASA in [3], is: “an
integrated multi-physics, multi-scale, probabilistic simulation of a system that uses the

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021


T. Borangiu et al. (Eds.): SOHOMA 2020, SCI 952, pp. 81–98, 2021.
https://doi.org/10.1007/978-3-030-69373-2_5

best available physical models, sensor updates, fleet history, etc. to mirror the life of
its flying twin”. Grieves and Vickers (2017) distinguish between DT instances and aggregates. A DT instance (DTI) mirrors its physical twin during its entire lifespan.
A DT aggregate (DTA), on the other hand, is not directly associated with a physical
counterpart, but is the aggregation of some DTIs and other DTAs. While a DTI can be
an independent structure, a DTA cannot. DTIs can, for example, be interrogated by a
DTA for their current state [4].
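The instance/aggregate distinction can be summarized as a small object model; class and method names here are our illustration, not a published API. A DTI carries the state mirrored from its physical twin, while a DTA has no physical counterpart and answers state queries by interrogating its members:

```python
class DigitalTwinInstance:
    """Mirrors one physical twin; holds the state received from it."""

    def __init__(self, twin_id):
        self.twin_id = twin_id
        self.state = {}

    def update_from_physical(self, state):
        # In practice this would be pushed by sensors or an edge gateway.
        self.state = dict(state)

    def current_state(self):
        return {self.twin_id: self.state}


class DigitalTwinAggregate:
    """No physical counterpart: aggregates DTIs and other DTAs."""

    def __init__(self, members):
        self.members = list(members)

    def current_state(self):
        combined = {}
        for member in self.members:   # interrogate each DTI/DTA for its state
            combined.update(member.current_state())
        return combined
```

Because a DTA exposes the same `current_state` query as a DTI, aggregates can be nested (a production line aggregating cell aggregates aggregating machine instances).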
A DT ideally creates a highly accurate digital model of the physical system in
cyberspace. Through the quality and fidelity of information, the DT can accurately
replicate and simulate the behaviour of the physical system [2, 5]. According to Tao
et al. [6], a DT can also provide a digital footprint of products by integrating geometry,
structure, behaviour, rules and functional properties. In the context of designing, setting
up and configuring an automation system for manufacturing, a DT is a set of computer
models that provide the means to design, validate and optimize a part, a product, a man-
ufacturing process or a production facility in the cyberspace. A DT enables flexibility in
manufacturing by reducing the required time for product design, manufacturing process
design, system planning design and production facility design [7].
The combination of the physical production system and its corresponding DT are the
fundamental building blocks of fully connected and flexible systems that can learn and
adapt to new demands. Ideas about the role of the DT are still developing at this stage.
Some of the roles postulated in recent literature are [7–10]: remote monitoring; predictive
analytics; simulating future behaviour; optimization and validation; documentation and
communication; and connection of disparate systems.
DTs have attracted significant research interest in various domains, also beyond
production and manufacturing. The importance and potential of the DT concept has been
reflected in the increasing number of DT-related contributions to the recent editions of
the SOHOMA workshops.
This paper reviews the research that has been presented at SOHOMA in the past
in papers that have considered or developed the DT concept. The review investigates
the needs and objectives for which DTs are intended, the architectures that have been
developed, and the implementation technologies that have been used. The paper then
turns its focus to the future – aiming to understand and predict the trajectory that DT
research might (or should) follow in future editions of SOHOMA.
The paper presents the past and future perspectives of DT research at SOHOMA in
two sections: Sect. 2 reviews the DT-related contributions in past SOHOMA workshops,
and Sect. 3 considers a future trajectory for DT research within the SOHOMA workshops
and community. A conclusion is offered in Sect. 4.

2 Review of Past Contributions


This section reviews the DT-related research contributions that were presented at
SOHOMA to date. The section first investigates the evolution of DT research at
SOHOMA, before offering a summary of the DT functions, architectures and imple-
mentation technologies that were considered in the past papers.
Past and Future Perspectives on Digital Twin Research at SOHOMA 83

2.1 Evolution of Digital Twin Contributions

While only emerging recently, DT research has garnered notable research interest in the
SOHOMA community. This section quantifies this emergence through an analysis of
DT-related contributions, and provides a chronological description of the development
of DT research at past editions of the SOHOMA workshop.

Analysis and Overview


DT research has only emerged recently in the SOHOMA community, with the concept
only mentioned for the first time in the proceedings of SOHOMA 2017. However, the
concept has gained notable traction within the research community and has subsequently
been mentioned in, and has been the primary focus of, several contributions to recent
SOHOMA workshops – as is shown in Fig. 1. Similarly, the percentage of papers at
recent editions of SOHOMA that mention DTs has increased notably – as is shown in
Fig. 2.

Fig. 1. DT-related papers at recent editions of SOHOMA

The growth of DT research, as is evident in Fig. 1 and Fig. 2, is the result of contribu-
tions from several research teams from different countries. Teams are based in France,
Portugal, Romania, Slovenia, South Africa and the United Kingdom. While there has
been a strong focus on applications within the manufacturing domain, DT develop-
ments and implementations for other domains (e.g. maritime, and building and asset
management) have also been presented.

2011–2017: Developing the Digital Twin idea


Upon review of papers of the early editions of SOHOMA, it is evident that, while the
term Digital Twin was not yet used, some research already started to develop the idea.

Fig. 2. The increase in the percentage of papers that refer to DTs in SOHOMA editions

The DT idea was presented in the form of a “digital mirror” in [11] and an “observer”
in [12], and as “objects in the real world are linked with the virtual world” in [13].
In the earliest mentions, DT is considered closely integrated, and even synonymous,
with discrete event simulation. In retrospect, the session at SOHOMA 2016 on Virtu-
alization and Simulation in Computing-Oriented Industry and Services (see [14–17])
focused strongly on this aspect of the DT (under the guise of different terminology). The
term DT first appears in the proceedings of SOHOMA 2017, as used by both [18] and
[19]. In both papers, the function of mimicking the behaviour of the physical counterpart
is highlighted.

2018: The Introduction of ARTI


Prior to the 2018 edition, it was still unclear how the DT concept could be integrated
with, and supported by, the holonic systems approach that is so central to SOHOMA.
However, the role of the DT within the holonic systems paradigm was clearly defined in
the keynote presentation and subsequent paper by Paul Valckenaers at SOHOMA 2018
[20]. Organized jointly with the IFAC INCOM 2018 conference, this plenary session can
be considered an important vector of visibility for the workshop series ever since.
The keynote presented the ARTI architecture – a reconsideration of its well-established
predecessor, PROSA – and described the role of DTs in providing software systems (e.g.
for control and planning) with access to the real world (or, at least, the world-of-interest)
[21].
Without explicitly mentioning the notion of the DT, this architecture exhibited, for
the first time in the community, the difference between beings and agents. Beings are
meant to replicate how it is (i.e. reflect reality), while agents are meant to determine
how it will be (i.e. make decisions). It thereby showed that these notions were already
present when building multi-agent architectures, but that decoupling them brings many
benefits: when control relies on the data available in the DT, the DT is no longer serving
control alone and can provide many more benefits. ARTI, by pointing out the difference
between beings and agents, also implies that computer agents are not necessarily the
development primitives of DTs. This role of the DT, and
its importance as an enabler for cyber-physical systems within Industry 4.0, is supported
and further evaluated by a collection of reviews from the SOHOMA community [22].
Furthermore, many of the proposals of the following years, detailed in the next section,
build on this position.

2018–2020: Digital Twin Developments and Early Applications


Apart from the ARTI discussion, the 2018 edition of SOHOMA included three more
papers that considered or extended the role of the DT [23–25]. The DT is considered to
evolve with the physical twin, as a non-static virtual model that is updated with sensor
data over the lifetime of the physical component. Furthermore, the DT is envisioned to
play an important role in supporting local and global decision making, through mech-
anisms such as fault detection and diagnosis. [24] was the first paper at SOHOMA to
present an implementation architecture for the DT. The paper presented the Six-Layer
Architecture for Digital Twins (SLADT), which makes provision for the connection of
the DT to the physical twin, the conversion of local data to information on the cloud,
and for emulation and simulation tools within the DT.
SOHOMA 2019 saw a large growth in the number of papers that focus on DTs. These
papers considered the role of DTs in diverse contexts beyond manufacturing, ranging
from building information management to maritime vessels. The 2019 edition of the
workshop also hosted the first session dedicated to DTs for cyber-physical systems,
which included six papers. Furthermore, two papers within the Digital Transformation
in Construction, Building Management and Smart City Services session also had a strong
focus on DTs.
DTs have continued to attract major interest in the 2020 edition of SOHOMA. The
dedicated special session received seven submissions directly focusing on the topic,
while 46% of all submissions refer to the concept of the DT. This edition also saw five
different research teams concurrently implementing prototype DTs on their own research
platforms – a clear mark of interest in the concept. New considerations, such
as the place of humans in the DT [26], are also gaining interest.

2.2 Functions of Digital Twins


Upon review of past DT publications at SOHOMA, the set of envisioned functions of
DTs can be formulated. [20] postulated that the primary function of DTs should be to
reflect the reality of their physical counterpart as a single source of truth. This then
represents the value of DTs, because an accurate and up-to-date reflection of reality is
extremely valuable and plays a critical role in effective control, scheduling and planning
[27]. In order to perform this primary function, DT architectures and implementations
must provide key supporting functions, such as:

• Support data and information exchange between physical and digital worlds [24].
• Gather and aggregate data from the physical world, from multiple sources [28].
• Couple the virtual representation to their physical counterpart [27].

• Store historical data of the physical twin over its entire lifespan [28].

As a result of these supporting functions, DTs can offer several benefits:

• Enhanced visibility of (and insight into) performance/operation of the physical
twin [27].
• Integration of the physical twin, and supported interoperability for collaboration with
different digital systems and services [27, 29].
• Organization of systems to manage complexity [30].

Building on the above-mentioned supporting functions and benefits, the papers fre-
quently refer to the high-level functions, or roles, that DTs are envisioned to fulfil. From
[24, 28, 31] these roles can be summarized as follows: remote monitoring, predictive
maintenance, simulation of “what-if” scenarios, and planning and optimization.
Three regimes can be distinguished in the above four roles: Firstly, some roles require
an emulation of the physical twin (i.e. remote monitoring that reflects the current opera-
tion). Secondly, some roles rely on a simulation model of the physical twin to predict its
future behaviour, either using historical information (e.g. predictive maintenance) or a
combination of historical information and chosen scenarios (e.g. planning and “what-if”
simulations). The third regime, control, is also focused on the future, but is aimed at
affecting the physical twin’s behaviour (e.g. planning and optimization). The simulation
regime contains the roles that most significantly distinguish a DT from a supervisory
control and data acquisition (or SCADA) system. There is some disagreement amongst
researchers whether the simulation and/or control regimes should be considered to be
part of the DT, as will be pointed out below.

2.3 Digital Twin Architectures

The advancement of DT research and the wide adoption of DT solutions benefit greatly
from the development of reference architectures to guide the design and implementation.
In recent editions of SOHOMA, several papers presented such reference architectures.
As mentioned in Sect. 2.1, the introduction of the ARTI architecture presented a blueprint
for the creation and integration of DTs within holonic systems. ARTI requires that every
system component be classified along three dimensions: Activities or Resources, Types
or Instances, and Intelligent Beings or Intelligent Agents. While the classifications in the
first two dimensions were already present in ARTI’s predecessor, PROSA, the last classification
represents an important change in approach. The classification allows for the separation
of decision-making and the reflection of reality. Intelligent Agents should encapsulate
the decision-making functionality, while Intelligent Beings should reflect the reality of
their corresponding element in the world-of-interest. Intelligent Beings thus represent
DTs [20, 21].
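The being/agent separation that ARTI prescribes can be illustrated with a minimal sketch; the class names and the dispatch rule below are invented for illustration and are not part of ARTI itself. The Intelligent Being only reflects the state of its resource, while the Intelligent Agent reads that state to make decisions:

```python
class IntelligentBeing:
    """Reflects 'how it is': a digital twin of one resource in the
    world-of-interest. It holds state but makes no decisions."""
    def __init__(self, resource_id: str):
        self.resource_id = resource_id
        self.state = {"busy": False, "queue_length": 0}

    def reflect(self, observation: dict) -> None:
        # Updated from sensors/controllers in the physical world.
        self.state.update(observation)

class IntelligentAgent:
    """Determines 'how it will be': decision making decoupled from
    reality reflection. It reads state from beings but never owns it."""
    def __init__(self, beings: list):
        self.beings = beings

    def dispatch(self) -> str:
        # Illustrative rule: route the next job to the least-loaded
        # idle resource, falling back to all resources if none is idle.
        idle = [b for b in self.beings if not b.state["busy"]]
        candidates = idle or self.beings
        best = min(candidates, key=lambda b: b.state["queue_length"])
        return best.resource_id
```

Swapping the dispatch rule changes the agent only; the beings, and hence the DT, remain untouched, which is the decoupling benefit argued above.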
The architectures developed in [24, 27–29] have a shared characteristic – these archi-
tectures aim to encapsulate the functionality of the DT in layers. This clear encapsulation
is arguably the result of the holonic systems influence in their design. The architectures
developed in [24] and [28] (as illustrated in Fig. 3 and Fig. 4, respectively) have clearly

defined functionality for each of the architecture layers. A similar, yet more detailed and
possibly more context-specific architecture was developed in [27].

Fig. 3. Six-Layer Architecture for Digital Twins [24]

The architectures developed in [24] and [28] propose very similar functionality, as
encapsulated within each of the defined layers of the DT. At the lowest level of these
architectures are the interfaces to the physical twin, where data is gathered through smart
sensors, embedded devices, low-level controllers and data acquisition systems. In both
cases, Open Platform Communications Unified Architecture (OPC UA) (discussed in
Sect. 2.4) is proposed for the communication of this gathered data to the cyber levels
of the architecture – in essence, Layer 3 in [24] and the Data transmission layer in
[28] are equivalent in their functionality. Both architectures emphasize the need for data
aggregation, as achieved in Layer 4 in [24] and the Data update and aggregation layer
in [28]. This function aims to convert raw sensed data into contextualized information
and, in the process, reduces the amount of data that must be processed and analysed
within the DT. Database storage for historical information is used to support the highest
level of functionality in both architectures. While [28] directly specifies this level in
the architecture as Analysis and decision-making, [24] infers this function by providing
decision-makers with access to emulation and simulation functions that build on the DT
data.
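The data aggregation function described above can be illustrated with a small sketch; the field names and the alarm threshold are invented for illustration. Many raw sensor samples are condensed into one contextualized record before being passed to the analysis layer:

```python
from statistics import mean

def aggregate_window(samples: list[float], limit: float) -> dict:
    """Condense a window of raw sensor samples into one contextualized
    record, reducing the data volume the upper DT layers must process."""
    return {
        "n_samples": len(samples),
        "mean": round(mean(samples), 2),
        "peak": max(samples),
        "alarm": max(samples) > limit,  # context added by comparing to a limit
    }
```

A window of hundreds of raw readings thus reaches the decision-making layer as a single record carrying the information that actually matters.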
When comparing these architectures with the regimes mentioned in the previous
section, it is notable that the top layer in [28] emphasises the simulation regime, but also
includes the control regime and explicitly indicates the functionality
to implement control. The sixth layer in [24] explicitly mentions the emulation and

Fig. 4. Digital Twin control architecture [28]

simulation regimes, but indirectly implies that a control regime is also provided through
bidirectional information flow.
Another architecture for the DT, developed in [32], was guided by the ARTI architec-
ture. This architecture, developed in the context of enabling energy-aware resources, is
illustrated in Fig. 5. The architecture makes a clear distinction between the functions of
reality reflection and decision making. Reality reflection is achieved through Intelligent
Beings, which in this case constitutes the DT. Decision-making functionality is mapped
to the Intelligent Agents. In this architecture, the DT is focused solely on the emulation
regime, with the simulation and control regimes present, but outside the DT.
The Six-Layer Architecture for Digital Twins (SLADT) developed in [24] was
extended to support aggregation in [30]. The SLADT with aggregation (SLADTA)
allows for the creation of a hierarchical (or even hybrid) system of DTs, as is shown in
Fig. 6. SLADTA aims to support the scalability of DT solutions, by reducing complexity
through encapsulation and modularity.

2.4 Implementation Technologies


Several of the reviewed DT papers have made valuable contributions through case study
implementations of their developed DT architectures. These implementations provide
valuable insight into the technologies that have been, and may still be, considered for
realizing the various functions of the DT. According to these functions, the technologies

Fig. 5. Energy-aware resource framework for Digital Twin [32]

Fig. 6. Six-Layer Architecture for Digital Twins with Aggregation [30]

are identified for realizing communication, data storage, and simulation and analysis in
DT implementations.

Communication
Upon inspection of the presented DT implementations, it is clear that communication
within these implementations generally occurs over two interfaces: the interface between
the real and virtual worlds (i.e. between the physical twin and DT), and the interface

between digital systems/processes that are local and those that reside on the internet
(or cloud). [33] argued that at both these communication interfaces there exists a need
for heterogeneous communication to support the vast variety of devices, software, and
legacy systems.
For the data exchange between the physical twin and DT, Open Platform Communica-
tions Unified Architecture (OPC UA) has been used. [24] identified the vendor-neutrality
and security of OPC UA as a valuable characteristic for DT implementation within the con-
text of Industry 4.0. [27] and [32] also reported using Modbus TCP for communication
at this level in their implementation, which is also vendor neutral. Such a bus communi-
cation would be particularly appropriate if the DT interfaces directly with the sensors,
and not through a controller in the physical twin.
For the communication between the local components of the DT digital implemen-
tation and the internet or cloud, several technologies appear to be suitable. [27] indicated
that the Devices Profile for Web Services (DPWS) and Representational State Transfer
(REST) interfaces were used in their case study implementation, with data formatted in
the JavaScript Object Notation (JSON). [33] identified Message Queue Telemetry Trans-
port (MQTT) as a suitable communication protocol, while [24] used the Structured Query
Language (SQL) to communicate with a cloud-based database.
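As a small illustration of this cloud-side exchange, a DT state update might be serialized as JSON and published on an MQTT topic. The topic hierarchy and field names below are assumptions for illustration; the actual publish call (e.g. via an MQTT client library) is shown only as a comment so the sketch stays self-contained:

```python
import json
import time

def dt_update_message(plant: str, cell: str, twin_id: str,
                      state: dict) -> tuple[str, str]:
    """Build an (MQTT topic, JSON payload) pair for a DT state update.
    The plant/cell/twin topic hierarchy is an illustrative convention."""
    topic = f"dt/{plant}/{cell}/{twin_id}/state"
    payload = json.dumps({
        "twin_id": twin_id,
        "timestamp": time.time(),
        "state": state,
    })
    return topic, payload

# With a real broker one would then publish the pair, for example:
#   client = mqtt.Client(); client.connect(broker_host)
#   client.publish(topic, payload, qos=1)
```

Keeping the message construction separate from the transport makes it straightforward to swap MQTT for REST or DPWS at the same interface.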

Data Storage
There are many suitable cloud-based database options available to the developers of
DTs. Among these options, [24] used the Google Cloud Platform and [28] selected the
IBM CloudBurst private cloud platform.

Simulation and Analysis


It is expected that the simulation and analysis functionality of DT implementation would
typically be specific to the application context and, as such, the selection of technology
would be dictated by the needs of the application. However, the papers presenting DT
implementation in the manufacturing context utilized similar technology for simulation
and analysis. For simulation (and, in some cases, emulation) several discrete event sim-
ulation software packages were used in the DT implementations: Siemens Tecnomatix
PlantSim [24], Rockwell Arena [32] and FlexSim [33]. In some cases, the discrete event
simulation software is also used for basic analysis. To accomplish optimization within
their analysis, [27] utilized the IBM ILOG CPLEX optimizer engine.
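The event-queue core underlying such discrete event simulation packages can be sketched in a few lines; this is a generic skeleton under simplifying assumptions (a single station with a fixed service time), not the internals of any of the cited tools:

```python
import heapq

def simulate_station(job_arrivals: list[float],
                     service_time: float) -> list[float]:
    """Single-station discrete event simulation: return the completion
    time of each job, given arrival times and a fixed service time."""
    completions = []
    events = [(t, "arrive") for t in job_arrivals]
    heapq.heapify(events)          # time-ordered event queue
    free_at = 0.0                  # time when the station next becomes idle
    while events:
        t, kind = heapq.heappop(events)
        if kind == "arrive":
            start = max(t, free_at)        # wait if the station is busy
            free_at = start + service_time
            heapq.heappush(events, (free_at, "done"))
        else:  # "done"
            completions.append(t)
    return completions
```

Because the queue is processed in event order rather than in real time, the same skeleton lets a DT replay historical behaviour or fast-forward “what-if” scenarios for its physical twin.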

3 The Future Trajectory of Digital Twin Research

Shifting the focus from the past to the future, this section aims to describe a possible
trajectory of DT research for the coming editions of SOHOMA. It is expected that
DT research has notable momentum and opportunities to build on; however, several
challenges must still be addressed. The section thus highlights the existing enablers and
challenges to DT research, and offers a strategy for ongoing and future DT research
endeavours.

3.1 Enablers

Future DT research is expected to investigate further architectures and applications.
The SOHOMA community’s background in holonic concepts will provide a strong base
for these developments because of the many parallels between holons and DTs: a DT
and its physical twin could be seen as a resource-type holon, while the aggregation
principle behind a DTA is characteristic of holonic systems. As more DT applications
are researched, one can expect one or two architectures to become commonly accepted,
as PROSA and ADACOR had become in holonic systems. One expected trend is that
DTs for increasingly complex systems will be researched, where the concepts in ARTI
can play an important role.
While building on well-established principles of preceding holonic architectures,
the introduction of ARTI managed to reignite the discussion on the development and
implementation of holonic systems. The architecture formally identifies and integrates
the concept of the DT within its description. In effect, the presence of the DT within
ARTI indicates that DT research in fact represents a continuation, rather than a disruption,
within the SOHOMA context. As such, ARTI is expected to enable DT research through
not just an architectural blueprint, but also as a basis for leveraging the valuable preceding
research of SOHOMA.
An important enabler for DT research is the SOHOMA community and the
SOHOMA workshop itself, which represents one of the foremost events for the discus-
sion of DTs between so many international teams. The annual workshops have proven
to be invaluable as opportunities for scientific discussion and collaboration. As men-
tioned in Sect. 2.1, early development of the DT concept occurred in discussions during
workshop sessions. Furthermore, these workshops played a role in the understanding
and refinement of the ARTI architecture. The SOHOMA workshops of the future are
expected to continue to support the development, evaluation and collaboration on DT
research.
In the early years, the SOHOMA workshop series was created around research
teams sharing a strong activity of platform development, aiming at implementing the
holonic principle on size 1 demonstrators. As of 2020, about a dozen platforms have been
developed across the world by regular SOHOMA participants. These laboratory scale
systems are mainly built around automated transfer loops conveying products on pallets
around 3 to 12 automated or manual workstations. The intelligence is either embedded on
the pallet or container (e.g. RFID tag, embedded CPU, Arduino chips, WSN mote, etc.)
or in the product (e.g. RFID tag embedded in wood or textile raw material). For many
years, contributions to SOHOMA were mainly focused on the design of innovative
control and the associated decision-making techniques. However, as prevalent in the
ARTI discussion in Sect. 2.1, it appears that these platforms also constitute ideal support
for the development of innovative DT architectures and proofs of concepts.
Lastly, in addition to their expertise in architecture and platform development, the
research teams involved in the SOHOMA workshop series have a strong industrial back-
ground that enables them to interact effectively with industrial partners – among which
are Bombardier, the French Postal Service and Airbus Group, to name a few. This strong
connection with industry already leads the community to integrate, step by step, the

classical concepts of the workshop series (holonic control, for example) in the organi-
zation of these companies. DTs are currently following the same path, with some early
industrial implementations expected to appear in the next couple of years and represent
one of the future directions of this topic in SOHOMA.

3.2 Challenges
A key challenge exists in the development of DT software. DTs are globally meant to
become middleware interfacing all sorts of applications with the manufacturing system.
This middleware is also meant to integrate reconstruction and forecasting models [32] to
enhance the quality and quantity of available data. However, all these objectives, added
to a context of high variability of architecture and technologies of targeted applications,
tend to increase the complexity of software development. This complexity might become
one of the main barriers to achieving actual DT implementations on industrial scales. The
SOHOMA workshop series is a precursor in a domain that should gain major interest in
the next few years and integrate researchers from communities more oriented towards
software development.
Further challenges relate to achieving real-world impact through industrial appli-
cations of DTs. In this context, the first challenge is the validation and evaluation of
DT functionality and performance. A particular challenge in evaluating alternative DT
architectures and implementations is the lack of widely accepted benchmarks and stan-
dards. The different life-cycle phases of the DT itself, as well as the underlying system
it mirrors, will require different benchmarks. For example, the benchmarks in the initial
development phase of a DT, where the immediate development context (e.g. the team’s
expertise and tools) can be influenced, will be different from the maintenance and support
phase where the future context is less certain. Also, some figures of merit are difficult
to quantify in a research context (e.g. availability) and others are highly dependent on
the expertise and experience of the persons doing the research (e.g. customizability and
maintainability).
In addition to a lack of benchmarks and standards, there is currently still a shortage of
real-world, industrial case studies and applications. [34] mention that the development
of the DT is still in its infancy, as the literature (at the time) mainly presented conceptual
ideas and architectures without concrete case studies. Although there exist many papers
on the DT for a manufacturing system, there is still little concrete evidence of DT imple-
mentation and evaluation in real world settings. Many existing solutions and platforms
already provide communication in one direction for monitoring and analytical purposes,
but there is insufficient evidence of bidirectional communication.
Considering the potential for industrial applications, a key concern is that of cyberse-
curity. With more and more devices migrating towards an interconnected environment,
where devices in the real world are connected to/through the cyberspace – the risk of infil-
tration through cyber-attacks increases. [35] mentioned that “CEOs see cybersecurity as
the number one threat to the global economy over the next five to ten years”. [36] further
mentions that 80% of enterprises are not equipped with cybersecurity prevention and
mitigation plans. Security threats that Industry 4.0 may introduce can be broadly grouped
as [37]: data loss or corruption, intellectual property breaches, and Denial-of-Services
(DoS). It is therefore sensible to integrate cybersecurity best practices when developing

DTs. An interesting development, initiated by General Electric, was the combination of
a DT and industrial control systems to form a Digital Ghost to prevent cyber-attacks.
This initiative aimed to use physics to detect and prevent attacks by sensing anomalies
in processes [38].
Finally, Industry 4.0 and its driving technologies are challenged with the effective
integration of humans. While DT research has made advancements in the development
of architectures, communication frameworks, and analysis tools, there has been a lack
of focus on human factors. These human factors should be considered in two aspects:
how should humans be integrated with DTs, and how can humans be integrated through
DTs. The first aspect relates to the development of suitable interfaces between DT and
human decision makers, while the second explores the DT as a mechanism for making
humans an integral part of intelligent systems as both activity performers and decision
makers. While some advances in this area have been made – especially in the sessions
on human factors at recent editions of SOHOMA – continued research effort is required
to address these important aspects.

3.3 Strategic Focus and Impact


In order to develop architectures, frameworks and methodologies that are suitable for a
wide range of applications, it is necessary to inherently separate the elements that are
context-specific from those that are generic. This separation of concerns is emphasized
by the ARTI architecture and is also evident in each of the architectures reviewed in
Sect. 2.3. The elements in the architecture that are generic contribute to the creation of
the infrastructure of any given application – in the case of DTs, this primarily refers to
the accurate reflection of reality (i.e. the physical twin and the world-of-interest). However critical
this infrastructure may be, it generally does not (strive to) offer much value to a spe-
cific enterprise. The real value-addition is achieved by integrating the context-specific
elements, as adapted to suit the given application context. The question then arises:
should SOHOMA researchers focus on the development of DT infrastructure, or on
value-adding elements?
Unfortunately, this question does not seem to have an “either/or” answer. Develop-
ment of a generic DT infrastructure, through architecture, platform and communication
specification, is crucial for the researcher. However, this may not be enough to convince
the industrial client who must focus on value and results. Alternatively, the strong focus
on addressing individual client requirements through context-specific solutions and tools
will hardly support continued research momentum. Furthermore, this strategy may not
result in achieving the change in thinking that may be desired in industry.
The (hollow and, probably, unsurprising) answer then seems to be for researchers
to strive to find a balance between the development of generic and context-specific ele-
ments within their DT research. This strategy, as has been followed by most of the papers
reviewed here, involves developing a generic architecture, framework or methodology,
and implementing, adapting and extending it in a specific case study. While the appli-
cation may be specific to the case study, it can be used to validate and gain insight into
DT applications in general.
As DTs for increasingly complex systems are researched, the value in collaboration
amongst researchers and with industry will increase. If research groups can share generic

functional elements of DTs, it would enable groups to focus more on the novel aspects
in new architectures and applications. Some agreement on the architectures (particu-
larly the generic functional units), as well as communication protocols and ontologies,
is a necessary precursor for such collaboration. To gain maximum value out of collab-
oration with industry, it is necessary that research teams aim to build on the existing
technology and expertise of industry partners and ensure that developed architectures
support integration with existing tools and solutions. In fact, the realization and mainte-
nance of complex real-world DT applications will be extremely challenging without the
support of industry. To this end, the SOHOMA community, with its network of diverse
industry partners, is an environment conducive to developing the foundation for effective
collaboration.
On a broader scope, the SOHOMA community can drive major evolutions in the concept
and implementation of DTs. Currently, a loose consensus is emerging on the manufac-
turing aspect, with applications on automated production cells. Many research trends
emerging in the community would benefit from integrating, or being integrated into, the
notion of DT. Among others, the notion of cloud manufacturing, for example, can be
highlighted. Being able to enhance the visualization of the actual state of the systems,
in real-time over the internet, would bring about major changes in the way the systems
are controlled in the future. Another interesting research trend is the relationship with
humans: should humans use the DT, or are they modelled inside the DT, or both? What
are the best augmented-reality visualization tools that can be connected to the DT for
providing decision support in real-time to production managers and operators? Simi-
larly, sustainable manufacturing (especially from an energy point of view) could also be
greatly influenced by the notion of DT: are we able to do real-time modelling, predic-
tion and evaluation of the energy consumption of every element of our system? These
research questions are interesting, challenging and have great potential for impact, and
should continue to stimulate DT research in the SOHOMA community.
Finally, in support of the outlined strategy, the research community should retain
a predisposition of constructive scepticism and place high value on the evaluation of
DT applications. A strong focus on validation and evaluation will serve to address the
challenge of not having any effective mechanisms for evaluating DT architectures and
implementations. It is also crucial for researchers to communicate their developments and applications to the research community at events like SOHOMA and to engage in the discussions on key issues.

4 Conclusion

The Digital Twin (DT) has emerged as a key enabler for cyber-physical systems and,
as such, Industry 4.0. Considering the recent editions of SOHOMA, DT research has
gained notable traction and resulted in an increasing number of contributions. While
several papers in these editions recognize the DT concept as a critical aspect of the Industry 4.0 and CPPS landscape, truly valuable contributions have been made by papers that developed reference architectures for the design and implementation of DTs.
Past and Future Perspectives on Digital Twin Research at SOHOMA 95

This paper provided a review of the DT related papers at SOHOMA, which focused
on the functions and roles of DTs, the presented reference architectures, and the most
prominent implementation technologies. The review is followed by a discussion of the
future trajectory of DT research in the context of SOHOMA. The discussion highlights
some aspects that may enable and challenge DT research in the future, and attempts to
outline a research strategy for the SOHOMA community.
The enabling factors for DT research are summarized as follows:

• The SOHOMA community has a strong culture of architecture development
• ARTI serves as a valuable blueprint for DT development
• The SOHOMA workshops are one of the foremost platforms for scientific discussion and collaboration on DT research
• The SOHOMA community has developed important demonstrators to support DT implementations
• As was the case for holonic systems, the strong industry connection will support the industrial adoption of DT research

The following challenges are identified:

• DT software is expected to become increasingly complex
• There exists a shortage of benchmarks and standards to support the evaluation of DT architectures and implementations
• There is a need for real-world DT applications and validations
• There are issues concerning cybersecurity in DT applications
• More research focus on human factors is required

The paper recommends that the DT research community continues with a strategy
that balances the development of generic architectures and platforms, with context-
specific applications, which can simultaneously support research collaboration and
industry impact. Several interesting research questions are identified, which are expected
to continue to stimulate DT research for future editions of the SOHOMA workshop.

References
1. Miskinis, C.: The history and creation of the digital twin concept (2019). https://www.challe
nge.org/insights/digital-twin-history/. Accessed 28 May 2020
2. Grieves, M.: Digital Twin: Manufacturing Excellence through Virtual Factory Replication.
Melbourne (2014)
3. Shafto, M., Conroy, M., Doyle, R., Glaessgen, E.: Modeling, simulation, information
technology & processing roadmap. Technology Area (2010)
4. Grieves, M., Vickers, J.: Digital twin: mitigating unpredictable, undesirable emergent behav-
ior in complex systems. In: Kahlen, F.J., Flumerfelt, S., Alves, A. (eds.) Transdisciplinary
Perspectives on Complex Systems. Springer, Cham (2017)
5. Vachalek, J., Bartalsky, L., Rovny, O., Sismisova, D., Morhac, M., Loksik, M.: The digital
twin of an industrial production line within the industry 4.0 concept. In: Proceedings of the
2017 21st International Conference on Process Control, pp. 258–262 (2017)
96 K. Kruger et al.

6. Tao, F., Cheng, J., Qi, Q., Zhang, M., Zhang, H., Sui, F.: Digital twin-driven product design,
manufacturing and service with big data. Int. J. Adv. Manuf. Technol. 94(9–12), 3563–3576
(2018)
7. Feuer, Z., Weissman, Z.: The value of the digital twin (2017). https://community.plm.automa
tion.siemens.com/t5/Digital-Transformations/The-value-of-the-digital-twin/ba-p/385812.
Accessed 6 June 2020
8. Marr, B.: What is digital twin technology – and why is it so important?
(2017). https://www.forbes.com/sites/bernardmarr/2017/03/06/what-is-digital-twin-techno
logy-and-why-is-it-so-important/#26203f1c2e2a. Accessed 6 June 2020
9. Martin, J.: The value of automation and power of the digital twin (2017). https://newsignat
ure.com/articles/value-automation-power-digital-twin/. Accessed 6 June 2020
10. Oracle: Digital twins for IoT applications: A comprehensive approach to implementing IoT
digital twins (white paper). Redwood shores (2017)
11. Van Belle, J., Philips, J., Ali, O., Saint Germain, B., Van Brussel, H., Valckenaers, P.: A service-oriented approach for holonic manufacturing control and beyond. In: Borangiu, T., Thomas, A., Trentesaux, D. (eds.) Service Orientation in Holonic and Multi-Agent Manufacturing Control. Studies in Computational Intelligence, vol. 402, pp. 1–20 (2011)
12. Cardin, O., Castagna, P.: Myopia of service oriented manufacturing systems: benefits of data centralization with a discrete-event observer. In: Borangiu, T., Thomas, A., Trentesaux, D. (eds.) Service Orientation in Holonic and Multi-Agent Manufacturing Control. Studies in Computational Intelligence, vol. 402, pp. 197–210 (2011)
13. Thomas, A., Trentesaux, D.: Are intelligent manufacturing systems sustainable? In: Service Orientation in Holonic and Multi-Agent Manufacturing and Robotics. Studies in Computational Intelligence, vol. 544, pp. 3–14. Springer (2013)
14. Dobrescu, R., Merezeanu, D.: Simulation platform for virtual manufacturing systems. In:
Borangiu, T., Trentesaux, D., Thomas, A., Leitao, P., Barata Oliviera, J. (eds.) Service Orien-
tation in Holonic and Multi-Agent Manufacturing, Proceedings of SOHOMA 2016. Studies
in Computational Intelligence, vol. 694, pp. 395–404. Springer, Cham (2017)
15. Rocha, A., Barroca, P., Dal Maso, G., Barata Oliviera, J.: Environment to simulate distributed
agent based manufacturing systems. In: Borangiu, T., Trentesaux, D., Thomas, A., Leitao,
P., Barata Oliviera, J. (eds.) Service Orientation in Holonic and Multi-Agent Manufacturing,
Proceedings of SOHOMA 2016. Studies in Computational Intelligence, vol. 694, pp. 405–416.
Springer, Cham (2017)
16. Rocha, A., Rodrigues, M., Barata Oliviera, J.: An evolvable and adaptable agent based smart
grid management – a simulation environment. In: Borangiu, T., Trentesaux, D., Thomas,
A., Leitao, P., Barata Oliviera, J. (eds.) Service Orientation in Holonic and Multi-Agent
Manufacturing, Proceedings of SOHOMA 2016. Studies in Computational Intelligence, vol.
694, pp. 417–426. Springer, Cham (2017)
17. Kruger, K., Basson, A.: Validation of a holonic controller for a modular conveyor system
using an object-oriented simulation framework. In: Borangiu, T., Trentesaux, D., Thomas,
A., Leitao, P., Barata Oliviera, J. (eds.) Service Orientation in Holonic and Multi-Agent
Manufacturing, Proceedings of SOHOMA 2016. Studies in Computational Intelligence, vol.
694, pp. 427–436. Springer, Cham (2017)
18. Zupan, H., Zerovnik, J., Herakovik, N.: Local search with discrete event simulation for the
job shop scheduling problem. In: Borangiu, T., Trentesaux, D., Thomas, A., Cardin, O. (eds.)
Service Orientation in Holonic and Multi-Agent Manufacturing, Proceedings of SOHOMA
2017. Studies in Computational Intelligence, vol. 803, pp. 371–380. Springer, Cham (2018)
19. Derigent, W., Thomas, A.: Situational awareness in product lifecycle information systems. In:
Borangiu, T., Trentesaux, D., Thomas, A., Cardin, O. (eds.) Service Orientation in Holonic
and Multi-Agent Manufacturing, Proceedings of SOHOMA 2017. Studies in Computational
Intelligence, vol. 803, pp. 127–136. Springer, Cham (2018)

20. Valckenaers, P.: ARTI reference architecture – PROSA revisited. In: Borangiu, T., Trente-
saux, D., Thomas, A., Cavalieri, S. (eds.) Service Orientation in Holonic and Multi-Agent
Manufacturing, Proceedings of SOHOMA 2018. Studies in Computational Intelligence, vol.
803, pp. 1–19. Springer, Cham (2019)
21. Valckenaers, P.: Perspective on holonic manufacturing systems: PROSA becomes ARTI.
Comput. Ind. 120, 103226 (2020)
22. Borangiu, T., Cardin, O., Babiceanu, R., Giret, A., Kruger, K., Raileanu, S., Weichhart, G.:
Scientific discussion: open reviews of “ARTI reference architecture – PROSA revisited”. In:
Borangiu, T., Trentesaux, D., Thomas, A., Cavalieri, S. (eds.) Service Orientation in Holonic
and Multi-Agent Manufacturing, Proceedings of SOHOMA 2018. Studies in Computational
Intelligence, vol. 803, pp. 20–37. Springer, Cham (2019)
23. Pipan, M., Protner, J., Herakovic, N.: Integration of distributed manufacturing nodes in smart
factory. In: Borangiu, T., Trentesaux, D., Thomas, A., Cavalieri, S. (eds.) Service Orienta-
tion in Holonic and Multi-Agent Manufacturing, Proceedings of SOHOMA 2018. Studies in
Computational Intelligence, vol. 803, pp. 424–435. Springer, Cham (2019)
24. Redelinghuys, A.J.H., Basson, A.H., Kruger, K.: A six-layer digital twin architecture for a
manufacturing cell. In: Borangiu, T., Trentesaux, D., Thomas, A., Cavalieri, S. (eds.) Service
Orientation in Holonic and Multi-Agent Manufacturing. Proceedings of SOHOMA 2018.
Studies in Computational Intelligence, vol. 803, pp. 412–423. Springer, Cham (2019)
25. Selma, C., Tamzalit, D., Mebarki, N., Cardin, O., Bruggeman, L., Theriot, D.: Industry 4.0
and service companies: the case of the French postal service. In: Borangiu, T., Trentesaux,
D., Thomas, A., Cavalieri, S. (eds.) Service Orientation in Holonic and Multi-Agent Manu-
facturing. Proceedings of SOHOMA 2018. Studies in Computational Intelligence, vol. 803,
pp. 436–447. Springer, Cham (2019)
26. Berdal, Q., Pacaux-Lemoine, M., Bonte, T., Trentesaux, D., Chauvin, C.: A benchmarking
platform for human-machine cooperation in Industry 4.0. Submitted to SOHOMA (2020)
27. Borangiu, T., Oltean, E., Raileanu, S., Anton, F., Anton, S., Iacob, I.: Embedded digital twin
for ARTI-type control of semi-continuous production processes. In: Borangiu, T., Trentesaux,
D., Leitao, P., Giret Boggino, A., Botti, V. (eds.) Service Oriented, Holonic and Multi-agent
Manufacturing Systems for Industry of the Future. SOHOMA 2019. Studies in Computational
Intelligence, vol. 853, pp. 113–133. Springer, Cham (2020)
28. Raileanu, S., Borangiu, T., Ivanescu, N., Morariu, O., Anton, F.: Integrating the digital twin
of a shop floor conveyor in the manufacturing control system. In: Borangiu, T., Trentesaux,
D., Leitao, P., Giret Boggino, A., Botti, V. (eds.) Service Oriented, Holonic and Multi-agent
Manufacturing Systems for Industry of the Future. SOHOMA 2019. Studies in Computational
Intelligence, vol. 853, pp. 134–145. Springer, Cham (2020)
29. Lu, Q., Xie, X., Heaton, J., Parlikad, A., Schooling, J.: From BIM towards digital twin:
strategy and future development for smart asset management. In: Borangiu, T., Trentesaux,
D., Leitao, P., Giret Boggino, A., Botti, V. (eds.) Service Oriented, Holonic and Multi-agent
Manufacturing Systems for Industry of the Future. SOHOMA 2019. Studies in Computational
Intelligence, vol. 853, pp. 392–404. Springer, Cham (2020)
30. Redelinghuys, A., Kruger, K., Basson, A.: A six-layer architecture for digital twins with
aggregation. In: Borangiu, T., Trentesaux, D., Leitao, P., Giret Boggino, A., Botti, V. (eds.)
Service Oriented, Holonic and Multi-agent Manufacturing Systems for Industry of the Future.
SOHOMA 2019. Studies in Computational Intelligence, vol. 853, pp. 171–182, Springer,
Cham (2020)
31. Taylor, N., Human, C., Kruger, K., Bekker, A., Basson, A.: Comparison of digital twin devel-
opment in manufacturing and maritime domains. In: Borangiu, T., Trentesaux, D., Leitao, P.,
Giret Boggino, A., Botti, V. (eds.) Service Oriented, Holonic and Multi-agent Manufacturing
Systems for Industry of the Future. SOHOMA 2019. Studies in Computational Intelligence,
vol. 853. Springer, Cham (2020)

32. Cardin, O., Castagna, P., Couedel, D., Plot, C., Launay, J., Allanic, N., Madec, Y., Jegouzo, S.:
Energy aware resource in digital twin: the case of injection moulding machines. In: Borangiu,
T., Trentesaux, D., Leitao, P., Giret Boggino, A., Botti, V. (eds.) Service Oriented, Holonic
and Multi-agent Manufacturing Systems for Industry of the Future. SOHOMA 2019. Studies
in Computational Intelligence, vol. 853, pp. 183–194. Springer, Cham (2020)
33. Andre, P., Azzi, F., Cardin, O.: Heterogenous communication middleware for digital twin
based cyber manufacturing systems. In: Borangiu, T., Trentesaux, D., Leitao, P., Giret Bog-
gino, A., Botti, V. (eds.) Service Oriented, Holonic and Multi-agent Manufacturing Systems
for Industry of the Future. SOHOMA 2019. Studies in Computational Intelligence, vol. 853,
pp. 146–157. Springer, Cham (2020)
34. Kritzinger, W., Karner, M., Traar, G., Henjes, J., Sihn, W.: Digital twin in manufacturing:
a categorical literature review and classification. IFAC-PapersOnLine. 51(11), 1016–1022
(2018)
35. Taylor, C.: Cybersecurity is the biggest threat to the world economy over the next decade,
CEOs say (2019). https://www.cnbc.com/2019/07/09/cybersecurity-biggest-threat-to-world-
economy-ceos-say.html. Accessed 03 June 2020
36. Bocetta, S.: 10 Most Urgent Cybersecurity Issues in 2019 (2019). https://www.csoonline.com/
article/3501897/10-most-urgent-cybersecurity-issues-in-2019.html. Accessed 28 May 2020
37. Redelinghuys, A.J.H., Basson, A.H., Kruger, K.: Cybersecurity considerations for indus-
trie 4.0. In: D. Dimitrov, D. Hagedorn-Hansen, K. von Leipzig (eds.) International Con-
ference on Competitive Manufacturing (COMA 19). Knowledge Valorisation in the Age of
Digitalization. Stellenbosch, pp. 266–271 (2019)
38. Dignan, L.: GE aims to replicate Digital Twin success with security-focused Digital
Ghost (2017). https://www.zdnet.com/article/ge-aims-to-replicate-digital-twin-success-with-
security-focused-digital-ghost/. Accessed 23 May 2018
Decision Support Based on Digital Twin
Simulation: A Case Study

Flávia Pires1(B), Matheus Souza1, Bilal Ahmad2, and Paulo Leitão1

1 Research Centre in Digitalization and Intelligent Robotics (CeDRI), Instituto Politécnico de Bragança, Campus de Santa Apolónia, 5300-253 Bragança, Portugal
{fpires,pleitao}@ipb.pt, matheussouza.ipb@gmail.com
2 University of Warwick, Warwick Manufacturing Group (WMG), Coventry, UK
B.Ahmad@warwick.ac.uk

Abstract. The significance of Digital Twins is considered vital in the reshaping of the manufacturing field with the emergence of the fourth
industrial revolution. The potential of applying the Digital Twin tech-
nology is being studied extensively as a key enabler of engineering cyber-
physical systems. However, it is still in its infancy, and only a few sci-
entific papers are describing its applicability in case studies, prototypes
or industrial systems. Bearing this in mind, this paper presents a com-
prehensive overview of Digital Twins in the manufacturing domain and
defines a conceptual architecture that considers simulation capabilities
to support the optimisation of production processes. The designed app-
roach is applied to a proof of concept case study that considers a flexible
production cell and uses the simulation of the system to dynamically sup-
port decision making to optimise the production processes when changes
occur in the real production system.

1 Introduction
Industry 4.0 is changing the manufacturing industry landscape, considering the
digitisation and the value of data as its foundations. Most of the companies that
consider adopting the Industry 4.0 paradigm have to bear in mind the applica-
tion of, amongst others, Cyber-Physical Systems (CPS), Artificial Intelligence
(AI) and Internet of Things (IoT) [5]. In the manufacturing environment, the
implementation of CPS comprises the digitisation of systems, merging the real
and virtual worlds. This characteristic has provided the opportunity to the Dig-
ital Twin to emerge as one of the key enabling technologies.
The concept of the Digital Twin was proposed by M. Grieves in 2002, defining features such as the real space, the virtual space and the connection between them [7]. With the 4th industrial revolution, the rapid evolution of certain technologies, e.g., IoT, simulation, Big Data and Machine Learning, has boosted this approach, making its application in the manufacturing domain a reality [1,2].
The scientific and industrial world have been directing their attention towards
the Digital Twin technology. According to [10], the interest and research about
c The Author(s), under exclusive license to Springer Nature Switzerland AG 2021
T. Borangiu et al. (Eds.): SOHOMA 2020, SCI 952, pp. 99–110, 2021.
https://doi.org/10.1007/978-3-030-69373-2_6

Digital Twin technology have grown not only in the academic field but also among industry practitioners. In 2017, a study of the Digital Twin market projected that it would reach $15.66 billion by 2023 [15]. A newer study, conducted in 2019, projected that the market would reach $35.8 billion by 2025 [16].
Although there has been a growing interest of the scientific community in the
Digital Twin, there is still a lack of applications that include the decision support
functionality [10,12], mainly using simulation and what-if engines. Bearing this
in mind, the main goal of this paper is to reduce the gap that exists in the cur-
rent research literature related to Digital Twin applications in the manufacturing
domain, including decision support capabilities. The main scientific contribution
of this paper is the development of a conceptual Digital Twin architecture that
considers simulation capabilities to support decision-making and its application
in a case study for a proof of concept Digital Twin providing decision support
in the manufacturing domain. The presented case study is a flexible production
cell with monitoring and decision support obtained the Digital Twin based sim-
ulation. The experimental results allowed to verify the applicability of using the
Digital Twin to support the production managers in decision-making when a
change in conditions occurs.
The rest of the paper is organised as follows. Section 2 presents the Dig-
ital Twin concept in the manufacturing sector, and Sect. 3 reviews the deci-
sion support approaches based on Digital Twin concept and introduces the pro-
posed system architecture. Section 4 describes the implementation of the Digital
Twin simulation architecture to the case study and analyses the achieved results.
Finally, Sect. 5 rounds up with the conclusions and points out some future work.

2 Digital Twin in the Manufacturing Domain

The manufacturing domain has evolved since the 1st industrial revolution, with
the invention of the steam engine as a new source of energy. Today, the world
finds itself in the fourth industrial revolution [3,4].

2.1 Digital Twin: The Concept Evolution

The German government launched the Industry 4.0 initiative to drive the digital
revolution in the manufacturing industry [5]. According to [5], the manufacturing
environment compliant with the Industry 4.0 principles comprises the implemen-
tation of CPS, requiring the digitisation of systems and the convergence between
the real and digital worlds. Bearing this in mind, the digitisation of the manu-
facturing environment has been the main focus of both academia and industry
in the last few years. In this context, the Digital Twin concept has emerged
and received attention in the scientific community as a promising new field of
investigation for the digitisation of the manufacturing environment [6].
Grieves proposed the foundations of the Digital Twin technology in 2002.
At the time, the concept, called “Mirrored Spaces Models”, comprised features such as the real space, the virtual space and their connections allowing the flow of data [7]. In 2011, the concept was adopted by the US National Aeronautics and
Space Administration (NASA), entering the field of aeronautics to determine the health of aircraft and predict their structural life [8].
several sectors, like manufacturing. One of the first authors to bring the concept
of Digital Twin to the manufacturing sector was [9], who defined the Digital
Twin as a “the coupled model of the real machine that operates in the cloud
platform and simulates the health condition with an integrated knowledge from
both data-driven analytical algorithms as well as other available physical knowl-
edge”. The concept is increasing and “has evolved into a broader concept that
refers to a virtual representation of manufacturing elements such as personnel,
products, assets and process definitions, a living model that continuously updates
and changes as the physical counterpart changes to represent status, working
conditions, product geometries and resource states in a synchronous manner”
[10]. Another recent definition was provided by [6] that defines the Digital Twin
as “a method or tool for modelling and simulating a physical entity’s status
and behaviour”, that can “realise the interconnection and intelligent operation
between the physical manufacturing space and virtual space”.
The growing interest in Digital Twin technology is illustrated in Fig. 1 that
presents the evolution in time of the number of scientific papers related to the
Digital Twin retrieved from the Scopus database using the search query TITLE-
ABS-KEY (“digital twin” AND “manufacturing”). This analysis shows that the number of scientific publications on the use of Digital Twin in manufacturing has been growing exponentially since 2016. This reflects a growing interest from the scientific community and the consequent production of knowledge about Digital Twin technology in the manufacturing sector.

Fig. 1. Evolution of the number of scientific publications in the Scopus database related
to Digital Twin (Query TITLE-ABS-KEY (“digital twin” AND “manufacturing”)) over
the years.
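Counts like those in Fig. 1 can also be gathered programmatically. The following sketch builds per-year request URLs for Elsevier's Scopus Search API; the endpoint, the `apiKey` parameter and the `PUBYEAR` field are assumptions about the public API rather than details from this paper, and a valid subscriber key would be needed to actually issue the requests:

```python
from urllib.parse import urlencode

SCOPUS_SEARCH = "https://api.elsevier.com/content/search/scopus"

def count_query_url(year: int, api_key: str) -> str:
    """Build a request URL whose response's total-results field would give the
    number of papers matching the survey query for one publication year."""
    query = f'TITLE-ABS-KEY("digital twin" AND "manufacturing") AND PUBYEAR = {year}'
    params = {"query": query, "count": 0, "apiKey": api_key}  # count=0: totals only
    return f"{SCOPUS_SEARCH}?{urlencode(params)}"

# one URL per year of the plotted range (hypothetical key)
urls = {year: count_query_url(year, "YOUR_API_KEY") for year in range(2016, 2021)}
```

Issuing each request and reading the reported total would reproduce the yearly counts behind the figure.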

2.2 Challenges of Digital Twin

Despite the rapidly growing scientific interest in the Digital Twin technology in
the manufacturing domain, there are several challenges to be addressed.
According to [6], Digital Twin research in manufacturing tackles two main challenges, namely: 1) the lack of a standard framework for the physical and virtual worlds to enable real-time interaction between them, and 2) the lack of unification in the development of models across various lifecycle phases and domains within the manufacturing environment (e.g., a product model for data transmission/sharing).
On the other hand, the study conducted by [10] identified seven key research issues in this field, namely: the existence of a pattern architecture for a Digital Twin, the required communication latency between the physical system and its Digital Twin, the data collection mechanisms, the existing standards for Digital Twins, the decision-support functionality of the Digital Twin, the versioning and management of Digital Twin models and, finally, the human role in Digital Twin applications for the manufacturing domain.
The authors of [11] concluded that research on applying the Digital Twin in the manufacturing area is still in its infancy, and that there is a lack of publications that address end-to-end implementation and integration of Digital Twins in the industrial domain. The existing literature considers smaller parts and fewer aspects of the Digital Twin (e.g., virtual modelling or monitoring) and uses ad hoc integration methods to connect the digital and physical spaces.

3 Decision Support Based on the Digital Twin

The Digital Twin is gaining significant attention in the scientific and industrial
community for its versatile embedded functionalities and benefits in the manufacturing sector. A particular aspect is that the Digital Twin can enhance a manufacturing system's ability to use simulation for decision support, applying what-if analysis and optimisation techniques in the virtual space.

3.1 Literature Review

The simulation paradigm has evolved throughout the years. Initially, around 1960, simulation was mostly used for individual applications on particular topics, e.g. mechanics. In 1985, simulation started to be used as a standard tool to provide answers to specific problems in specific engineering design domains (e.g. fluid dynamics). Around 2000, system-level simulation was developed, which allowed for a systematic multi-level and multi-disciplinary approach. Over the last decade, simulation models have been considered for use beyond the design phase, i.e. connected to physical assets to enable dynamic optimisation of systems and to help in providing decision support [2], giving birth to the concept of Digital Twins (see Fig. 2).

Fig. 2. Evolution of the simulation capabilities (adapted from [2]).

The use of simulation-based Digital Twin for providing decision support is becoming an important area of research. As previously stated, the literature
review performed by [10] concluded that most of the reported work about Dig-
ital Twin is conceptual and the developed applications are mainly focusing on
monitoring and prediction functions. Although most of the applications can be
seen as decision-aiding systems, very few of them have included the direct and/or
autonomous feedback control system (i.e. Digital Twin control over the physical
object). The authors of [13] propose a decision support framework, based on the
Digital Twin and using a simulation model, to be applied for the order manage-
ment process in manufacturing systems. The proposed decision-making process
is supported by the collection of data from the physical elements connected to the
automatic model generator, which automatically generates a simulation model.
In [14], the authors propose a methodology for implementing a Digital Twin
for decision support in designing automated flow-shop manufacturing systems
(AFMS). The proposed model suggests the use of a hybrid approach, including
discrete event models together with system dynamics models, to evaluate the
decisions over the AFMS design. By applying the Digital Twin model in a sheet
material processing enterprise case study, it was possible to design a new AFMS
solution that enabled the reduction of motion waste and decreasing of the unit
cost. In [17], the authors make an exploratory study of the benefits that come from using the Digital Twin for decision support in asset life-cycle management. The study reviews the literature of the area and includes two case studies with some details about the decision support provided.
Bearing this in mind, few scientific publications are addressing the application
of the Digital Twin concept with decision support functionality. This shows the
current need for researching the applicability of decision-making systems and
simulation capabilities in the manufacturing area.

3.2 Digital Twin Simulation Architecture


Having this in mind, this paper proposes a general architecture for the Digi-
tal Twin decision-support based on simulation capabilities, illustrated in Fig. 3.
This architecture is constrained by the following requirements: definition of the
physical entity to virtualise (e.g., product, asset, process or factory), modelling the physical entity with a simulation model (e.g., DES model), establishing the
connectivity between the physical and virtual through the use of standard indus-
trial network protocols, realizing real-time monitoring of the collected data, using
simulation to perform optimisation of the physical entity, and offering decision
support to the human operator based on the real-time data and the performed
simulations.

Fig. 3. General architecture for the decision support based on Digital Twin simulation.

The proposed Digital Twin architecture consists of five main modules:


1. Connectivity Module: allows the communication between the physical and
virtual world through the use of industrial network protocols (e.g., OPC-UA
and Modbus TCP/IP), supporting the collection of information/data from
shop-floor devices (e.g., robots, IoT devices, PLCs and sensors). The collected
data is transformed into contextual and readable formats. This module also
allows sending commands and deploying new operating configurations, after
validation by the user.
2. Data Storage Module: designed to store the real-time and historical data
from the shop-floor machines/devices, as well as the simulation knowledge
created during the execution of various simulation scenarios by the Simulation
module. The data stored in this module can be accessed by the other modules
using standard interfaces.
3. Visualisation and Monitoring Module: responsible for monitoring and
visualising the real-time status of the production system, as well as the histor-
ical data and future trends based on the results from the simulation scenarios.
This module provides monitoring functions by performing, in the background, data analysis of the retrieved data (e.g., real-time data, historical data and
simulation data) from the Data Storage module and displaying the warnings
related to the detection of performance degradation and condition change
(including machine learning techniques and control rules).
4. Simulation Module: comprehends two stages, namely the building of the
virtual system model and the performance of discrete event simulation (DES)
following the requirements of what-if analysis. The DES optimiser will per-
form different simulation scenarios allowing the system to find the optimal
result for the physical system in the proposed conditions and requirements.
5. Human Interface Module: In this module, the human operator, based on the knowledge and information presented by the Visualisation and Monitoring module, can request new simulation scenarios from the Simulation module. The real-time data can be used as a trigger for the
human operator to request the simulation of the virtual model or even to feed
the simulated model. If the operator verifies that the results are optimising
the system according to the imposed requirements, they can be applied as new
parameters to the shop-floor devices. The Connectivity module will transform
this new information into readable formats for the shop-floor devices.
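As an illustration only (none of these class or method names come from the paper), the division of responsibilities between the modules can be sketched as plain Python interfaces; the point is the data flow — Connectivity feeds Data Storage, which Monitoring reads and Simulation enriches:

```python
from dataclasses import dataclass, field
from typing import Protocol

@dataclass
class Sample:
    """One contextualised reading collected from a shop-floor device."""
    device: str      # e.g. "punching_station_1" (illustrative name)
    variable: str    # e.g. "cycle_time"
    value: float
    timestamp: float

class Connectivity(Protocol):
    """Bridge to the physical world, e.g. over OPC-UA or Modbus TCP/IP."""
    def read(self) -> list: ...
    def write(self, device: str, parameters: dict) -> None: ...

@dataclass
class DataStorage:
    """Holds real-time/historical samples and results of past simulation runs."""
    history: list = field(default_factory=list)
    simulation_results: list = field(default_factory=list)

    def store(self, samples: list) -> None:
        self.history.extend(samples)

def monitoring_step(conn: Connectivity, storage: DataStorage) -> list:
    """Visualisation and Monitoring: pull fresh data, persist it, and hand it
    back for display; warning rules would run on the returned samples."""
    samples = conn.read()
    storage.store(samples)
    return samples
```

A Simulation module would read `storage.history` to parameterise its DES model and append each what-if result to `storage.simulation_results`; the Human Interface module would then push an approved configuration back through `Connectivity.write`.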

The proposed Digital Twin architecture aims to overcome some of the iden-
tified gaps in the literature and addresses some key issues, for example, the
inclusion of the human operator in the Digital Twin applications, the applica-
tion of a feedback control option based on the decision support provided by the
Digital Twin, and the combination of the Digital Twin with the decision support functionality. This leads to better decision-making, since it includes the ability to test the real system through what-if scenarios, verifying their impact and identifying the most profitable operational strategy to follow.
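The what-if loop reduces to evaluating each candidate operating configuration on the virtual model and keeping the most profitable one. A deliberately simplified sketch follows — the scenarios, dispatch rules and profit figures are all invented for illustration, and in practice `evaluate` would be a full simulation run:

```python
def evaluate(scenario: dict) -> float:
    """Stand-in for a simulation run: hypothetical profit per hour of one
    operating configuration (numbers are purely illustrative)."""
    profit_by_rule = {"fifo": 120.0, "shortest_queue": 135.0, "priority": 128.0}
    holding_cost = 2.0 * scenario["buffer_size"]
    return profit_by_rule[scenario["dispatch_rule"]] - holding_cost

def best_scenario(scenarios: list) -> dict:
    """What-if analysis: evaluate every scenario, return the most profitable."""
    return max(scenarios, key=evaluate)

candidates = [
    {"dispatch_rule": "fifo", "buffer_size": 2},
    {"dispatch_rule": "shortest_queue", "buffer_size": 4},
    {"dispatch_rule": "priority", "buffer_size": 1},
]
chosen = best_scenario(candidates)  # parameters the operator would push back
```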

4 Flexible Production Cell Case Study

This section presents the implementation and results of the proposed architecture
into a case study related to a flexible production cell.

4.1 Description of the Case Study

The case study considered in this work comprises a Fischertechnik flexible
production cell producing different parts, as illustrated in Fig. 4. This
figure also shows the virtual system model developed using the FlexSim software.
106 F. Pires et al.

Fig. 4. Case study flexible production cell (physical system and virtual model).

The flexible production cell consists of five assembly stations: two punching
stations (1–2), two indexing stations (3–4) and one pneumatic processing centre
(5). All of the stations have their own conveyor belts, a set of light sensors
and RFID (Radio-Frequency IDentification) readers. The stations are controlled
by a programmable logic controller (PLC), in this case a Schneider Modicon
M340. Parts are moved between the stations according to their process plans
by an IRB 1400 ABB robot. Additionally, parts are fed to the stations through
an input conveyor and leave the system through an output conveyor.
For this case study, the process plan for a typical part includes the following
steps: the robot picks a piece from the input conveyor and feeds it to the punching
station; after the punching operation is concluded, the robot transfers the part
to one of the indexing stations; and finally, after the conclusion of the indexing
operation, the robot transfers the part to the output conveyor.
The developed Digital Twin for this production cell can monitor the perfor-
mance of the physical system in real-time. When a condition change is detected,
the Digital Twin performs a simulation of different scenarios for the virtual
system model aiming to define the best strategy that improves the system per-
formance.
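The monitor-then-simulate cycle described above can be sketched as follows. This is an illustrative model only: the KPI names, the 5% tolerance and the `request_simulation` hook are assumptions, not details of the implemented system.

```python
# Hypothetical sketch of the Digital Twin monitoring loop: real-time KPIs
# are compared against their nominal values, and a sufficiently large
# deviation is treated as a condition change that triggers what-if simulation.

def detect_condition_change(kpis, nominal, tolerance=0.05):
    """Return the KPIs that deviate from nominal by more than `tolerance`."""
    changed = {}
    for name, value in kpis.items():
        ref = nominal.get(name)
        if ref and abs(value - ref) / ref > tolerance:
            changed[name] = value
    return changed

def monitoring_step(kpis, nominal, request_simulation):
    """One monitoring cycle: raise a warning and ask for what-if scenarios."""
    changed = detect_condition_change(kpis, nominal)
    if changed:
        request_simulation(changed)  # runs in the background (see Sect. 4.3)
    return changed

# Example: demand rises from 523 to 580 parts per shift (Sect. 4.3).
warnings = monitoring_step({"demand_per_shift": 580},
                           {"demand_per_shift": 523},
                           request_simulation=lambda changed: None)
print(warnings)
```

A deviation of 580/523 is roughly 11%, so it exceeds the assumed tolerance and is flagged as a condition change.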

4.2 Implementation of the Digital Twin

The implementation of the Digital Twin for the flexible production cell uses the
architecture previously defined. Figure 5 represents the technological implemen-
tation for the case study.
As shown in Fig. 5, this technological architecture is divided into two
domains: the physical and the virtual one. The physical system and the opera-
tor are the sources of information for the Digital Twin, and the virtual model
and the visualisation and monitoring dashboard are the main components in the
virtual domain. Physical-virtual connectivity is achieved through the Modbus
TCP/IP industrial communication protocol, which allows collecting data from
the PLC used to control the production cell workstations. The data was col-
lected through the use of the KEPServer software, which supports several types
of communication protocols, such as Modbus TCP/IP, MQTT (Message Queu-
ing Telemetry Transport) protocol and OPC DA (OPC Data Access). In this
work, the communication with the DES model was performed using the OPC
DA, and the MQTT protocol realised the communication with the developed
visualisation and monitoring dashboard.

Decision Support Based on Digital Twin Simulation: A Case Study 107

Fig. 5. Technological architecture for the case study flexible production cell.
The DES model representing the digital copy of the production cell was
developed using the FlexSim simulation software (see the right side of Fig. 4).
This virtual model is fed with the real-time data collected from the physical
system through Modbus and OPC DA, and can be simulated according to different
scenarios devised by the user.
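On the Modbus side, raw holding registers typically have to be decoded into engineering values before they can feed the virtual model. The sketch below illustrates this with an assumed register layout (two big-endian 16-bit registers per IEEE-754 float); the actual register mapping of the cell's PLC is not described here and the variable names are hypothetical.

```python
import struct

# Assumed layout: each process value occupies two consecutive 16-bit Modbus
# holding registers, forming one big-endian IEEE-754 32-bit float.
def registers_to_float(hi, lo):
    """Combine two 16-bit Modbus registers into one 32-bit float."""
    return struct.unpack(">f", struct.pack(">HH", hi, lo))[0]

def decode_block(registers, names):
    """Map a block of raw registers onto named process values."""
    return {name: registers_to_float(registers[2 * i], registers[2 * i + 1])
            for i, name in enumerate(names)}

# Example block, as it might be returned by a Modbus TCP client library.
raw = [0x42B4, 0x0000,   # encodes 90.0
       0x4248, 0x0000]   # encodes 50.0
print(decode_block(raw, ["conveyor_speed", "cycle_time"]))
```

The decoded dictionary can then be pushed into the DES model or published to the dashboard.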
The dashboard for monitoring and visualisation was developed using Node-RED,
which allows the operator to visualise the actual operating parameters of the
physical system and to receive warnings on performance degradation or condition
changes, as well as the simulation results. The user can also configure
different scenarios to be simulated by the DES model, e.g., by modifying the
availability of machines, processing times, the production line configuration
and the production demand.

4.3 Experimental Results

The flexible production cell was tested in a configuration that contains an input
conveyor, a punching station, an indexing station, an output conveyor and the
robot, having a maximum capacity of 523 parts per shift. In this situation, the
resource utilisation of the punching station, the indexing station and the robot
are 56.4%, 56.3% and 87.2% respectively.
During the production system operation, the Digital Twin is collecting the
real-time data that is displayed on the visualisation and monitoring dashboard.
The monitoring mechanisms are running in parallel aiming to detect abnormal-
ities, condition changes or performance degradation. To simulate a production
demand change scenario, the system is fed with a new demand of 580 parts
per shift, which generates a production demand change warning on the dashboard.
Since this demand cannot be reached with the current production configuration,
the production manager must decide how to increase the production capacity
efficiently to meet the new demand.
Using the implemented Digital Twin, and particularly the available simula-
tion capabilities, the production manager can simulate different scenarios involv-
ing distinct configurations and variations, and then analyse the results from each
simulated scenario and decide the best action plan to meet the increase in the
production demand. Note that this what-if simulation is performed in the back-
ground, i.e. not impacting the current operation of the production system.
In this case, the strategic manager considers four different alternatives to
solve the problem, corresponding to the following scenarios:
• Scenario 1: addition of one punching station to the current configuration.
• Scenario 2: addition of one punching station and one indexing station to the
current configuration.
• Scenario 3: addition of one indexing station to the current configuration.
• Scenario 4: increase the speed of the robot, maintaining the current config-
uration.
The results for the simulation of these four scenarios are listed in Table 1,
assessing different key performance indicators (KPIs), e.g., throughput per shift,
throughput per hour, mean resource utilisation and profit margin.

Table 1. Achieved results for the four simulated testing scenarios.

             Throughput  Throughput  Mean resource utilisation (%)   Profit
             per shift   per hour    Punching   Indexing   Robot     (euro)
  Current       523        65.4        56.4       56.3      87.2     2571.2
  Scenario 1    523        65.4        78.1       56.3      87.3     2555.2
  Scenario 2    598        74.8        58.3       58.2     100.0     2910.0
  Scenario 3    523        65.4        56.4       28.2      87.2     2551.2
  Scenario 4    612        76.5        61.7       61.6      76.6     3016.0

Table 1 also includes the expected profit for each scenario, calculated in a
simplified manner using Eq. (1). The calculation is based on revenues (the
number of parts produced per hour multiplied by the part value and the
production time) and expenses (the sum, over all machines, of the cost per
hour of machine i multiplied by the production time).

    Profit = N_parts × PartValue × ProdTime − Σ_{i=1}^{n} C_i × ProdTime    (1)
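Eq. (1) translates directly into code. The part value, production time and machine hourly costs below are placeholders, since the actual cost figures used for Table 1 are not reported here.

```python
def profit(n_parts, part_value, prod_time, machine_costs):
    """Profit per Eq. (1): revenue (parts/hour x part value x production
    time) minus the hourly machine costs summed over the same time."""
    revenue = n_parts * part_value * prod_time
    expenses = sum(cost * prod_time for cost in machine_costs)
    return revenue - expenses

# Placeholder figures: an 8 h shift, 1 euro per part and three machines
# (punching, indexing, robot) with assumed hourly costs.
print(profit(n_parts=65.4, part_value=1.0, prod_time=8,
             machine_costs=[10.0, 10.0, 15.0]))
```

With real cost data, evaluating this function for each simulated scenario reproduces the profit column of Table 1.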

The achieved results show that, of the four simulated scenarios, only
Scenarios 2 and 4 can attain the desired production demand. In Scenario 2,
the production capacity is increased but not doubled, since the robot
manipulator becomes the bottleneck (utilisation of 100%). In Scenario 4, the
robot's capability to perform more operations per time unit leads to an
increase in throughput. On the other hand, in Scenario 1, although a punching
station was added, the single indexing station becomes a bottleneck, keeping
the production capacity equal to that of the current configuration. The same
happens in Scenario 3, where the existing punching station becomes the
bottleneck.
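The bottleneck reasoning can be made concrete with a small capacity calculation: the throughput of a serial line is bounded by its slowest stage, and the stage closest to full utilisation constrains any further gain. The station capacities below are illustrative, not measured values from the cell.

```python
def line_throughput(capacities):
    """Throughput of a serial line is limited by its slowest stage."""
    return min(capacities.values())

def bottleneck(capacities):
    """The stage with the smallest capacity constrains the whole line."""
    return min(capacities, key=capacities.get)

# Illustrative capacities (parts per shift): doubling punching capacity
# alone (as in Scenario 1) does not help while another stage still limits.
caps = {"punching": 930, "indexing": 930, "robot": 600}
print(line_throughput(caps), bottleneck(caps))
```

This explains why adding a station upstream or downstream of the true bottleneck leaves the overall throughput unchanged.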
Having two scenarios that meet the initial requirements, the production
manager needs to decide which alternative is better. For this purpose, the
profit parameter can be analysed to make the decision. In this case,
Scenario 4 fulfils the requirements and presents the highest profit, since
there is no need to add new stations to the current configuration. Based on
these results, the manager can make a justified and applicable decision about
the most profitable configuration to face the increase in production demand.

5 Conclusions and Future Work

The emergence of Digital Twin technology in the manufacturing domain has
attracted the attention of the scientific community, yet research in this
field is still in its infancy. The existing scientific articles are
predominantly theoretical and conceptual, lacking practical demonstrations
of the technology's application.
This paper provides a conceptual Digital Twin architecture for enabling decision
support based on simulation capabilities and illustrates its applicability in a
production cell case study as a proof of concept.
The development of the architecture for the proposed case study used various
technologies, namely the Modbus, MQTT and OPC DA protocols to implement the
connectivity module, Node-RED to implement the visualisation and monitoring
module, and the FlexSim software tool for the simulation module. With this
Digital Twin in place, the user can monitor the physical system in real time,
as well as simulate different scenario configurations aiming to optimise the
production processes.
As future work, the case study will be further developed by integrating more
workstations, smart AGVs and robot manipulators, and also by integrating the
control functionality. The described Digital Twin architecture will also be
further developed by considering the possibility of introducing human operator
trust into the Digital Twin decision-making cycle.

Acknowledgements. This work has been supported by FCT – Fundação para a
Ciência e Tecnologia within the Project Scope UIDB/05757/2020. The author Flávia
Pires thanks the Fundação para a Ciência e Tecnologia (FCT), Portugal for the Ph.D.
Grant SFRH/BD/143243/2019.

Digital Twin Data Pipeline Using MQTT
in SLADTA

C. Human, A. H. Basson (B), and K. Kruger

Department of Mechanical and Mechatronic Engineering, Stellenbosch University,
Stellenbosch 7600, South Africa
{humanc,ahb,kkruger}@sun.ac.za

Abstract. Digital twins, defined here as virtual representations of real-world
entities that facilitate their integration with digital systems, have become a
popular concept in Industry 4.0. The Six-Layer Architecture for Digital Twins
with Aggregation (SLADTA) is a framework for digital twins suitable for complex
systems with a large network of devices. An important part of this architecture
is the use of an asynchronous, secure and vendor-neutral communication protocol
suitable for such a network. MQTT is evaluated here as a communication protocol
for a digital twin data pipeline based on SLADTA. The evaluation includes a case
study of a simulated heliostat field, using MQTT through Google Cloud Platform's
(GCP) IoT Core broker and the accompanying Cloud Pub/Sub service. For
comparison, the case study also uses MQTT through the Eclipse Mosquitto broker.
The case study demonstrates the potential of MQTT in SLADTA for the digital
twins of complex systems.

Keywords: Digital Twin · MQTT · SLADTA

1 Introduction
The fourth industrial revolution (also known as Industry 4.0) has sparked increased
interest in the concept of a digital twin. Various definitions for digital twins have been
proposed, but in this paper a digital twin is taken to be a virtual representation of a real-
world entity (the physical twin) to facilitate integration with digital systems [1]. Digital
twins facilitate this integration through models in a virtual environment that are updated
by sensor outputs so that the models reflect the state of the physical twin. According
to some authors, a digital twin should also be able to influence the physical twin and,
therefore, bi-directional communication is required [2]. Digital twins can satisfy various
needs, such as [3–6]:

• Simulation and analysis based on real-time and historical data, to accurately predict
system behaviour.
• Centralized and integrated data models that contain all relevant data to assist in
decision making.
• Improved insight into system dependencies for enhanced future designs.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021
T. Borangiu et al. (Eds.): SOHOMA 2020, SCI 952, pp. 111–122, 2021.
https://doi.org/10.1007/978-3-030-69373-2_7
112 C. Human et al.

• Real-time remote monitoring of complex systems.
• Improved failure diagnostics and fault detection for system maintenance purposes.

Various frameworks for digital twins have been proposed, such as the 5C Architecture
[7], the C2PS Architecture [8], the Digital Twin Shop Floor architecture [6] and the
Six Layer Architecture for Digital Twins with Aggregation (SLADTA) [9]. SLADTA,
which is outlined in Sect. 2, was chosen for the data pipeline described in this paper
due to its general applicability, clear functional separation, vendor-neutral approach and
provision for scaling and granularity through aggregation. These qualities make it an
attractive architecture for complex systems.
Exchanging data and information between a physical twin and its corresponding
digital twin, as well as between digital twins, requires a communication platform such
as Message Queuing Telemetry Transport (MQTT), the Advanced Message Queuing
Protocol (AMQP), the Constrained Application Protocol (CoAP), the Hypertext
Transfer Protocol (HTTP) or the Open Platform Communications Unified
Architecture (OPC UA). The choice depends on the application: some systems
require highly configurable and reliable communication, while others
prioritize speed or operate under resource constraints [10].
CoAP is a lightweight protocol, designed for machine-to-machine communication
and supports request/response communication, as well as resource/observe communi-
cation (similar to publish/subscribe). CoAP uses User Datagram Protocol (UDP), but
also has functionality to improve reliability. AMQP is also a lightweight machine-to-
machine communication protocol that supports request/response and publish/subscribe
messaging. AMQP has a wide variety of features and was designed for reliability, secu-
rity, provisioning (additional services) and interoperability. HTTP is predominantly a
web messaging protocol that supports request/response Representational State Transfer
(RESTful) Web communication [10].
MQTT, which was chosen for the data pipeline in this paper and is further discussed
in Sect. 3, is a publish/subscribe messaging protocol that is suited to large networks
of small devices, particularly if the devices have very limited resources or if network
bandwidth is small. MQTT, compared to AMQP and HTTP, requires less bandwidth,
has a lower latency and is generally more reliable. The drawback of these advantages is
that MQTT has less built-in security, less provisioning (fewer additional services) and
is not as standardized as the other protocols. Compared to CoAP, MQTT has higher
latencies, requires more bandwidth, and uses more device resources. CoAP, which uses
UDP, is more efficient than MQTT which uses TCP/IP. Despite this, MQTT is often
preferred because it is much more reliable and only slightly less efficient, which makes
MQTT more popular than CoAP. Due to its reliability and efficiency, MQTT is an
attractive protocol to use for large networks (which aligns with SLADTA’s target) of
simple devices. AMQP and HTTP offer functionality that is often not required by small
devices and therefore do not justify their larger message overhead and message size [10].
The objective of this paper is to evaluate MQTT as a communication protocol for
use in a digital twin data pipeline, where the digital twin is based on SLADTA. MQTT
is evaluated for communicating information to the cloud, as well as communicating
information between digital twins. The findings about MQTT are expected to extend to
other digital twin architectures too.
Digital Twin Data Pipeline Using MQTT in SLADTA 113

The remainder of this paper is structured as follows: Sects. 2 and 3 outline
SLADTA and MQTT, respectively. Section 4 describes how a data pipeline can be
set up using MQTT and SLADTA, and Sect. 5 illustrates how the pipeline was
implemented in a case study. Finally, Sect. 6 provides a conclusion.

2 The Six Layer Architecture for Digital Twins with Aggregation

The Six Layer Architecture for Digital Twins (SLADT) is a reference architecture for
digital twin development and has been applied to a manufacturing cell for close to
real-time monitoring and fault detection [11, 12]. SLADTA (the added A
represents Aggregation) is an extension that adds twin-to-twin communication
to facilitate system-oriented decision-making [9, 11]. Figure 1 illustrates
SLADT and SLADTA.
SLADTA was designed as a general framework that is vendor-neutral and suitable
for adding digital twins to existing systems. SLADTA can be considered for a wide
variety of systems, including manufacturing cells, renewable energy systems, ships and
building information models (BIM), all of which are being investigated by the authors’
research group.

Fig. 1. a) SLADT framework. b) SLADTA. (Adapted from [11]).

The functions allocated by the architecture to its six layers are as follows: Layer 1
contains the devices and sensors that generate data, and Layer 2 contains data
sources that interface with the sensors (for example, controllers). Together,
the first two layers form the physical twin. Layer 3 contains local data
repositories, which are part of the physical twin or, if necessary, added by
the digital twin. Layer 4 is an IoT Gateway, which is a
custom software application that manages the communication between physical twin and
digital twin, and between digital twins. Layer 5 is a cloud-based information repository
and, finally, Layer 6 is the emulation and simulation layer. The software used in Layer
6 is application specific, but is generally a data-endpoint and user interface.
Functionally, Layers 1 and 2 are responsible for data generation, Layers 3 and 4 are
responsible for the data and information flow between a device and the cloud, as well as
between different digital twins, and Layers 5 and 6 extract value from the information
being gathered. Further details about the functions of Layer 4 are given in Sect. 4 when
discussing the pipeline.
The aggregation ability of SLADTA facilitates communication between digital twins,
while limiting the data flows to what is necessary and what is compatible with privacy
and confidentiality considerations. Higher level digital twins, i.e. aggregate digital twins,
do not contain Layers 1 and 2, but aggregate the data of lower level digital twins for
decision-making that requires information from multiple digital twins.
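The selective data sharing enabled by aggregation can be sketched as a filtering step in which each lower-level twin exposes only an agreed subset of its information to the aggregate twin. The twin names and fields below are hypothetical, chosen only to illustrate the privacy consideration.

```python
def expose(twin_state, shared_fields):
    """Return only the fields a lower-level twin agrees to share."""
    return {k: v for k, v in twin_state.items() if k in shared_fields}

def aggregate(twins, shared_fields):
    """Aggregate twin: combine the shared views of its lower-level twins,
    leaving confidential information behind in each lower-level twin."""
    return {name: expose(state, shared_fields) for name, state in twins.items()}

cells = {
    "cell_a": {"throughput": 523, "recipe": "confidential"},
    "cell_b": {"throughput": 598, "recipe": "confidential"},
}
print(aggregate(cells, shared_fields={"throughput"}))
```

The aggregate twin thus sees enough for system-level decision-making without receiving the confidential fields.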

3 Message Queuing Telemetry Transport (MQTT)

As described in the Introduction, MQTT is a lightweight client-server,
publish/subscribe messaging transport protocol designed for machine-to-machine
communication and the Internet of Things (IoT). The protocol runs on TCP/IP,
has a relatively small transport overhead and is suitable for devices that can
only maintain a small code footprint or that have little network bandwidth.
The standards applied to MQTT are developed by OASIS [13].
The MQTT server, usually referred to as the broker, acts as an intermediary between
clients and facilitates publish/subscribe messaging. Its functions include authenticating
and accepting connections, processing subscriptions, accepting published messages and
forwarding published messages to subscribed clients. The client is responsible for open-
ing the connection to the broker, publishing messages to defined topics and subscribing
to topics of interest [13].
MQTT allows the user to select one of three Quality of Service (QoS) levels
for publishing and subscribing, respectively [13]:

• QoS 0: At most once delivery. The message is sent only once and no
acknowledgement is required upon receipt. Network connectivity faults can
therefore result in a message not being received by the broker or the
subscriber.
• QoS 1: At least once delivery. The receiver (either the broker or a
subscriber) must acknowledge receipt; otherwise, the message is published
again. Note that this is a non-blocking function, so the publisher can
publish other messages while waiting for an acknowledgement.
• QoS 2: Exactly once delivery. The receiver must acknowledge the published
message, and additional acknowledgement steps ensure that the message is not
duplicated on the receiver's end. This guarantees that no data is duplicated
or lost, at the cost of additional message overhead.
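The at-least-once behaviour of QoS 1 can be illustrated with a toy resend loop: the publisher keeps resending until an acknowledgement arrives, so a lost acknowledgement produces a duplicate delivery (which QoS 2 would additionally suppress). This is a didactic model of the semantics, not the actual MQTT wire protocol.

```python
def qos1_send(message, transmit, max_attempts=5):
    """At-least-once delivery: resend until acknowledged. The receiver may
    therefore see the same message more than once."""
    for attempt in range(1, max_attempts + 1):
        acked = transmit(message)  # returns True once a PUBACK comes back
        if acked:
            return attempt
    raise RuntimeError("no PUBACK received")

# A flaky link that loses the first acknowledgement: the message is
# delivered twice but acknowledged only on the second attempt.
attempts_seen = []
def flaky_transmit(msg):
    attempts_seen.append(msg)
    return len(attempts_seen) >= 2

print(qos1_send("telemetry", flaky_transmit))  # → 2
```

The duplicate in `attempts_seen` is exactly what a QoS 1 subscriber must be prepared to tolerate.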

To exchange information, a client connects to the broker, which usually
involves some authentication of the client, after which the client can publish
and subscribe to topics. Topics are created dynamically when a client publishes
or subscribes to them, and topics can have a hierarchical structure in which a
topic can have sub-topics. The forward slash '/' operator separates the levels
of a topic, and subscriptions can be made at either higher or lower levels.
For example, a topic sensor could have sub-topics sensor/temperature and
sensor/humidity. A subscription can then be made to sensor for both temperature
and humidity readings, or to only one of the sub-topics. The wildcard operator
'#' may be used when subscribing to an unknown or unspecified sub-topic. The
dollar topic prefix '$' is used to make topics private and to prevent wildcard
subscriptions [13].
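The topic-matching rules above can be made concrete with a small matcher for the '/' separator and the '#' multi-level wildcard (MQTT also defines a single-level '+' wildcard, omitted here for brevity). This is a simplified sketch, not a complete implementation of the specification.

```python
def topic_matches(subscription, topic):
    """Match a published topic against a subscription filter.

    '#' matches all remaining levels; '$'-prefixed topics are excluded from
    wildcard matches, in line with the behaviour described above.
    """
    if subscription.startswith("#") and topic.startswith("$"):
        return False
    sub_levels = subscription.split("/")
    top_levels = topic.split("/")
    for i, level in enumerate(sub_levels):
        if level == "#":
            return True
        if i >= len(top_levels) or level != top_levels[i]:
            return False
    return len(sub_levels) == len(top_levels)

print(topic_matches("sensor/#", "sensor/temperature"))       # → True
print(topic_matches("sensor/humidity", "sensor/temperature"))  # → False
```

A broker performs essentially this comparison for every subscription when a message is published.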
MQTT supports the use of various other application layer transport protocols to
enhance its features, such as the Transport Layer Security (TLS) protocol and Web-
Sockets. TLS, in particular, is used for security and TCP ports 8883 and 1883 are reg-
istered with Internet Assigned Numbers Authority (IANA) for MQTT TLS and MQTT
non-TLS communication, respectively [13].
In terms of security, MQTT does not specify any security solutions since technology
changes quite rapidly and the required security is situation dependent. That said, the
MQTT documentation does provide general guidance to ensure communication security,
such as [13]:

• Servers can authenticate clients by using the username and password field of the
CONNECT packet. The implementation is situation dependent, but common practices
include using the Lightweight Directory Access Protocol (LDAP), OAuth tokens or
operating system authentication.
• It is good practice to hash text before sending it.
• Virtual Private Networks (VPN) can be used when available, to ensure better data
security.
• Clients can authenticate servers by using the TLS certificate sent by the server when
TLS is used.
• Normal messaging containing application-specific authentication information may be
used to authenticate a server.
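As one concrete instance of the guidance above, a client-side TLS context for MQTT's registered TLS port 8883 can be prepared with standard-library tooling; the broker hostname in the comment is a placeholder, and the exact setup would depend on the MQTT client library used.

```python
import ssl

MQTT_TLS_PORT = 8883  # registered with IANA for MQTT over TLS

def make_tls_context():
    """Client-side TLS context that verifies the broker's certificate,
    implementing the server-authentication practice noted above."""
    context = ssl.create_default_context(purpose=ssl.Purpose.SERVER_AUTH)
    context.minimum_version = ssl.TLSVersion.TLSv1_2
    return context

context = make_tls_context()
# An MQTT client library would wrap its TCP socket with this context before
# connecting, e.g. to ("broker.example.com", MQTT_TLS_PORT).
print(context.check_hostname, context.verify_mode == ssl.CERT_REQUIRED)
```

Hostname checking and certificate verification are enabled by default in this context, which covers server authentication via the TLS certificate.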

4 Data Pipeline Communication

4.1 Overview
This section considers the communication within the data pipeline that connects data
sources in a physical twin to data consuming applications, and in particular the parts of
the communication where MQTT can play a significant role. Figure 2a illustrates the
pipeline mapped onto SLADTA for a lower level digital twin and Fig. 2b for an aggregate
digital twin. Google Cloud Platform (GCP) is used here for cloud storage and related
services (in blue in Fig. 2). GCP was chosen as a matter of convenience and other cloud
services also should be suitable, although the implementation details may differ.
The key parts of the data pipeline where MQTT plays an important role are Layer
4 (the IoT Gateway) and the MQTT broker and client parts of Layer 5. It is unlikely
that MQTT will play a significant role in Layers 1 and 2 (the physical twin). Layer 3
(the local data repository) may use MQTT for communication with Layer 4, but that
will be similar to the other MQTT communication in the digital twin and is therefore
not considered in detail here. The parts of Layer 5 not relevant to MQTT, i.e. the cloud
data storage and processing, and Layer 6 (custom cloud-based software) are also not
considered in detail here.
It should be noted that twin-to-twin communication in Fig. 2 is through the respective
Layers 4, and not Layers 3 as in Fig. 1(b). In [11], Layer 3 played a dual role of storing
data and providing a communication means, since it was an OPC UA server. In general,
however, the required communication functions can be achieved through MQTT without
involving Layer 3.
The following subsections first consider the requirements for Layer 4 and then
MQTT’s use in the data pipeline as a means of secure asynchronous communication.

Fig. 2. The digital twin data pipeline mapped onto the SLADTA framework:
(a) lower-level digital twin; (b) aggregate digital twin.

4.2 Layer 4: IoT Gateway

Layer 4 plays a key role in the data pipeline and in the communication where
MQTT is most relevant. The functions performed by Layer 4 are therefore
considered here. Some overall requirements for digital twins in a complex
system should also be noted:

• The design should allow for the addition and removal of physical twins to/from the
system, with their associated digital twins, with minimal reconfiguration time and
effort, because complex systems are likely to change over time.
• The design should allow for the communication between digital twins, which can
include aggregation and/or peer-to-peer interaction, because in complex systems
the various digital twins in the system may be developed by different parties and
information may be shared on a selective basis.
Layer 4 acts as a gateway for information flows between the physical twin and
the virtual world, and between its digital twin and other digital twins. This
layer's functions are [11]:

• Transform the data received from the physical twin to information as required by the
context (e.g. perform unit conversions or convert information to forms more useful
for the higher layers).
• Select or transform the information to be transmitted to the cloud repositories to avoid
excessive database requirements and bandwidth bottlenecks.
• Direct the information flows to the different data repositories on Layer 5 and other
digital twins, taking into account that different users of the digital twin may have
access to different subsets of the information.
• Check all information received from the cloud repositories and other digital twins,
to ensure safety, consistency and compatibility with the physical twin. Where
appropriate, communicate the relevant data to the physical twin.

From the data pipeline perspective, the main functions that this layer performs are:

• Structure the data being gathered into a format that is suitable for messaging.
• Structure and process the data so that it can be used and manipulated in other software
applications.
• Interface with Layers 3 and 5, as well as with other digital twins.
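A minimal sketch of the gateway's transform-and-select functions: raw physical-twin readings are unit-converted, reduced to the fields destined for a given repository, and serialised into a message payload. The field names and the tenths-of-a-degree conversion are illustrative assumptions, not details of a specific physical twin.

```python
import json

def to_cloud_message(raw, allowed_fields):
    """Layer 4 transform: convert units, keep only the selected fields,
    and serialise into a payload suitable for publishing over MQTT."""
    info = dict(raw)
    if "temp_raw" in info:  # e.g. a controller reporting tenths of a degree
        info["temperature_c"] = info.pop("temp_raw") / 10.0
    selected = {k: v for k, v in info.items() if k in allowed_fields}
    return json.dumps(selected, sort_keys=True)

# Internal counters are filtered out; only agreed fields reach the cloud.
payload = to_cloud_message({"temp_raw": 215, "debug_counter": 9},
                           allowed_fields={"temperature_c"})
print(payload)  # → {"temperature_c": 21.5}
```

The field selection step is what limits database and bandwidth load, and what controls which subset of information each repository or digital twin receives.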

Layer 4 will typically be a custom software application developed by persons
who have detailed knowledge of the associated physical twin. This application
can be developed in one of many programming languages, provided that the
language has support for the chosen communication means. For MQTT, client
libraries are available for C, C++, Java, Node.js, C#/.NET, Python, PHP and
more. Each digital twin must have one or more MQTT clients and must be able
to set up the client(s) according to the chosen authentication and security
requirements. Layer 4 may also host an MQTT broker for twin-to-twin
communication, as outlined in the next section.

4.3 Secure Asynchronous Communication

In the previous work on SLADTA [11], as outlined in Sect. 2, Layer 3 was named “local
data repositories”, but included communication functions. Upon closer interrogation of
the previous uses of SLADTA, the combination of the repository and communication
functions in Layer 3 was found to be incidental, because of the choice of OPC UA as
the means of storage and communication. Also, the previous SLADTA research did not
identify the means of communication between Layers 4 and 5 as a distinct function.
Redelinghuys [11] divided Layer 3 into Sublayers 3P and 3A, where 3P (“P” for
physical) is used to exchange data between the physical twin (Layer 2) and Layer 4 of
its digital twin, while 3A (“A” for aggregation) provides information transfer between
digital twins. In this paper, the nomenclature is changed to 4P and 4A, because it is more
general to allocate the communication functions to Layer 4. Further, 4C (“C” for cloud)
is added, to denote the communication between Layers 4 and 5. The names 4P, 4A and
4C denote all the communication passing through Layer 4, the IoT Gateway.
Although 4P, 4A and 4C can use different communication platforms, their primary
functional requirements are similar: they must provide (1) secure, (2) asynchronous com-
munication in a (3) vendor neutral format. Firstly, secure communication is required to
protect the proprietary information being exchanged and to prevent malicious inter-
ference with the data pipeline. Secondly, asynchronous communication supports the
overarching objectives of easy reconfiguration and aggregation. If synchronous commu-
nication is used, more knowledge about the communicating partners is required when
developing a digital twin’s IoT Gateway and unexpected consequences can arise if one
communication path unexpectedly blocks another path’s communication. In complex
systems, such emergent behaviour is best avoided. Finally, vendor neutral communica-
tion allows interconnections with physical twin elements, other digital twins and cloud
repositories from different vendors. It also supports reconfiguration and aggregation.
MQTT satisfies the above three functional requirements. Using “off-the-shelf” soft-
ware and libraries often will be preferable because the people developing Layer 4 (a
custom application that requires specialist knowledge of the physical twin) usually will
not be specialists in developing this level of communication from scratch.

4.4 MQTT Using Google Cloud Platform vs Eclipse Mosquitto


As mentioned in Sect. 3, an MQTT broker is required to use MQTT. In Fig. 2, GCP's IoT
Core along with GCP Cloud Pub/Sub provide an MQTT broker for 4C. GCP's services
or an alternative MQTT broker that is not cloud based, such as the Eclipse Mosquitto
broker, may be considered for 4A, as shown in Fig. 2. This section highlights some key
differences between the two MQTT platforms.
Using the MQTT broker of the cloud service chosen for Layer 5 to facilitate 4C allows
for easy integration, but GCP does not comply with some MQTT standards (presumably
for security and performance reasons). Particularly relevant here are the dynamic creation
of topics, and subscriptions by a client to data published by other clients to the MQTT
broker.
In GCP, each MQTT client (and therefore every digital twin) has to be associated
with an IoT Core virtual device and each virtual device is contained within an associated
registry. Each registry must be linked to predefined topics in Cloud Pub/Sub, which is the
main messaging service within GCP. The MQTT client can then publish to the “events”
topic of the virtual counterpart of the device and the virtual device will publish to the
registry’s linked Pub/Sub topic(s). All IoT Core virtual devices within a registry can
publish to the same Pub/Sub topic(s). Dynamic creation of MQTT topics is therefore
not possible in the GCP MQTT implementation.
In GCP it is not possible to subscribe directly to the MQTT broker, as in conventional
MQTT. For an MQTT client to subscribe to a topic, a Cloud Pub/Sub subscription must
first be created and then the cloud Pub/Sub client library must be used for a Cloud
Pub/Sub subscription that is linked to the topic of interest.
The publish and subscribe paths in GCP do have advantages, e.g. the IoT Core virtual
device that is associated with an MQTT client can be used to check whether an MQTT
client is connected, when last it had a heartbeat and had sent telemetry.
Digital Twin Data Pipeline Using MQTT in SLADTA 119

Communication using MQTT through GCP is the obvious route for 4C when Layer
5 uses GCP. For 4A, it may also be preferable to use MQTT through GCP due to its
security measures and additional services. An alternative is a conventional MQTT broker
such as Eclipse's Mosquitto broker, with much more straightforward publish and
subscribe operations. With a broker such as Mosquitto, the developer has more
responsibilities than in GCP to implement security measures. For example, authentication
of the client and broker must be configured, and there are various options. The
easiest is no authentication, in which case the broker will use its default configuration.
Authentication of the broker using the TLS protocol includes the following considerations:

• The appropriate connection port must be specified (typically 8883).


• The broker must not allow anonymous connections.
• The file paths for the certificate authority (CA) security certificate, the broker security
key and its security certificate must be provided in the configuration file.
• The client must be provided with the CA security certificate.

To authenticate a client, the broker can apply username and password authen-
tication, the TLS protocol or both. For username and password authentication, the
mosquitto_passwd utility function (provided with the broker) is used to create a text
file containing valid username and password pairs. To authenticate the client using the
TLS protocol, the client must be provided with a security key and a security certificate
in addition to the CA security certificate. The broker must then be configured to request
a security certificate from the client during the authentication process. It is also possible
to configure the Mosquitto broker to listen on various ports and apply different types of
authentication on the different ports.
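The broker-side settings described above might appear as follows in a Mosquitto configuration file. This is a hypothetical sketch; the file paths are placeholders.

```conf
# Hypothetical mosquitto.conf fragment for a TLS-secured listener.
listener 8883
allow_anonymous false
cafile /etc/mosquitto/certs/ca.crt
certfile /etc/mosquitto/certs/broker.crt
keyfile /etc/mosquitto/certs/broker.key
# Request a certificate from clients to authenticate them via TLS:
require_certificate true
# Additionally (or alternatively), username/password authentication:
password_file /etc/mosquitto/passwd
```

With `require_certificate true`, the broker demands a client certificate during the TLS handshake; combining it with `password_file` applies both client authentication mechanisms mentioned above.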
Authorization to access certain topics can also be specified for certain usernames
by creating an access control list (ACL) file and specifying the path of this file in the
configuration file of the broker. Mosquitto brokers can further be connected using their
bridging function and then selected topics can be shared between brokers as specified in
the configuration file of each broker.
Irrespective of the MQTT broker chosen, the digital twin’s MQTT clients that connect
to the MQTT broker(s) are set up and configured in Layer 4 of the digital twin.

5 Case Study Implementation


A heliostat field was chosen as the case study for this data pipeline because it is a large
collection of devices that transmit sensor data and receive configuration settings. The
complexity of the pipeline derives from the requirement to structure the large amount
of data so that it can be utilized in the cloud. For this case study, an illustrative small
implementation is used, with three digital twins that each have a physical twin (referred
to hereafter as lower-level digital twins) and one aggregate digital twin.

5.1 Layers 1, 2 and 3: Heliostat Sensors, Cluster Controllers and Database


Each digital twin represents a cluster of 24 heliostats. Each heliostat in the cluster (Layer
1) sends the following data to one higher-level controller (Layer 2): a heliostat ID number,
a battery voltage level, two stepper motor positions, a timestamp, a status value and a
target position (as calculated by the Grena algorithm). The higher-level controller is
connected to an Ethernet network and sends its data to a PostgreSQL database on Layer
3. The controller for the whole heliostat field interfaces with this database. For the case
study evaluation, Layers 1 and 2 were simulated in software that sends the appropriate
data to Layer 3. The simulation of Layers 1 and 2 is transparent to the higher layers of
the digital twin, but gives the opportunity to easily experiment with, e.g., data rates.

5.2 Layer 4: IoT Gateway


The IoT Gateway is a Python program which includes a configuration file, a setup part,
a data processing part, a data source interaction part and an MQTT client part.
The configuration file contains information such as the file path of the RSA private
key for authentication purposes. The setup part coordinates the use of the various parts.
For the lower level digital twins, this means creating the MQTT clients for 4C and 4A
and establishing communication 4P with the PostgreSQL database.
For the lower-level digital twins, the data processing part includes converting an
SQL query result to Python dictionary format, the dictionary format to JSON format, or the
dictionary format to a string data type. To publish messages to GCP's IoT Core, the data
being sent must be a byte object and not a string object, and therefore must be encoded
to 8-bit Unicode Transformation Format (UTF-8).
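The conversion chain described above can be sketched in a few lines of Python (the column names are illustrative assumptions, not taken from the case study database):

```python
import json

# Hypothetical row from the PostgreSQL query, already mapped to a dict.
row = {"heliostat_id": 7, "battery_v": 12.3, "status": 0}

# Dictionary -> JSON string -> UTF-8 byte object, as required by IoT Core.
payload = json.dumps(row).encode("utf-8")
```

The `payload` bytes object is what the MQTT client publishes; the receiving side reverses the chain with `decode("utf-8")` and `json.loads`.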
For this case study, the aggregate digital twin’s data processing part additionally
assesses various heliostat parameters and then publishes the status of a pod (a collection
of six heliostats) to the cloud.
The data source interaction part of the IoT Gateway was implemented as follows:
For 4P, an SQL database interface using the psycopg2 library in Python is used. For
4A and 4C communication, respective MQTT clients were created using the Eclipse
Paho MQTT library. The 4A client subscribes to the Mosquitto broker, but 4C has to use
GCP’s Cloud Pub/Sub client library to subscribe.
To set up the MQTT clients for 4C and 4A, three general steps must be followed:
the client authentication and the relevant callback functions must be specified, and then
the client must connect. Thereafter, the client can publish and subscribe as necessary.
Authentication of the client with Google Cloud IoT Core’s broker (for 4C) has three
parts to it: a JSON Web Token (JWT) must be configured, the token must be signed
with a Rivest-Shamir-Adleman (RSA) or elliptical-curve (EC) private key, and the TLS
certificate must be provided.
For 4A in this case study, the broker was authenticated by the client using the TLS
protocol and the client was authenticated by the broker using the TLS protocol together
with username and password authentication. The OpenSSL toolkit was used to generate
the appropriate security keys and certificates for the TLS communication, as recommended
by the Mosquitto documentation. The authentication described above for both
brokers was set up using the Paho MQTT client library.
Preliminary results indicate that the Mosquitto broker, running on the local network,
has lower round-trip latencies to the aggregate than the GCP broker. These latencies
are also influenced by the message frequency and size, but message frequency seems to
have a larger influence. Further tests are being done to determine the number of digital
twins that can be aggregated effectively within the case study environment for various
scenarios.

6 Conclusion
This paper presents a digital twin data pipeline that is built on SLADTA and uses MQTT.
SLADTA provides a framework to implement digital twins for complex systems. MQTT
is a communication protocol that is suited to large networks of small devices and is well
suited as the protocol for communication with the twin’s cloud repository and with other
digital twins.
To demonstrate the functionality of SLADTA with MQTT, a case study of a heliostat
field was chosen because a heliostat field is a large collection of simple devices. When
using the MQTT broker provided within Google’s IoT Core service and the accompany-
ing Cloud Pub/Sub service, as well as the Mosquitto broker, MQTT was found to be well
suited to the tasks of asynchronous communication and aggregation within SLADTA.
In the case study the GCP IoT Core broker was used for communication between the
digital twin’s IoT Gateway and the cloud, while the Mosquitto broker was used for com-
munication between pairs of digital twins, through their respective IoT Gateways. The
choice of broker is situation dependent since both have certain benefits and drawbacks.
The case study demonstrated that MQTT can effectively provide vendor neutral,
secure, asynchronous communication that complements SLADTA’s use for digital twins
in complex systems.
Further research is required to explore the advantages and limitations of the com-
bination of SLADTA and MQTT, such as cases with a large number of digital twins.
Additionally, the use of other cloud services on Layers 5 and 6 also needs to be researched.
A more extensive comparison of the GCP broker vs the Mosquitto broker, particularly
regarding security and latency, would be valuable.

Acknowledgements. This work was funded by the Horizon 2020 PREMA project. The project
investigates concentrated solar thermal (CST) power, inter alia, for pre-heating of manganese
ferroalloys to save energy and reduce CO2 emissions. Industry 4.0 technologies and concepts can
play an important role, as they can increase the level of automation and improve the reliability of
CST plants.

References
1. Taylor, N., Human, C., Kruger, K., Bekker, A., Basson, A.: Comparison of digital twin development in manufacturing and maritime domains. In: Borangiu, T., Trentesaux, D., Leitao, P., Boggino, A.G., Botti, V. (eds.) Service Oriented, Holonic and Multi-agent Manufacturing Systems for Industry of the Future. Studies in Computational Intelligence, vol. 853, pp. 158–170. Springer, Cham (2020)
2. Kritzinger, W., Traar, G., Henjes, J., Sihn, W., Karner, M.: Digital twin in manufacturing: a
categorical literature review and classification. In: Proceedings of the 16th IFAC Symposium
on Information Control Problems in Manufacturing INCOM 2018. IFAC-PapersOnLine, vol.
51, pp. 1016–1022. Elsevier B.V. (2018)
3. Bao, J., Guo, D., Li, J., Zhang, J.: The modelling and operations for the digital twin in the
context of manufacturing. Enterprise Inf. Syst. 13(4), 534–556 (2018)
4. Grieves, M., Vickers, J.: Digital twin: mitigating unpredictable, undesirable emergent behavior
in complex systems. In: Kahlen, F.-J., Flumerfelt, S., Alves, A. (eds.) Trans-Disciplinary
Perspectives on Complex Systems: New Findings and Approaches, pp. 85–113. Springer,
Cham (2017)
5. Siemens AG: MindSphere: the cloud-based, open IoT operating system for digital transformation. https://www.siemens.com/mindsphere; https://www.plm.automation.siemens.com/media/global/en/Siemens_MindSphere_Whitepaper_tcm27-9395.pdf. Accessed 16 July 2019
6. Tao, F., Cheng, J., Qi, Q., Zhang, M., Zhang, H., Sui, F.: Digital twin-driven product design,
manufacturing and service with big data. Int. J. Adv. Manuf. Technol. 94, 3563–3576 (2018)
7. Roy, R., Tiwari, A., Stark, R., Lee, J.: Predictive big data analytics and cyber physical sys-
tems for TES systems. In: Redding, L., Roy, R., Shaw, A. (eds.) Advances in Through-Life
Engineering Services, Decision Engineering. Springer, Cham (2017)
8. Alam, K.M., El Saddik, A.: C2PS: a digital twin architecture reference model for the cloud-
based cyber-physical systems. IEEE Access 5, 2050–2062 (2017)
9. Redelinghuys, A.J.H., Basson, A.H., Kruger, K.: A six-layer architecture for digital twins
with aggregation. In: Borangiu, T., Trentesaux, D., Leitao, P., Boggino, A.G., Botti, V. (eds.) Service Oriented, Holonic and Multi-agent Manufacturing Systems for Industry of the Future. Studies in Computational Intelligence, vol. 853, pp. 171–182. Springer, Cham (2020)
10. Naik, N.: Choice of effective messaging protocols for IoT systems: MQTT, CoAP, AMQP
and HTTP. In: IEEE International Systems Engineering Symposium (ISSE), Vienna, Austria,
pp. 1–7. IEEE (2017)
11. Redelinghuys, A.J.H.: An architecture for the digital twin of a manufacturing cell. Published
Doctoral dissertation, Stellenbosch University, Stellenbosch (2019)
12. Redelinghuys, A.J.H., Basson, A.H., Kruger, K.: A six-layer architecture for the digital twin:
a manufacturing case study implementation. J. Intell. Manuf. 1–20 (2019)
13. Banks, A., Briggs, E., Borgendale, K., Gupta, R.: MQTT Version 5.0. https://docs.oasis-open.
org/mqtt/mqtt/v5.0/mqtt-v5.0.html. Accessed 14 Oct 2019
Toward Digital Twin for Cyber Physical
Production Systems Maintenance: Observation
Framework Based on Artificial Intelligence
Techniques

Farah Abdoune1 , Maroua Nouiri2(B) , Pierre Castagna2 , and Olivier Cardin2


1 Manufacturing Engineering Laboratory of Tlemcen (MELT), B.P N° 119, Tlemcen, Algeria
2 LS2N, UMR CNRS 6004, Université de Nantes, IUT de Nantes, 44 470 Carquefou, France

Maroua.nouiri@univ-nantes.fr

Abstract. Manufacturing Systems are considered complex engineering systems


given the large number of integrated entities and their interactions. Unplanned
events and disruptions that can happen at any time in real-world industrial environments
increase the complexity of manufacturing production systems. In the fourth
industrial revolution (so called Industry 4.0), the industrial sector is rapidly chang-
ing with emerging technologies like Cyber-Physical Production System (CPPS),
Internet of Thing (IoT), Artificial Intelligence (AI), etc. However, the efficiency
and reliability of these systems are still questionable in many circumstances. To
address this challenge, an observation framework based on AI techniques aimed
at elaborating predictive and reactive planning of the maintenance operations of
CPPS is proposed in this paper. The proposed tool aims to improve the system’s
reliability and helps the maintenance supervisors to adjust maintenance decisions.
In order to assess the performance of the proposed tool, a case study on an industry-
type learning factory is considered. A proof of concept shows the efficiency of the
framework.

Keywords: Predictive maintenance · Reactive maintenance · Cyber-physical


production system · Artificial intelligence

1 Introduction
Industrial companies today face two important problems. Customer demand is increasingly
diverse, while customers are more and more demanding. At the same time, globalization
implies significant competition. Faced with these challenges, industries seek
to improve the efficiency, reliability and availability of their services to be more competitive.
Many studies indicate that the maintenance service and related activities have
a direct impact on the efficiency of production [1].
In fact, a good maintenance strategy reduces significantly the operating costs of the
concerned systems and increases their reliability and overall availability to undertake
operations. Maintenance activities aim to restore an item to working order or to achieve

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021


T. Borangiu et al. (Eds.): SOHOMA 2020, SCI 952, pp. 123–134, 2021.
https://doi.org/10.1007/978-3-030-69373-2_8
124 F. Abdoune et al.

better condition [2]. The lack of knowledge about production systems, their equipment and
the associated processes further complicates the management of these systems. However,
with the developments of Information and Communications Technologies (ICT) and the
rise of the fourth industrial revolution, the focus on the maintenance of Cyber-Physical
Production Systems (CPPS) is increasing. Several definitions have been proposed for
CPPS, mostly related to varying contexts. According to [3], Cyber-Physical Production
Systems are defined as “systems of systems of autonomous and cooperative elements
connecting with each other in situation dependent ways, on and across all levels of
production, from processes through machines up to production and logistics networks,
enhancing decision-making processes in real-time, response to unforeseen conditions
and evolution along time”.
Our research target in this work is the maintenance of CPPS. Recently, many tech-
niques have been proposed in order to help managers, supervisors and operators to
optimize maintenance decisions. An observation-based predictive maintenance frame-
work for CPPS is proposed. Our work exploits the concept of CPPS and uses Industrial
Internet of Things (IIoT) technologies to deploy an intelligent tool for predictive and
reactive maintenance. The proposed observation-based predictive maintenance frame-
work is based on real time data acquisition and analysis through Artificial Intelligence
(AI) techniques to detect and treat dysfunctions.
The rest of this article is organized as follows: a review of recent works showing
the evolution of CPPS maintenance is given in the next section. Section 3 describes the
details of the proposed framework. Section 4 discusses the results of our experimentation
on a case study based on a learning factory. A conclusion and some futures directions
are given in Sect. 5.

2 Literature Review on Maintenance

The maintenance strategies are classified in two main groups: reactive and preventive
[4]. In the reactive category, the maintenance activity is triggered by an occurrence of a
failure. In opposition, the preventive category aims at avoiding failure occurrence.
In order to avoid the significant downtime and repair costs due to a classical cor-
rective maintenance, the manufacturers are more and more interested in the predictive
maintenance strategy [5], which is based on continuous measurements to detect faults
and anticipate problems. In [6], methods and tools related to predictive maintenance in
manufacturing systems are reviewed and an integrated predictive maintenance platform
is proposed. In [7], a review on simulation-based approaches is made, which have been
widely used in the maintenance context. In these studies, the behaviour of the system is
reproduced and simulated.
Recently, AI techniques and Big Data applications provide technical support for the
efficient development of manufacturing systems by accurately and timely data collection,
data analysis, data processing, root-cause identification, and deriving valuable insights
for maintenance improvement [8]. In the literature, there have been several reviews on the
role of AI for the maintenance of manufacturing systems [9].
Machine learning (ML) is widely used in condition monitoring, fault prediction and
predictive analytics. ML techniques are a branch of AI methods based on the use of
Toward Digital Twin for Cyber Physical Production Systems Maintenance 125

huge amounts of data to learn and to identify patterns [8]. In [10], the authors proposed a
conceptual model for a proactive decision support system based on real-time predictive
analytics, designed for the maintenance of cyber-physical systems in order to minimize
their downtime. A Hierarchical Modified Fuzzy Support Vector Machine (HMFSVM) is
proposed in [11] to understand the trends of vehicle faults. This method is compared
with commonly used approaches like logistic regression, random forests and support
vector machines. A reference architecture based on deep learning for CPS is proposed
in [12]. The concept is explored for a CNC machine utilized on the shop floor.
The Centre for Intelligence Maintenance System (IMS) created a Watchdog Agent
Technology - an approach for product performance degradation assessment and predic-
tion, for modeling and decision making with human interaction [13]. This technology
includes time domain analysis, Principal Component Analysis (PCA), Fuzzy Logics
System (FLS), Logistic Regression (LR), Artificial Neural Network (ANN), Bayesian
belief networks and Support Vector Machines (SVM) [14].
An Intelligent System for predictive maintenance (SIMAP) [15] has been devel-
oped for real time diagnosis of industrial processes based on neural networks that detect
anomalies. The fuzzy logic method is used to provide behaviour modelling of a main-
tainer experience integrated into an intelligent maintenance system [16]. To estimate
failure degradation of bearings and to predict failure probability, LR has also been used
in combination with relevance vector machine (RVM) [17].
The various applications of ANN in fault risk assessment and early fault detection
have been reviewed, with examples of their usage in predictive
maintenance cases [18]. SVM has been used for fault diagnosis of an automobile hydraulic
brake system [19]. Recently, new methods based on hybridization between supervised
and unsupervised learning techniques have been developed; an example is the root cause
analysis and faults prediction for intelligent transportation systems (ITSs) based on the
coupling of K-means Algorithm and ANN [20]. The method was tested on the Train
Door System at Bombardier Transport (BT) as a case study.
We conclude from the literature review that several methods have been proposed
for maintenance (mathematical modelling, simulation-based techniques, AI tools, etc.).
However, the previously cited methods consider the mass of data accumulated over the
years from the integrated embedded sensors of CPPS (historical data) to make effective
maintenance decisions. Few works use real-time data to detect deviations of the
system and treat system malfunctions in real time.

3 Observation-Based Maintenance Framework

3.1 The Proposed Framework

The faults in CPPS may be due to internal causes (for example: machine breakdown) or
external causes. The proposed framework aims to get early discovery of system faults
that may compromise the reliability of the production system.
In this work, an observation-based predictive and reactive maintenance model frame-
work is proposed. The main objective is to identify and localize the disruptions, assess
their criticalities and then notify maintenance managers or operators via IoT tools.
Figure 1 presents the flowchart of the proposed framework. The framework is structured
in four main parts detailed hereinafter.

Fig. 1. The flowchart of the proposed framework

The Cyber-Physical Production System (CPPS)


The first part of the proposed framework is the CPPS. It is decomposed into two parts:
a physical part, including workstations, storage and transfer means, and a cyber part
responsible for controlling the physical components of the CPPS. This logical part of the CPPS
includes infrastructures such as programmable logic controllers (PLC), a manufacturing
execution system (MES) and other elements.

Data Acquisition (DA)


A fundamental part of our framework is the acquisition of data from the equipment. This
function is important because it provides knowledge of the state and behaviour of the CPPS.
There are two sources of information: i) CPPS components such as the MES, which provide
equipment data and production-specific data; ii) other components such as PLCs, which provide
measurable data concerning a product being processed, as well as information from the
sensors that is necessary for control. The Data Communication module allows this
exchange with the CPPS (denoted (a) in Fig. 1).
Some information might also be provided by some other sources than the CPPS
itself. For example, with the IIoT technologies, it is possible to sense the surrounding
physical environment. Various signals such as vibration, pressure, or temperature can
be extracted through sensors. In Fig. 1, this flow of data was simplified and drawn as
collected from IIoT devices (b).

Set of Detectors
The third part of the framework consists of a set of n Detector modules (D1 to Dn ) used
to detect CPPS malfunctions. The core of each detector is a real-time observation model.
Based on information from the CPPS and IoT sensors, given by the Data Acquisition
(DA), this model predicts what the “ideal” (or nominal) behaviour of the system should
be. The real-time observation model Mi determines the difference (Δ) between the
nominal behaviour (e) and the actual behaviour of the CPPS (c). This difference provides
valuable information about the dysfunction occurring in the CPPS. Each detector is
assigned to a specific aspect of the CPPS (real-time events, thermal behaviour, energy
management, economic, etc.).
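The comparison performed by each detector can be sketched as follows. This is a minimal illustration; the function name, variable names and threshold are assumptions, not taken from the framework.

```python
def detect_deviation(nominal, actual, threshold):
    """Return the deviation (delta) between the nominal behaviour predicted
    by the observation model and the actual behaviour of the CPPS, and
    whether it exceeds the malfunction threshold."""
    delta = actual - nominal
    return delta, abs(delta) > threshold

# Example: a pallet transfer predicted to take 8.0 s actually takes 11.5 s.
delta, malfunction = detect_deviation(nominal=8.0, actual=11.5, threshold=2.0)
```

When `malfunction` is raised, the deviation and its context would be stored in the malfunction history for the IMaDeSC to analyse.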

Intelligent Maintenance Decision Support Center (IMaDeSC)


The fourth part, denoted Intelligent Maintenance Decision Support Center (IMaDeSC),
makes it possible to deal with the discrepancies detected between the nominal virtual
behaviour and the actual behaviour. The observed malfunctions (d) detected by the
observation model Mi are stored in a database, hence creating a history of how the
equipment has been used over time.
These historical data are used as input of a ML algorithm. The analysis of these data
aims to detect abnormal patterns and identify recurring failure scenarios. The purpose
of using machine learning methods is to improve the system’s overall efficiency.
In fact, the historical data analysis allows preventive maintenance strategies to be
defined. The prediction of the future behaviour of the system indicates the potential
occurrence of a failure within a time window, so that the operator can be notified of
the dysfunction as well as of its criticality. Therefore, a preventive maintenance task
schedule can be established before the malfunction occurs.
Analysis of immediate data enables decisions to be made regarding reactive main-
tenance, based for example on an estimated threshold set beforehand by the operator.
The results of this analysis can be recommendations transmitted to the operator to help
him undertake reactive responses that will be applied to the CPPS. These recommendations
for adjusting maintenance decisions can be transmitted via IIoT devices
(connected watches, smartphones, virtual reality tools, etc.).

3.2 Artificial Intelligence Techniques


As presented in Sect. 2, machine learning methods have been used to deal with mainte-
nance issues and to improve the efficiency of the system. ML techniques are a branch
of AI methods based on the use of huge amounts of data to learn, identify patterns and
make decisions with minimal human intervention [20]. Classification and regression are
the major supervised ML techniques used for prediction tasks.
Supervised learning is based on using labelled data with a correct output to learn and
be able to classify. Classification is the process of finding mathematical and statistical
models that separate data into multiple classes, where data can be assigned binary or
multiple discrete labels. It is used to classify normal and abnormal patterns, i.e. “faulty”
and “healthy” within certain time periods in predictive maintenance. In contrast,
regression is the process of finding a model that maps the data to continuous
real values. It is generally used to predict the remaining useful life of industrial
equipment.
Here, the challenge is to choose the appropriate and most efficient technique in order
to present a solid decision-making tool. There are several methods able to detect failures.
According to literature, the most relevant ones and adequate in maintenance predictive
model include logistic regression (LR), artificial neural network (ANN), and support
vector machine (SVM).
Logistic regression is a classification technique used to assign observations to a
discrete set of classes for analysing problems, where there are one or more independent
variables that determine an outcome. The outcome is measured with a dichotomous
variable (in which there are only two possible outcomes). Logistic regression uses
the logistic function, an S-shaped (sigmoid) curve, to squeeze the output of a linear
equation between 0 and 1. It is used in predictive maintenance cases to model the
probability of a certain event, such as a healthy or faulty state.
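As a minimal sketch of this idea in Python (the weights, bias and feature values below are invented for illustration, not taken from a fitted model):

```python
import math

def logistic(z):
    """S-shaped logistic function: squeezes any real value into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def fault_probability(x, weights, bias):
    """Probability that observation x is 'faulty' under a logistic
    regression model whose parameters are assumed already fitted."""
    z = sum(w * xi for w, xi in zip(weights, x)) + bias
    return logistic(z)

# Two hypothetical features (e.g. vibration level, temperature deviation).
p = fault_probability([3.5, 1.0], weights=[0.8, -0.5], bias=-1.0)
label = "faulty" if p > 0.5 else "healthy"
```

The 0.5 decision threshold can be moved to trade missed faults against false alarms, which is a typical tuning step in predictive maintenance.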
An Artificial Neural Network (ANN) is a method inspired by the brain biological
structure, built to process information that are linked together. The basic building block is
similar to a real neural network. Thus, it has inputs, outputs and processing elements. It is
very powerful when there are massive datasets where they can learn tasks by considering
input data [18]. It has three basics types of layers: an input layer, hidden layers and an
output one. Due to their ability to learn from examples, ANN has received an important
attention, and it shows promising results for evaluating data in order to support predictive
maintenance activities.
A Support Vector Machine (SVM) separates two sets with a hyperplane of maximal
margin, where the margin is the sum of the distances from the plane to the closest point
of each set [14]. The objective of the SVM algorithm is to find the hyperplane in an
N-dimensional space with the maximum margin that distinctly classifies the data points.
This method has often been used in condition monitoring and fault diagnosis.
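As an illustration of the three techniques (not an experiment from the paper), they can be compared with scikit-learn on a synthetic healthy/faulty dataset; the dataset, features and hyperparameters below are assumptions:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# Synthetic condition-monitoring data: 2 features, label 1 = faulty
rng = np.random.default_rng(0)
healthy = rng.normal(loc=0.0, scale=1.0, size=(200, 2))
faulty = rng.normal(loc=3.0, scale=1.0, size=(200, 2))
X = np.vstack([healthy, faulty])
y = np.array([0] * 200 + [1] * 200)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "LR": LogisticRegression(),
    "ANN": MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
    "SVM": SVC(kernel="linear"),
}
# Fit each model and report its accuracy on the held-out test split
scores = {name: m.fit(X_tr, y_tr).score(X_te, y_te) for name, m in models.items()}
```

On real maintenance data, the relative performance of the three methods depends heavily on feature engineering and class balance, which is why a comparison such as [14] is needed before committing to one of them.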

4 Case Study on Assembly Line: Implementation Architecture

In order to assess the performance of the proposed framework, a case study on an
assembly line is considered.

4.1 Learning Factory Description

Figure 2 presents a global view of a virtual representation of an assembly line at the
University of Nantes. This line includes six workstations and a pallet storage. Pallets
transport the products; each pallet is equipped with an RFID tag storing the list of
services to be executed on the transported products.
Pallets move through a complex network of conveyors. At each switching point,
an RFID Read/Write Unit is used to decide the orientation of the pallets. Seventeen
Read/Write Units are used to identify the positions of the pallets. Four PLCs and an
Toward Digital Twin for Cyber Physical Production Systems Maintenance 129

Industrial MES control the system. The aim of the tool is to improve the pallet
movement system by detecting minor blockages that slow the system down and major
blockages that lead to the total immobilization of pallets.

Fig. 2. The learning factory

4.2 Implementation Architecture


A single detector is implemented for this proof of concept; Fig. 3 presents the
implemented flowchart. The detector is built on a discrete event simulation model
running in real time, for which the FlexSim software was chosen. The simulated digital
model communicates with the CPPS components through an emulation module that
exchanges data such as the ID of the incoming pallet and signals from sensors and
actuators via an OPC UA server. The model simulates the nominal behaviour of the
real system, detects malfunctions in real time and saves them directly in a MySQL
database. The database in turn feeds the AI tool, which uses the chosen machine-learning
algorithm, a logistic regression, to analyse the data and alert the operator to a potential
failure through a portable IIoT device.
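The logging step of this pipeline can be sketched as follows; sqlite3 stands in here for the MySQL database of the case study, and the table schema, column names and values are illustrative, not taken from the paper:

```python
import sqlite3
import time

# Illustrative schema; the actual MySQL schema is not specified in the paper.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE malfunction_log (
    ts REAL, reader_id TEXT, pallet_id TEXT, delta_ms REAL)""")

def log_malfunction(reader_id, pallet_id, delta_ms):
    """Record a detected gap between the simulated and the real pallet."""
    conn.execute("INSERT INTO malfunction_log VALUES (?, ?, ?, ?)",
                 (time.time(), reader_id, pallet_id, delta_ms))
    conn.commit()

log_malfunction("L1", "pallet-042", 2300.0)  # pallet 2.3 s late at reader L1
rows = conn.execute("SELECT reader_id, delta_ms FROM malfunction_log").fetchall()
```

Each row then becomes one training observation for the learning algorithm fed by the database.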

4.3 Observation Model Behaviour and Data Logging


A very important step is to synchronize the observation model with the actual data
coming from the CPPS. Figure 4 presents the behaviour of the implemented detector.
Two different phenomena can occur: (i) the actual pallet is late compared to the virtual
pallet, which happens especially when the actual pallet is blocked, or (ii) the actual
pallet is ahead of the virtual pallet.
For each of the seventeen RFID read/write units, two events must therefore be
detected: the arrival of the virtual pallet at the location of the virtual RFID unit and
the arrival of the actual pallet at the actual RFID unit location.
130 F. Abdoune et al.

Fig. 3. The flowchart of the implemented framework for the case study

Fig. 4. Detector’s behaviour model



When the virtual pallet arrives at the virtual RFID unit point, it is blocked to wait for
the arrival of the actual pallet. It is, however, important to generate an alert before the
arrival of the actual pallet, especially if this actual pallet is blocked; this is the function
of the loop denoted (d) in Fig. 4. Of course, if the actual pallet arrives within the Dlim
period, no alert is logged in the database.
When the actual pallet arrives at the RFID unit location, the virtual pallet is
resynchronized at the location of the virtual unit. This is important so that the behaviour
of the virtual model remains consistent with reality.
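A minimal sketch of this detector logic, assuming simple event timestamps rather than the real-time FlexSim model (function and variable names are ours):

```python
def detect_at_reader(virtual_arrival, actual_arrival, d_lim):
    """Detector behaviour at one RFID unit, using event timestamps (seconds).

    The virtual pallet is held at the virtual reader until the actual pallet
    arrives; an alert is raised if the wait exceeds d_lim. In all cases the
    virtual pallet is then resynchronized to the actual arrival."""
    delta = actual_arrival - virtual_arrival  # >0: actual pallet is late
    alert = delta > d_lim                     # pallet still missing after Dlim
    resync_time = actual_arrival              # virtual pallet released here
    return delta, alert, resync_time

# Actual pallet 5 s late with a 3 s tolerance: an alert is raised
delta, alert, resync = detect_at_reader(10.0, 15.0, d_lim=3.0)
```

A negative delta (the actual pallet ahead of the virtual one) produces no alert; the resynchronization simply moves the virtual pallet forward.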
The data analysis and decision centre is of course a central element of the proposed
framework. It is based on a learning algorithm; the literature review showed that many
artificial intelligence techniques are used, and logistic regression appeared suitable for
our case.

4.4 Proof of Concept


Due to the COVID-19 pandemic, we did not have access to the actual learning factory.
A proof of concept based on an emulation of the actual system was therefore developed
to validate our framework proposal (Fig. 5).

Fig. 5. Proof of concept’s architecture

A virtual PLC programmed with Schneider Unity was used. This PLC communicates
with the FlexSim observation model via an OPC UA server. The Flexsim simulator was
connected to a MySQL database. A program written in Python enables the analysis of
the recorded data.
This proof of concept is only made on one conveyor, containing an entry point A and
an RFID read/write module L1 located at a distance D from point A. V is the speed of
the conveyor. At time t0 , the virtual PLC sends the information leading to the creation
of a pallet in the simulator. On the date t1 , the PLC sends the information indicating the
arrival of the actual pallet at point L1. t1 was programmed such that:
t1 = t0 + D/V + ε
 
with ε being a random variable such as ε = Uniform − VD , VD allowing us to introduce
a perturbation.
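This scheduling rule can be sketched as follows (the numeric values are illustrative; D/V = 3.7 s merely mirrors the 3700 ms bound of Table 1):

```python
import random

def generate_arrival(t0, distance, speed, rng):
    """Schedule the actual arrival t1 = t0 + D/V + eps,
    with eps ~ Uniform(-D/V, D/V) introducing a perturbation."""
    travel = distance / speed          # nominal travel time D/V
    eps = rng.uniform(-travel, travel) # random early/late perturbation
    return t0 + travel + eps, eps

rng = random.Random(42)  # seeded for reproducibility
t1, eps = generate_arrival(t0=0.0, distance=3.7, speed=1.0, rng=rng)
```

With this distribution, t1 always falls between t0 (maximally early) and t0 + 2·D/V (maximally late).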
Each time a pallet is created, a new value of ε is drawn. As shown in Table 1, the
virtual PLC generates a population of values of ε that can be split into five classes: a
negative ε corresponds to an early arrival of the pallet at L1, a positive one to a late
arrival.

Table 1. Distribution of delays ε classes

Class-2   −3700 ms <= ε < −2000 ms   Very early arrival
Class-1   −2000 ms <= ε < −500 ms    Early arrival
Class0    −500 ms <= ε < 500 ms      On-time arrival
Class1    500 ms <= ε < 2000 ms      Late arrival
Class2    2000 ms <= ε < 3700 ms     Very late arrival
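A toy binning function reproducing the classes of Table 1 (the function and label strings are ours):

```python
def classify_delay(eps_ms):
    """Map a delay eps (milliseconds) onto the five classes of Table 1."""
    bins = [(-3700, -2000, "Class-2"),  # very early arrival
            (-2000, -500, "Class-1"),   # early arrival
            (-500, 500, "Class0"),      # on-time arrival
            (500, 2000, "Class1"),      # late arrival
            (2000, 3700, "Class2")]     # very late arrival
    for lo, hi, label in bins:
        if lo <= eps_ms < hi:
            return label
    return None  # outside the range generated by the virtual PLC
```

The half-open intervals make the classes disjoint, so every generated ε falls into exactly one class.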

The detector analyses the differences between the arrival dates of the virtual pallets
in the observation model and the actual arrival dates provided by the virtual PLC. This
difference Δ is logged in the SQL database together with the timestamp, the identification
of the tag reader unit and the identification of the pallet. We created a toy application in
Python to demonstrate that these data can be analysed; it is meant to reconstruct the
classes of Table 1. Table 2 shows, for each class of ε, the number of data logs in the
SQL database together with the number of logs the Python application assigned to that
class. Overall, the numbers of logs are consistent, which demonstrates the accuracy of
the observations made.

Table 2. Proof of Concept results

Classes   Number of delays ε generated   Number of delays Δ observed
          by the virtual PLC             by the Python application
Class-2   15                             16
Class-1   11                             10
Class0    6                              6
Class1    7                              7
Class2    11                             11

The next step will be to implement the framework on the real learning factory. In
parallel, machine learning methods will be implemented to interpret the results and
propose probable causes of dysfunctions. Several packages are needed for logistic
regression in Python: the most popular data science and machine learning libraries, such
as scikit-learn, NumPy, pandas and Matplotlib, allow writing elegant and compact code
as well as implementing models and solving problems. We already used these packages
in the toy application to train our LR algorithm and improve its accuracy on simulated
observations for predicting faulty patterns, pending experiments on the real learning
factory to adjust its parameters. For now, the model and application only detect the gaps
between the real system and its nominal model; a next version shall introduce an extra
analysis aimed at detecting pallet defectiveness or RFID read/write units' latency and
proposing solutions based on the criticality of the problem.
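A hedged sketch of such a training pipeline with the named libraries; the features, labels and simulated data below are assumptions, since the paper does not specify its feature set:

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Simulated observation logs: gaps (delta, ms) and the reader that saw them.
# Labels are illustrative: large gaps are assumed to indicate a faulty pattern.
rng = np.random.default_rng(1)
n = 300
df = pd.DataFrame({
    "delta_ms": np.concatenate([rng.normal(0, 300, n),      # healthy runs
                                rng.normal(2500, 400, n)]), # faulty runs
    "reader_id": rng.integers(1, 18, 2 * n),                # 17 RFID units
})
df["faulty"] = np.array([0] * n + [1] * n)

model = LogisticRegression(max_iter=1000)
model.fit(df[["delta_ms", "reader_id"]], df["faulty"])
accuracy = model.score(df[["delta_ms", "reader_id"]], df["faulty"])
```

On the real factory, the same pipeline would be refit on the logged Δ values rather than on this simulated sample.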

5 Conclusions
In this work, an observation framework based on artificial intelligence techniques is
proposed to deal with CPPS maintenance. The framework aims to help managers and
supervisors detect and predict dysfunctions of the CPPS; the objective of the integrated
tool is to improve the system's reliability and to adjust maintenance decisions. To assess
its performance, the implementation of the framework is detailed and tested on a real
case study. Due to the COVID-19 pandemic, the proof of concept is only based on a
virtual implementation. Tests on the full learning factory are the first objective of future
work. Different AI techniques will then be implemented in the framework and compared
in order to select the one best able to optimize maintenance strategies.
Another future direction is to generalize the framework towards a first design and
implementation integrated inside the Digital Twin of the CPPS. The idea would be to
add other detectors with different objectives: minimizing energy consumption,
improving productivity, etc. The analysis of historical data in the IMaDeSC will then be
based on multiple objectives. Thus, the Digital Twin will be able to help supervisors
make compromise-based decisions by providing advanced data for decision support.
Additional research will be carried out on optimizing the deployment of intelligent
sensors (location, frequency of data transmission, cycle, etc.) to obtain more pertinent
data via IIoT technology.

References
1. Jardine, A.K.S., Tsang, A.H.C.: Maintenance, Replacement, and Reliability: Theory and
Applications, 2nd edn. CRC Press, Taylor & Francis, Boca Raton (2013)
2. Shafiee, M., Chukova, S.: Maintenance models in warranty: a literature review. Eur. J. Oper.
Res. 229(3), 561–572 (2013)
3. Cardin, O.: Classification of cyber-physical production systems applications: proposition of
an analysis framework. Comput. Ind. 104, 11–21 (2019)
4. Khazraei, K., Deuse, J.: A strategic standpoint on maintenance taxonomy. J. Facil. Manag. 9,
96–113 (2011)
5. Verhagen, W.J.C., De Boer, L.W.M.: Predictive maintenance for aircraft components using
proportional hazard models. J. Ind. Inf. Integr. 12, 23–30 (2018)
6. Efthymiou, K., Papakostas, N., Mourtzis, D., Chryssolouris, G.: On a predictive maintenance
platform for production systems. Procedia CIRP 3, 221–226 (2012)
7. Nguyen, A.-T., Reiter, S., Rigo, P.: A review on simulation-based optimization methods
applied to building performance analysis. Appl. Energy 113, 1043–1058 (2014)

8. Zhu, L., Yu, F.R., Wang, Y., Ning, B., Tang, T.: Big data analytics in intelligent transportation
systems: a survey. IEEE Trans. Intell. Transp. Syst. 20(1), 383–398 (2018)
9. Rault, R., Trentesaux, D.: Artificial intelligence, autonomous systems and robotics: legal
innovations. In: Service Orientation in Holonic and Multi-Agent Manufacturing. Studies in
Computational Intelligence, pp. 1–9. Springer, Cham (2018)
10. Shcherbakov, M.V., Glotov, A.V., Cheremisinov, S.V.: Proactive and predictive maintenance
of cyber-physical systems. In: Kravets, A., Bolshakov, A., Shcherbakov, M. (eds.) Cyber-
Physical Systems: Advances in Design & Modelling. Studies in Systems, Decision and
Control, vol. 259. Springer, Cham (2019)
11. Chaudhuri, A.: Predictive maintenance for industrial IoT of vehicle fleets using hierarchical
modified fuzzy support vector machine. arXiv:1806.09612 [cs] (2018)
12. Lee, J., Azamfar, M., Singh, J., Siahpour, S.: Integration of digital twin and deep learning in
cyber-physical systems: towards smart manufacturing. IET Collab. Intell. Manuf. 2(1), 34–36
(2020)
13. Djurdjanovic, D., Lee, J., Ni, J.: Watchdog Agent – an infotronics-based prognostics approach
for product performance degradation assessment and prediction. Adv. Eng. Inf. 17, 109–125
(2003)
14. Raza, J., Liyanage, J.P., Al Atat, H., Lee, J.: A comparative study of maintenance data classi-
fication based on neural networks, logistic regression and support vector machines. J. Qual.
Maint. Eng. 16, 303–318 (2010)
15. Garcia, M.C., Sanz-Bobi, M.A., del Pico, J.: SIMAP: intelligent system for predictive main-
tenance: application to the health condition monitoring of a wind turbine gearbox. Comput.
Ind. 57, 552–568 (2006)
16. Niu, G., Li, H.: IETM centred intelligent maintenance system integrating fuzzy semantic
inference and data fusion. Microelectron. Reliabil. 75, 197–204 (2017)
17. Caesarendra, W., Widodo, A., Yang, B.-S.: Application of relevance vector machine and
logistic regression for machine degradation assessment. Mech. Syst. Signal Process. 24, 1161–
1171 (2010)
18. Krenek, J., Kuca, K., Blazek, P., Krejcar, O., Jun, D.: Application of artificial neural networks
in condition based predictive maintenance. In: Król, D., Madeyski, L., Nguyen, N.T. (eds.)
Recent Developments in Intelligent Information Database Systems, pp. 75–86. Springer,
Cham (2016)
19. Jegadeeshwaran, R., Sugumaran, V.: Fault diagnosis of automobile hydraulic brake system
using statistical features and support vector machines. Mech. Syst. Signal Process. 52–53,
436–446 (2015)
20. Mbuli, J., Nouiri, M., Trentesaux, D., Baert, D.: Root causes analysis and fault prediction in
intelligent transportation systems: coupling unsupervised and supervised learning techniques.
In: IEEE International Conference on Control, Automation and Diagnosis (ICCAD), pp. 1–6
(2019)
An Aggregated Digital Twin Solution
for Human-Robot Collaboration in Industry 4.0
Environments

A. J. Joseph, K. Kruger(B) , and A. H. Basson

Department of Mechanical and Mechatronic Engineering, Stellenbosch


University, Stellenbosch 7600, South Africa
kkruger@sun.ac.za

Abstract. The digital twin is a powerful concept and is seen as a key enabler for
realizing the full potential of Cyber-Physical Production Systems within Industry
4.0. Industry 4.0 will strive to address various production challenges among which
is mass customization, where flexibility in manufacturing processes will be critical.
Human-robot collaboration – especially through the use of collaborative robots –
will be key in achieving the required flexibility, while maintaining high production
throughput and quality. This paper proposes an aggregated digital twin solution
for a collaborative work cell which employs a collaborative robot and human
workers. The architecture provides mechanisms to encapsulate and aggregate data
and functionality in a manner that reflects reality, thereby enabling the intelligent,
adaptive control of a collaborative robot.

Keywords: Digital twin · Collaborative robot · Human-robot collaboration ·


Industry 4.0 · Cyber-Physical Production System

1 Introduction
The fourth industrial revolution, characterized by the implementation of Cyber-Physical
Production Systems (CPPS) involving vast networks of cognitive, interconnected and
communicating devices, is bringing major changes in the manufacturing industry. The
increase in demand for customized products, along with the new capabilities brought
about by Industry 4.0 technologies, is causing a manufacturing paradigm shift from mass
production to mass customization.
The importance of Human-Robot Collaboration (HRC) is increasing, as it provides,
among other things, a means to attain the required manufacturing flexibility. HRC
offers the advantage of combining a robot’s speed, power, accuracy, repeatability and
insusceptibility to fatigue, with the agility, intelligence and perception of humans.
It is often perceived that the result of the fourth industrial revolution will be full
robot autonomy in ‘dark factories’. However, due to the lack of technology and high
complexity involved in automating intricate tasks, full robot autonomy, especially in
changing and unstructured environments, will remain out of reach for the foreseeable
future. Developing the building blocks of collaborative robot systems, which will enable

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021


T. Borangiu et al. (Eds.): SOHOMA 2020, SCI 952, pp. 135–147, 2021.
https://doi.org/10.1007/978-3-030-69373-2_9

robots and humans to work in collaboration - building on each other’s strengths - is thus
of great interest [1]. This calls for the development of technologies enabling learning,
cooperating and coordinating machines [2, 3].
Conventional industrial robots lack the ability to collaborate with humans. On the
other hand, new-age robots – better known as collaborative robots or CoBots – are
designed to be intrinsically safe for operation alongside and in collaboration with human
workers within collaborative workstations. CoBots address three main challenges: safety,
rapid programmability, and flexible deployment and re-deployment.
CoBots achieve their collaborative capabilities by incorporating several safety fea-
tures, such as force and power limits, momentum limits, position limits and orientation
limits. On impact with a human or an object, most collaborative robots are designed to
stop moving immediately, or to move away from the point of impact. Although collab-
orative robots are designed with these inherent safety features, this does not mean the
collaborative application will be safe. For instance, a CoBot manipulating a sharp object
is unsafe whether the speed and impact force can be limited or not [4].
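As a rough illustration of this stop-on-impact behaviour (the threshold, units and return values are invented for the example, not taken from any robot's actual safety settings):

```python
def safety_reaction(measured_force_n, force_limit_n=150.0):
    """On detecting a contact force above the limit, a CoBot typically
    stops immediately or retreats from the contact point. The 150 N
    limit here is purely illustrative."""
    if measured_force_n > force_limit_n:
        return "protective_stop"  # halt motion, await operator reset
    return "continue"

reaction = safety_reaction(measured_force_n=220.0)
```

In practice such checks run in the robot's certified safety controller at high frequency, alongside the momentum, position and orientation limits listed above.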
The development of CoBots has brought solutions to some of the problems that
plagued HRC implementation in the past. However, there are still numerous challenges
that need to be overcome before collaborative robots can be efficiently and effectively
implemented in HRC applications, to bring about real competitive advantage for man-
ufacturing businesses. These challenges include: addressing the relatively slow speed
of operation of collaborative robots; ensuring that collaborative workstations conform
to safety standards; maintaining robot effectiveness in chaotic environments filled with
uncertainty, and enabling real-time robot motion planning and control.
The objective of this research is to address some of these challenges through a
digital twin (DT) solution, which aims at enabling intelligent control of the robot to
achieve improvements in throughput, safety and efficiency. Presented in this paper is an
architecture for the implementation of this DT solution.
The first section of the paper presents background into: HRC, challenges faced when
trying to achieve high levels of collaboration, and CoBots and their shortfalls. Presented
thereafter is a discussion on DTs and the generic DT architecture on which the proposed
DT solution is based. Finally, the aggregated DT solution for HRC is presented, along
with discussions on the functionality of each DT involved.

2 Human-Robot Collaboration
2.1 Classification of HRC
The HRC research field is focused on finding solutions to problems involved in enabling
a human and robot to work together to achieve a common goal. Research into HRC is
motivated by the desire to achieve high levels of manufacturing flexibility [5]. As shown
in Table 1 and Fig. 1, HRC applications can be divided into categories according to the
level of collaboration between the robot and human. In Fig. 1 the orange zone represents
the human operators’ workspace; the green zone represents the robot’s workspace; and
the overlapping zone represents the shared workspace.
Although there have been significant advances in manufacturing HRC, most
collaborative applications today are constrained to level two and level three collaboration.

Table 1. Levels of HRC (adapted from [4, 6])

Level of collaboration        Same vicinity   Shared workspace   Shared time   Shared workpiece
1. Cell (fenced robot)        -               -                  -             -
2. Coexistence                x               -                  -             -
3. Sequential collaboration   x               x                  -             -
4. Co-operation               x               x                  x             -
5. Responsive collaboration   x               x                  x             x
Fig. 1. Types of HRC (adapted from [4, 6]). The figure orders the five types by level of
collaboration, with complexity and the need for inherent safety features increasing
accordingly.

Actually, many robotic applications are still at level one, where the robot and human
are separated by a fence while the robot is operating. To achieve level-four and level-five
collaboration, more advanced technologies are required that enable intelligent control
of the robot. Many of the challenges involved in HRC become evident when trying to
achieve these levels of collaboration.

2.2 Challenges Associated with Human-Robot Collaboration


The challenges faced in high levels of HRC require interdisciplinary solutions, often
involving a combination of classical robotics, artificial intelligence, cognitive sciences
and psychology [3]. These are some of the challenges identified with regard to successful
implementation of human-robot collaborative workstations:

• Ensuring that the collaborative workstation is safe: Safety considerations are
crucial in collaborative workstations, where humans and robots work in close
proximity. Safety should be the first consideration when designing tasks for HRC.
More research is needed to enable safe and robust robot action, especially in
environments where unforeseen events are possible [7–9].
• Robotic interaction in the presence of uncertainty: For a robot to successfully
participate in an interaction with a human, it is necessary to obtain a thorough under-
standing of the environment and the interaction dynamics. This includes knowledge of
the partnering agent, its internal state and the constraints that are imposed on the object
with which interaction needs to occur. Accurate models are necessary for capturing
the interaction dynamics [1, 8].
• Motion planning and control needs to occur in near real time to ensure natu-
ral workflow: In an interactive setting, it is necessary to be able to instantaneously
generate control commands so that the robots’ actions meet the user requirements.
Classical sense – plan – act architectures are not sufficient [1].
• Effective use of multisensory data in real time is necessary: Many existing methods
for integrating sensor data are not adequate for accurately capturing the dynamics of
physical contact with rigid and deformable objects [1].
• Communication mediums, such as speech, gaze and gestures need to be unam-
biguous and interpretable by the robot controller algorithms: Both parties must
be able to convey instructions between each other and should be capable of referring
to objects in a shared workspace without confusion [1].
• Reproducing the effectiveness and flexibility of human hands remains an open
challenge: Robotic grippers still lack dexterity when compared to human hands [1].

CoBots are becoming prevalent in industry today and they bring solutions to a few
of the problems listed above. Collaborative robots are founded on solving the most
important concern in HRC: safety. However, these safety improvements come at a cost:
limited payload size and limited speed of operation, and therefore limited throughput.
This cost is currently quite prohibitive and limits the use of CoBots to simple automation
tasks.
CoBots rely heavily on safety stops initiated on impact. Once a robot encounters
a safety stop, the operator usually needs to intervene to get the robot functional again.
Safety stops disrupt the natural workflow and throughput of a collaborative workstation,
where obstructions can occur within the programmed robot path at any time.
Without augmenting their capabilities, CoBots currently cannot be used to achieve
high levels of collaboration, since they simply do not address enough of the challenges
of HRC implementation. They do, however, provide a good starting point by
incorporating the safety features necessary to ensure that injury or damage is minimized
if impact does occur. By developing a means to intelligently control a collaborative
robot in real time, it is possible to exploit the benefits that these robots bring while
increasing their efficiency, effectiveness and range of applicability.

2.3 Collaborative Robot Applications


Collaborative robots provide small and medium-sized enterprises (SMEs) with a viable
entry point to robotic automation, allowing them to benefit from the quality and
productivity improvements that come with it. However, CoBots are not limited to SMEs;
they can also offer productivity and ergonomic improvements for larger enterprises that
already have automated production lines. Applications of CoBots include assisting in
carrying heavy tools, fetching parts, feeding machines and performing quality
inspections [4].
Human-robot collaboration has been identified as best suited to manufacturing
operations involving high product variance and low production volume [10]. As shown
in Fig. 2, CoBots fill the gap between manual assembly and robotic automation [10].

Fig. 2. Economically justified operational regions for manual assembly, HRC, robotic automation
and fixed automation [11]

3 Digital Twin Architecture

One of the concepts growing in popularity as an Industry 4.0 driver is the DT. A DT
is defined in [12] as a “multiphysics, multiscale, probabilistic simulation” of a system,
that utilizes the most accurate physical models and sensory data to mirror the life of the
physical entity which it attempts to ‘twin’; i.e. DTs gather real-time sensor data from
multiple entities and effectively aggregate the data to produce a digital replica of the
physical entities.
The primary goal of DTs in the manufacturing industry is to optimize the entire
production system by enabling the real-time integration of simulation data and sensory
data [6]. This integration opens the door to real-time monitoring and re-planning of
production activities to ensure that actions performed are always the most efficient ones
from a business and operation perspective. Using DTs, complex problems can be solved

through sense - predict/perceive - plan - act architectures, over sense - plan - act archi-
tectures. Some of the roles of DTs presented in literature include: remote monitoring,
predictive analytics, simulating future behaviour, optimization and validation [13].
A reference architecture for a single DT instance, called the Six-Layer Architec-
ture for Digital Twins (SLADT), has been proposed in [13]. This architecture has
been expanded in [14] to accommodate the aggregation of multiple DT instances. This
expanded architecture is called the Six-Layer Architecture for Digital Twins with Aggre-
gation (SLADTA). SLADT and SLADTA are illustrated in Fig. 3. The various layers of
SLADT are characterized as follows [13]:

• Layer 1 and Layer 2: The physical twin is encompassed in these two layers. It
consists of the entire physical twin, along with the various sensors and data sources
which measure and provide the actual state of the physical twin to the higher levels.
• Layer 3: This layer consists of the local data repositories such as databases, stored near
the physical twin. It is recommended that vendor neutral data servers communicating
through secure, reliable and widely used communication protocols, such as OPC UA,
be used. The data servers should also be able to communicate with an IoT gateway,
or directly with the cloud if applicable.
• Layer 4: This layer is an IoT gateway, which serves to convert data into information
before uploading it to cloud services. It also serves to manage communication between
the cloud and the local data repositories, and between DTs.
• Layer 5: This layer represents cloud-based information repositories which serve to
enhance availability, connectedness and accessibility of the DT.
• Layer 6: This is an emulation and simulation environment which adds intelligence
to the DT. The actual functionality of this layer depends on the use-case. This layer
is connected to the local data repositories and cloud-based information repositories.

Fig. 3. (a) Six-Layer Architecture for Digital Twins, (b) Six-Layer Architecture for Digital Twins
with Aggregation [14]

SLADTA, shown in Fig. 3b, aims to enable the creation of DTs of multi-system
environments through the aggregation of information from various DTs. This is
particularly beneficial when the system to be twinned is composed of many different
components, possibly from different manufacturers. Methods and protocols for
communication between the various DTs are presented in [14]. The key SLADT layers
involved in aggregation are layers three and four: the DTs are connected with one
another in a hierarchical manner through their layer three or layer four, and OPC UA is
suggested for implementing these connections. The flow of information between the
DTs is controlled in layer four (a custom-built IoT gateway).
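A minimal sketch of such a layer-4 gateway, assuming the child DTs report numeric samples upward (the report structure, threshold and names are illustrative, not from SLADTA itself):

```python
import statistics

def aggregate_upward(child_reports):
    """Layer-4 gateway sketch: condense raw child-DT data so that only
    value-adding information (summary statistics and alert counts)
    reaches the higher-level DT."""
    summary = {}
    for child, samples in child_reports.items():
        summary[child] = {
            "mean": statistics.mean(samples),
            # illustrative anomaly threshold on the reported samples
            "alerts": sum(1 for s in samples if s > 100.0),
        }
    return summary

reports = {"cobot_dt": [12.0, 250.0, 15.0], "human_dt": [5.0, 7.0]}
summary = aggregate_upward(reports)
```

Passing only these condensed summaries upward is what lets the data volume shrink at each level of the hierarchy.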
The benefit of utilizing an aggregated DT model is that data is reduced as it travels
up towards the higher-level DTs. This ensures that only valuable information arrives at
the highest DT levels where value-creating decisions are made based on the goal of the
whole system. Some other key benefits of DT aggregation are [13]:

• Segmentation of information – Components from different manufacturers can be


used without necessitating the breach of data confidentiality.
• Reduced complexity – a potentially large DT can be broken down into several smaller
DTs with encapsulated functionality and information. Each DT can then be flexible
with respect to its internal architecture and can make decisions by itself without
the need for much data interchange. This reduces the overall complexity of data
management.

4 A Digital Twin Architecture for Human-Robot Collaboration


4.1 Architecture Overview
To achieve efficient co-operative and responsive collaboration (see Fig. 1), it is necessary
that each entity involved in collaboration is aware of the intentions and state of each of
the other entities [3]. In this way, all entities will be working towards a shared plan to
achieve the common goal [3]. Therefore, a robot involved in collaboration needs to have
the ability to perceive and comprehend its environment, so that it can plan, learn and
reflect [3].
By aggregating the DTs of all entities in a collaborative workstation, it is possible
to centralize and analyze information about all entities with respect to the common
intention. This aggregation of DTs can then be used as a basis for intelligently controlling
the robot and informing the human operator of upcoming robot operations.
It is believed that through intelligent control it is possible to increase throughput
in collaborative robot applications by: reducing stoppages due to impact; increasing
collaborative robot motion speed whenever safe to do so; and changing the production
plan without direct physical human intervention. This idea is supported by [15], where
an active collision-avoidance system driven by vision sensors was developed with the
aim of providing better flexibility and productivity in a collaborative robot application.
Figure 4 illustrates the DT architecture for HRC that is proposed in this paper. The
DT will be used to capture the state of critical entities in a collaborative work cell,
such that future states can be predicted, and robot control commands can be adapted to
optimize parameters such as safety, throughput and energy consumption.

The architecture is based on SLADTA. This choice is motivated by the benefits of


DT aggregation for realizing the DT of a multi-system environment (see Sect. 3).
The architecture proposes the formation of a collaborative work cell DT through the
aggregation of the DTs of all entities in the work cell. The basic DTs required are: the
collaborative robot DT, the human operator DT and the workspace DT. Any other active,
intelligent component that forms part of the collaboration should have a DT of its own
and form part of the aggregation. To reduce latencies, data transfer between the DTs
should always be minimized; each individual DT should therefore act as independently
and intelligently as possible, such that only real value-adding information is passed to
higher-level DTs. The highest level is the collaborative work cell DT, where business-
and operation-critical decisions will be made.

Fig. 4. DT architecture for HRC. The collaborative work cell DT aggregates the CoBot
DT, the human DT and the workspace DT. Each DT spans the six layers: (1) sensors,
(2) controllers and data acquisition devices, (3) local data repositories, (4) IoT gateway,
(5) cloud repositories and (6) emulation and simulation; the CoBot controller and the
shop floor form the physical layers.

Collaborative robots aim to introduce flexibility to production systems by achieving
fast and simple changeover between robot tasks and operations. To achieve this fast
changeover, CoBots typically allow for programming through demonstration. On this
premise, the DT will only be expected to exert control over the robot if an undesirable event is detected, e.g. the robot moving into the collision path of another entity.
The robot DT will be responsible for executing adaptive control and motion planning
of the CoBot, on instruction from the collaborative work cell DT. This is one of the most
critical functions of the entire DT solution and it always involves high frequency data
An Aggregated Digital Twin Solution for Human-Robot Collaboration 143

streaming from the CoBot. Motion planning could also be done by the collaborative work cell DT, which might be preferred since it contains information about the entire work cell.
However, to ensure proper encapsulation of data specific to the CoBot, it is suggested
that robot motion planning remains a function of the robot DT; while information such
as safe-zones for motion are obtained from the collaborative work cell DT.

4.2 Human Digital Twin


In [16], the concept of a human DT has been explored and its importance in Industry 4.0
environments has been established. Key findings were that a human DT needs to utilize
modern human-machine interfaces and behavioural modelling to enable the storage and
communication of relevant information about the human to the entities that are associated
with the human within the work cell. The Holonic Manufacturing System approach was
identified as a suitable approach for developing a human DT.
The primary goal of the human DT is to provide, at all times, as accurate a state of the human operator in the collaborative work cell as possible. The human operator state includes information about the operator's pose, position, current activity and future activities. This allows the robot to have insight into the operator's activities and goals.
Accurately capturing the dynamics of a human being in a work cell is a complex task; however, advances in vision technologies and reductions in sensor size make this increasingly possible. Researchers have shown that it is possible to accurately track:

• The human body and actions in 3D through commercial motion capture devices such
as the Microsoft Kinect depth camera [15, 17];
• 3D location of a human in a workspace through RFID [18];
• Human eye gaze and target using commercial eye trackers such as Tobii X2-30 [19];

Open source algorithms, such as the following, are also readily available today:
BlazePalm [20] for real-time hand/palm tracking and gesture recognition, and PoseNet
[21] for estimating the pose of a human in an image or video.
Layer one of the proposed human DT (see Fig. 4) is composed of the sensors necessary to track the human operator's pose, position and heading in the work cell. The primary sensor is a camera array that can be shared between the human DT and the workspace DT. The raw data is gathered by the data acquisition device in layer two and is then analysed using algorithms in layer six to determine various types of information about
the human, such as the predicted path of motion with confidence levels.
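As a minimal sketch of such layer-six processing (the constant-velocity model, the confidence decay factor and all names are illustrative assumptions; the paper does not prescribe a particular prediction algorithm), a predicted path of motion with confidence levels could be computed as:

```python
import numpy as np

def predict_path(positions, dt, horizon_steps, decay=0.9):
    """Predict future 2D operator positions from recent tracked samples using a
    constant-velocity assumption; confidence decays with prediction depth."""
    p = np.asarray(positions, dtype=float)
    v = (p[-1] - p[-2]) / dt                          # latest velocity estimate
    preds = np.array([p[-1] + v * dt * (k + 1) for k in range(horizon_steps)])
    conf = [decay ** (k + 1) for k in range(horizon_steps)]
    return preds, conf

# Operator tracked moving along x at 1 m/s, sampled every 0.1 s:
track = [(0.0, 0.0), (0.1, 0.0), (0.2, 0.0)]
preds, conf = predict_path(track, dt=0.1, horizon_steps=3)
# preds -> (0.3, 0), (0.4, 0), (0.5, 0); conf -> 0.9, 0.81, 0.729
```

In a real deployment the positions would come from pose-estimation output (e.g. PoseNet keypoints) and a richer motion model would replace the constant-velocity assumption.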

4.3 Workspace Digital Twin


The primary goal of the workspace DT is to monitor and track any changes in the collaborative workstation that are otherwise untracked by any other DT. The core information
held by this DT can be a 3D representation of the static and dynamic work environment.
This 3D representation will be used by the collaborative work cell DT to determine
obstruction-free areas in the workspace that the robot may be able to move through.
This information is crucial to enable motion planning and in-situ adaptive control of the
robot in changing environments. The 3D representation of the environment can be in
the form of a depth map. Fine details of the environment may not be of importance. To reduce computation time, workspace change detection should be employed instead of constantly re-computing the entire workspace map.
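A minimal sketch of such change detection, assuming the workspace map is held as a depth image (the array sizes and the tolerance value are arbitrary illustrative choices):

```python
import numpy as np

def changed_cells(prev_depth, new_depth, tol=0.02):
    """Boolean mask of cells whose depth changed by more than tol (metres);
    only these cells need to be re-mapped, not the whole workspace."""
    return np.abs(new_depth - prev_depth) > tol

prev = np.zeros((4, 4))          # previous depth map of the workspace
new = prev.copy()
new[1, 2] = 0.30                 # an object appeared at one cell
mask = changed_cells(prev, new)
# mask.sum() == 1: a single cell is re-computed instead of the entire map
```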
It is evident from the literature that camera-based monitoring is a popular method for monitoring shared workspaces. A 3D representation of a workspace can be achieved through the use of a set of stereo vision cameras [22]; a depth camera, such as a Microsoft Kinect [23]; or the combination of sensor data from multiple 3D sensors of different modalities [24].
One interesting idea for developing the 3D map described previously is to start
with a 3D CAD model of the collaborative workstation - which represents most of
the components in the workspace - and then complete the model using data from a
stereo vision camera array, by dynamically mapping any additions and changes in the
environment.
Layer one of the proposed workspace DT (see Fig. 4) is composed of the sensors
necessary to detect the state of the static and dynamic work environment. The primary
sensor is a camera array that is shared between the human DT and the workspace DT. To
improve confidence in the camera data, extra sensors can also be employed; for instance,
to track continuously moving objects in the workspace, such as objects on a conveyor
belt. The raw data is gathered by the data acquisition device in layer two. Algorithms
in layer six use the raw data from the various sensors to create the required workspace
map.

4.4 Collaborative Robot Digital Twin


One goal of the CoBot DT is to acquire the actual state of the robot (including the asso-
ciated end effector) and make this information available to other DTs. It will also serve
the crucial role of adapting the control of the robot on notification from the collaborative
work cell DT. This is expected to occur mainly when: some unsafe condition has been
detected; a more optimal control plan has been established; or when a new task has been
assigned to the robot. The manner in which the control is adapted will depend on the
information sent to it from the collaborative work cell DT.
Layer one of the proposed CoBot DT (see Fig. 4) consists of all sensors required
to obtain information regarding the complete actual state of the robot. This information
includes robot pose, power consumption, torque at each joint and end effector state. Most
collaborative robots come with built-in sensors that provide all the necessary information
about the robot state. Collaborative robots such as the Universal Robot UR5e also allow
the end effector to be controlled and monitored by the CoBot controller. All sensor
information can therefore be obtained by interfacing with the robot controller.
Layer two of the CoBot DT consists of the CoBot controller. Any other devices
required to enable communication between the CoBot controller and the local data
repositories (3rd layer) are also part of this layer. Historic data about the robot state can
be stored in cloud repositories (5th layer) or in a local data repository if often accessed.
The CoBot DT is the best source for data regarding the CoBot and its actual state.
Layer six provides the CoBot DT the ability to use this data - along with information
provided to it by the other DTs - to run simulations to investigate and optimize the tasks
it needs to perform. These tasks could include motion planning, power optimization and
payload gripping location determination.

4.5 Collaborative Work Cell Digital Twin

This is the highest-level DT, formed by the aggregation of the collaborative robot DT,
the human DT and the workspace DT. It contains (or has access to) all the information
necessary to make business and operation critical decisions. This makes any visualization
of the system information best done within this DT.
The primary goal of this DT is to monitor and identify any unsafe or suboptimal
conditions in the work cell from a business and operation level, and then to inform DTs
controlling the affected process of any changes required to ensure conditions are optimal.
These decisions are made through simulations in layer six which can provide the needed
capabilities using software such as Siemens Technomatix, Simio or AnyLogic.
In the task planning stage, simulations can be used for path, activity, and workspace
planning to optimize parameters such as power consumption, human and robot motion
distances. Once optimal parameters have been obtained, the robot can be programmed
to comply with these parameters. Once the robot program is live, the collaborative work
cell DT will be continuously updated with the state of the robot, human and workspace
through their respective DTs. Within layer six, this real-time information can be used
in various ways, for instance to calculate the safe zone for robot motion in the form
of a free-space map, i.e. a model indicating the space within the work cell which is
unoccupied by any other entity and is available for the robot to safely navigate through.
The free-space map can then be used to continuously check that the robot is not currently moving, and is not expected to move in the near future, within any unsafe zone. If such a violation is detected, the collaborative work cell DT informs the CoBot DT to generate a new robot motion path within the safe zone. If the time to collision is less than the time needed to generate a new motion path, the CoBot DT is instead informed to stop until some condition is met, or to move to a safe position until the original path is clear. The work cell DT can also be used to inform the operator of the robot's intended motion and possible collisions.
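The supervision loop described above can be sketched as follows (a simplified illustration assuming a grid-based free-space map; the function names, grid and timing values are assumptions, not the authors' implementation):

```python
import numpy as np

def path_is_safe(free_space, waypoints):
    """True if every grid waypoint of the planned robot path is unoccupied."""
    return all(free_space[r, c] for r, c in waypoints)

def supervise(free_space, waypoints, time_to_collision, replan_time):
    """Work cell DT decision rule: continue, ask the CoBot DT to re-plan,
    or stop when there is no time left to generate a new path."""
    if path_is_safe(free_space, waypoints):
        return "continue"
    if time_to_collision > replan_time:
        return "replan"
    return "stop"

free = np.ones((5, 5), dtype=bool)   # True = cell free for the robot
free[2, 2] = False                   # operator predicted to occupy this cell
path = [(0, 0), (1, 1), (2, 2)]
decision = supervise(free, path, time_to_collision=2.0, replan_time=0.5)
# decision == "replan": there is enough time to compute a new safe path
```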

5 Conclusion and Future Work

CoBots offer solutions to some of the challenges associated with HRC. However, the
improvements they bring come at a cost; one cost is throughput. A DT solution has the
potential to address some of the shortfalls of CoBots by enabling intelligent control of
the CoBot and the collaborative work cell.
This paper first establishes the need for intelligent control of a CoBot, and then
presents a DT solution for enabling its intelligent adaptive control. The primary goal
of the DT is to improve CoBots’ safety, efficiency and effectiveness. The proposed
architecture aggregates: a collaborative robot DT, a human operator DT, and a workspace
DT. The value to be produced by each DT, as well as possible technologies that can be
used to create each DT, are also briefly discussed. Future work involves a detailed
requirements analysis for each DT, followed by the implementation of the proposed DT
solution in an industrial case study, which will be used to evaluate the performance of the DT solution in terms of improved safety, throughput and efficiency of the collaborative robot.

References
1. Kragic, D., Gustafson, J., Karaoguz, H., Jensfelt, P., Krug, R.: Interactive, collaborative robots:
challenges and opportunities. In: International Joint Conference on Artificial Intelligence,
pp. 18–25 (2018)
2. Oztemel, E., Gursev, S.: Literature review of Industry 4.0 and related technologies. J. Intell.
Manuf. 31, 127–182 (2018)
3. Bauer, A., Wollherr, D., Buss, M.: Human-robot collaboration: a survey. Int. J. Humanoid
Rob. 5(1), 47–66 (2008)
4. International Federation of Robotics: Demystifying collaborative industrial robots (2018)
5. Krüger, J., Lien, T.K., Verl, A.: Cooperation of human and machines in assembly lines. CIRP
Ann. Manuf. Technol. 58(2), 628–646 (2009)
6. Bauer, W., Bender, M., Braun, M., Rally, P., Scholtz, O.: Lightweight robots in manual
assembly - best to start simply!, Fraunhofer IAO (2016)
7. Malik, A.A., Bilberg, A.: Framework to implement collaborative robots in manual assembly:
a lean automation approach. In: Proceedings of 28th International DAAAM Symposium 2017
(2018)
8. Kulić, D., Croft, E.A.: Safe planning for human-robot interaction. J. Rob. Syst. 22(7), 383–396
(2005)
9. Masinga, P., Campbell, H., Trimble, J.A.: A framework for human collaborative robots, oper-
ations in South African automotive industry. In: IEEE International Conference on Industrial
Engineering and Engineering Management, 2015, pp. 1494–1497 (2015)
10. Djuric, A.M., Rickli, J.L., Urbanic, R.J.: A framework for collaborative robot integration in
advanced manufacturing systems. SAE Int. J. Mat. Manuf. 9(2), 457–464 (2016)
11. Bjorn, M.: Industrial Safety Requirements for Collaborative Robots and Applications (2014)
12. Glaessgen, E., Stargel, D.: The digital twin paradigm for future NASA and U.S. Air Force
Vehicles, Structures, Structural Dynamics and Materials Conference, 2012, pp. 1–14 (2012)
13. Redelinghuys, A.J.H., Basson, A.H., Kruger, K.: A six-layer architecture for the digital twin:
a manufacturing case study implementation. J. Intell. Manuf. 31(6), 1383–1402 (2019)
14. Redelinghuys, A.J.H., Basson, A.H., Kruger, K.: A six-layer architecture for digital twins
with aggregation. In: Studies in Computational Intelligence, vol. 853. Springer, Cham (2019)
15. Mohammed, A., Schmidt, B., Wang, L.: Active collision avoidance for human–robot
collaboration driven by vision sensors. Int. J. Comput. Integr. Manuf. 30(9), 970–980 (2016)
16. Sparrow, D., Kruger, K., Basson, A.: Human digital twin for integrating human workers
in industry 4.0. In: International Conference on Competitive Manufacturing (COMA 2019)
(2019)
17. Bortolini, M., Faccio, M., Gamberi, M., Pilati, F.: Motion analysis system (MAS) for pro-
duction and ergonomics assessment in the manufacturing processes. Comput. Ind. Eng. 139,
105485 (2020)
18. Ko, C.H.: RFID 3D location sensing algorithms. Autom. Constr. 19(5), 588–595 (2010)
19. Clemotte, A., Velasco, M., Torricelli, D., Raya, R., Ceres, R.: Accuracy and precision of
the tobii X2-30 eye-tracking under non ideal conditions. In: Proceedings of 2nd Interna-
tional Congress on Neurotechnology, Electronics and Informatics (NEUROTECHNIX-2014),
pp. 111–116 (2014)
20. Bazarevsky, V., Zhang, F.: On-Device, Real-Time Hand Tracking with MediaPipe, Google AI
Blog, 2019 (2019). https://ai.googleblog.com/2019/08/on-device-real-time-hand-tracking-
with.html
21. TensorFlow. Pose estim. https://www.tensorflow.org/lite/models/pose_estimation/overview
22. Bosch, J.J., Klett, F.: Safe and flexible human-robot cooperation in industrial applications. In:
2010 International Conference on Computer Information Systems and Industrial Management
Applications (2010)
23. Flacco, F., Kroeger, T., De Luca, A., Khatib, O.: A depth space approach for evaluating
distance to objects with application to human-robot collision avoidance. J. Intell. Rob. Syst.
Theory Appl. 80, 7–22 (2015)
24. Rybski, P., Anderson-Sprecher, P., Huber, D., Simmons, R.: Sensor fusion for human safety
in industrial workcells. In: IEEE International Conference on Intelligent Robots and Systems,
pp. 3612–3619 (2012)
Holonic and Multi-agent Process
Control
Ten years of SOHOMA Workshop Proceedings:
A Bibliometric Analysis and Leading Trends

Jose-Fernando Jimenez1(B), Eliana Gonzalez-Neira1, Gloria Juliana Arias-Paredes1,
Jorge Andrés Alvarado-Valencia1, Olivier Cardin2, and Damien Trentesaux3
1 Industrial Engineering Department, Pontificia Universidad Javeriana, Bogota, Colombia
{j-jimenez,eliana.gonzalez,jorge.alvarado}@javeriana.edu.co,
arias.gloria@livejaverianaedu.onmicrosoft.com
2 LUNAM Université, Université de Nantes, LS2N UMR CNRS 6004, Carquefou, France
olivier.cardin@univ-nantes.fr
3 LAMIH UMR CNRS 8201, Université Polytechnique Hauts-de-France, Valenciennes, France

damien.trentesaux@uphf.fr

Abstract. The SOHOMA Workshop series on Service Orientation in Holonic
and Multi-Agent Manufacturing is a set of events aiming at fostering innovative
research and practices in smart and sustainable manufacturing and logistics sys-
tems. The SOHOMA scientific community promotes the development of theories,
methods, solutions, proof of concepts and implementations for the digital trans-
formation of the industry of the future through intelligence distribution in service-
oriented, holonic and agent-based systems. During ten editions, the workshops
have gathered hundreds of researchers and practitioners that have presented their
world-wide scientific contributions, published in the Springer book series “Studies
in Computational Intelligence”. This paper presents an overview of the publica-
tion of the SOHOMA workshop proceedings by using a bibliometric analysis
and identifies leading research and innovation trends generated by the SOHOMA
community in the last ten years.

Keywords: SOHOMA workshops · Industry of the Future · Service orientation ·
Holonic manufacturing · Multi-agent systems · Cloud manufacturing · Smart
control · Bibliometrics · Text mining · VOS viewer

1 Introduction

Since the 2000s, the world has undertaken an industrial, economic and social transfor-
mation that has changed the way we interact with each other. In the industrial context,
this transformation has been recognized as a new technological revolution that lever-
ages the productivity, competitiveness and efficiency of firms with the use of techno-
logical advances [1, 2]. This new revolution, named ‘Industry of the Future’ (IoF) or
Industry 4.0 [3], envisions a future of smart products, processes and procedures that, strongly relying on the usage of the Internet of Things/Services and Cyber-Physical Systems, seek to improve operations in areas such as manufacturing, logistics, energy, health and

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021
T. Borangiu et al. (Eds.): SOHOMA 2020, SCI 952, pp. 151–168, 2021.
https://doi.org/10.1007/978-3-030-69373-2_10
152 J. F. Jimenez et al.

transport, among others [4]. Certainly, the incorporation of new technologies into the
industry requires new operative and functioning characteristics capable of putting the new advances at the service of industrial objectives. In this respect, it was necessary to develop new concepts, methods, solutions, proofs of concept and implementations that contribute to the employment of new technological advances towards the IoF.
A considerable amount of literature has been published on the Industry of the Future.
Interested readers can refer to the following works for detailed comprehension [5–7].
These studies have portrayed the characteristics involved in the IoF concept and have
categorized the technological enablers that potentiate the desired implementation. Some
examples of these enablers are Internet of Things (IoT), Cyber-Physical Systems (CPS),
visualization technologies, cloud computing (CC), cyber-security, modelling and sim-
ulation, machine learning (ML), distributed systems, data analytics, advanced robotics.
Still, among these enablers, one of the most important capabilities needed to heighten the benefits of the IoF is an orchestrating technology that coordinates and synchronizes the technological enablers towards the expected objectives. Among the many approaches, some paradigms ease this orchestration, such as the service-oriented, holonic and multi-agent paradigms. Service-orientation is a paradigm for organizing
and utilizing distributed capabilities that, controlled under different ownerships, modu-
larize services to solve or support a solution for processing business operations according
to specific requirements [8, 9]. The holonic paradigm deals with the design of organiza-
tional structures composed of autonomous and cooperative elements - the holons - with
recursive properties, that collectively integrate an entire system and interact to achieve
a common goal or objective [10, 11]. The multi-agent paradigm concerns the design of
autonomous decision makers - the agents that, communicating with each other under
prescribed rules, are suited communally to solve a problem in a distributed manner
[12, 13]. In conclusion, these paradigms have generated powerful platforms to con-
trol and pilot the IoF technologies, and play a crucial role in the development of the
technological revolution.
Considering the stated needs for the Industry of the Future, and under the initiative
of the FP7 EU project ERRIC (grant agreement ID 264207) aiming to foster in Roma-
nia and other EU countries the development of Intelligent Information Technologies,
it was decided to launch in 2011 the international workshop on Service Orientation in
Holonic and Multi-agent Manufacturing Control as a scientific event that congregates
high level researchers and practitioners to present and discuss their contributions on
subjects associated with service-oriented, agent-based technologies for holonic manu-
facturing control and management in manufacturing enterprises, and for agile production
considering the factory and the product lifecycle [14]. The first workshop, which opened
in 2011 at ENSAM Paris the series of annual SOHOMA events, was organized by the
University Politehnica of Bucharest (Romania), in collaboration with the Universities
of Valenciennes and Nancy (France) and the research group IMS2 of the GDR MACS
scientific coordination structure of the CNRS in France. Following the positive impact
of this initiative in the international research community working in the area of intel-
ligent manufacturing control, SOHOMA workshops were replicated annually in major
university centres in Europe, gathering the most representative scientists from academia
Ten years of SOHOMA Workshop Proceedings 153

and industry who contributed to the development of new Information, Communication and Control (IC2T) technologies and to their application in manufacturing in the IoF vision.
Since 2011, the SOHOMA workshops have been organized on an annual basis in
important technical university across Europe, specifically in Paris, Bucharest, Valen-
ciennes, Nancy, Cambridge, Lisbon, Nantes, Bergamo, Valencia and again in Paris in
2020. The proceedings of these workshops, which have changed in 2019 the name to
‘Service Oriented, Holonic and Multi-agent Manufacturing Systems for Industry of the
Future’ to better express the orientation of the SOHOMA scientific community research,
have been recurrently published in the Springer book series ‘Studies in Computational
Intelligence’ [14–22]. These academic books have included 274 scientific papers which
have reached 1359 citations on April 2020. Throughout these ten years, the SOHOMA
community was visible at international level as one of the most creative and proficient
workgroups promoting new digital IC2T: pervasive shop floor instrumenting, Industrial
IoT, edge and fog computing, manufacturing integration frameworks, holonic manufac-
turing control, distributing intelligence with delegate MAS schemes and service-oriented
agents in semi-heterarchical manufacturing execution systems, shop floor resource vir-
tualization and Cloud manufacturing, batch optimization, reality awareness and in-depth
interoperability in smart manufacturing control, intelligent products, physical internet,
logistics and supply chain optimization, digital twins embedded in manufacturing control
and maintenance, machine learning for predictive resource allocation anomaly detection
and predictive maintenance, energy efficiency, cyber-physical manufacturing systems,
cloud-based enterprise networking, ethics of the artificial and human integration in the
factory of the future.
Solutions were proposed to implement these advances in industry, which resulted in
important contributions to the digital transformation of manufacturing. SOHOMA papers
reported research and innovation in the field of smart and sustainable manufacturing and
logistics systems. Examples are the studies carried out by: Duncan McFarlane in the
domain of product intelligence; Jose Barata and Paulo Leitão in the domains of agent-
based control and shop floor reengineering, service-oriented agents, AI-based control
and cyber-physical systems in manufacturing, André Thomas in the area of intelligent
manufacturing control, Damien Trentesaux in the field of AI ethics and human integration
in cyber-physical systems; Octavian Morariu and Theodor Borangiu in the areas of
machine learning and Cloud manufacturing; Paul Valckenaers who launched ARTI -
the new generalized holonic control architecture - a highly abstracted, reality-aware
successor of PROSA [21, 29] during SOHOMA and INCOM 2018 in Bergamo.
In 2020, SOHOMA celebrates its 10th workshop edition. In view of this, the authors
believe it is worth recognizing and describing the leading trends in the research reported
in the SOHOMA proceedings from the very beginning in 2011, and evaluating both the
impact and usefulness for the international scientific community. Therefore, this paper
presents an overview of the published SOHOMA proceedings by using a bibliometric
quantitative analysis. This methodological research is two-folded. Firstly, the paper
seeks to determine the impact, presenting and analysing the bibliometric information
and performance metrics of the publications. Secondly, the paper explores the set of
SOHOMA publications by analysing the unstructured data from fields, such as title,
abstract, and paper content/body, to identify concepts, patterns, topics, keywords and
other attributes in the data. The authors think that this approach will identify emerging topics for future research in digital manufacturing systems and Industry 4.0.
The remaining of the paper is structured as follows. Section 2 briefly presents the
research methodology including the bibliometric analysis and the text mining approach
for the descriptive and leading trends, respectively. Section 3 presents the results of the
bibliometric analysis including the performance metrics such as number of publications,
number of citations, most active authors, etc. Section 4 explores the trends and patterns
identified in the publications and analyses these findings. Finally, Sect. 5 summarizes the
main findings from the articles in the SOHOMA proceedings.

2 Research Methodology
Bibliometrics is a research field that studies bibliographic material, taking into account
the structured and unstructured information retrieved from a set of publications [23]. In
general, it is the implementation of a set of mathematical and statistical techniques in the
scientific and technical activities to measure the contributions within a specific domain
[24]. For the SOHOMA workshop 10-year series, there are 274 papers in the proceedings
published in the Springer book series ‘Studies on Computational Intelligence’. For this
bibliometrics study, two distinct parts were identified for each paper - the metadata and
the paper content. The metadata is the structured data attached to a document file that
describes the information concerning the paper such as title, authors, affiliations, dates,
keywords, abstract, etc. The paper content is the unstructured data regarding the
paper itself, specifically from the first word of the first section to the last word of the last
section. In general terms, even though one might assume that the paper content could be categorized as structured data, the length and irregular organization of each paper led this study to consider the paper content as unstructured information. In this section, the research methodology used to analyze the set of publications is explained for both the metadata and the paper content of the manuscript.
The objective in the research methodology of this study was to provide an infor-
mative overview of the bibliographic material, and identify patterns or generalizations
in the SOHOMA proceedings publications. Indeed, the resulting information may differ depending on the interpretation considered - each reader can interpret the results according to their own interest. Still, for this study, the methodological approach was engaged specifically from a quantitative perspective, to gain deeper insights related to the impact/trend topics and to maintain scientific consistency and validity from the available data.
Figure 1 illustrates the research methodology that was used; it consisted of eight phases, as follows. Firstly, in the manuscript collection phase, the paper content was obtained as a PDF file by extracting each paper from the corresponding SOHOMA book file, while the metadata was retrieved by joining the BibTex records from the DBLP computer science database of the Schloss Dagstuhl Leibniz Center for Informatics1 with the detailed information retrieved from the Mendeley reference manager software and the citations for each paper from Google Scholar. Unfortunately, even though the Scopus repository was used for data collection, some books and manuscripts were not yet indexed in this repository.
1 Database can be consulted at: https://dblp.org/db/conf/sohoma/.

Fig. 1. The 8-phase process followed in this study to deploy the research methodology

Secondly, the data cleansing phase is associated with the paper content: the data was converted to a .txt file, unnecessary elements like book headers, footnotes and page numbers were erased through a Java program, and the paper content was trimmed from the first word of the introduction to the last word of the conclusions. Thirdly, in the file consolidation phase
the paper content was combined in a .csv file with an identification code and the text of
the paper content. A .csv file was created for the metadata with the following fields: iden-
tification code, year, title, abstract, authors, affiliations, countries, keywords and Google
Scholar citations. Fourthly, in the importing-files phase, both .csv files were imported to
the text mining software VantagePoint; this software offers correlations, autocorrelations
and cross-correlations among data, along with other text mining analysis.
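In outline, the cleansing step (phase two) might look like the following; the authors used Java, so this Python fragment and its header/footer patterns are only an illustrative stand-in:

```python
import re

def clean_paper(text):
    """Drop running headers and bare page numbers, then trim the body from the
    Introduction to the Conclusion (patterns are illustrative, not the authors')."""
    lines = [ln for ln in text.splitlines()
             if not re.fullmatch(r"\d+\s+.+et al\.", ln.strip())  # running header
             and not re.fullmatch(r"\d+", ln.strip())]            # page number
    body = "\n".join(lines)
    start, end = body.find("Introduction"), body.rfind("Conclusion")
    return body[start:end] if start != -1 and end != -1 else body

raw = "152 J. F. Jimenez et al.\nIntroduction text here\n153\nConclusion"
cleaned = clean_paper(raw)
# cleaned.strip() == "Introduction text here"
```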
Fifthly, in the record refinement phase, the VantagePoint software was used to fuse the metadata and the paper content fields, to clean the records and data, to remove duplications, and to normalize and refine the data fields according to a natural language
processing technique. Sixthly, in the data analysis phase the analysis was divided into
two different approaches: i) the metadata and paper content were analysed with the
VantagePoint software to gather overview information from both databanks; ii) only the
paper content was analysed with a text mining technique using Python 3.6.8 libraries to
further extract more detailed information. The analysis started with the extraction of the
lemma and part of speech of the paper content, using the spacy 2.0 library for lemma
identification with a noun or verb as a part of the speech. Applying this procedure for each
proceedings volume (one per year), a term frequency/inverse document frequency (or
TF-IDF) technique was applied to evaluate the importance of the term within and between
years. Seventhly, the reports retrieving phase examined, extracted and interpreted the
information to derive conclusions resulting from both approaches - the VantagePoint
software and the Python text mining technique.
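The per-year TF-IDF step can be sketched as follows (a simplified stand-in for the spacy-based pipeline; the corpus below is invented for illustration and the classic tf · log(N/df) weighting is assumed):

```python
import math
from collections import Counter

def tfidf_by_year(corpus):
    """corpus: {year: [lemmas]} -> {year: {term: tf-idf score}} using the
    classic tf * log(N / df) weighting."""
    n_docs = len(corpus)
    df = Counter()
    for lemmas in corpus.values():
        df.update(set(lemmas))                       # document frequency
    scores = {}
    for year, lemmas in corpus.items():
        tf = Counter(lemmas)
        scores[year] = {t: (c / len(lemmas)) * math.log(n_docs / df[t])
                        for t, c in tf.items()}
    return scores

corpus = {                                           # invented example corpus
    2011: "holonic control service agent manufacturing".split(),
    2015: "cloud manufacturing service virtualization agent".split(),
    2019: "digital twin learning manufacturing ethics".split(),
}
scores = tfidf_by_year(corpus)
# 'manufacturing' occurs every year, so its score is 0; year-specific terms
# such as 'digital' stand out for the 2019 volume.
```

This is exactly the "importance of the term within and between years" idea: terms common to every volume are down-weighted, while terms concentrated in one volume surface as that year's trend.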

Finally, in the data visualization phase, the extracted reports were organized so as to be presented and communicated clearly and efficiently to the reader. The bibliographic material was then plotted using the VOS viewer software [25] to visualize the
bibliometric network.

3 Bibliometric Quantitative Analysis

This section presents the results of the bibliometric quantitative analysis. The study
analyses the publications and citations structures including the most productive and
influential authors, institutions and countries of the publications. As specified, the series of
SOHOMA proceedings has published 274 articles (book chapters), the first publication
date being February 2012. As of April 2020, the published papers had received 1359
citations. The average number of published papers is 30.4 per year, rising to 35.5 papers
per year for the last four editions, with a total average of 151 citations per edition. Also,
the average number of citations per paper was 4.96, and 90 publications obtained a
number of citations above this average. In total, 206 publications have received at least
1 citation, which represents 75.2% of the publications from the past workshop editions.
The complete annual citation structure and the descriptive analysis of the publications
and citations are included in Table 1. The number of publications shows a slight growing
trend, while the number of citations has decreased in the last three years. This means
that the ratio between citations and publications has a clear decreasing trend after 2013.
The year 2012 was particularly unusual: although it was the year with the smallest
number of published papers, these publications obtained the highest number of citations.
Regarding the number of citations per published article, it should be noted that a little
more than half of the articles have at least two citations.
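The headline figures quoted above can be cross-checked directly from the stated totals; a minimal sketch:

```python
# Cross-check of the descriptive statistics from the totals stated in the
# text: 274 publications, 1359 citations, 206 papers cited at least once.
papers, citations, cited_at_least_once = 274, 1359, 206

avg_citations_per_paper = citations / papers       # about 4.96
share_cited = 100 * cited_at_least_once / papers   # about 75.2 (percent)
```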

Table 1. Descriptive analysis of publication and citations, and annual citation structure

The SOHOMA workshops cover an entire spectrum of technology enablers for the
digital transformation of manufacturing in the Industry 4.0 vision of the future. In this
respect, it is interesting to review the keywords and title terms to identify significant
research topics of the publications from the last decade.
Ten years of SOHOMA Workshop Proceedings 157

The bibliometric visualization network of the keyword terms, illustrated in Fig. 2,
provides a co-occurrence of terms, indicating the number of times a keyword occurred
with another keyword, for keywords with at least 10 occurrences.

Fig. 2. Co-occurrence of author keywords of documents in SOHOMA proceedings
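The pairwise keyword counting behind such a co-occurrence network can be sketched as follows. The `cooccurrence` helper and the toy keyword lists are illustrative assumptions, not VantagePoint's or VOSviewer's actual implementation:

```python
from collections import Counter
from itertools import combinations

def cooccurrence(papers_keywords, min_count=10):
    """Count how often each unordered pair of keywords appears together
    in a paper's keyword list; keep pairs seen at least min_count times."""
    pairs = Counter()
    for keywords in papers_keywords:
        pairs.update(combinations(sorted(set(keywords)), 2))
    return {pair: c for pair, c in pairs.items() if c >= min_count}

# Hypothetical keyword lists of two papers:
papers = [
    ["holonic", "multi-agent"],
    ["holonic", "multi-agent", "cyber-physical"],
]
edges = cooccurrence(papers, min_count=2)
```

Each surviving pair becomes an edge of the network; the per-pair counts feed the link thickness and the per-term totals feed the node size described below.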

The visualization network, which resulted in 58 terms, was created under the asso-
ciation strength technique with attraction parameter 2 and repulsion parameter −2. This
technique, which normalizes the strength of the links between terms, illustrates the
strength of the connection between terms with the link thickness and the number of
repetitions of the term with the node size. In addition, the figure presents the coloured
clustering used in the network, using 12 keywords as a minimum number of terms per
cluster. The results from the visualization network reveal a cluster of papers on manufacturing
problems that are holistically analysed through holonic and multi-agent
approaches. This finding may be expected due to the specific research objectives of
the SOHOMA community, but an additional finding is that several papers also include
optimization among the main approaches. A second cluster from the visualization is
related to production and the internet of things, which allow developing product-driven,
distributed manufacturing systems. Finally, a third cluster is related to Industry 4.0
and cyber-physical systems, topics that apply directly to the manufacturing and
logistics areas and are also rising topics within the industry of the future.
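The association-strength normalization mentioned above divides a pair's co-occurrence count by the product of the two terms' total occurrence counts, so that frequent terms do not dominate the map. A minimal sketch (the counts below are hypothetical):

```python
def association_strength(cooccurrences, occurrences_i, occurrences_j):
    """Association-strength normalization used by VOSviewer: the observed
    co-occurrence count divided by the product of the two terms' totals."""
    return cooccurrences / (occurrences_i * occurrences_j)

# Hypothetical counts: two keywords co-occur 12 times and appear
# 40 and 30 times overall in the corpus.
strength = association_strength(12, 40, 30)
```

The resulting value is proportional to the ratio between the observed and the expected number of co-occurrences under independence, which is why it is the recommended normalization for co-occurrence maps.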
The results of the co-occurrence analysis demonstrate that the key topics extracted
from the authors’ keywords are classical technical terms that describe the domain of smart
manufacturing and extend it in the IC2T context with subjects such as multi-agent systems
(MAS), holonic manufacturing execution systems (HMES), service-oriented architectures
(SOA), digital twins (DT), cyber-physical production systems (CPPS), intelligent
products (IP), reconfigurable systems, self-organization, cloud manufacturing (CMfg),
manufacturing as a service (MaaS), among others. In our opinion, the most interesting
aspect of this network is the growing involvement of terms that originate from the title -
SOHOMA - of the workshop series: service-oriented, holonic and multi-agent system.
Additionally, terms like cyber-physical system, industrial internet of things, digital twin,
product-driven automation, activeness or self-organization are attracting growing involvement
from researchers, even though their link and association strengths have not yet reached
the same level. Still, the weak association strength of these terms with other terms may
guide researchers to move towards other research perspectives.
Figure 3 illustrates respectively the coupling of authors and institutions, exposing
collaboration groups within the SOHOMA community. The minimum total link strength
for the author network and the institution network was 4 and 5, respectively. While the
authors' network illustrates a set of diverse groups working together across clusters for
collaboration purposes, the inner collaboration within clusters has a deeper association
strength, as the network separates institutions into isolated groups. Instead of showing the
entire set of islands of the network, the figure illustrates the most connected subsets from
the 34 participating institutions.

Fig. 3. Coupling of authors and affiliations

Table 2 presents the most cited papers from the SOHOMA workshop series. The
most cited paper was published by Montreuil, Meller and Ballot and has 81 citations
[26]. This paper and two other papers from the list (ranked 2nd and 5th) are related to
the concept and implementing details of the Physical Internet initiative. This topic is an
example of a positioning paper that was published and presented in SOHOMA events; it
establishes and disseminates the foundations of the trending concept of Physical Internet
- an open global logistics system based on physical, digital and operational intercon-
nectivity through encapsulation, interfaces and protocols - a natural consequence of
the digital transformation revolution [27]. The Physical Internet has been recognized by the
European technology platform ALICE as a key driver for the deployment of
logistics and supply chain management in Europe [28]. Besides this initiative, other topics
extracted from these top 15 cited papers are related to big data, cloud computing and
knowledge-based technologies; human-centred and/or human-in-the-loop systems; collaborative,
self-organizing and sustainable intelligent systems; and self-awareness, activeness
and intelligent products.

Table 2. The 15 most cited papers, authors and year of the SOHOMA workshop

R TC Title Authors Year


1 81 Physical Internet Foundations Montreuil, B; Meller, R; Ballot, E 2012
2 39 Physical Internet Enabled Open Hub Network Design For Distributed Networked Oper… Ballot, E; Gobet, O; Montreuil, B 2011
3 31 Knowledge-Based Technologies For Future Factory Engineering And Control Legat, C; Lamparter, S; Vogel Heuser, B 2012
4 28 A Human-Centred Design To Break The Myth Of The “Magic Human” In Intelligent Man… Trentesaux, D; Millot, P 2015
5 27 On The Activeness Of Physical Internet Containers Sallez, Y; Montreuil, B; Ballot, E 2014
6 26 Simulation Modelling Of Energy Dynamics In Discrete Manufacturing Systems Prabhu, V; Woo Jeon, H; Taisch, M 2012
7 26 Using The Crowd Of Taxis To Last Mile Delivery In E-Commerce: A Methodological Resea… Chen, C; Pan, S 2015
8 23 Towards Self-Organized Service-Oriented Multi-Agent Systems Leitão, P 2012
9 22 Are Intelligent Manufacturing Systems Sustainable? Thomas, A; Trentesaux, D 2013
10 21 Manufacturing Cyber-Physical Systems Enabled By Complex Event Processing And Big D… Babiceanu, R; Seker, R 2014
11 21 Technological Theory Of Cloud Manufacturing Kubler, S ; Holmstrom, J ; Framling, K ; Turkama, P 2015
12 20 A Collaborative Framework Between A Scheduling System And A Holonic MES Novas, J ; Van Belle, J; Saint Germain, B; Valckenaers, P 2012
13 20 Human-In-The-Loop Cyber-Physical Production Systems Control (Hilcp2Sc): A Multi-Obj… Gaham, M; Bouzouia, B; Achour, N 2014
14 17 Speech To Head Gesture Mapping In Multimodal Human-Robot Interaction Aly, A; Tapus, A 2011
15 16 Adaptive Storage Location Assignment For Warehouses Using Intelligent Products Tsamis, N ; Giannikas, V ; McFarlane, D ; Lu, W ; Strachan, J 2014
Abbreviations: R = Ranking; TC = Total citations

Figure 4 illustrates the participation in the SOHOMA workshops at country level. In
total, 33 countries have participated, France and Romania being the most productive
countries of the workshop. These countries are followed by Portugal, the United
Kingdom, the United States, Germany, Italy, South Africa, Spain and Algeria, each reaching an
average of at least 1 paper per workshop edition. Nonetheless, SOHOMA events
have gathered participants from many countries across the world.

Fig. 4. Map visualization of countries participating in SOHOMA workshops



4 Research Trends and Perspectives


Section 3 offered a quantitative analysis of the metadata information of the SOHOMA
publications. The analyses referred to the composition and characteristics of the publi-
cations’ fields, specifically: the title, authors, affiliations, keywords and abstract. In this
chapter, the dynamics of the research throughout the nine editions of the SOHOMA work-
shops is addressed. Thus, the current trends or tendencies found in the set of publications
are presented, which suggest the orientations of future research work. An evaluation of
the perspectives for the digital transformation of manufacturing is made considering the
potential research paths identified in the SOHOMA proceedings.
Concerning the trends identified in the SOHOMA publications, one of the most
important outcomes is the evolution analysis of terms extracted from the paper con-
tent within the past workshop editions. As explained in Sect. 2 concerning the research
methodology, a set of terms were extracted as a result of the TF-IDF text mining tech-
nique, considering only the terms that were mentioned at least 20 times throughout the
SOHOMA publications. For better understanding, the terms were categorized accord-
ing to the semantic meaning in 5 categories: concept, method, tool/device, problem to
solve and event during execution. The concept category was further divided for better
understanding the semantic and the corresponding evolution in the subgroups: paradigm,
objective, characteristic and behaviour.
Figure 5 shows a set of terms that exhibit a positive, steady or negative trend through
the paper content of SOHOMA publications specifically for the ‘concept’ category.
In this representation, a short arrow with direction and colour code is included to
indicate the trend of the subject associated with the term throughout the analysed period.
For this trend, the slope of a regression analysis was calculated in order to classify the
direction and colour code according to the set of slopes, the values of which range from
−0.87 to 2.16. The purpose of applying the linear regression is to obtain the slope of the
data and interpret it as a trend metric; this metric is a tool for estimating appearing
or disappearing terms throughout the SOHOMA proceedings. As can be seen from
Fig. 5, the terms can have a significant increase of occurrences (green or light green), a
steady number of occurrences (yellow) or a slight decrease of occurrences (light red
and red) across the published SOHOMA proceedings.
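The regression-based trend metric described above amounts to an ordinary least-squares slope over a term's yearly occurrence counts; a minimal sketch, with hypothetical occurrence series:

```python
def trend_slope(counts):
    """Least-squares slope of a term's yearly occurrence counts,
    interpreted as an appearing/disappearing trend metric."""
    n = len(counts)
    mean_x = (n - 1) / 2            # mean of the year indices 0..n-1
    mean_y = sum(counts) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(counts))
    den = sum((x - mean_x) ** 2 for x in range(n))
    return num / den

# Hypothetical occurrence series over ten proceedings volumes:
rising = trend_slope([2, 3, 5, 6, 8, 9, 11, 12, 14, 15])  # positive slope
steady = trend_slope([5] * 10)                             # zero slope
```

A positive slope maps to the green end of the colour code, a near-zero slope to yellow, and a negative slope to red.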
From this analysis it results that, for the ‘concept’ category, most of the terms have
an unchanging trend demonstrating continuity in the development of important research
lines of innovation in manufacturing control, and confirming the set of common terms
related to the digital transformation in the ‘Industry of the Future’ vision.
However, some increasing or decreasing slopes signal a certain dynamics in this
category and its subgroups. For instance, related to the concept-paradigm subgroup, the
terms cyber-physical, internet of things and physical internet appear increasingly frequently
in the SOHOMA publications. This positive dynamics could be expected for
‘cyber-physical’ and ‘internet of things’, as these terms play a crucial role in the devel-
opment of frameworks for Industry 4.0. On one hand, cyber-physical systems (or CPS)
are considered as architectural platforms with strongly coupled elements that control the
orchestration of IC2T-based activities (processes and services). On the other hand, the internet
of things and its individual, loosely coupled entities represent the digital framework
for instrumenting and interconnecting manufacturing resources, products and orders; IoT
is the layer with distributed intelligence that assures the convergence of information- and
operation-technologies (OT, IT) for the intelligent control tasks to be performed in the
CPS. Lastly, the increasing tendency of 'physical internet' is expected, as the foundations
of this concept were first published in the SOHOMA 2012 proceedings book, and
implementing solutions were then constantly developed [26]; this evolution exemplifies the
SOHOMA strategy of promoting important research, development and innovation lines
for the manufacturing value chain (MVC) of the future by publishing prospective papers
in the field.

Fig. 5. Evolution of ‘concept’ terms throughout SOHOMA workshops

In addition to the highlights about the shared 'objective', 'characteristic' and
'behaviour' terms within the 'concept' category, other associated terms such as 'efficient',
'efficiency' and 'operation' also appear increasingly in the publications focusing
on the 'manufacturing industry of the future'. Even though these terms are somewhat intuitive
and can be inferred from the publications, their incidence is due to the fact that they represent
stable, permanent characteristics, objectives and performance indicators of modern manufacturing
control, although a shift took place in time from papers describing closed, self-determining
structures to papers describing open, networked structures that share resources, services and
infrastructures. This shift, together with considerations of business objectives (market
dynamics, variability of products, servitisation and after-sales services, customer orientation),
came with significant contributions of SOHOMA researchers to develop novel
technologies for global industries and to bring the factory and product lifecycles closer together.
Concerning the decreasing trends in Fig. 5, some basic terms from the 'concept' category
- holonic and agent-based - and some other terms derived from these basic ones -
centralized, reconfiguration, synchronization and negotiation - feature reduced appearance
throughout the SOHOMA book series. The explanation is that the first two terms
are mostly associated with early reported designs of manufacturing control systems,
principally inspired by the PROSA holonic reference architecture extended with a semi-heterarchical
topology. On the other hand, the 4 derived terms represent main features
and operating modes of this topology. Two things happened: first, the ‘holonic’ concept
was sufficiently well transposed in standard control architectures by the end of 2015,
and second, it looked as if holonic research produced robust, auto-configuring and self-
organizing solutions but sustainable, broadly-scoped, optimized (or even guaranteed)
performance remained out-of-reach for industry. These two facts determined researchers
to focus - beyond the ‘holonic’ and ‘agent-based’ concepts - on solutions that close the
gap between process control and shop floor reality, provide energy-awareness, plan pro-
duction and allocate resources based not only on history but also on the prediction of
behaviours and quality of services. Finally, the terms 'robustness' and 'agility' appear
less frequently because they were the main attributes of distributing intelligence in
multi-agent frameworks, like dMAS [30]; they were replaced in time by 'reality awareness',
'high availability', 'resilience' and 'responsiveness' [29, 31, 32].
Figure 6 illustrates a set of terms that exhibit either a positive, steady or negative
trend through the paper content of the SOHOMA publications, specifically the
more generic categories 'method', 'tool/device', 'problem-to-solve' and 'event-during-execution'.
Concerning this set of terms, the trend is balanced between increasing,
decreasing and steady tendencies across the analysed publications. The detailing terms that
increase the occurrence of the generic categories 'method', 'tool/device' and 'problem-to-solve'
are: algorithm, decision, predictive, machine, logistics, production, factory
and manufacturing. While the terms ‘machine’, ‘logistics’, ‘production’, ‘factory’ and
‘manufacturing’ consolidate the workshops’ research objectives in manufacturing and
logistics systems, one can notice the increased reference to the terms 'algorithm', 'decision'
and 'predictive', which indicates the recent orientation of SOHOMA research
towards intelligent decision making in manufacturing planning, control and maintenance
based on Artificial Intelligence (AI) techniques, machine learning algorithms and data
science tools for prediction, classification, clustering and anomaly detection [33].
Even though the SOHOMA workshops focus on the concepts, methods, solutions,
proofs of concept and implementations for manufacturing systems in the 'industry of
the future' vision, (intelligent) decision-making represents the final stage of data-driven
manufacturing control and management of the contextual enterprise, because it effectively
uses the large amount of data about production, processes, products and customer
demands obtained through pervasive device instrumentation and digital marketing, and
the interconnection of orders, products and resources in a service-oriented approach. The
analysis exposed in Fig. 6 also shows a reduction of terms like modelling, conveyor,
robot, workstation, controller, RFID and inventory, which are usually associated with low-level
control in the shop floor layer.

Fig. 6. Evolution of method, tool/device, problem to solve, during execution terms

Concerning the perspectives derived from SOHOMA publications, a correlation
analysis was conducted in order to spot potential research directions for the 'Industry of the
Future'. First, the correlation of the most mentioned terms in the paper content of the
SOHOMA proceedings publications was analysed.
Figure 7 illustrates the correlation between terms, ranging from 1 to −1. Note that
the resulting matrix is symmetric, with all the records on the matrix diagonal equal to 1.
The figure presents a correlation matrix built in RStudio with
the ggcorrplot library, considering the co-occurrence number of a couple of terms within
the papers. The entire set of terms was trimmed considering only terms that have at least
8 records of co-occurrence with another term.
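A correlation matrix of this kind can be reproduced from per-paper term counts with a plain Pearson correlation (the authors used R's ggcorrplot only for plotting); the vectors below are hypothetical:

```python
def pearson(xs, ys):
    """Pearson correlation between two terms' per-paper occurrence vectors."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    std_x = sum((x - mean_x) ** 2 for x in xs) ** 0.5
    std_y = sum((y - mean_y) ** 2 for y in ys) ** 0.5
    return cov / (std_x * std_y)

# Hypothetical per-paper counts for two pairs of terms:
r_pos = pearson([0, 1, 2, 3, 4], [0, 2, 4, 6, 8])  # terms that co-occur
r_neg = pearson([0, 1, 2, 3, 4], [4, 3, 2, 1, 0])  # terms that exclude each other
```

Values near 1 place a pair of terms in the same thematic cloud, while values near −1 correspond to the negative couplings discussed below as research perspectives.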

Fig. 7. Symmetrical correlation matrix between terms in the SOHOMA paper content

Figure 7 is analysed from two points of view: the collection of terms that gather in a
cloud of positive correlations, and the negative correlations seen in a few couplings of
terms from the records. Both consider only the upper triangle of the matrix, as it
is symmetric. From the first point of view, three clusters of terms have been identified
according to the correlation analysis. First, a cluster A (Bottom-left) can be identified
as a natural clustering group of the papers describing the Physical Internet initiative,
gathering terms such as ‘logistics’, ‘robustness’, ‘PI-container’, ‘encapsulation’ and ‘PI-
Hub’. Second, a cluster B (matrix centre), which is the biggest clustering of terms can be
identified as the main topic of SOHOMA workshops with terms associated to: ‘service-
oriented’, ‘holonic’ and ‘multi-agent’ paradigms; ‘cyber-physical’ and ‘digital twin’
technologies; topics referring to solutions and applications - ‘component’, ‘structure’
and ‘behaviour’ of ‘distributed control systems’. Finally, a cluster C (top-right) includes


papers describing the interactions in human-in-the-loop systems, that use terms such as
‘human’, ‘human operator’, ‘real-time’, ‘robot’, a.o. One interesting finding in positive
correlations (small top-centre group) is that the terms in cluster C are marginally related
to the terms in cluster B showing the increasing attempt of contributing in research on
human interaction systems. However, this is not consolidated as a main research topic
of the SOHOMA community and it is certainly a research perspective for future studies.
From the second point of view, the analysis of negative correlations might offer research
perspectives for the SOHOMA community. The findings of this part suggest certain
couplings of terms for further research. As a general remark, there is a disconnection of
the terms in cluster A relative to clusters B and C. It could be worth either further developing
control technologies and systems for the physical internet initiative (a negative correlation
can be seen on the centre-left) or including human interactions in the same initiative (a negative
correlation is seen on the top-left).
From a specific point of view, terms related to subjects that might also be interesting
to explore in further research are presented in Table 3. We believe that the exploration
of these coupled terms could make an important contribution to the field of manufacturing
control, as such terms represent concepts that are not usually studied jointly. For
instance, the terms ‘manufacturing’ and ‘physical internet’ are not mutually addressed
as the physical internet is mainly applied for logistics systems. Still, the coupling of
these two terms suggests extending the manufacturing value chain with product-service
technologies (or after-sales services).

Table 3. Coupling terms with negative correlations suggested as research perspectives

First Term Second Term First Term Second Term First Term Second Term
Manufacturing ↔ Pi-Hub Architecture ↔ Pi-hub Case Study ↔ Interoperability
Manufacturing ↔ Encapsulation Digital Twin ↔ Robustness Pi-hub ↔ Interoperability
Manufacturing ↔ Physical Int. Digital Twin ↔ Encapsulation Multi Agent ↔ PI-Containers
Operations ↔ Robustness Supply Chain ↔ Robots CPS ↔ PI-Hub
Architecture ↔ Logistics Machines ↔ Pi-Hub

In a nutshell, these findings suggest that the SOHOMA proceedings series includes
contributions to concepts for manufacturing control rather than to prototyping and
applications. In this respect, the bibliometric qualitative analysis presented in this paper
confirms that the proceedings published by Springer reflect valuable results of the research
carried out in the last decade by the SOHOMA community in the domain of digital
transformation of manufacturing in the Industry of the Future vision.

5 Conclusions
The aim of the present paper is to examine the scientific articles published in the
proceedings of the SOHOMA workshops held in the last 10 years, in order to find out
important research facts and leading trends concerning manufacturing
control for the industry of the future. The most obvious finding from the bibliometric
analysis is the increasing research effort carried out by the SOHOMA community to
develop and apply advanced information, communication and control technologies to
manufacturing. The perspective of future SOHOMA research resides in using advances
in IC2T to develop new emerging concepts and solutions for manufacturing based on
the instrumentation and interaction of a multitude of interconnected and even decision-
capable smart objects: products, orders and shop floor resources, embedded or distant,
with associated information counterparts (agents, holons) or purely digital. SOHOMA
research aligns with the worldwide effort to develop cyber-physical systems that plan
optimally, control and track production, monitor the health of resources and take intelli-
gent decisions based on the prediction of behaviours, performance indicators and factory
events. This study represents a journey in the manufacturing control research, from the
early holonic paradigm and agent-based distribution of intelligence to the future cloud
networked manufacturing - the shared use of a networked manufacturing infrastructure
to produce goods.
One final remark is that the bibliometric analysis performed on SOHOMA publications
highlighted terms about research topics whose association became the
brand of the SOHOMA scientific community: 'manufacturing', 'holonic', 'multi-agent'
and 'Industry of the Future'.
This study also analyses the evolution of the SOHOMA research reported in the
proceedings papers, and the perspectives identified from the correlation between the
terms highlighted in the papers' content. Despite its exploratory nature, this study
offers insight into potential research roadmaps to further advance in shaping the industry
of the future and in addressing several unexplored topics, such as integrating the factory
and product lifecycle or integrating logistics in the manufacturing value chain through
physical internet solutions and cloud networking.

References
1. Müller, J.M., Buliga, O., Voigt, K.-I.: Fortune favours the prepared: how SMEs approach
business model innovations in Industry 4.0. Technol. Forecast. Soc. Chang. 132 (2018)
2. Kamble, S., Angappa, G., Gawankar, S.A.: Sustainable Industry 4.0 framework: a systematic
literature review identifying the current trends and future perspectives. Process Saf. Environ.
Prot. 117, 408–425 (2018)
3. Schwab, K.: The Global Competitiveness Report 2017–2018. World Economic Forum (2017)
4. Preuveneers, D., Ilie-Zudor, E.: The intelligent industry of the future: a survey on emerging
trends, research challenges and opportunities in Industry 4.0. J. Ambient Intell. Smart Environ.
9(3), 287–298 (2017)
5. Oztemel, E., Gursev, S.: Literature review of Industry 4.0 and related technologies. J. Intell.
Manuf. 31(1), 127–182 (2020)
6. Romero, M., Guédria, W., Panetto, H., Barafort, B.: Towards a characterisation of smart
systems: a systematic literature review. Comput. Ind. 120, 103224 (2020)
7. Alcácer, V., Cruz-Machado, V.: Scanning the industry 4.0: a literature review on technologies
for manufacturing systems. Eng. Sci. Technol. 22(3), 899–919 (2019)
8. Valipour, M.H., Amir Zafari, B., Maleki, K.N., Daneshpour, N.: A brief survey of software
architecture concepts and service-oriented architecture. In: 2nd IEEE International Confer-
ence on Computer Science and Information Technology, pp. 34–38. IEEE Xplore (2009).
https://doi.org/10.1109/ICCSIT.2009.5235004
9. Calabrese, M., Amato, A., Di Lecce, V., Piuri, V.: Hierarchical-granularity holonic modelling.
J. Ambient Intell. Humaniz. Comput. 1(3), 199–209 (2010)
10. MacKenzie, C.M., Laskey, K., McCabe, F., Brown, P.F., Metz, R., Hamilton, B.A.: Reference
model for service-oriented architecture 1.0, OASIS standard, 12(S 18) (2006)
11. Valckenaers, P., Bonneville, F., Van Brussel, H., Bongaerts, L., Wyns, J.: Results of the holonic
control system benchmark at KU Leuven. In: Proceedings of the 4th International Conference
on Computer Integrated Manufacturing and Automation Technology. IEEE Xplore (1994)
12. Jimenez, J.F., Bekrar, A., Trentesaux, D., Leitão, P.: A switching mechanism framework
for optimal coupling of predictive scheduling and reactive control in manufacturing hybrid
control architectures. Int. J. Prod. Res. 54(23), 7027–7042 (2016)
13. Guo, Q.L., Zhang, M.: An agent-oriented approach to resolve scheduling optimization in
intelligent manufacturing. Robot. Comput. Integr. Manuf. 26(1), 39–45 (2010)
14. Borangiu, T., Thomas, A., Trentesaux, D. (eds.): Service Orientation in Holonic and Multi-
Agent Manufacturing Control. Proceedings of SOHOMA 2011, Paris, France. Studies in
Computational Intelligence, vol. 402. Springer, Cham (2012)
15. Borangiu, T., Trentesaux, D., Thomas, A. (eds.): Service Orientation in Holonic and Multi-
Agent Manufacturing and Robotics. Proceedings of SOHOMA 2012, Bucharest, Romania.
Studies in Computational Intelligence, vol. 472, Springer, Cham (2013)
16. Borangiu, T., Trentesaux, D., Thomas, A. (eds.): Service Orientation in Holonic and Multi-
Agent Manufacturing and Robotics. Proceedings of SOHOMA 2013, Valenciennes, France.
Studies in Computational Intelligence, vol. 544. Springer, Cham (2014)
17. Borangiu, T., Thomas, A., Trentesaux, D. (eds.): Service Orientation in Holonic and
Multi-Agent Manufacturing. Proceedings of SOHOMA 2014, Nancy, France. Studies in
Computational Intelligence, vol. 594. Springer, Cham (2015)
18. Borangiu, T., Trentesaux, D., Thomas, A., McFarlane, D. (eds.): Service Orientation in
Holonic and Multi-Agent Manufacturing. Proceedings of SOHOMA 2015, Cambridge, UK.
Studies in Computational Intelligence, vol. 640. Springer, Cham (2016)
19. Borangiu, T., Trentesaux, D., Thomas, A., Leitão, P., Barata, J. (eds.): Service Orientation in
Holonic and Multi-Agent Manufacturing. Proceedings of SOHOMA 2016, Lisbon, Portugal.
Studies in Computational Intelligence, vol. 694. Springer, Cham (2017)
20. Borangiu, T., Trentesaux, D., Thomas, A., Cardin, O. (eds.): Service Orientation in Holonic
and Multi-Agent Manufacturing. Proceedings of SOHOMA 2017, Nantes, France. Studies in
Computational Intelligence, vol. 762. Springer, Cham (2018)
21. Borangiu, T., Trentesaux, D., Thomas, A., Cavalieri, S. (eds.): Service Orientation in Holonic
and Multi-Agent Manufacturing. Proceedings of SOHOMA 2018, Bergamo, Italy. Studies in
Computational Intelligence, vol. 803. Springer, Cham (2019)
22. Borangiu, T., Trentesaux, D., Leitão, P., Giret Boggino, A., Botti, V. (eds.): Service Oriented,
Holonic and Multi-Agent Manufacturing Systems for Industry of the Future. Proceedings of
SOHOMA 2019, vol. 853. Springer, Cham (2020)
23. Broadus, R.N.: Toward a definition of “bibliometrics.” Scientometrics 12(5–6), 373–379
(1987)
24. McBurney, M.K., Novak, P.L.: What is bibliometrics and why should you care? In: Proceed-
ings of IEEE International Professional Communication Conference, pp. 108–114. IEEE
Xplore (2002)
25. Van Eck, N., Waltman, L.: Software survey: VOSviewer, a computer program for bibliometric
mapping. Scientometrics 84(2), 523–538 (2010)
26. Montreuil, B., Meller, R.D., Ballot, E.: Physical internet foundations. In: Service Orientation
in Holonic and Multi-Agent Manufacturing and Robotics. Proceedings of SOHOMA 2013.
Studies in Computational Intelligence, vol. 544, pp. 151–166. Springer, Cham (2014)
27. Savelsbergh, M., Van Woensel, T.: 50th anniversary invited article - city logistics: challenges
and opportunities. Transp. Sci. 50(2), 579–590 (2016)
28. Sternberg, H., Norrman, A.: The physical internet - review, analysis and future research
agenda. Int. J. Phys. Distrib. Logist. Manag. 47(5) (2017). https://doi.org/10.1108/IJPDLM-
12-2016-0353
29. Valckenaers, P.: Perspective on holonic manufacturing systems: PROSA becomes ARTI.
Comput. Ind. 120, 103226 (2020)
30. Novas, J.M., Bahtiar, R., Van Belle, J., Valckenaers, P.: An approach for the integration of a
scheduling system and a multi-agent manufacturing execution system. Towards a collabora-
tive framework. In: Proceedings of 14th IFAC Symposium INCOM 2012, Bucharest. IFAC
PapersOnLine, pp. 728–733. Elsevier (2012)
31. Trentesaux, D., Borangiu, T., Thomas, A.: Emerging ICT concepts for smart, safe and sus-
tainable industrial systems. Comput. Ind. 81, 1–10 (2016). https://doi.org/10.1016/j.compind.
2016.05.001
32. Borangiu, T., Trentesaux, D., Thomas, A., Leitão, P., Barata, J.: Digital transformation of
manufacturing through cloud services and resource virtualization. Comput. Ind. 108, 150–162
(2019). https://doi.org/10.1016/j.compind.2019.01.006
33. Morariu, C., Morariu, O., Răileanu, S., Borangiu, T.: Machine learning for predictive schedul-
ing and resource allocation in large scale manufacturing systems. Comput. Ind. 120, 103244
(2020). https://doi.org/10.1016/j.compind.2020.103244
Proposition of an Enrichment for Holon Internal
Structure: Introduction of Model and KPI
Layers

Erica Capawa Fotsoh1,2(B) , Pierre Castagna2 , Olivier Cardin2 , and Karel Kruger3
1 IRT Jules Verne (French Institute in Research and Technology in Advanced Manufacturing),
44340 Bouguenais, France
erica.fotsoh@irt-jules-verne.fr
2 LS2N, UMR CNRS 6004, Université de Nantes, IUT de Nantes, 44 470, Carquefou, France
olivier.cardin@ls2n.fr
3 Department of Mechanical and Mechatronic Engineering, Stellenbosch University,

Stellenbosch 7600, South Africa

Abstract. Holon structures proposed so far are built to exploit holon dynamism
through self-reconfiguration, but not for unexpected situations in which holon
behaviour is unpredicted and this dynamism is lost. In this paper, we propose a way
to fill this gap by adding a model layer and a KPI layer to the holon internal
structure. The specificity of these layers is that they allow both dynamic and
non-dynamic reconfiguration of RMS that use holonic control. The added layers can
then be used as forecasting and previewing tools and can be considered as one more
step in control aid (e.g. for digital twins), as well as an additional tool in the
reconfiguration process. An application on a learning factory shows the feasibility
of the proposed concept, which opens perspectives on the notions of data and model
aggregation.

Keywords: Holon internal structure · Model layer · KPI layer · RMS · HMS

1 Introduction

During the last decades, manufacturing systems have evolved to cope with the frequent
changes imposed by significant fluctuations in market demand [1]. Several
manufacturing paradigms have been developed to meet new constraints - the system
has to respond more quickly and efficiently to changes (e.g., the introduction of a new
product), at lower cost, in a short time, and with better quality. Traditional rigid manu-
facturing systems can no longer cope with such constraints; thus, the Holonic Manufacturing
System (HMS) [2] and the Reconfigurable Manufacturing System (RMS) [3] have emerged.
RMS have the ability to reconfigure hardware and control resources to rapidly adjust the
system in response to sudden changes. They are characterized by modular components
which are integrable with other technologies. In order to enable control reconfiguration
in RMS, the idea of holonic control has been widely adopted [4].

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021


T. Borangiu et al. (Eds.): SOHOMA 2020, SCI 952, pp. 169–180, 2021.
https://doi.org/10.1007/978-3-030-69373-2_11
170 E. Capawa Fotsoh et al.

Introduced by [5], the term holon refers to an element that is sufficient to exist alone, yet
can also live within a social framework. In a manufacturing context, a holon represents an
autonomous and cooperative identifiable part of the system, i.e. one that interacts with
other elements to meet the overall goals of the system. Holonic control provides autonomy,
intelligence capabilities, fast adaptation and reconfiguration to quickly and efficiently
respond to new challenges. Holonic control is usually achieved through the use of refer-
ence architectures [6] such as PROSA [7] or ADACOR [8]. When changes occur in the
system, holons instantly gather information of their closest environment, negotiate with
other holons, and then make a decision, e.g. parameter changes or reconfiguration. This
process takes place during the manufacturing process (dynamic aspect of holons).
As stated in [9], holon behaviour could be of three types: either skills-based behaviour
(when the current situation is exactly the same as a previous one), rule-based behaviour
(when the current situation is similar to a previous one) or knowledge-based behaviour
(when the current situation has never been encountered before). The first two cases
cope with the dynamic features expected in HMS, as actions driven by the holon are
predefined regarding the scope of predefined events. In the last case, a new solution has
to be determined, and this goes beyond the dynamic scope of holons.
Reconfiguration of RMS that use holonic control architecture has to deal both with
dynamic and non-dynamic behaviour of holons. Therefore, holons have to be designed in
order to fit both cases. The descriptions of holons found so far in the literature focus on
the dynamic aspect of the holons. That is, holons are designed to make the right decision
during the manufacturing process - they are highly reactive and dynamically change their
behaviour to adapt to changes that occur in the system (self-reconfiguration).
Many works are identified in this context, among which [10] based on ADACOR,
[11] on Erlang, or [12] who proposes a governance mechanism for the control system
that dynamically changes its behaviour. The remaining gap in the reconfiguration of RMS
that use a holonic control architecture arises when the situation that occurs is unexpected
and therefore not predefined [12]. In this case, the self-reconfiguration (i.e. the dynamic
reconfiguration) of the holons can no longer be used, and the existing internal structure
of the holons can no longer cope with changes. It is therefore necessary to review the
internal structure of the holons, so that it can both maintain their dynamism for known
or predicted situations, and react to new unknown situations.
This paper describes a way to fill this gap, by proposing an enrichment of the internal
structure of the holons. The paper focuses on the resource holon (the so-called operational
holon in the ADACOR reference control architecture). This holon is an abstraction of a
production means [7]; it is the one whose description is the most complete, since it integrates
both software and hardware, whereas the other types of holon (product, order, staff -
respectively called product, task, and supervisor in ADACOR) might be software only,
and therefore do not require interfacing to a physical system entity. In the following, the
term holon used without further precision refers to the resource holon.
The remainder of this paper is organised as follows: Sect. 2 presents the proposed
enrichment of the holon structure; an application of the proposal is given in Sect. 3. Section 4
discusses the interest of the proposal both for RMS reconfiguration and for the control aid
process. Section 5 concludes the paper and gives some perspectives for future research.
Proposition of an Enrichment for Holon Internal Structure 171

2 Proposal of an Enriched Holon


Handling a sudden change within a manufacturing system requires in-depth knowl-
edge of this system; the influencing factors have to be identified, as well as the changing
parameters. The more complex the system, the more difficult this task becomes.
To address these issues, [13] proposed the use of system models such as simula-
tion models, analytical models or even hybrid simulation-analytic models. The chosen
model depends on the industrial context and the desired level of detail. For example,
mathematical equations can describe the temperature variation in an oven, while a flow
simulation model can represent and study a manufacturing cell. The model can also
concern economics, energy consumption or even carbon footprint.
As the model has to be a faithful replication of the real-life situation [14], the com-
plexity of the situation may translate into complexity of the model. For manufacturing
systems, the difficulty is increased by the fact that the model must integrate both the
physical system and the control logic. One way to overcome this complexity is to exploit
the system's modularity [15]. We believe that, by associating with each holon a model denoted
MODhi, the construction of the system model MOD becomes equivalent to adding and
aggregating the different MODhi. Therefore, instead of having one huge and complex
model to design, the task is to design smaller models that can be aggregated, together with
the aggregation method. Furthermore, this modularity allows an accurate system analysis.
As a matter of fact, each holon represents a physical resource which is characterized by
some KPIs (e.g. occupation rate, production rate, etc.), and the global performance of
the system depends on the performance of each resource. Thus, the modular analysis leads
to a better knowledge of the system for decision-making processes.
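The aggregation idea can be sketched in a few lines of code. The sketch below is purely illustrative: the class and function names (HolonModel, aggregate), the serial-line assumption, and the throughput figures are ours, not the paper's; it only shows how a system model MOD could be obtained by aggregating per-holon models MODhi.

```python
# Illustrative sketch: building the system model MOD by aggregating
# the per-holon models MODhi. Names and figures are hypothetical.
class HolonModel:
    def __init__(self, name, throughput):
        self.name = name
        self.throughput = throughput  # parts per hour this holon can process

def aggregate(models):
    """Serial-line assumption: the aggregated model exposes the slowest
    holon, which bounds the throughput of the whole line."""
    return min(models, key=lambda m: m.throughput)

# One small model per holon instead of one monolithic system model
mods = [HolonModel("P10", 160), HolonModel("P20", 85), HolonModel("P30", 145)]
bottleneck = aggregate(mods)
print(bottleneck.name)  # P20: the limiting holon of the aggregated model
```

Under this (assumed) serial-line aggregation rule, the system-level analysis reduces to inspecting the per-holon models, which is exactly the modular analysis argued for above.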
Based on the previous explanation, we propose to integrate into an innovative holon
internal structure a model layer and a KPI layer representing each holon. The proposed
structure (Fig. 1) is based on the one proposed by [11], and is applicable to essentially
modular systems, including RMS. It is made of the following components:

• Communication manager: defines how holons exchange information, the cooperation
relationships between holons, and thus the negotiations;
• Agenda manager: part of the decision component. It manages the list of service
commitments that the resource will perform, and triggers the execution component.
After gathering the necessary information from the other holons, this component chooses
(decides) the service that needs to be completed in order to respond to the Order holon;
the execution component is then activated;
• Execution component: the place of the hardware-driving actions. It exchanges
information directly with the hardware interface, and chooses (decides) how the ser-
vice will be processed by the resource, regarding the actual state of the machine and
system;
• Physical interface: the connection between the software part and the physical
manufacturing resource. It is commonly achieved through an OPC connection;
• KPI layer: contains information about the holon performance that will guide the
configuration choices. These KPIs are either simulated (coming from the model),
measured on the real system, or even imposed by the production context.
Relating each KPI to its holon allows a modular and more accurate analysis of the
system;
• Model layer: contains a representation of the holon, and is built using
data from the real manufacturing system ([9] proposes the use of a discrete-event
observer that gathers information on the current state of the system), or data
introduced by the production manager. The model is used to forecast the holon
behaviour and to foresee the impact of potential changes. The data, as well as the KPIs,
are commonly stored in a database and can be accessed through SQL queries. The
aim of adding a model layer is twofold: firstly, to aid in the reconfiguration process, and
secondly, to aid in control.

Fig. 1. Proposal of the new holon structure with model and KPI layer
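As an illustration, the components of the proposed structure can be summarized in a skeletal class. This is a minimal sketch with hypothetical names and signatures (ResourceHolon, commit_service, update_kpi), not an implementation prescribed by the paper; a real execution component would drive the hardware through the physical interface (e.g. OPC) rather than return a string.

```python
from dataclasses import dataclass, field

@dataclass
class ResourceHolon:
    """Skeleton of the enriched resource holon (names are illustrative)."""
    name: str
    kpis: dict = field(default_factory=dict)        # KPI layer: simulated, measured or imposed values
    model_data: dict = field(default_factory=dict)  # Model layer: data feeding the holon model
    agenda: list = field(default_factory=list)      # Agenda manager: committed services

    def commit_service(self, service):
        # Agenda manager: record a service commitment and trigger execution
        self.agenda.append(service)
        return self.execute(service)

    def execute(self, service):
        # Execution component: would drive the resource via the physical interface
        return f"{self.name} executing {service}"

    def update_kpi(self, key, value):
        # KPI layer update, e.g. with a simulated or measured value
        self.kpis[key] = value

h = ResourceHolon("P20a")
h.commit_service("machining")
h.update_kpi("availability", 0.0046)  # availability value taken from Table 2
```

The communication manager is omitted here; in a full implementation it would mediate the negotiation messages that feed the agenda manager.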

The use of a model of the system in decision making (especially for control aid) is
not new. Indeed, [16] proposed the use of D-MAS as a virtual representation of some
holons' tasks and duties in order to obtain feasibility information, and to explore and
propagate it. D-MAS thus allows foreseeing the impact of future interactions, as the
enriched holon does. It is used within the ARTI architecture (Activity-Resource-Type-Instance,
an update of PROSA) proposed by [17], whose purpose is to clearly exhibit the digital twin
(in a context larger than manufacturing) [6]. For digital twin applications, for example,
the model is often an online simulation and the reconfiguration process is dynamic and
based on predefined behavioural decisions. Yet, when the situation is unforeseen and
unexpected, the system gets stuck, and neither the physical nor the digital twin knows
how to react. The proposal to add the model and KPI layers explained above aims to
address this kind of situation. Like D-MAS, the

enriched holon is used as a previewing tool. In addition, it can be considered as one more
step in control decision support. Moreover, it offers possibilities for hardware decision
support, and can be considered as an additional tool in the reconfiguration process. This
will be discussed further in Sect. 4.

3 Application on a Learning Factory


In this example, the model used is a flow simulation built with the ARENA software.
Simulation is widely used in the manufacturing context as a predictive/evaluation tool
[18], often concerning energy consumption [19], economic evaluation, or support to
decision making [20]. Recently, it has been used in the reconfiguration decision process
as a predictive tool [21, 22] to foresee the consequences of potential decisions. In the present
example, we follow the same idea. It is important to specify that the simulation discussed
here is an offline simulation model, i.e. not synchronized in real time with
the real system. The aim of this application is to show that using a simulation model
for each holon makes it easier to create the system simulation model and to evaluate the
configurations; analysing the system from the point of view of each holon yields
reconfiguration proposals that simultaneously target the global objectives and the local
objectives represented by the KPIs of each holon.

3.1 Case Study Description


The example is a production line that manufactures 4 types of products. Each workstation
(P10, P20 and P30) performs one operation at a time. The goal at the end of the 8 h of
work is to produce 560 products and to minimize the work in progress
(WIP) of the system. The operation times for each product (P) on the workstations
[P10; P20; P30] are the following: P1 [0.6; 1.8; 1.8], P2 [0.9; 1; 0], P3 [0.5; 1.5; 1.5], P4 [1; 1.3; 0].
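From these operation times, the load of each workstation over one unit of each product type can be computed directly. The short Python sketch below is illustrative (not part of the case study tooling) and reproduces the per-workstation totals used later in Sect. 3.3.

```python
# Operation times per product on the workstations, in the order [P10, P20, P30]
op_times = {
    "P1": [0.6, 1.8, 1.8],
    "P2": [0.9, 1.0, 0.0],
    "P3": [0.5, 1.5, 1.5],
    "P4": [1.0, 1.3, 0.0],
}

stations = ["P10", "P20", "P30"]
# Total processing time per workstation for one unit of each product type
totals = {s: round(sum(t[i] for t in op_times.values()), 2)
          for i, s in enumerate(stations)}
bottleneck = max(totals, key=totals.get)
print(totals)      # {'P10': 3.0, 'P20': 5.6, 'P30': 3.3}
print(bottleneck)  # P20, the most heavily loaded workstation
```

These totals (3.0, 5.6 and 3.3) are the values that motivate the duplication of workstation P20 in the first configuration.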

3.2 Holon Identification and Model Building


We have identified the following list of holons: workstation holons (holon P10, holon
P20, holon P30), divergence holon and convergence holon (which act as a guide for
the entry and exit of products on the workstation), conveyor holons (used for product
mobility within the system), and bend holons (used to guide products on the conveyors
by changing direction).
For the example considered, a library has been designed. Each element of this library
is built from ARENA's basic objects and represents one of the identified holons. Table 1 gives
examples of library elements and the corresponding number of ARENA's basic
objects used to build each holon model. The construction of the simulation model of the
learning factory therefore consists of adding and parametrizing elements of the library.
The parameterization establishes the links between the elements. It is at
this stage that the holon models are aggregated to obtain the system model, as described
in Sect. 2. In doing so, the time spent in model building is drastically reduced. Indeed,
a configuration like the one in Fig. 3 is made of 15 holons and is represented by 15
objects from the library. It would have required more than 500 of ARENA's basic objects

to build the same configuration. Hence, by using a model for each holon, the number of
objects needed to build the configuration is divided by about 30. This shows that using
a model to represent each holon saves time, reduces complexity and facilitates the
construction of the system's simulation model.

Table 1. Holon model in the library

Holon             No. of ARENA's basic objects
Convergence       20
Workstation P10   51
Bend              9
Small conveyor    9

For configuration evaluation, we focus on the workstation holons, whose model and
part of the description are given in Fig. 2. We consider for the example that the
initialization data corresponds to an empty model at the beginning of the simulation.
That is, there is no ongoing operation on the workstations, and the execution manager has
no current operation time to set. To test different configurations, the production manager
varies the data related to the operation to be performed by each holon, as well as the
next holon to send the product to, and uses the WIP and availability of each holon to
decide which configuration best fits the production context.

3.3 Alternative Configurations and Discussion


In order to meet the daily production volume, we first propose configuration 1, shown
in Fig. 3. Workstation P20 has the heaviest load (total processing
time 5.6, versus 3 for P10 and 3.3 for P30); we therefore duplicated it into P20a and P20b in
order to reduce the WIP. At the end of the day, the desired 560 products have not been
produced, yet the system is fully loaded. Configuration 1 is therefore not able to meet
the production requirements. The analysis of the holons' simulation models gives the
results presented in Table 2. The simulation shows that the availability of the workstations
is almost zero, whereas the work in progress increases. Holon P20b is the only
holon with considerable availability. Holon P20a performs the worst, followed by

P10 and P30. Note that even if holon P30 has a low availability, its WIP refers only to the
product currently being manufactured. Holons P20a and P10 appear to be the bottlenecks of the
manufacturing line. This analysis is possible because each holon model is associated with
a KPI layer that retrieves the data related to the holon's simulation.

Fig. 2. Description of the workstation holon P20a

Fig. 3. Simulation model of configuration 1

The first action regarding a reconfiguration process is to act on P10 and P20a. The
use of the library with each holon model allows rapidly building and testing alternative
configurations. Table 3 gives an overview of the results of tested configurations. These
configurations were built using the holon models of the library.

Table 2. KPI values for each workstation holon of configuration 1

Conf 1 (production volume: 545)

Holon             P10     P20a    P20b    P30
WIP               7       5       1       1
Availability (%)  0.81    0.46    28.25   0.98

Table 3. Proposed reconfiguration solutions

Conf 2 (production volume: 548)

Holon             P10a    P10b    P20a    P20b    P30
WIP               1       4       5       1       1
Availability (%)  62.09   38.98   0.46    27.65   0.98

Conf 3 (production volume: 542)

Holon             P10     P20a    P20b    P30
WIP               1       1       1       5
Availability (%)  0.99    13.14   12.92   1.36

Conf 2 has a better production volume than Conf 1, but the WIP and the availability of
P20a are not improved. Conf 3 performs better than Conf 1 and Conf 2 in terms of WIP,
yet its drawback is the production volume. It is important to notice that, within Conf 2,
a new workstation was added, whereas Conf 3 uses the same elements as Conf 1. An
additional analysis, such as an economic analysis or a KPI analysis at the system level
(i.e. KPI aggregation), would be necessary for an actual choice of configuration. Nevertheless,
Conf 2 seems to be the one that best fits the production context (considering WIP,
availability and production volume together). Having a model that represents the holon simplifies

the construction of the system simulation model. It also allows testing configurations
and analysing the KPIs of each holon.
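The configuration comparison can be supported by a simple KPI-based ranking. The sketch below is illustrative: the scoring rule (shortfall from the 560-product target plus total WIP, equally weighted) is an assumption of ours, not the paper's method; only the KPI values come from Tables 2 and 3.

```python
# KPI summaries of the tested configurations (values from Tables 2 and 3)
configs = {
    "Conf 1": {"volume": 545, "total_wip": 7 + 5 + 1 + 1},
    "Conf 2": {"volume": 548, "total_wip": 1 + 4 + 5 + 1 + 1},
    "Conf 3": {"volume": 542, "total_wip": 1 + 1 + 1 + 5},
}
TARGET = 560  # daily production goal

def score(kpi, wip_weight=1.0):
    # Illustrative rule: penalize the shortfall from the target and the total WIP
    return (TARGET - kpi["volume"]) + wip_weight * kpi["total_wip"]

best = min(configs, key=lambda name: score(configs[name]))
print(best)  # Conf 2
```

With equal weights this rule also selects Conf 2; changing wip_weight shifts the trade-off between throughput and WIP, which is precisely the kind of system-level analysis (e.g. economic or KPI aggregation) that the text notes would be needed for an actual choice.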

4 Discussion About the Use of the Enriched Holon

4.1 In the Context of Reconfiguration

In the context of reconfiguration, the enriched holon is used to preview potential
configurations or to evaluate the consequences of behaviours before their implementation.
When an unexpected reconfiguration situation arises, the production manager
usually has little time to react. This reaction must of course depend on the state of the
system and the expected objectives. To better anticipate the consequences of the possible
choices, the production manager can use the model of the system as an analysis tool.
The model of the system is built modularly according to the explanation of Sect. 2,
i.e. by aggregation of holon models and KPIs (Fig. 4). Different configurations of the
system can then be tested and evaluated, and the configuration that best fits the new
production context is chosen. The data and parameters resulting from this configuration
have to be shared between several holons. That is, the behaviour of the upper holon,
and the KPIs and parameters corresponding to the lower holons, have to be transmitted to
each of these. We also propose to store these data in databases, in order to reuse them in
rule-based or skill-based decision making. The model of the
system can also be used as a forecasting tool in case of sudden changes. The production
manager can introduce new data in order to test new possible configuration scenarios.
If the results of these tests are conclusive, the parameters of the corresponding holons
are saved in a database and used at the appropriate time.

4.2 In the Context of System Control

System control consists of dynamically deciding the relevant instructions to be given to
a system in order to achieve a given objective [23]. For RMS that use holonic control,
the interest is to satisfy both global objectives (at the system level) and local objectives (at
the level of each holon), especially in the case of disturbances where the dynamicity and
self-reconfiguration of holons cannot be used. The choice of the control logic requires
some knowledge about the behaviour of each holon, individually and within the whole
system. These decisions need to be evaluated before their application, in order to know
their potential consequences.
The enriched holon can be used for this purpose. It enables studying and evaluating
the different behaviours of holons so as to choose the control logic that best fits a production
context. Indeed, when disturbances occur in the system, the production manager uses
the behavioural models to test, analyse and choose the control logic (for example, the
priority order of tasks used by the agenda manager) that will be applied in the new
production context. The behaviour chosen after testing can be stored in a database to
be reused later.
Since each holon has its own control and parameters, it is necessary to define at
the system’s level a way to keep all behaviours under control, i.e. a way to manage

the different parameters when the models of the holons are aggregated. It is therefore
essential to have an aggregation manager for the models, KPIs and control logics. Indeed,
the data of the holons taken individually are relevant for a local analysis; they must be
integrated into the overall system data in order to guarantee consistency in the decision
process from a global view. The holon model gives only a partial overview of the system, yet
decision making at the system level needs to consider supplementary information
that cannot be found in the holons (for example, the priority order assigned to each holon,
the formula to evaluate a KPI at the system level, etc.). We thus propose to consider an
aggregation model within the system's model that, firstly, coordinates the holon models to
guarantee the relevance of both the model and the resulting data, and, secondly, provides
the supplementary information needed to conduct decision making, as shown in Fig. 4.

Fig. 4. The construction of the system model and introduction of the aggregation model

5 Conclusion and Perspectives


For RMS using holonic control, reconfiguration is not always dynamic. The holon
structures proposed so far focus on dynamic reconfiguration issues. The purpose
of the reported research was to propose an enrichment of the holon structure by intro-
ducing a model layer and a KPI layer to overcome the drawback of the non-dynamic
behaviour of the holon. These layers are modularly built to represent each holon. The model layer is
a replication of the physical world; it enables an accurate analysis of the system/holon
capabilities and behaviours, and allows predictions and forecasting.
Moreover, having a model representing each holon reduces the time needed to
build the system model, whereas the KPI layer gives information about the holon/system
performance, and thus drives the decision-making process. The interest of the proposed
enriched holon lies both in reconfiguration and in control aid.
The use of these layers in a reconfiguration process has been presented for
a learning factory. In this example we used an offline simulation to model the
system. However, the model could be implemented as an online simulation, such as for

digital twin applications, where each holon has a digital twin and the system digital twin
is the aggregation of the lower-level digital twins. This idea has been developed by [24].
The construction of the system model and the analysis of the KPIs are based on
modularity. The more holons there are, the more local information about the system is
available, and the greater the need to aggregate it and bring it up to the system level. The
relevance of the model and the KPIs will therefore depend on the chosen aggregation
model. For numerical values (KPI values) there are many different aggregation methods:
arithmetic means, geometric means, quadratic means, harmonic means, linear combinations,
etc. [25]. The aggregation of control logic remains a big issue in holonic systems;
however, a lot of research is being conducted to propose solutions to this problem. Future
work may follow the same direction, i.e. the proposal of an aggregation model for control
logic within the system model.
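For the numerical KPI aggregation methods just mentioned, the short sketch below (illustrative; which mean fits a given KPI is context dependent) applies three of them to the availability values of configuration 1:

```python
import math

# Availability KPIs of the workstation holons of configuration 1, as fractions
availabilities = [0.0081, 0.0046, 0.2825, 0.0098]

def arithmetic_mean(xs):
    return sum(xs) / len(xs)

def geometric_mean(xs):
    return math.prod(xs) ** (1 / len(xs))

def harmonic_mean(xs):
    return len(xs) / sum(1 / x for x in xs)

# The classic inequality harmonic <= geometric <= arithmetic holds for
# positive values, so the chosen method strongly influences the
# system-level KPI obtained by aggregation.
print(arithmetic_mean(availabilities))  # ~0.0763
print(harmonic_mean(availabilities))    # ~0.0090
```

The spread between the two results illustrates the point made above: the relevance of a system-level KPI depends directly on the aggregation model chosen.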

Acknowledgments. This research work is supported by the funding of the PhD program PER-
FORM (Fundamental research and development program resourcing on manufacturing) from the
IRT Jules Verne (https://www.irt-jules-verne.fr/).

References
1. El Maraghy, H.: Flexible and reconfigurable manufacturing systems paradigms. Flex. Serv.
Manuf. J. 17(4), 261–276 (2006). Special issue
2. Van Brussel, H.: Holonic manufacturing systems, the vision matching the problem. In: First
European Conference on Holonic Manufacturing Systems, Hannover (1994)
3. Mehrabi, M.G., Ulsoy, A.G., Koren, Y.: Reconfiguration manufacturing systems: key to future
manufacturing. J. Intell. Manuf. 11, 403–419 (2000)
4. Kruger, K., Basson, A.: Implementation of an Erlang-based resource Holon for a Holonic
manufacturing cell. In: Borangiu, T., Thomas, A., Trentesaux, D. (eds.) Service Orientation
in Holonic and Multi-agent Manufacturing. Studies in Computational Intelligence, pp. 49–58.
Springer, Cham (2015)
5. Koestler, A.: The Ghost in the Machine. Macmillan, New York (1968)
6. Derigent, W., Cardin, O., Trentesaux, D.: Industry 4.0: contributions of holonic manufacturing
control architectures and future challenges. J. Intell. Manuf. (2020)
7. Van Brussel, H., Wyns, J., Valckenaers, P., Bongaerts, L., Peeters, P.: Reference architecture
for holonic manufacturing systems: PROSA. Comput. Ind. 37(3), 255–274 (1998)
8. Leitão, P.: An agile and adaptive holonic architecture for manufacturing control. Ph.D. thesis,
University of Porto (2004). https://www.ipb.pt/~pleitao/pjl-tese.pdf
9. Cardin, O., Castagna, P.: Using online simulation in Holonic manufacturing systems. Eng.
Appl. Artif. Intell. 22(7), 1025–1033 (2009)
10. Leitão, P., Restivo, F.: ADACOR: a holonic architecture for agile and adaptive manufacturing
control. Comput. Ind. 57(2), 121–130 (2006)
11. Kruger, K., Basson, A.: Erlang-based control implementation for a holonic manufacturing
cell. Int. J. Comput. Integr. Manuf. 30(6), 641–652 (2017)
12. Jimenez, J.F., Bekrar, A., Trentesaux, D., Rey, G.Z., Leitão, P.: Governance mechanism in
control architectures for flexible manufacturing systems. IFAC-PapersOnLine 28(3), 1093–
1098 (2015)
13. Buzacott, J.A.: Modelling manufacturing systems. Robot. Comput. Integr. Manuf. 2(1), 25–32
(1985)

14. Brandimarte, P., Villa, A.: Modeling Manufacturing Systems: from aggregate planning to real
time control 53(9) (2013)
15. Lameche, K., Najid, N.M., Castagna, P., Kouiss, K.: Modularity in the design of reconfigurable
manufacturing systems. IFAC-PapersOnLine 50(1), 3511–3516 (2017)
16. Holvoet, T., Valckenaers, P.: Beliefs, desires and intentions through the environment. In:
Proceedings of the International Conference on Autonomous Agents, vol. 2006, pp. 1052–
1054 (2006)
17. Valckenaers, P.: ARTI reference architecture - PROSA revisited. In: Borangiu, T., et al. (eds.)
Service Orientation in Holonic and Multi-Agent Manufacturing. Studies in Computational
Intelligence, p. 19. Springer, Cham (2019)
18. Castagna, P., Mebarki, N., Gauduel, R.: Apport de la simulation comme outil d’aide au pilotage
des systemes de production - exemples d'application. In: Proceedings of MOSIM
2001, Troyes, France, 25–27 April 2001, pp. 241–247. https://www1.utt.fr/mosim01/pdf/
ARTICLE-091.pdf
19. Kouki, M., Cardin, O., Castagna, P., Cornardeau, C.: Input data management for energy related
discrete event simulation modelling. J. Clean. Prod. 141, 194–207 (2017)
20. Maier-Speredelozzi, V., Hu, S.J.: Selecting manufacturing system configurations based on
performance using AHP. Technical Paper – Society of Manufacturing Engineering MS, no.
MS02-179, pp. 1–8 (2002)
21. Cardin, O., Castagna, P.: Proactive production activity control by online simulation. Int. J.
Simul. Process Model. 6(3), 177–186 (2011)
22. Ateekh-Ur-Rehman, L.-U.-R.: Manufacturing configuration selection using multicriteria
decision tool. Int. J. Adv. Manuf. Technol. 65(5–8), 625–639 (2013)
23. Trentesaux, D.: Pilotage hétérarchique des systèmes de production, Habilitation thesis, Uni-
versité de Valenciennes et du Hainaut-Cambrésis (2002). https://tel.archives-ouvertes.fr/tel-
00536486/en/
24. Redelinghuys, A., Basson, A., Kruger, K.: A six-layer digital twin architecture for a man-
ufacturing cell, service orientation in holonic and multi-agent manufacturing. In: Borangiu,
T., et al. (eds.) Studies in Computational Intelligence, pp. 273–284. Springer, Cham, January
2019
25. Bouyssou, D., Dubois, D., Prade, H., Pirlot, M.: Decision Making Process: Concepts and
Methods. Wiley, New York (2013)
Holonic Architecture for a Table Grape
Production Management System

Johan J. Rossouw, Karel Kruger(B) , and Anton H. Basson

Department of Mechanical and Mechatronic Engineering, Stellenbosch University, Stellenbosch


7600, South Africa
{jjrossouw,kkruger,ahb}@sun.ac.za

Abstract. The management of table grape production is complex, since it must


adhere to strict production requirements, facilitate numerous decisions by various
experts, and integrate many human workers. This paper discusses the challenges
experienced with a current production management system, as well as the potential
benefits that could be gained from implementing an improved system. The paper
suggests the use of a holonic approach to develop such a table grape production
management system – using the ARTI reference architecture to implement the
holonic system. The implementation of the ARTI reference architecture is also
discussed to show that the architecture can be a feasible solution.

Keywords: Holonic manufacturing systems · Production management system ·


ARTI reference architecture

1 Introduction
The fourth industrial revolution has brought forth the evolution of cyber-physical systems,
made possible through developments in IT infrastructure. These developments
include the increased use of the internet to wirelessly connect resources, information,
objects and people with each other, creating the Internet of Things (IoT). The IoT has
been adopted in the manufacturing industry to improve efficiency in order to stay
competitive [1].
The agricultural industry is traditionally very labour intensive, with significant dependence placed on human workers to perform the production tasks. With the fourth industrial revolution and the development of cyber-physical systems and the IoT, the world has
become more connected, and product specifications have become more precise as
new technologies enable the production of customized products. The agricultural
industry must therefore follow the manufacturing industry and adopt new technologies
to improve its efficiency and product quality to stay competitive.
This paper considers the South African table grape industry. South Africa is a devel-
oping country where manual labour costs are still comparatively low and companies
are motivated to create jobs. Therefore, it is often more beneficial for companies in
South Africa to rather employ workers to accomplish tasks than investing in expensive
technology to automate processes.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021


T. Borangiu et al. (Eds.): SOHOMA 2020, SCI 952, pp. 181–192, 2021.
https://doi.org/10.1007/978-3-030-69373-2_12
182 J. J. Rossouw

South African farmers are not only faced with the challenges of keeping up with
the rapidly changing markets, but also with managing all the production management
aspects, such as the workers, tools and production processes. This creates a demand for
a production management system that improves the efficiency of the production process
and the quality of the final product [2].
Table grape production management consists of the following aspects that define it
as an open-air engineering process [3]:

• Management of mobile equipment such as vehicles.


• Interaction with non-flat surfaces in 3D space.
• Removal and deposit of materials, such as grapes being harvested and packaged into
boxes.
• Logistics to transport grapes from vineyards, and packing materials from a local
storage facility, to the packhouses on different farms.

This means that a table grape production management system must not only efficiently adapt to frequent and unexpected production order changes, but also to changes
arising from varying worker performance, cooperation between workers, cooperation
between successive system tasks, changing environmental conditions and shifting
market demand. The system needs to exhibit agility in response to changes and
robustness in its handling of disturbances [4].
This paper presents a holonic architecture for a table grape production management
system, which has the potential to address the above-mentioned challenges. The holonic
systems approach originates from the theories of Arthur Koestler [5]. The word holon
is constructed from the Greek word ‘holos’ meaning whole, and the suffix ‘on’ meaning
a part. As the word itself suggests, holons are entities that are simultaneously part
of a larger entity and a whole consisting of numerous autonomous and cooperative
entities.
The holonic systems approach has subsequently been used to develop architectures
for the modelling and control of complex systems – most notably in the field of holonic
manufacturing systems. However, the recent development of the Activity-Resource-
Type-Instance (ARTI) holonic reference architecture [3] aims to support applications
outside the manufacturing domain as well. This paper thus proposes an ARTI reference
architecture implementation for a holonic production management system for the table
grape industry.
The paper discusses the production management of the table grape industry in Sect. 2
to give an overview of the challenges and how the table grape industry can benefit from
the use of a table grape production management system. The paper then provides an
overview of holonic systems and the ARTI reference architecture in Sect. 3. In Sect. 4,
the paper describes how the ARTI architecture can be implemented to create a production
management system for the table grape industry and discusses the potential benefits of
such a system. Finally, the paper finishes with a conclusion and a discussion of the future
work in Sect. 5.

2 Table Grape Production Management


Table grape production management begins with the receipt of a production order.
The production order is created through negotiations between the production manager
and the export company, and contains all specifications regarding the grapes,
the packing process and the packing materials to be used.
The company's grapes are divided into sections called farms, and the grapes on each
farm are divided into vineyard blocks. Each farm is assigned a farm manager who manages
that farm's vineyard blocks. This improves the management of the grapes, since each
farm manager is responsible for only a small section of the total grapes [6].
The tasks of a table grape production management process can be divided into two
levels, according to the decisions that are made throughout the production process. The
first set of decisions is made by the production manager and the second set is made
by the farm manager (see Fig. 1). The production manager bases his decisions
on the production order specifications and is responsible for deciding which grapes,
packhouses and packing materials to assign to the production order. The farm manager
receives the harvesting, grape quality, packaging and transportation information
from the production manager and then decides which workers to assign to the
harvesting of the grapes, quality control and grape packaging, and which vehicles to
assign to the transportation of grapes and packing materials [7].

Fig. 1. Table grape production management decision levels

The production order contains specifications for the grapes, the packhouse and the
packing material. This information is used to support three decisions to be made by
the production manager – labelled as numbers 2–4 in Fig. 1. These decisions, in turn,
initiate the set of decisions to be made by the farm manager (numbers 5–8 in Fig. 1).
The decision regarding grape harvesting (5) is executed according to the result of the
grape selection decision (2). Decisions regarding quality control (6) and grape packing
(7) are dependent on the results of the packhouse selection (3). Decisions regarding
transportation (8) consider the transportation of both grapes and packing materials and,
as such, are dependent on the outcomes of decisions regarding the selection of grapes
(2) and packhouse (3).
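
The decision dependencies described above (numbers 2–8 in Fig. 1) can be sketched as a small dependency table; the Python names and data structure below are illustrative assumptions, not part of the paper's system:

```python
# Decision numbers and prerequisites follow Fig. 1; the dict layout is assumed.
DECISION_DEPENDENCIES = {
    "grape_selection":     [],                       # 2, production manager
    "packhouse_selection": [],                       # 3, production manager
    "material_selection":  [],                       # 4, production manager
    "grape_harvesting":    ["grape_selection"],      # 5, farm manager
    "quality_control":     ["packhouse_selection"],  # 6, farm manager
    "grape_packing":       ["packhouse_selection"],  # 7, farm manager
    "transportation":      ["grape_selection",       # 8, farm manager
                            "packhouse_selection"],
}

def ready_decisions(completed):
    """Return the decisions whose prerequisites have all been completed."""
    return [d for d, deps in DECISION_DEPENDENCIES.items()
            if d not in completed and all(p in completed for p in deps)]
```

For example, once decisions 2 and 3 are completed, the farm manager's decisions 5–8 all become available.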
The grape selection is done according to the quality and quantity of the available
grapes that can satisfy the production order specifications. Each block of vineyard will
have a unique quality and quantity of grapes, which determines its suitability for a
specific production order. The quality of grapes is determined according to the colour,
sugar levels, blemishes and size of the grape berries.
Each farm is equipped with its own packhouse. Usually, when grapes are assigned
to a production order, the packhouse on the same farm as the assigned grapes will be
assigned to the production order. This reduces the distance the grapes are transported
from the vineyard to the packhouse where the quality control and packaging are done.
However, each packhouse is different in terms of the farm on which the packhouse is
located, the capacity of grapes the packhouse can handle, and the food safety and hygiene
accreditation of the packhouse (as required by certain markets). This may result in the
selected packhouse being on a different farm due to the accreditation requirements.
It may also happen that multiple packhouses are assigned to one production order to
increase throughput when additional packhouses are available.
The packing material selection is done according to the specifications of the production order. The production order specifies the inner packaging containing the grapes,
the carton boxes in which the grapes are packaged, the labels placed on the packaging
to identify the grapes, and the sulphur dioxide sheets used to prevent fungal growth
during storage and transportation. The packing materials assigned to the production
order are chosen from the available packing materials in the local packing material
storage facility.
The harvesting of the grapes is done by teams of workers with the required skills.
The harvesting teams are assigned by the farm manager to harvest the grapes assigned by
the production manager. The harvesting teams harvest the grapes according to the
instructions they receive from the farm manager, and each team has a supervisor who
ensures that the harvesting is done correctly.
The transportation consists of transporting grapes from the vineyards, and packing
materials from the packing material store, to the packhouses. The transportation task
depends on the quantities of packing materials and grapes required by the production
order. The transportation fleet, from which vehicles can be assigned to the transportation
task, consists of tractors with trailers, trucks and utility vehicles. The farm manager
assigns a vehicle to perform the transportation task. Although tractors are ideally used
for transporting grapes, trucks for transporting packing materials, and utility vehicles
for transporting workers, the farm manager can assign any vehicle to any activity,
depending on the availability of vehicles and the priority of a transportation activity.
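
The fallback rule for vehicle assignment can be sketched as follows; the vehicle type names and data layout are illustrative assumptions, not the system's actual design:

```python
# Ideal vehicle type per transported load, per the description above (assumed names).
IDEAL_TYPE = {"grapes": "tractor", "packing_materials": "truck", "workers": "utility"}

def assign_vehicle(load, fleet):
    """Return an available vehicle for the load, preferring the ideal type
    but falling back to any available vehicle; None if the fleet is busy."""
    available = [v for v in fleet if v["available"]]
    ideal = [v for v in available if v["type"] == IDEAL_TYPE[load]]
    return (ideal or available or [None])[0]
```

If the ideal vehicle is busy, the first available vehicle of any type is assigned, mirroring the farm manager's discretion described above.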
The quality control and the packaging of the grapes are done within the assigned
packhouse. The quality control and grape packing are done by quality control and grape
packing stations. These stations employ one to four workers. The farm manager assigns
workers with the required skills to the stations. The farm manager also gives instructions
to the stations regarding the quality control and the grape packing. The workers at
the quality control stations will inspect the quality of the grapes as they arrive in the
packhouse and adjust the grape bunches if required (by removing grape berries that are
damaged, too small or that did not colour enough).
The grapes that satisfy the production order’s quality specification are then passed
on to the packing stations. The workers at the packing stations place the grapes in the
inner packaging, before placing them in the carton boxes and placing the sulphur dioxide
sheets on top. The carton boxes are then closed and the labels specified in the production
order are placed on the boxes. The packaged grape boxes are then placed on pallets to
load them onto trucks to transport the product off the farm [8].

3 Holonic Architecture

3.1 Holonic Systems

The holonic systems approach aims to model complex systems as multiple autonomous
and cooperative entities. These entities, called holons, are autonomous in the sense that
they can create their own plans and/or control the execution thereof; and cooperative, in
the sense that they can develop mutually acceptable plans and/or strategies and execute
them. Holons can represent a physical or logical activity that transforms, transports or
stores information and physical objects [3].
Holons can be grouped (often dynamically) into holarchies, which could exhibit
hierarchical, heterarchical or hybrid structures. In holarchies, holons can work together
in a cooperative manner to achieve a complex system goal by combining their skills and
knowledge [9]. The structure of the holonic system can also exhibit fractal characteristics,
by which holarchies can be aggregated to form larger holons with their own identity.
The holons within a holarchy can belong to multiple holarchies, depending on their
functionality and where in the system that functionality is required. These holarchies can be designed upfront or they can be dynamically created
required. These holarchies can be designed upfront or they can be dynamically created
by interactions with other holons within the system, according to the application’s needs
[10].

3.2 ARTI Reference Architecture

The ARTI holonic reference architecture uses the holonic systems approach to simplify
the modelling and control of complex systems. It is the result of a recent reconsideration
of its predecessor, the Product-Resource-Order-Staff Architecture (PROSA) [10], which
was developed for holonic control implementations for manufacturing systems. ARTI
addresses the shortcomings of PROSA, and proposes more generic terminology, to offer
improved support for applications outside the manufacturing domain.
As explained in Sect. 3.1, the holonic systems approach dictates that a complex
system must be broken down into a collection of holons. According to ARTI, these
holons should be classified in three dimensions (as depicted in Fig. 2):

• Resource or Activity
• Type or Instance
• Intelligent Being or Intelligent Agent.

Fig. 2. ARTI-Cube: ARTI architecture components and their interaction [11]

ARTI prescribes that the holons in the system can either be Resources or Activities
– holons can either perform some service or coordinate the performance of services
by other holons. Furthermore, a holon can be classified as a Type or an Instance. Type
holons contain the expert knowledge and functionality to support the performance of
system tasks, while Instance holons are responsible for actually performing system
tasks. Finally, holons can either be Intelligent Beings or Intelligent Agents. Intelligent
Being holons can reflect and affect the state of the real or virtual system, and are thus
capable of performing system tasks. Intelligent Agents encapsulate the decision-making
functionality required for the effective performance of system tasks. The classification
of holons within ARTI is further explained in Sect. 4, in the context of a table grape
production management system.
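
The three classification dimensions can be sketched as a simple data model; the class and attribute names below are assumptions for illustration, not part of the ARTI specification:

```python
from dataclasses import dataclass
from enum import Enum

class Role(Enum):
    RESOURCE = "resource"   # performs a service
    ACTIVITY = "activity"   # coordinates services performed by other holons

class Kind(Enum):
    TYPE = "type"           # expert knowledge supporting task performance
    INSTANCE = "instance"   # mirrors one real-world counterpart

class Nature(Enum):
    BEING = "intelligent_being"  # reflects/affects system state, executes tasks
    AGENT = "intelligent_agent"  # encapsulates decision-making functionality

@dataclass(frozen=True)
class Holon:
    name: str
    role: Role
    kind: Kind
    nature: Nature

# Example: one real-world transportation task mirrored as an activity instance
# that executes behaviour (an intelligent being).
truck_trip = Holon("transport_grapes_to_packhouse",
                   Role.ACTIVITY, Kind.INSTANCE, Nature.BEING)
```

Each holon in the system occupies exactly one cell of this 2×2×2 classification, i.e. one ARTI-cube component.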
Since the ARTI architecture maps a complex system to several holons of specific
type and functionality, each holon becomes responsible for managing and monitoring its
own small environment. Having each holon monitoring and managing only a small part
of the system reduces the overall complexity and increases the stability of the system.
Furthermore, the architecture allows for easy modification by only having to add or
remove single holons instead of changing entire system sections and their interactions
with the rest of the system.
For the ARTI reference architecture to sufficiently simplify a complex system while
maintaining flexibility and modifiability, the architecture requires that each system component be mapped to a single ARTI-cube. Since many systems are still strongly
dependent on humans, the ARTI architecture makes provision for humans performing
system tasks. However, humans are inherently equipped with abilities that allow them
to span multiple ARTI-cubes and can therefore not be confined to a single cube.
Therefore, humans are represented as activity performers performing tasks spanning
multiple cubes [3].
4 ARTI Architecture for Table Grape Production Management


This section expands on the ARTI reference architecture to illustrate the various classi-
fications of the cubes, discussed in Sect. 3.2, within the application context. This section
also presents the potential benefits for the implementation of an ARTI based production
management system for the table grape industry.

4.1 ARTI Reference Architecture Implementation


To implement the ARTI reference architecture, the real world must be mirrored in
software as accurately as possible to create the world-of-interest (WOI). The
functionalities of the real world that must be reflected in the WOI are categorised,
according to [3], into:

• Resources reflecting the task performers.


• The activities reflecting the executable tasks to achieve a desired outcome.
• The mental states and commitments reflecting all the mental states in which the agents
might be and the dedication of the agents to their tasks.
• The policies and decision-making mechanisms reflecting all the rules, policies, laws
and human reasoning of decision-making.

The WOI should reflect the real world as accurately as possible, mirroring everything
in the WOI with a single real-world counterpart, and should update whenever
reality changes.
The activities identified in the table grape production management are the selection
of the grapes, selection of the packhouse, selection of the packing materials, grape har-
vesting, quality control, grape packing and transportation. These activities all coordinate
the performance of service-providing holons. The resources identified in the production
management WOI are the vineyards, packhouses, packing material storage facility, har-
vesting teams, quality control stations, packing stations and transportation fleet, as well as
the production manager, farm manager and packing material store team. These resources
are all service providers and are represented by service-providing resource holons. All
these identified resources and activities inherent mental states, commitments, policies
and decision-making mechanisms.
To implement the ARTI architecture, the table grape production management sys-
tem’s resources and activities must be mapped to the ARTI-cubes (Fig. 2). To illustrate
how a system can be mapped according to the ARTI-cubes, the transportation activity
and transportation vehicle resource will be used. The transportation activity entails the
assignment of a transportation vehicle resource to transport grapes, packing materials
or workers, and the transportation vehicle resource manages the transportation service.
Transportation Activity Holon. The different types of transportation activities that can
be executed by the system are mapped to activity type holons. When a transportation
activity is required, the real world transportation activity is mirrored in the system by
creating an activity instance holon. These types and instances are further divided into
intelligent beings containing the execution functionality and intelligent agents containing
the decision-making functionality, as summarised in Fig. 3.

The execution functionality of the transportation activity intelligent beings consists


of specifying the vehicle resource assignment and message response behaviours, as
well as defining how the collected activity-specific information should be processed and
stored.
The activity instances mirror the real world activities. However, the implementation
of the activity instances is generic and can therefore be used for all activity instances.
This requires that the activity-specific execution functionality be encapsulated in the
activity type, but executed by the activity instance. This is done by implementing the
Next Execute Update (NEU) protocol [12] in the activity instance, giving the activity
instance the functionality to obtain the behaviour to execute, to execute the specified
behaviour, and to obtain the next behaviour depending on the outcome of the previous
behaviour.
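
The NEU cycle described above can be sketched as follows: a generic instance repeatedly obtains a behaviour from its type holon, executes it, and selects the next behaviour from the outcome. All class, behaviour and transition names are assumptions for illustration, not the protocol's actual interface.

```python
class ActivityType:
    """Encapsulates the activity-specific behaviours (the expert knowledge)."""
    def __init__(self, behaviours, transitions):
        self.behaviours = behaviours    # behaviour name -> callable
        self.transitions = transitions  # (name, outcome) -> next name or None

    def behaviour(self, name):
        return self.behaviours[name]

    def next(self, name, outcome):
        return self.transitions.get((name, outcome))

class ActivityInstance:
    """Generic instance: knows how to run the NEU cycle, not what it does."""
    def __init__(self, activity_type, start):
        self.type, self.current = activity_type, start

    def run(self):
        trace = []
        while self.current is not None:
            outcome = self.type.behaviour(self.current)()         # Execute
            trace.append((self.current, outcome))                 # Update
            self.current = self.type.next(self.current, outcome)  # Next
        return trace

# Hypothetical two-step transportation activity:
transport = ActivityType(
    {"request_vehicle": lambda: "granted", "load": lambda: "done"},
    {("request_vehicle", "granted"): "load", ("load", "done"): None})
trace = ActivityInstance(transport, "request_vehicle").run()
# trace == [('request_vehicle', 'granted'), ('load', 'done')]
```

The same generic instance class could run any activity type, which is the point of the protocol: activity-specific behaviour stays in the type, execution stays in the instance.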

Fig. 3. Transportation activity mapping to ARTI components

The decision-making functionality of the transportation activity consists of deciding


which resources are required to perform the activity, what information needs to be
included in the request messages sent to resources, and what behaviour the intelligent
being instance should execute next depending on the outcome of the previous behaviour.
The decision-making functionality of the intelligent agents is divided into types
and instances. The intelligent agent instances link the intelligent being instances to the
intelligent agent types containing the activity specific decision-making functionalities.
This allows all activity-specific decisions to be made by the intelligent agent types.
The separation of execution and decision-making allows system activities to be easily
modified by updating the decision-making aspects without changing the execution
aspects, or updating the execution aspects without changing the decision-making aspects.
Since the implementation of the intelligent being instances is generic, the functionalities concerning how an intelligent being activity instance interacts with its intelligent
being type and with the intelligent agents are generic for all activities. This allows
the activity specific execution or decision-making functionality of the system to be
updated without changing the functionality concerning the interaction between ARTI
components, or changing the functionality concerning interaction between ARTI compo-
nents of the system without changing the activity-specific execution or decision-making
functionality.

Transportation Vehicle Resource Holon. The different types of vehicle resources that
can be used to perform transportation activities are mapped to resource type holons.
When a vehicle resource is active and available for the system to use, the real world
vehicle resource is mirrored in the system by creating a resource instance holon of the
specific vehicle type. These types and instances are further divided into intelligent beings
containing the execution functionality of the resources and intelligent agents containing
the decision-making functionality, as summarised in Fig. 4.

Fig. 4. Vehicle resource ARTI cube mapping

The execution functionality of the vehicle resource intelligent beings consists of
specifying the message response behaviour and defining the functionality concerning the
vehicle resource's schedule, as well as how the collected resource-specific information
should be processed and stored.

The resource instances are used to mirror the real world vehicle resources. However,
the implementation of the resource instances is generic and can therefore be used for mul-
tiple vehicle resources. This requires that the resource instances contain the functionality
concerning interaction between ARTI components. This allows the resource instances to
execute the resource-specific functionality that is encapsulated in the resource types. This
is done by the implementation of the NEU protocol in the resource instances to enable
them to obtain the behaviour to execute, to execute the specified behaviour and to obtain
the next behaviour to execute depending on the outcome of the previous behaviour.
The decision-making functionality of a vehicle resource consists of making decisions
about the vehicle resource’s schedule by deciding when the resource has completed its service
and what service to schedule next, what information to include in the response messages,
and what behaviour should be executed next depending on the outcome of the previous
behaviour.
The decision-making functionality of the intelligent agents is divided into types and
instances. The intelligent agent instances link the intelligent being instances to intelligent
agent types containing the resource specific decision-making functionalities. This allows
all the resource specific decision-making to be done in the intelligent agent types.
Similar to the activities, as previously stated, the separation of resource execution
and decision-making allows the system’s resources to be easily modified by separately
updating the execution and decision-making aspects. The implementation of the intelligent being instances is generic, similar to that of the activities, which allows the
resource-specific execution functionalities to be updated separately from the functionalities
concerning the interaction between ARTI components.

4.2 Expected Value of ARTI Reference Architecture Implementation


The intelligent being instances of both the activities and resources are reality-reflecting
components and, as such, maintain digital twins of the operations and resources of the
table grape production. Each resource and activity instance thus contains near real-time
information and, since the holons are cooperative, it is possible to retrieve the real-time
information on an entire production order. This mechanism provides the production
manager with greater access to production information and thus more insight into the
execution of production orders, which will support informed decision making.
The differentiation between intelligent beings and intelligent agents represents a sep-
aration of concerns regarding decision-making and execution functionality. In the table
grape production management system, this allows the components reflecting the produc-
tion order and the components reflecting the system tasks to function separately from
the decision-making components. Decision-making strategies and tools, embedded in
intelligent agents, can thus be adapted without affecting the reality-reflecting intelligent
beings – allowing for enhanced system flexibility.
The architecture employs intelligent beings to supply intelligent agents with updated
information to support effective decision-making. Based on the information received
from the intelligent beings and input from external holons, the intelligent agents can
specify the behaviour to be executed by the intelligent beings and also the sequence
and timing of execution. As such, the intelligent beings also provide the mechanism
for affecting reality in accordance with the decisions of the intelligent agents. This
relationship between intelligent beings and agents provides an agile mechanism for the
development and execution of optimized production plans.
Apart from the information on their current state, the activity and resource holons,
as autonomous entities, also manage and maintain their own schedules for future assign-
ments and commitments. This information of future states and behaviours further
supports the production and farm managers in decisions regarding production plans.

5 Conclusion and Future Work


The fourth industrial revolution brought forth the development of cyber-physical systems
and IoT, making the world more connected. The agricultural industry must adopt new
technologies to improve their efficiency and product quality to meet consumer demands.
This paper proposed the ARTI reference architecture to implement a holonic production
management system to improve table grape production management.
Firstly, the paper described table grape production management by dividing the
activities into two levels of decision-making and providing a short description of each
activity. The paper then described holonic systems, which aim to model complex sys-
tems as multiple autonomous and cooperative entities. A description of the ARTI refer-
ence architecture is also provided as a means to implement the holonic architecture by
dividing a complex system into activities or resources; instances or types; and intelli-
gent beings or intelligent agents. Finally, the paper discussed the implementation of an
ARTI-based production management system, using the transportation activity as imple-
mentation example by mapping all the components of the transportation activity and
vehicle resources to ARTI-cubes.
Basing the design of the production management system on the ARTI architecture
is expected to add value through two mechanisms: the creation of digital twins by
means of the intelligent beings, and the separation of reality-reflecting and decision-
making components. The digital twins are expected to provide the system with updated
information on the current and future states and behaviour of the physical entities, which
will support informed decision making. The separated decision-making components can
be adapted without affecting the digital twin components, which will enable flexible and
agile decision-making, planning and optimization.
Future work will focus on the implementation of the ARTI based table grape pro-
duction management system. The system should mirror the resources and activities of
a table grape farm and be able to handle all the unexpected changes that might occur in
the production management. The developed system must then be implemented in a
real-world table grape production management environment to verify that it is flexible,
adaptable to production management changes, and able to improve the production
management of a table grape farm.

References
1. Kagermann, H., Wahlster, W., Helbig, J.: Recommendations for implementing the strategic
initiative INDUSTRIE 4.0: Final report of the Industrie 4.0 working group. National Academy
of Science and Engineering (2013)
2. Sihlobo, W.: SA horticulture is blooming, but there’s still room for growth. Daily Maverick
(2019)
3. Valckenaers, P., Van Brussel, H.: Design for the Unexpected, 1st edn. Elsevier, Oxford (2015)
4. Ali, O., Valckenaers, P., Van Belle, J., Saint Germain, B., Verstraete, P., Van Oudheusden,
D.: Towards online planning for open-air engineering processes. Comput. Ind. 64, 242–251
(2012)
5. Koestler, A.: The Ghost in the Machine, 1st edn. Hutchinson, London (1967)
6. Kritzinger, D.: Modulêre kursus in tafel- en droogdruifverbouing (Modular course on table
and dried grape cultivation). Agrimotion, Somerset West, South Africa (2020)
7. South African Department of Agriculture, Forestry and Fisheries: Production guideline –
grapes (2012). https://www.nda.agric.za/docs/Brochures/grapesprod.pdf. Accessed 23 Feb
2020
8. South, S.: Star South Packing Guide. Star South, Wellington (2019)
9. Leitão, P.: An agile and adaptive holonic architecture for manufacturing, University of Porto
(2004)
10. Van Brussel, H., Wyns, J., Valckenaers, P., Bongaerts, L., Peeters, P.: Reference architecture
for holonic manufacturing systems: PROSA. Comput. Ind. 37, 255–276 (1998)
11. Valckenaers, P.: ARTI reference architecture - PROSA revisited, service orientation in holonic
and multi-agent manufacturing. In: Borangiu, T., et al. (eds.) Proceedings of SOHOMA 2018.
Studies in Computational Intelligence, vol. 853, p. 19. Springer, Cham (2018)
12. Valckenaers, P., De Mazière, P.A.: Interacting holons in evolvable execution systems: the
NEU protocol. Ind. Appl. Holonic Multi-Agent Syst. 9266, 120–129 (2015)
Learning Distributed Control for Job
Shops - A Comparative Simulation Study

Oliver Antons1(B) and Julia C. Arlinghaus2


1
Chair of Management Science, RWTH Aachen University,
Kackertstraße 7, 52072 Aachen, Germany
antons@oms.rwth-aachen.de
2
Otto-von-Guericke University Magdeburg,
Universitätsplatz 2, 31904 Magdeburg, Germany
julia.arlinghaus@ovgu.de

Abstract. This paper studies the potentials of learning and benefits


of local data processing in a distributed control setting. We deploy a
multi-agent system in the context of a discrete-event simulation to model
distributed control for a job shop manufacturing system with variable
processing times and multi-stage production processes. Within this sim-
ulation, we compare queue length estimation as dispatching rule against
a variation with learning capability, which processes additional historic
data on a machine agent level, showing the potentials of learning and
coordination for distributed control in PPC.
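
As a rough illustration of the two rules compared in this study, the sketch below contrasts plain queue-length estimation with a learning variant that weighs queue lengths by historically observed processing times on each machine. The names, data shapes and specific weighting are assumptions, not the paper's implementation:

```python
def by_queue_length(machines):
    """Dispatch to the machine with the shortest current queue."""
    return min(machines, key=lambda m: len(m["queue"]))

def by_learned_workload(machines):
    """Dispatch to the machine with the lowest estimated waiting time,
    using the mean of processing times previously observed on it."""
    def estimate(m):
        history = m["history"] or [1.0]  # fall back before any data exists
        mean_time = sum(history) / len(history)
        return len(m["queue"]) * mean_time
    return min(machines, key=estimate)
```

A fast machine with a longer queue can thus be preferred by the learning rule, which is the kind of benefit from local historic data that the simulation study examines.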

1 Introduction

The concept of distributed control in manufacturing has been a focus of research
in recent years due to its potential to increase the capability of a manufacturing
network to cope with problems that are particularly hard for centralized control
approaches, such as machine failures and sudden demand fluctuations, and to scale
with the ever-increasing size of manufacturing networks and the abundance of
information that is potentially beneficial if considered in the scheduling process.
The foundation of distributed control is the idea of deploying a multitude of
autonomous entities that coordinate with each other to control the manufacturing
process, rather than having one central control unit. Commonly, these autonomous
entities are linked to manufacturing machines or products, which enables distributed
control to scale easily with a growing manufacturing network by deploying additional
autonomous entities as new machines are added. Furthermore, the ability to process
information locally on the machine level makes it possible to consider both global
and local information, such as generated sensor data, in the decision process, to a
degree that would overwhelm a centralized control approach. Since every autonomous
decision entity only has to solve a comparatively small decision problem, near
real-time decisions are feasible, and consequently these entities can quickly react
to sudden changes such as machine failures or rapid demand shifts. This vision
c The Author(s), under exclusive license to Springer Nature Switzerland AG 2021
T. Borangiu et al. (Eds.): SOHOMA 2020, SCI 952, pp. 193–202, 2021.
https://doi.org/10.1007/978-3-030-69373-2_13
194 O. Antons and J. C. Arlinghaus

of a manufacturing network controlled by autonomous entities goes back to the


1980s (Aström 1985; Koinoda et al. 1984), with two distinct requirements.
On the one hand, we have the necessity of computational power, sensors and
network interfaces on the machine and even the product level in order to have the
capability to gather and process information locally as well as to communicate
within the manufacturing network. The advances in microcontroller and sensor development in the past decades have made it feasible to equip every entity within the manufacturing network with these capabilities and led to the deployment of cyber-physical systems (CPS) that easily satisfy these requirements.
On the other hand, such a distributed control approach requires a high level
of coordination and conflict resolution in order to counteract the myopic and
selfish behaviour that plagues autonomous entities which primarily rely on local
information to make their decisions according to their respective objectives.
While this concept of distributed control is highly anticipated by practitioners, due to its many advantages in addressing today's manufacturing challenges and its ability to consider enormous amounts of information, many highly relevant questions remain open, especially concerning the required coordination and communication setup. How do we need to design coordination in a manufacturing network to maximize manufacturing efficiency in a distributed control approach? Which additional information improves the efficiency of the overall manufacturing network? In the following sections we briefly review existing concepts for distributed control and give an overview of the state of the art in Sect. 2. Following this, we introduce our model in Sect. 3 and subsequently present our findings in Sect. 4. Lastly, we conclude with a brief overview and outlook in Sect. 5.

2 Literature Overview

The concept of distributed control has been studied in a variety of research disciplines. Naturally, Production Planning and Control (PPC) is a research stream which puts an emphasis on the idea of distributed control for manufacturing. Moreover, as the implementation of such distributed control relies on large networks of computers, it is also studied within computer and electrical engineering. As the core of the underlying assignment problem is an optimization problem, the concept of distributed control is also studied within a variety of fields of applied mathematics, such as operations research, control theory and game theory, as well as in operations management and management science. Within PPC, researchers have shown the potential of distributed control for both logistics and production control, with a focus on its application to shop floor manufacturing settings (Bongaerts et al. 2000; Philipp et al. 2006; Scholz-Reiter et al. 2006; Meissner et al. 2017; Hussain et al. 2019). Such a setup allows a multitude of production paths for a product, thus providing alternative paths in case of disruptions such as machine malfunctions. However, in a system with a consequently great number of entities exhibiting autonomy, the self-serving nature of these entities is likely to result in a local
Learning Distributed Control for Job Shops 195

optimum, which is potentially far inferior to the globally optimal solution provided by a centralized approach. This effect is referred to as the aforementioned myopic behaviour, a consequence of the local information horizon and scope of the autonomous entities (Trentesaux 2009; Zambrano Rey et al. 2014). Thus, providing some form of centralized coordination can improve the performance of any fully decentralized decision-making (Antons et al. 2020b). Adding such centralized coordination leads to a hybrid approach, which features characteristics of both centralized and decentralized control. Centralized coordination reduces the degree of autonomy of the entities within the manufacturing network, raising the question of an optimal degree of autonomy. For a subset of PPC, the optimal degree has been determined (Blunck et al. 2018); overall, however, a general understanding is still lacking. Despite the myopic behaviour, PPC research also associates a multitude of advantages with distributed control, such as easy expansion (Monostori et al. 2015). Moreover, the ability to cope with machine failure through quick information propagation and local rescheduling has been recognized (Duffie 1990). The advent of CPS catalyzed the study of distributed control, providing the necessary local computational power, data collection and processing to address the current challenges of manufacturing (Bertelsmeier and Trächtler 2015; Wang et al. 2015; Jones et al. 2018; Romero et al. 2018). The potential of prognostics, predictive maintenance and machine learning in the context of distributed control has also been noted (Beregi et al. 2018; Salvador Palau et al. 2019), as has the potential for optimizing the coordination between humans in smart factories (Jones et al. 2018; Weichart et al. 2019).
The general lack of insight into the optimal design of coordination in distributed control has also motivated research in artificial intelligence and engineering. Across all these disciplines, the deployment of multi-agent systems (MAS) is the commonly preferred tool to study the concept of distributed control, as it allows all relevant entities in the manufacturing network to be represented as agents within a simulation (Caridi and Cavalieri 2007; Morariu et al. 2014; Antons and Arlinghaus 2020a).

3 Model

In order to study the concept of distributed control, we deploy a discrete-event simulation (DES) based on a MAS. The basis for this simulation is a job shop setting, due to its aforementioned aptness for distributed control, aided by the multitude of available production paths. Within our simulation, every machine and every production order is represented as an autonomous agent capable of exhibiting a degree of decision autonomy.

Fig. 1. Overview on job shop MAS DES setup (orders O_k decompose into jobs J_k_1 ... J_k_i; inbound jobs are assigned to machines M1 ... MZ with buffers — first decision — and sequenced on each machine — second decision; a finished last job completes its order, otherwise the next job of the order re-enters the pool of inbound jobs)

Starting with the machinery, our model consists of a set of machines M = {M_1, ..., M_Z}, Z ∈ N. Every machine M ∈ M is of a specific type and has the corresponding machining capabilities. Furthermore, we consider machines of varying quality, i.e. machines of the same type may require a different number of iterations for the same machining step. To capture this behaviour in our simulation, we model the deviation from the ideal processing time with a Poisson distribution. As such, every machine M has an associated rate parameter λ_M.
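To make this quality model concrete, here is a minimal sketch of sampling a machine's actual processing time; the function and variable names are illustrative assumptions, not taken from the paper's implementation, and the Poisson variate is drawn with Knuth's algorithm to stay within the standard library:

```python
import math
import random

rng = random.Random(42)

def poisson_sample(lam: float) -> int:
    """Draw a Poisson random variate via Knuth's algorithm."""
    if lam <= 0:
        return 0
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def actual_processing_time(ideal_iterations: int, lam: float) -> int:
    """Ideal iteration count plus a Poisson-distributed deviation;
    lam is the machine's rate parameter (lam = 0: an ideal machine)."""
    return ideal_iterations + poisson_sample(lam)

# A machine with lam = 0 never deviates; larger lam means lower quality.
samples = [actual_processing_time(5, lam=2.0) for _ in range(2000)]
```

On average such a machine needs about ideal + λ iterations, which is what the LQLE estimator introduced later tries to learn.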
This set of machines is capable of producing products of various types according to production orders. A production order O_k for a specific product type consists of i ∈ N jobs J_k_1, ..., J_k_i that need to be performed in sequence. Every such job requires some machining capability for a fixed number of iterations, which may be provided by one or more machine types.
In order to control this manufacturing network, the agents deployed for machines and production orders need to coordinate, assigning every job to a capable machine and sequencing the jobs assigned to every machine. Figure 1 gives an overview of this setup. Every new production order O_k joins the pool of inbound jobs, represented by its first job. In a first decision problem,

every inbound job is assigned to a capable machine. In a second decision problem, the sequencing order of the assigned jobs is decided on every machine. Once a machine finishes its current job, it processes the next job according to its sequencing order. If the finished job was the last job of its production order, the entire production order is finished. Otherwise, the next job of the production order enters the set of inbound jobs, to be assigned to a machine and subsequently processed.
Contrary to centralized production control approaches, these two decision problems are not solved by one central decision authority, but by the aforementioned agents representing machines and production orders. For the first decision problem, the machine agents communicate with each other and share the information necessary to solve the assignment problem in every iteration i of the simulation. This problem consists of assigning every job in the current set of inbound jobs j ∈ J_I to a machine of matching capability M ∈ MC_j. These assignments could have further constraints and various objectives; we require only that every job is matched to exactly one capable machine, and we aim for the shortest makespan over all production orders. We apply two different dispatching rules to solve these assignment problems in every iteration. The first dispatching rule we review is queue length estimation (QLE), whereby every job is assigned to the machine with the currently shortest queue. This dispatching rule has long been established for both centralized and distributed control in job shop settings, generally delivering reliable and good results (Scholz-Reiter et al. 2009; Grundstein et al. 2017). In this setting, every machine agent determines its current queue length estimation and shares and compares it with the other machine agents. Inbound jobs are then assigned to the machine that possesses the required machining capability and has the shortest queue length estimation. Hence, we can formulate this decision process for an inbound job j ∈ J_I as follows:

    min_{M ∈ MC_j}  Σ_{l ∈ J_M} p_l                    (QLE)

where MC_j ⊆ M is the subset of machines capable of processing job j, J_M the set of jobs already assigned to machine M, and p_l the number of iterations necessary to process job l.
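The (QLE) rule can be sketched as follows; the data structures and names are illustrative assumptions for exposition, not the paper's implementation:

```python
def qle_dispatch(job, machines, capable, queued, proc_time):
    """Queue length estimation (QLE): assign `job` to the capable machine
    whose queued work — the sum of processing iterations of the jobs
    already assigned to it — is currently shortest.

    capable[job]    -> set of machine names able to process `job`
    queued[machine] -> jobs already assigned to that machine
    proc_time[job]  -> iterations needed to process that job
    """
    return min(
        (m for m in machines if m in capable[job]),
        key=lambda m: sum(proc_time[l] for l in queued[m]),
    )

# Example: M1 already holds 4 iterations of work, M2 is idle.
machines = ["M1", "M2"]
capable = {"J1": {"M1", "M2"}}
queued = {"M1": ["J0"], "M2": []}
proc_time = {"J0": 4, "J1": 3}
chosen = qle_dispatch("J1", machines, capable, queued, proc_time)  # -> "M2"
```

In a distributed setting each machine agent would compute only its own sum and share it, rather than one authority evaluating all machines.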
We compare the QLE dispatching rule with a modification in which every machine agent considers its own historical data. For every machining step performed, the machine agent records the deviation from the ideal processing time and considers these deviations in future queue length estimations. With λ̄_M as the maximum likelihood estimator of the rate parameter of machine M, we can formulate this decision process analogously to QLE as

    min_{M ∈ MC_j}  Σ_{l ∈ J_M} (p_l + λ̄_M)            (LQLE)
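Continuing the sketch (names remain illustrative assumptions), a machine agent might maintain λ̄_M as a running sample mean of its recorded deviations — the maximum-likelihood estimator of a Poisson rate — and apply it as a per-job penalty:

```python
from statistics import mean

class MachineAgent:
    """Sketch of a machine agent that learns its own rate parameter."""

    def __init__(self, name):
        self.name = name
        self.deviations = []  # historic (actual - ideal) iteration counts

    def record(self, ideal, actual):
        self.deviations.append(actual - ideal)

    @property
    def lambda_bar(self):
        # Sample mean = Poisson MLE; 0.0 before any history exists.
        return mean(self.deviations) if self.deviations else 0.0

def lqle_dispatch(job, agents, capable, queued, proc_time):
    """LQLE: like QLE, but every queued job on a machine is penalized
    by that machine's estimated mean deviation lambda_bar."""
    return min(
        (a for a in agents if a.name in capable[job]),
        key=lambda a: sum(proc_time[l] + a.lambda_bar for l in queued[a.name]),
    )

# Two machines with equal queue lengths: the one with the better history wins.
m1, m2 = MachineAgent("M1"), MachineAgent("M2")
m1.record(5, 9); m1.record(5, 9)   # lambda_bar = 4.0
m2.record(5, 5)                    # lambda_bar = 0.0
chosen = lqle_dispatch("J2", [m1, m2], {"J2": {"M1", "M2"}},
                       {"M1": ["J0"], "M2": ["J1"]}, {"J0": 3, "J1": 3})
```

The example also illustrates the sensitivity discussed in the findings: with only a few recorded deviations, λ̄_M can over- or understate a machine's true quality.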

We refer to this new dispatching rule as learning queue length estimation (LQLE). The second decision, sequencing the scheduled jobs on a single machine, is made by the production order agents of the jobs assigned to that machine.

There is a multitude of rules that these production order agents can apply, such as first-in first-out (FIFO) or last-in first-out (LIFO). For our simulation, we give preferential treatment to jobs of production orders with the most remaining subsequent jobs, otherwise applying FIFO. This approach aims to enable the entire manufacturing network to process a mixture of different product types efficiently, ranging from simple products requiring only two jobs to more complicated products requiring three or four jobs. The overall objective of the manufacturing system is the fulfilment of every production order. Beyond this basic requirement, a multitude of further objectives could be considered, such as the reduction of costs, the maximization of the manufacturing system's utilization, or a reduction to the minimal number of iterations necessary to fulfil production orders, among many others.
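The sequencing preference described above can be expressed as a sort key; this is a sketch under illustrative names, not the paper's code:

```python
def sequence_queue(queue, remaining_jobs, arrival_index):
    """Sequencing sketch: jobs whose production orders have the most
    remaining subsequent jobs come first; FIFO breaks ties.

    remaining_jobs[job] -> jobs still following in the job's order
    arrival_index[job]  -> position in which the job entered the queue
    """
    return sorted(queue, key=lambda j: (-remaining_jobs[j], arrival_index[j]))

# "b" and "c" belong to longer orders than "a"; FIFO puts "b" before "c".
order = sequence_queue(["a", "b", "c"],
                       remaining_jobs={"a": 0, "b": 2, "c": 2},
                       arrival_index={"a": 0, "b": 1, "c": 2})
# -> ["b", "c", "a"]
```

The negated first key component makes Python's ascending sort put the longest remaining orders first, while the arrival index preserves FIFO among ties.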

4 Findings
In order to compare the QLE and LQLE dispatching rules, we devised the following simulation setup. We consider a job shop with a total of 20 machines of four different machine types, with five machines per type. For every machine type, the five machines are of different quality, represented by the parameter λ ∈ {0, 1, 2, 3, 4} of the Poisson distribution modeling the machine's deviation from the ideal processing time.
Fig. 2. Job shop configuration (four machine types, five machines per type, one machine per quality level λ = 0, 1, 2, 3, 4)

This setup is depicted in Fig. 2. We consider a time frame of a total of 500 iterations for each simulation run, divided into intervals of 50 iterations each. Production orders enter the system in the form of batches, with a batch consisting of a multitude of production orders of the same product type. Every iteration interval features exactly the same batches of production orders entering the system, at the same relative iterations. Production orders of these batches can either be finished within the iteration interval in which they were ordered, or extend into the following iteration interval, delaying the batches of that iteration interval. Consequently, we
introduce batches of production orders in the first nine of the ten iteration intervals. For this simulation, we consider six different batches of six different product types, respectively. These batches vary in size (number of production orders) and in the iteration at which they enter the manufacturing system. In conclusion, the load added to the manufacturing system in every iteration interval is identical, with production orders not finished within their respective iteration interval impacting the processing of the following interval. The constant base load per iteration interval is chosen such that, in principle, it can be processed entirely within the interval in which it entered the manufacturing network.
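The numeric parameters above can be collected in a small configuration sketch; the class and constant names are our own illustrative choices, not taken from the paper's implementation:

```python
from dataclasses import dataclass
from itertools import product

@dataclass(frozen=True)
class Machine:
    name: str
    mtype: int  # one of the four machine types
    lam: int    # quality: rate of the Poisson processing-time deviation

# Four machine types, five machines per type, one per quality level.
MACHINES = [Machine(f"M{t}-{lam}", mtype=t, lam=lam)
            for t, lam in product(range(4), range(5))]

ITERATIONS = 500                      # total horizon of one simulation run
INTERVAL = 50                         # length of one iteration interval
N_INTERVALS = ITERATIONS // INTERVAL  # ten intervals; batches enter in the first nine
```

Keeping the configuration declarative like this makes it easy to rerun the experiment with other shop layouts, as the conclusion suggests.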

Comparing QLE and LQLE, we first evaluate the relative proportion of production orders finished in their respective iteration interval for nine intervals of 50 iterations each, depicted in Fig. 3. Already within the first iteration interval, the LQLE rule leads to a greater proportion of finished production orders than the QLE rule. As the machines within the manufacturing network comprise four types of five machines each, with different quality and correspondingly different reliability regarding processing time, the learning aspect of LQLE recognizes the superiority of machines with lower deviations and hence assigns jobs preferably to these machines. However, in iteration interval two the performance of the QLE rule increases while the performance of LQLE decreases. While the increase in QLE performance is due to its indifference to the varying machine quality and the resulting randomness, this is not the only influence on the performance deterioration of LQLE. Since LQLE considers the past performance of every machine, it becomes too sensitive towards historic data: it overly prefers the best-performing machines, thus overloading them while not utilizing the seemingly worse machines. With increasing iteration count, the LQLE dispatching rule exhibits a positive trend, with increasing performance in later iteration intervals. Given the greater amount of historic data that each machine agent can access in later iterations, its estimator for the corresponding machine gives a more reliable estimation, hence improving the quality of the load balancing provided by LQLE.

Fig. 3. Relative proportion of completed orders per iteration interval (y-axis: completed orders of interval, 75%–100%; x-axis: iteration interval 1–9; control methods: QLE, LQLE)



Fig. 4. Deviation of required iterations per production order (boxplots; y-axis: additional iterations required, 10–70; x-axis: iteration interval 1–9; control methods: QLE, LQLE)

Another interesting aspect when comparing both dispatching rules is the deviation from the ideal processing time, i.e. the minimal number of iteration steps necessary to fulfil the production order without waiting and processing-time deviations. Figure 4 shows a boxplot for each dispatching rule and iteration interval, including mean, quartiles and outliers. Notable again are the first two iteration intervals, showing a better LQLE performance in the first iteration interval, followed by a notably worse result in the second. Furthermore, outliers can stray significantly from the mean, as seen in intervals one and four. Overall, however, there is a notable trend: the interquartile range of the LQLE dispatching rule is smaller than the corresponding QLE interquartile range in every iteration interval but the second. This indicates the beneficial impact of the learning aspect of LQLE, provided the machine agents aggregate sufficient historic data to provide an accurate estimator and to prevent overly sensitive assignments. However, this simulation approach would greatly benefit from an extension reviewing a variety of job shop settings with different production order batches.

5 Conclusion
The conducted simulation provides interesting insights into the potential of distributed control for PPC in shop floor settings. The presented DES-MAS simulation setup allows us to distribute control to agents corresponding to machines and production orders, thus enabling these agents to exhibit local control over parts of the manufacturing process. With this local decision-making, we explored the potential of local data processing by considering historic data at the machine

level. This additional consideration of information at the local level enabled a variation of the traditional QLE dispatching rule, allowing the machine agents to learn from previous machining operations and to communicate their capabilities more precisely within the manufacturing network, leading to an overall more efficient shop floor utilization. However, potential drawbacks are already apparent within this simulation: for small historic data sets, the studied LQLE rule is overly sensitive and results in a detrimental load balance. Moreover, this simulation was but a small experiment showing the potential of learning from historic data for distributed control in PPC.
Future research should consider broader simulation settings covering various manufacturing scenarios and job shop designs, in order to facilitate a deeper understanding of the underlying influence of information in the distributed control process. Another avenue could be the adoption of more sophisticated learning models, such as neural networks enabling a classification of machines and production orders. Moreover, considering scenarios with machine failures or machines that can be refitted to change their processing capability could extend these simulations. Lastly, studying the influence of agents processing additional information on the overall agent coordination processes may significantly improve the understanding of distributed control.

References
Antons, O., Arlinghaus, J.C.: Modelling autonomous production control: a guide to select the most suitable modelling approach. In: Lecture Notes in Logistics, pp. 245–253 (2020a). https://doi.org/10.1007/978-3-030-44783-0_24
Antons, O., Bendul, J.: Decision making in Industry 4.0 – a comparison of distributed control approaches. In: Studies in Computational Intelligence, vol. 853, pp. 329–339 (2020b). https://doi.org/10.1007/978-3-030-27477-1_25
Aström, K.J.: Process control - past, present, and future. IEEE Control Syst. Mag.
5(3), 7 (1985)
Beregi, R., Szaller, Á., Kádár, B.: Synergy of multimodelling for process control. IFAC-PapersOnLine 51(11), 1023–1028 (2018). https://doi.org/10.1016/j.ifacol.2018.08.473
Bertelsmeier, F., Trächtler, A.: Decentralized controller reconfiguration strategies for
hybrid system dynamics based on product-intelligence. In: 2015 IEEE 20th Con-
ference on Emerging Technologies & Factory Automation (ETFA). IEEE, pp. 1–8
(2015). https://doi.org/10.1109/ETFA.2015.7301527
Blunck, H., et al.: The balance of autonomous and centralized control in scheduling problems. Appl. Netw. Sci. 3(1) (2018). https://doi.org/10.1007/s41109-018-0071-6
Bongaerts, L., et al.: Hierarchy in distributed shop floor control. Comput. Ind. 43(2),
123–137 (2000). https://doi.org/10.1016/S0166-3615(00)00062-2
Caridi, M., Cavalieri, S.: Multi-agent systems in production planning and control:
an overview. Prod. Plan. Control 15(2), 106–118 (2007). https://doi.org/10.1080/
09537280410001662556
Duffie, N.A.: Synthesis of heterarchical manufacturing systems. Comput. Ind. 14(1–3), 167–174 (1990). https://doi.org/10.1016/0166-3615(90)90118-9

Grundstein, S., Freitag, M., Scholz-Reiter, B.: A new method for autonomous control of complex job shops – integrating order release, sequencing and capacity control to meet due dates. J. Manuf. Syst. 42, 11–28 (2017). https://doi.org/10.1016/j.jmsy.2016.10.006
Hussain, M.S., Ali, M.: Distributed control of flexible manufacturing system: control
and performance perspectives. Int. J. Eng. Appl. Manage. Sci. Paradigm 54(2), 156–
162 (2019)
Jones, A.T., Romero, D., Wuest, T.: Modeling agents as joint cognitive systems in
smart manufacturing systems. Manuf. Lett. 17, 6–8 (2018). https://doi.org/10.1016/
j.mfglet.2018.06.002
Koinoda, N., Kera, K., Kubo, T.: An autonomous, decentralized control system for
factory automation. Computer 17(12), 73–83 (1984)
Meissner, H., Ilsen, R., Aurich, J.C.: Analysis of control architectures in the context of
Industry 4.0. Procedia CIRP 62, 165–169 (2017). https://doi.org/10.1016/j.procir.
2016.06.113
Monostori, L., et al.: Cooperative control in production and logistics. Ann. Rev. Control
39, 12–29 (2015). https://doi.org/10.1016/j.arcontrol.2015.03.001
Morariu, O., et al.: Multi-agent system for heterarchical product-driven manufacturing. In: 2014 IEEE International Conference on Automation, Quality and Testing, Robotics, pp. 1–6. IEEE (2014). https://doi.org/10.1109/AQTR.2014.6857897
Philipp, T., Böse, F., Windt, K.: Evaluation of autonomously controlled logistic pro-
cesses. In: Proceedings of 5th CIRP International Seminar on Intelligent Computa-
tion in Manufacturing Engineering. CIRP, The International Academy for Produc-
tion Engineering, pp. 347–352 (2006)
Romero, D., Jones, A.T., Wuest, T.: A new architecture for controlling smart man-
ufacturing systems. In: 2018 International Conference on Intelligent Systems (IS).
IEEE, pp. 421–427 (2018)
Palau, A.S., Dhada, M.H., Parlikad, A.K.: Multi-agent system architectures for col-
laborative prognostics. J. Intell. Manufact. (2019). https://doi.org/10.1007/s10845-
019-01478-9
Scholz-Reiter, B., et al.: The influence of production networks’ complexity on the per-
formance of autonomous control methods. In: Proceedings of the 5th CIRP Interna-
tional Seminar on Computation in Manufacturing Engineering, pp. 317–320 (2006)
Scholz-Reiter, B., et al.: Modelling and analysis of autonomously controlled production networks. IFAC Proc. Vol. 42(4), 846–851 (2009). https://doi.org/10.3182/20090603-3-RU-2001.0081
Trentesaux, D.: Distributed control of production systems. Eng. Appl. Artif. Intell.
22(7), 971–978 (2009). https://doi.org/10.1016/j.engappai.2009.05.001
Wang, L., Törngren, M., Onori, M.: Current status and advancement of cyber-physical systems in manufacturing. J. Manuf. Syst. 37, 517–527 (2015). https://doi.org/10.1016/j.jmsy.2015.04.008
Weichart, G., et al.: An agent- and role-based planning approach for flexible automation
of advanced production systems. In: 2018 International Conference on Intelligent
Systems (IS), May 2019. https://doi.org/10.1109/IS.2018.8710546
Zambrano Rey, G., et al.: Reducing myopic behavior in FMS control: a semi-
heterarchical simulation-optimization approach. Simul. Model. Pract. Theory 46,
53–75 (2014). https://doi.org/10.1016/j.simpat.2014.01.005
A Reactive Approach for Reducing the Myopic
and Nervous Behaviour of Manufacturing
Systems

Sebastian-Mateo Meza1(B) , Jose-Fernando Jimenez2 , and Carlos Rodrigo Ruiz-Cruz1


1 Department of Industrial Engineering, Escuela Colombiana de Ingenieria Julio Garavito,
Bogota, Colombia
{sebastian.meza-v,carlos.ruiz}@escuelaing.edu.co
2 Department of Industrial Engineering, Pontificia Universidad Javeriana, Bogota, Colombia

j-jimenez@javeriana.edu.co

Abstract. Scheduling is a crucial activity for the successful control and piloting of manufacturing activities. Manufacturing systems operate in dynamic environments vulnerable to real-time events, which frequently force a reactive revision of pre-established schedules. In an uncertain environment, it has become preferable to adapt pre-established schedules, as these may degrade severely under unexpected events. For this, a reactive module is often included to update the schedule in order to ensure manufacturing execution. However, due to the need for a rapid response, this update may suffer from myopia and/or nervousness issues. This paper aims to develop a proof of concept of a decision support system for practitioners, attempting to minimize the degradation and the lack of information in these scenarios. The approach starts with a metaheuristic technique that generates a predictive schedule establishing an initial scheduling, named the pre-established schedule. Then, during disruptions, it updates the schedule through a reactive module based on a heuristic technique. The proposed approach was tested in a simulated scenario of a real flexible manufacturing system located in Valenciennes (France), called AIP-PRIMECA Valenciennes.

Keywords: Semi-heterarchical systems · Reactive decision-making ·


Nervousness · Myopia · Dynamic hybrid control architecture

1 Introduction
The arrival of new concepts, methods and technologies related to the digital transformation revolution has had a substantial influence on manufacturing industries. This revolution is based on the establishment of smart factories, smart products, and smart services [11].
This revolution is perceived to be the key to higher levels of automation, to more efficient
processes and to better planning and control of manufacturing systems to achieve higher
flexibility and robustness. In a smart factory scenario, the definition of a control system

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021


T. Borangiu et al. (Eds.): SOHOMA 2020, SCI 952, pp. 203–214, 2021.
https://doi.org/10.1007/978-3-030-69373-2_14
204 S.-M. Meza et al.

architecture is fundamental since it orchestrates the manufacturing parameters at each


production resource [4].
In recent years, several architectures have been proposed, ranging from fully hierarchical to fully heterarchical control architectures according to the classification presented in [12]. On the one hand, hierarchical control architectures were widely accepted and deployed for manufacturing control, since long-term optimality was possible. They offer centralized control that enables optimal decision-making to cascade through the entire set of hierarchical dependencies. Nevertheless, hierarchical structures do not offer the flexibility, adaptability, robustness and responsiveness needed to cope with current technological trends [1]. On the other hand, heterarchical control architectures distribute decision-making among entities, which agree to cooperate without following a specific plan. The communication among entities, such as machines and parts, and the assessment of their own situation to make decisions based on local information seem attainable because of recent advances in the digitalization of production. However, no clear guidelines for the design and control of such distributed architectures have been provided so far [2]. In addition, these architectures have drawbacks such as a limited capacity to predict process outcomes, an orientation towards their own goals, and no guarantee of achieving a sufficient level of performance [9].
Industries and researchers have looked for alternative control approaches that combine locally autonomous decision entities with other, coordinating entities. In such organizational sociability, decision entities must negotiate with their neighbouring entities and maintain social relationships at their operational and structural levels [8]. The resulting architectures, defined as hybrid control architectures (HCA), adopt the features provided by the aforementioned architectures while mitigating their drawbacks. An HCA is based on holonic manufacturing concepts, justified by the fact that holonic manufacturing is a paradigm explicitly created for such environments, mixing informational and physical components [14]. Moreover, the key enablers of Industry 4.0 can be fulfilled by the holonic paradigm [5]. For example, the “decision-making” and “autonomy” holonic properties correspond to the “autonomous and decentralized decision support systems” key enabler.
In flexible manufacturing systems (FMS), the implementation of hybrid control architectures has been stimulated in order to cope with the challenges of producing increasingly individualized products with short lead-times to market and higher quality [16]. In fact, a balance of centralized and autonomous control yields the best systemic performance and allows an FMS to respond rapidly to unexpected events, called perturbations, which can arise either from manufacturing resources or from the operated jobs. This need to adapt to changes in the environment, and thus to reduce transient states and the associated loss of performance, fits the dynamic change of system behaviour postulated by dynamic control architectures, one of the most promising current trends in the HCA literature [3]. Dynamic hybrid control architectures (D-HCA) are characterized by a mechanism that switches from one holonic architecture to another. The advantage of this type of architecture lies in its capability to handle uncertainties in real time.
Conversely, D-HCA suffer from two main drawbacks, due to their nature. The first
one is related to the entities’ limited visibility or myopic behaviour. Myopic behaviour
A Reactive Approach for Reducing the Myopic and Nervous Behaviour 205

(myopia) is defined as “a condition of distributed decision-making in which decisional entities are not capable of balancing their local objectives with the system’s global
objectives” [15]. The second one is related to undesirable instability due to the presence
of nervousness. Nervous behaviour (nervousness) refers to “a conduct of a whole or
part of a system in which its decisions or intentions change erratically without leaving
a sufficient time for stabilizing into an expected functioning” [7]. The paper introduces
a manufacturing dynamic hybrid control architecture featuring a reactive mechanism
and a reactive control policy to reduce the impact of myopia and nervousness on the
decision-making process. The novelty is that the mechanism contains strategies designed to explicitly address these drawbacks. The system’s design is based on the definitions of a hybrid architecture [8]. The rest of the document is organized as follows.
Section 2 introduces the proposed approach, presenting the reactive mechanism and the
reactive control policy. The case study and a preliminary assessment of the proposed
framework are presented in Sect. 3. Finally, Sect. 4 states the conclusions and the future
work to follow this research.

2 Proposed Approach
This section introduces the proposed approach that attempts to reduce the impact of
both myopia and nervousness in dynamic hybrid control architectures. First, a reactive
mechanism is presented. It is characterized by a local performance indicator trigger that
evaluates the status of the system and the need for a switch from the pre-established
schedule to a reactive policy. Second, a reactive control policy guided by a heuristic
is introduced. It contains strategies to control both myopia and nervousness, based on the supervisory-entity and communication-protocol mechanisms of multi-agent systems [15].
This proposal is based on the Pollux architecture definitions [8]. Pollux is built over
three layers: the operation, the coordination, and the physical layer. It is composed of
three main types of decisional entities: the global decisional entities (GDE), the local decisional entities (LDE) and the resource decisional entities (RDE). A decisional entity
is mainly composed of an entity objective, a decision-making technique, governance
parameters, and a communication component.
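The structure of a decisional entity described above can be sketched as a minimal data model. The class, field and method names below are our own illustrations, not identifiers from the Pollux implementation:

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Any, Callable

class Governance(Enum):
    COERCIVE = "coercive"      # the entity follows the GDE's instructions
    PERMISSIVE = "permissive"  # the entity decides autonomously

@dataclass
class DecisionalEntity:
    """Generic decisional entity (GDE, LDE or RDE) of a Pollux-like D-HCA."""
    name: str
    objective: str                                # entity objective, e.g. "minimize makespan"
    decide: Callable[[dict], Any]                 # decision-making technique
    governance: Governance = Governance.COERCIVE  # governance parameter
    inbox: list = field(default_factory=list)     # communication component (message queue)

# Example: a local decisional entity representing one job
lde = DecisionalEntity(
    name="LDE-job1",
    objective="minimize partial lateness",
    decide=lambda state: state.get("next_machine"),
)
print(lde.governance.value)  # -> coercive
```

Each entity thus carries its own objective and decision technique, while the governance parameter records whether it currently obeys the global layer or acts on its own.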

2.1 A Local Performance Indicator for the Reactive Mechanism

The reactive mechanism (RM) introduced in this approach is described by three components, as follows. Regarding the first component, the RM has both structural and behavioural characteristics. On the structural side, the RM can switch the governance parameter of decisional entities between the coercive and permissive states. It is therefore feasible for the physical layer to simultaneously contain entities guided by a hierarchical decision relationship (coercive relationship) and entities following heterarchical decisions (permissive relationship). On the behavioural side, the RM switches the objective function and the decision-making processes of decisional entities to react to possible perturbations occurring in the system. This means that the GDEs and the LDEs have
206 S.-M. Meza et al.

different behavioural characteristics. Regarding the second aspect, the degree of opti-
mality reached by RM is heuristic. Once RM switches the operation mode (structural
and behavioural characteristics), a reactive heuristic is executed by the LDEs whose
governance parameter state is permissive. Finally, the reason for switching is to react to
unforeseen events. The reactive mechanism is triggered by a local performance indicator (LPI) measurement. Hence it is necessary to monitor the system continuously to gather real-time information with the aim of estimating this indicator.
A major advantage of RM is that it limits the nervousness of the control architecture.
Furthermore, the RM leads to a fair use of the LDEs’ decision-making technique, assuring a reduction of the disturbance effect on the system while avoiding a global performance decrease. To achieve that, the RM changes the governance parameter state only of the entities affected by a perturbation during the execution. At the beginning of the execution, the operating mode establishes all governance parameters as coercive; this means that the GDEs impose the instructions on the LDEs. The process starts when the RM retrieves the data from the set of jobs being processed. Once the LPI of an LDE exceeds the reference threshold (rt), the reactive mechanism is triggered. This makes it possible to know specifically which manufacturing entities are actually impacted by the disturbance. The LPI is also used to switch back the governance parameter: when the indicator value of an LDE falls below rt, that LDE again follows the commands initially given by the GDE.
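The switching rule described above can be sketched as follows; the dictionary layout and function name are assumptions for illustration, not the authors’ code:

```python
def update_governance(ldes, threshold):
    """Switch each LDE's governance parameter based on its local
    performance indicator (LPI): permissive while the LPI exceeds the
    reference threshold, coercive otherwise."""
    for lde in ldes:
        lde["governance"] = "permissive" if lde["lpi"] > threshold else "coercive"
    return ldes

# Only the LDE actually hit by the disturbance switches mode
ldes = [{"id": 1, "lpi": -3.0}, {"id": 2, "lpi": 7.5}, {"id": 3, "lpi": 0.0}]
update_governance(ldes, threshold=0)
print([l["governance"] for l in ldes])  # -> ['coercive', 'permissive', 'coercive']
```

Running this check at every monitoring step makes the switch-back automatic: as soon as an affected LDE recovers (its LPI drops back below the threshold), it returns to the coercive mode.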

2.2 Reactive Control Policy to Address D-HCA Drawbacks


The reactive heuristic introduced in this approach serves as LDE’s decision-making
technique and deals with the machine sequence decision, which defines the sequence of
machines that a product requires to fulfil its operation sequence. Throughout this paper,
the term ‘reactive decision’ will refer to a change on the mentioned decision.
The first strategy is defined as job priority. The main idea of this strategy is to ensure that a reactive decision is taken only by those LDEs whose local indicators have suffered the greatest degradation. Since a change of the governance parameter occurs as soon as the LPI exceeds rt, any LDE could otherwise execute a reactive decision without considering the performance of other LDEs and, consequently, the overall performance of the system. The supervisory entities mechanism provides knowledge about the status of the LPI of each LDE, and the communication mechanism allows the LDEs to know whether they are among the most affected. The second strategy is called queue control. It aims to
maintain the optimal queue size for each machine as determined by the GDE. Once the
reactive mechanism is triggered, a reactive decision can increase the queue size to reach a
machine. In that case, the local indicator of the LDEs could suffer a degradation caused by
the waiting time until the beginning of the processing. Thus, the overall performance of
the system is affected. A maximum queue size should be set so that an LDE cannot select
a machine that exceeds the allowed size. The supervisory entities mechanism quantifies
the LDEs waiting to be processed by a machine. The communication mechanism permits an LDE to consider other available machines for processing. These strategies are expected to achieve a balance between the local and global objectives of the system and thus reduce the myopic behaviour. A third strategy is proposed to reduce nervousness in decision making. It consists of defining unique physical points (PDP) at which the entities evaluate and execute the result of the reactive mechanism.
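The two myopia-control strategies can be sketched as simple filters over the set of LDEs and machines. The function names, data layout and default parameter values below are illustrative assumptions, not the paper’s implementation:

```python
def most_affected(ldes, fraction=0.2):
    """Job priority strategy: only the fraction of LDEs with the worst
    (largest) LPI degradation are allowed to take a reactive decision."""
    k = max(1, int(len(ldes) * fraction))
    ranked = sorted(ldes, key=lambda l: l["lpi"], reverse=True)
    return {l["id"] for l in ranked[:k]}

def admissible_machines(machines, queues, max_queue=1):
    """Queue control strategy: an LDE may only select a machine whose
    current queue is below the maximum allowed size."""
    return [m for m in machines if len(queues.get(m, [])) < max_queue]

ldes = [{"id": i, "lpi": lpi} for i, lpi in enumerate([2.0, 9.0, -1.0, 5.0, 0.5])]
print(most_affected(ldes))  # -> {1}
print(admissible_machines(["M2", "M3"], {"M2": ["j4"], "M3": []}))  # -> ['M3']
```

Combining both filters bounds the reactive behaviour: only a few badly degraded jobs may re-decide, and only towards machines that still have queue capacity.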

Figure 1 shows the progress of the following control approaches: proposed approach
(green line); predictive–reactive approaches (red line) without control strategies; and
centralized approaches (blue line). For the centralized approach, there is a degradation
of both indicators caused by the perturbation occurrence. In the predictive-reactive app-
roach, manufacturing execution is assured but while one indicator improves, the other
may suffer further degradation because of the nature of reactive decisions and the lack
of control over the architecture drawbacks. The proposed approach achieves a balance
between the indicators (improvement of both) given the proposed mechanisms.
Table 1 summarizes the proposed mechanisms and their relationship with the D-HCA
drawbacks.

[Figure: evolution of the local performance indicator (LPI) and the global performance indicator (GPI) after a perturbation occurrence, comparing three configurations: no reactive policy, reactive policy without the proposed mechanisms, and reactive policy with the mechanisms included]
Fig. 1. Conceptual model of the relationship between global and local objectives

3 Case Study
This section has been divided into three parts. The first part describes the flexible job
shop system used in this case study. Then, the instantiation of the proposed D-HCA
is presented, considering the inclusion of the proposed approach into the structural and
behavioural characteristics of the D-HCA. Finally, the experimental protocol conducted on the case study to validate the benefits of including coupled strategies for reducing the myopic and nervous behaviour of distributed systems is presented.

Table 1. Summary of the proposed mechanisms

Mechanism                Drawback     Strategy       Description
Reactive mechanism       Nervousness  LPI            It changes the governance parameter state once the LPI exceeds a reference threshold
Reactive control policy  Myopia       Job priority   It ensures that only the LDEs with the greatest degradation make a reactive decision
Reactive control policy  Myopia       Queue control  It controls the queue size to reach a machine
Reactive control policy  Nervousness  PDP            It defines physical points at which a reactive decision is executed

3.1 AIP-PRIMECA System Description


The proposed manufacturing cell for this study is based on a real flexible manufactur-
ing system located in the AIP-PRIMECA laboratory in the Université Polytechnique
Hauts-de-France (Valenciennes, France). This facility is composed of seven worksta-
tions connected by a material handling system with self-propelled shuttles. The work-
stations are: a loading/unloading machine (M1), four assembly machines (M2, M3, M4
and M7), an inspection unit (M5) and an additional workstation (M6) not used in this
paper. Seven types of jobs (B, E, L, T, A, I and P) can be produced. Each job has a
predetermined sequence of operations to be processed. The system can be formulated
as a flexible job shop scheduling problem with recirculation [10]. Further information
about the AIP-PRIMECA flexible manufacturing system can be consulted in [13].

3.2 Instantiation of the D-HCA


The control architecture proposed in this paper is built over the operation, the coordina-
tion, and the physical layers. In this approach the decisional entities are modelled using
multi-agent paradigms.

Structural and Behavioural Characteristics


The global layer contains two GDEs. The first GDE makes the decisions associated with the release sequence and the machine sequences of the jobs. The objective is to minimize
the makespan of the production order. This problem was solved as a flexible job shop
scheduling problem by a genetic algorithm (metaheuristic technique). It considers the job
shop problem’s classical restrictions, the transportation route through machines (defined
by the second GDE) and the simultaneous jobs in the system limit. The second GDE
takes the decision associated with the route a product (job) must follow to reach the
machines. This decision was treated as a shortest path problem and was solved using Dijkstra’s algorithm. The operation layer contains n LDEs as jobs to be produced, and
six RDEs as available machines in the system. The LDE’s decision-making technique is
guided by the reactive heuristic technique. Its objective is to minimize the LPI of each LDE considering the strategies described in Sect. 2, as shown in Fig. 2. With regard to the job priority strategy, it was defined that the number of most affected jobs (greatest LPI) corresponds to 20% of the total number of jobs to be processed. This implies that only these entities continue to look for a reactive solution. Regarding the queue strategy, the maximum queue size was set to 1. Two PDPs were defined in the heuristic. The first corresponds to the turning points (TN) in the manufacturing cell. The second corresponds to the machines (MN). Decisions are executed only at those points. The RDEs control the resources and their role is static, as their behaviour is not changed by the switching mechanism.
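Since the routing decision of the second GDE is solved with Dijkstra’s algorithm, a minimal sketch on a toy transport graph may help. The graph, node names and travel times below are invented for illustration and do not reproduce the AIP-PRIMECA layout:

```python
import heapq

def shortest_route(graph, source, target):
    """Dijkstra's algorithm over a weighted directed graph given as
    {node: [(neighbour, travel_time), ...]}; returns (path, total cost)."""
    dist = {source: 0}
    prev = {}
    heap = [(0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == target:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry
        for nxt, w in graph.get(node, []):
            nd = d + w
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                prev[nxt] = node
                heapq.heappush(heap, (nd, nxt))
    # Reconstruct the path from target back to source
    path, node = [target], target
    while node != source:
        node = prev[node]
        path.append(node)
    return list(reversed(path)), dist[target]

# Toy transport graph between conveyor nodes (weights = travel times)
g = {"M1": [("TN1", 2)], "TN1": [("M2", 3), ("M3", 7)], "M2": [("M3", 1)]}
print(shortest_route(g, "M1", "M3"))  # -> (['M1', 'TN1', 'M2', 'M3'], 6)
```

In the architecture, the resulting route would be imposed on the shuttles while the LDEs remain in the coercive mode.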

Dynamic Characteristics and Switching Mechanism


The switching mechanism contains a module that detects the perturbations and, con-
sequently, triggers a switching. The mechanism can be classified as a local detection mechanism since it is guided by an LPI associated with each LDE. For the case study, the LPI is called partial lateness (Lj), and it measures the earliness or tardiness with respect to the expected start of the next operation. It is defined as follows:

Lj = t − tsj (1)

where t is the current execution time and tsj is the expected start time of the next
operation derived from the pre-established schedule. The mechanism switches to permissive the governance parameter of each LDE whose associated LPI exceeds the reference threshold. The switching condition can be defined as follows:

f (t, X ) = Lj > α0 (2)

where X is the current state of the LDE on the shop floor and α0 is the reference threshold. For the experiments, the parameter α0 is set to 0. Similarly, when the LPI value of an LDE is below α0 the mechanism switches the governance parameter back to coercive. From that switching point, the LDE follows the instructions given by the GDE in order to complete the remaining operations. This approach does not need switching synchronization because the reactive decision-making is executed in real time.
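Equations (1) and (2) translate almost directly into code; a minimal sketch with illustrative variable names:

```python
def partial_lateness(t, ts_next):
    """Eq. (1): L_j = t - ts_j, the earliness (< 0) or tardiness (> 0)
    of an LDE relative to the expected start time of its next operation."""
    return t - ts_next

def governance_state(t, ts_next, alpha0=0):
    """Eq. (2): permissive while L_j exceeds the reference threshold
    alpha0, coercive once it falls back below it."""
    return "permissive" if partial_lateness(t, ts_next) > alpha0 else "coercive"

# A job whose next operation was scheduled to start at t = 120
print(governance_state(t=115, ts_next=120))  # -> coercive   (L_j = -5)
print(governance_state(t=130, ts_next=120))  # -> permissive (L_j = 10)
```

With α0 = 0, any positive lateness immediately triggers the switch, which matches the experimental setting described above.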

3.3 Experimental Protocol


The main goal of the experimental protocol is to evaluate the inclusion of the described
mechanisms in a D-HCA. To measure the myopic behaviour, the completion time vari-
ance (CTV) indicator proposed by [14] is used. It is expected that in the presence of
myopic decisions the CTV increases. The nervousness indicator (NI) used in this paper is inspired by the nervousness-observation graphics proposed in [6]; namely, the frequency with which a job changes its decisions (in this case, the reactive decision) in a defined time interval. A reduction in the nervousness of the system leads to a reduction of the NI.
Makespan and mean lateness are defined as global and local indicators, respectively.
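A minimal sketch of how such indicators can be computed. The NI formula shown is one plausible reading of the per-interval counting described above, not necessarily the exact formula of [6] or [14]:

```python
def completion_time_variance(completion_times):
    """CTV: variance of the job completion times; myopic decisions are
    expected to increase it (in the spirit of [14])."""
    n = len(completion_times)
    mean = sum(completion_times) / n
    return sum((c - mean) ** 2 for c in completion_times) / n

def nervousness_indicator(change_timestamps, interval=10):
    """NI sketch: average number of reactive-decision changes observed
    per time interval (a plausible reading, assumed for illustration)."""
    if not change_timestamps:
        return 0.0
    n_intervals = max(change_timestamps) // interval + 1
    return len(change_timestamps) / n_intervals

print(round(completion_time_variance([100, 110, 120]), 2))  # -> 66.67
```

Both indicators only require logging completion times and the timestamps of reactive-decision changes during the simulation run.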
The experimental model tested the datasets presented in Table 2, which are based on the AIP benchmark [13]. The implementation uses NetLogo to simulate the AIP-PRIMECA flexible job shop cell. A disrupted scenario is proposed in which machine M7 suffers a breakdown, lasting 15 times the processing time of all products (in seconds). The disruption occurs after the departure of the first shuttle from M7. For the experiments, three different scenarios were defined to evaluate the performance of the proposed D-HCA. In scenario A the described disruption was modelled following only the predictive decisions made by the GDEs (fully predictive approach). In scenario B, the architecture integrates the switching mechanism and the reactive heuristic, but the latter does not consider the strategies presented in Table 1. Finally, in scenario C, the D-HCA integrates the proposed strategies and is instantiated as presented in this chapter.

[Figure: flowchart of the LDE decision-making heuristic — the job priority strategy first filters the most affected entities, the physical points strategy restricts reactive decisions to turning points (TN) and machines (MN), and the queue control strategy checks machine queues and availability before reassigning the machine for the LDE’s next operation]

Fig. 2. LDEs’ decision-making technique

Table 2. Datasets for the experimental protocol

Instance  ID  Total jobs  Total operations  Orders (B E L T A I P)
mrj_101    1      10             81         1 3 0 2 1 0 3
mrj_151    2      15            129         3 2 1 3 3 3 0
mrj_201    3      20            179         4 5 5 2 1 1 2
mrj_251    4      25            208         5 5 0 3 3 6 3
mrj_301    5      30            267         5 8 7 4 2 3 1

3.4 Results

Figure 3 presents the makespan obtained by each scenario for mrj_101. It also shows the
evolution of the mean lateness during execution for the same instance. Firstly, the result
reinforces that performance varies depending on the reactive policy configuration used
in the control architecture as shown in Fig. 1. The architecture described in scenario A
does not have a reactive behaviour and therefore is not able to make alternative deci-
sions. The reactive mechanism allows alternative decisions to be chosen and it absorbs
the degradation caused by disruptions. Secondly, the graph shows that the reactive pol-
icy of scenarios B and C represents improvements on the global performance measure
compared to the reference scenario. However, in scenario B, given the absence of mech-
anisms to control myopia, there is no balance between the global and local objectives
of the system. Therefore, although the value of makespan is reduced, the final value of
mean lateness increases.
Finally, in scenario B, the CTV value (20846) is higher than in scenario A (CTV
= 20215) since the decisions were only reactive to ensure the production continuity
but did not consider other entities’ information. In scenario C the CTV value decreases (13036), demonstrating the efficiency of the proposed mechanisms. This was expected, since the mechanisms achieve a balance between the indicators.
Figure 4 shows the number of changes of jobs decisions in 10 s intervals during
execution and presents the evolution of the total number of decision changes of the jobs
for the mrj_101 instance. This result confirms that the proposed mechanisms achieve
their objective of controlling the nervousness of the system. On the one hand, they reduce
the number of decision changes in the presented time interval from a maximum of six
changes (scenario B) to a maximum of two changes (scenario C). On the other hand, they
reduce the total number of decision changes during execution from 82 changes to 25
changes in scenarios B and C, respectively. Additionally, the NI calculated for scenario C (0.40) was lower than in scenario B (2.14).

[Figure: evolution of the mean lateness during execution for the mrj_101 instance under the three scenarios, with final values Lt = 85.2 and CTV = 20215 (A); Lt = 129.85 and CTV = 20846 (B); Lt = 52.39 and CTV = 13036 (C); and makespan 651 (A), 632.2 (B), 536.4 (C)]
Fig. 3. Global and local performance indicator for mrj_101 instance

Table 3 presents the results obtained from the simulation of each scenario above
described. The makespan (mkp) and mean lateness (mlt) refer to the system performance
indicators. The CTV and NI values show myopic and nervous behavior measurements,
respectively. The results indicate a link between the control of myopia and nervousness
and the system performance since the decrease of the indicators led to an improvement of
the system’s global and local indicators. Nevertheless, complementary statistical studies
must be led to confirm generalization of these results.
In scenario A, it was possible to follow the decisions generated by the GDEs since the
disruption did not last the whole execution time. Therefore, products with the machine
M7 in their sequence could be processed after repair. In scenario B, the results reinforce
the reactivity provided by the heuristic technique. In fact, it reduces the degradation on
the global indicator caused by the disturbance. However, given the myopic decision-
making (myopic behaviour), the local indicator performance suffers greater degradation
than in the reference scenario in which no reactive decisions were made. Furthermore,
the nervousness in the decision making, i.e. changing the selected machine (the reactive decision) many times, causes the products to start looping through the flexible job shop searching for a decision. On the contrary, in scenario C, the myopic behaviour is reduced by the
queue control strategy achieving a balance between the global and local indicators of the
system. The job priority strategy reduces the recirculation of products in the system, as
only 20% of them search for a reactive intervention avoiding the changes of intentions
caused by the reactive mechanism. Physical points strategy has an impact as well on the
recirculation of jobs in the system. Its major advantage is that it allows the job to remain
on the machine being processed if it is one of the most affected (greatest LPI indicator)
and if that machine can perform its next operation.

[Figure: number of reactive decision changes per 10 s interval and cumulative total of decision changes during execution for the mrj_101 instance — scenario B peaks at six changes per interval, with Tc = 82 and NI = 2.14; scenario C peaks at two changes, with Tc = 25 and NI = 0.40]
Fig. 4. Evolution of decision changes during execution for mrj_101 instance.

Table 3. Results of the simulation of the proposed scenarios.

ID  Scenario A                 Scenario B                        Scenario C
    mkp     mlt    CTV         mkp     mlt    CTV       NI      mkp     mlt    CTV       NI
1   651.0   85.2   20215.9     632.2   129.9  20846.7   2.1     536.4   52.4   13036.5   0.4
2   921.0   123.7  47145.1     894.7   141.8  54359.8   2.3     727.2   61.3   29006.7   0.7
3   1188.6  222.0  91788.7     1110.4  163.1  77990.2   1.8     1068.8  131.4  68986.9   0.7
4   1418.3  193.1  125411.0    1290.9  230.3  111702.7  2.3     1109.2  143.6  85211.7   0.7
5   1672.9  272.8  187567.8    1641.7  294.3  173615.0  2.1     1462.2  164.0  141088.1  0.7

4 Conclusions
This paper proposed a D-HCA that integrates a reactive mechanism and a reactive control policy within the functioning of a semi-heterarchical system, reducing the myopic and nervous behaviour in the dynamic scheduling of a flexible job shop problem. The results confirm that including coupled strategies in a D-HCA reduces the myopic behaviour, minimizing the degradation caused by perturbations, and reduces the nervousness due to repeated changes of reactive decisions. Taken together, these findings suggest including coupled strategies to promote the control of undesirable behaviour within distributed manufacturing systems.
Despite its exploratory nature, this study offers some insight into the synergy offered by the coupled strategy. A natural progression of this work is to analyze the parameters of the proposed strategies that lead to a minimization of the gap between the globally expected metrics and the local execution metrics. However, further studies need to be
carried out in order to validate the benefits of coupling strategies and, certainly, to explore the likely trade-off resulting from seeking the reduction of myopic behaviour and nervousness simultaneously.

References
1. Barbosa, J., Leitão, P., Adam, E., Trentesaux, D.: Dynamic self-organization in holonic multi-
agent manufacturing systems: the ADACOR evolution. Comput. Ind. 66, 99–111 (2015)
2. Bendul, J.C., Blunck, H.: The design space of production planning and control for industry
4.0. Comput. Ind. 105, 260–272 (2019)
3. Cardin, O., Trentesaux, D., Thomas, A., Castagna, P., Berger, T., Bril El-Haouzi, H.: Coupling
predictive scheduling and reactive control in manufacturing hybrid control architectures: state
of the art and future challenges. J. Intell. Manuf. 28, 1503–1517 (2017)
4. Dassisti, M., Giovannini, A., Merla, P., Chimienti, M., Panetto, H.: Hybrid production-system
control-architecture for smart manufacturing. In: Debruyne, C., et al. (eds.) On the Move to
Meaningful Internet Systems, OTM 2017 Workshops. Lecture Notes in Computer Science,
pp. 5–15. Springer, Cham (2018)
5. Derigent, W., Cardin, O., Trentesaux, D.: Industry 4.0: contributions of holonic manufacturing
control architectures and future challenges. J. Intell. Manuf. (2020)
6. Hadeli, P.V., Verstraete, P., Germain, B.S., Van Brussel, H.: A study of system nervousness
in multi-agent manufacturing control system. In: Brueckner, S.A., Di Marzo Serugendo, G.,
Hales, D., Zambonelli, F. (eds.) Engineering Self-Organising Systems, ESOA 2005. Lecture
Notes in Computer Science, pp. 232–243. Springer, Heidelberg (2006)
7. Jimenez, J.F., Bekrar, A., Trentesaux, D., Leitão, P.: A nervousness regulator framework for
dynamic hybrid control architectures. In: Borangiu, T., Trentesaux, D., Thomas, A., McFar-
lane, D. (eds.) Service Orientation in Holonic and Multi-Agent Manufacturing. Studies in
Computational Intelligence, pp. 199–209. Springer, Cham (2016)
8. Jimenez, J.F., Bekrar, A., Zambrano-Rey, G., Trentesaux, D., Leitão, P.: Pollux: a dynamic
hybrid control architecture for flexible job shop systems. Int. J. Prod. Res. 55, 4229–4247
(2017)
9. Mezgebe, T.T., Demesure, G., Bril El Haouzi, H., Pannequin, R., Thomas, A.: CoMM: a
consensus algorithm for multi-agent-based manufacturing system to deal with perturbation.
Int. J. Adv. Manuf. Technol. 105, 3911–3926 (2019)
10. Pinedo, M.L.: Scheduling: Theory, Algorithms, and Systems. Springer, New York (2016)
11. Stock, T., Seliger, G.: Opportunities of Sustainable Manufacturing in Industry 4.0. Proc. CIRP
40, 536–541 (2016)
12. Trentesaux, D.: Distributed control of production systems. Eng. Appl. Artif. Intell. 22, 971–
978 (2009)
13. Trentesaux, D., Pach, C., Bekrar, A., Sallez, Y., Berger, T., Bonte, T., Leitão, P., Barbosa,
J.: Benchmarking flexible job-shop scheduling and control systems. Control Eng. Pract. 21,
1204–1225 (2013)
14. Zambrano Rey, G., Bonte, T., Prabhu, V., Trentesaux, D.: Reducing myopic behavior in FMS
control: a semi-heterarchical simulation-optimization approach. Simul. Model. Pract. Theory.
46, 53–75 (2014)
15. Zambrano Rey, G., Pach, C., Aissani, N., Bekrar, A., Berger, T., Trentesaux, D.: The control
of myopic behavior in semi-heterarchical production systems: a holonic framework. Eng.
Appl. Artif. Intell. 26, 800–817 (2013)
16. Zhong, R.Y., Xu, X., Klotz, E., Newman, S.T.: Intelligent manufacturing in the context of
Industry 4.0: a review. Eng. 3, 616–630 (2017)
Multi-agent Approach for Smart Resilient City

Sergey Kozhevnikov1(B) , Miroslav Svitek2 , and Petr Skobelev3


1 Czech Institute of Informatics, Robotics and Cybernetics, Czech Technical University
in Prague, 160 00 Prague 6, Czech Republic
koz@kg.ru
2 Faculty of Transportation Sciences, Czech Technical University in Prague,
110 00 Prague 1, Czech Republic
3 Samara State Technical University, Molodogvardeyskaya Street 244, 443100 Samara,

Russian Federation

Abstract. The Smart City concept now entirely relies on information and com-
munication technologies (ICT) with projects providing new or better services for
city residents. The resilience of a city, from our perspective, should also rely on
ICT. Resilience as a service implies predictive modelling and “what if” analysis
for better reaction at unpredictable events and providing emergency services in
critical modes of city life. Resilience as a property of a city means working as
normally as possible for citizens when extreme events occur and also adaptively
react and change the system’s behavior when normal mode has no chance to be
applied. This paper provides a review and analysis of the resilient properties of
existing Smart City frameworks and offers a new concept of a resilient city based
on the Demand-Resource (DR) model, multi-agent system (MAS) and ontologies.
The main idea of this concept is to create an ICT framework that is resilient by
design. For this, it should operate as a digital ecosystem of smart services. The
framework development process is divided into two main steps: first to create
Smart City simulation software for modelling, planning, and strategic assessment
of urban areas as a set of models at different levels of abstraction. The second
step involves the full integration of all services in one dynamic adaptive real-time
digital ecosystem with resilient properties.

Keywords: Smart resilient city · Smart city 5.0 · Digital ecosystem · Multi-agent system · Ontology · Artificial intelligence

1 Introduction
1.1 Basic Definitions
Recent developments place increasing emphasis on strengthening the resilience of ter-
ritorial units to global climate change, natural disasters, social unrest, terrorist attacks,
and cyber-attacks or power outages.
Urban resilience was defined comprehensively in [1]. Based on the classical approach of Holling [2], resilience is the ability of a system to continue to function through change, though not necessarily remaining the same. Chelleri [3] defines urban resilience

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021


T. Borangiu et al. (Eds.): SOHOMA 2020, SCI 952, pp. 215–227, 2021.
https://doi.org/10.1007/978-3-030-69373-2_15
216 S. Kozhevnikov et al.

not as the ability to return to a basic condition, but as the ability to change, evolve, and
adapt smoothly. From another perspective, a resilient city is a sustainable network of
physical systems and human communities [4]. Mehmood [5] emphasizes the need to
consider the city as a complex adaptive system to assess resilience. Linkov [6] notes the
need for a network-centric approach to addressing urban sustainability. In the theory of
complex systems, urban resilience is the ability to evolve [7].
These definitions contradict one another at some points. Therefore, in our work,
we will adhere to the definitions of urban sustainability, urban resilience, and urban
transformation, which Elmqvist et al. [8] derived in their research:

Urban sustainability - manage all resources the urban region is dependent on and enhance
the integration of all sub-systems in an urban region in ways that guarantee the wellbeing
of current and future generations, ensuring distributional equity.
Urban resilience - the capacity of an urban system to absorb disturbance, reorganize,
maintain essentially the same functions and feedback over time, and continue developing
along a particular trajectory. This capacity stems from the character, diversity, redun-
dancies, and interactions among and between the components involved in generating
different functions.
Urban transformation - the systemic change in the urban system. It is a process of
fundamental irreversible changes in infrastructures, ecosystems, agency configurations,
lifestyles, systems of service provision, urban innovation, institutions, and governance.

1.2 State of the Art


The city is a very complex system of social-ecological and social-technical networks; a
network of physical systems and human communities [1]. Therefore, the Resilient City
(RC) should be considered as a complex adaptive network-centric system of systems with
a vast number of interconnections [5, 6]. To create a Smart Resilient City (SRC), it is important to manage the flow of resources, taking into account the various interconnections and optimizing consumption [9]. Processes also need to be planned in real time. An SRC is characterized by the ability to produce knowledge; to cope with this high complexity, the SRC, as a digital ecosystem, should be created on a common knowledge base [10].
To implement these requirements, it is proposed to use multi-agent architecture.
Massei and Tremori [11] use intelligent agents to model urban responses to threats
in a military context, emphasizing human behaviour. Brudermann and Yamagata [12]
also use agents to model people’s behaviour – the primary purpose is to study crowd
behaviour in case of an emergency. Mustapha et al. [13] are developing the multi-agent
architecture for a disaster-resistant city. The approach of Rieger et al. [14] considers the
use of multi-agent systems and graph theory for urban resilience.
Ontologies are widely used to store knowledge. The use of ontologies, for example, is
the most promising method of ensuring urban interoperability [15]. Ontologies have been
developed, for example, to collect information from sensors of smart houses and cities
[16]. Km4City [17] is the most comprehensive open ontology for smart cities, covering a
large number of areas (e.g., weather, sensors, structures, transport, etc.) needed to reduce
energy consumption and CO2 emissions. However, there is currently no single ontology
covering all sectors, no standardization, and no consistency [15].
Multi-agent Approach for Smart Resilient City 217

As opposed to our research, Trucco et al. [18] modelled the city using ontologies
not for planning, but for resilience assessment. Also, several other works are devoted
to modelling the city to assess its resilience. Uribe-Perez and Pous [19] argue that
connections in the city require a unique architecture for modelling. Inspired by the
human nervous system, they propose to give the service bus spinal-cord-like functions
for the simplest and quickest reactions to events. Cavallaro et al. [20] solve the problem of a
vast number of connections in the city using Hybrid Social-Physical Complex Networks.
Compared with these approaches, we focus in our work on using multi-agent systems
as main decision-making elements, with the city knowledge base as the core element of
storing knowledge about decision-making processes and the most promising method of
ensuring the interoperability of services. It is a network in which all nodes are accessible
to each other and are capable of self-organization and risk management.
To summarize, many cities adopt SRC strategies. These strategies provide a
definition and a vision, but they do not explain how to create the SRC. Based on the
current technologies review, we will consider SRC as a network-centric system based
on ontologies. It should manage the flow of resources and all city services, be capable of
self-organization, adaptability and risk management, and vitally important, have a full
understanding of the current situation and instruments for modelling “what if” scenarios.

2 Smart Resilient City 5.0


2.1 Smart Resilient City as Digital Ecosystem
The concept of Smart Resilient Regions, Cities and Villages tries to make good use of
modern technologies to create synergies between different sectors and services (such as:
transport, energy, logistics, security, environment, building management, public health,
agriculture, etc.) based on credible data, information, and knowledge, concerning the
resilience and sustainability of urban areas and the quality of life (QoL) of their citizens.
Since urban areas are characterized not only by their high population density but also
by interacting and physically overlapping infrastructure elements with different func-
tions, city resilience cannot be studied separately. The solution must include all links and
synergies between the sectors concerned, including a significant decentralization of the
necessary resources. The complexity of the city’s social, economic and technological
environment has increased exponentially. At the same time, managerial and administrative
approaches have remained the same and are, therefore, unable to operate effectively under
the new dynamic conditions of rising complexity. At present, every city service
is managed individually, and there is almost no coordination of services on horizontal
or vertical levels.
The main target of our approach is to offer a vision of SRC, which is characterized
by cooperation between Artificial Intelligence (AI) systems and humans. They can har-
moniously balance all spheres of life and contradictory interests of different city actors
to achieve Urban resilience and Urban transformation properties. This model can help
find a “consensus” between different services and, more importantly, citizens.
To achieve the mentioned principles, the Smart City 5.0 framework initially should
be created as a digital ecosystem. It should be based on the unified semantic space
and the methodology of complex adaptive systems, where each part can operate
218 S. Kozhevnikov et al.

entirely autonomously, but is able to interact and negotiate and, through concessions,
to make decisions consistent with other systems [21].

2.2 Agent Approach of the Smart Resilient City

In the near future, all smart city systems and smart transport will work in service
mode. Users will simply specify two points, A and B: the origin and the destination.
The system will offer several options differing in price, comfort, time, and other
preferences. Even the public transport schedules will be developed based on real-time
demands from customers; solving the global transport task will be a combination of the
resources and demands of millions of users. To solve this task, we need to create the
global city Demand-Resource (DR) model.
The DR concept is widespread in multi-agent (MA) systems. In our work, we extend the
classical vision and offer a general DR model for the whole city. Agents are not localized
in every separate service but can negotiate on the free market of the whole city. In our
model, a demand created in one service can be matched by different solutions of multiple
agents of other services. Every service competes or cooperates for resources and demands
through a special service bus.
Every service in this platform can be functionally unique, yet share the common core
architecture and be created on the same basic principles. The core of every service is a
Demand and Resource model with open access to the collective knowledge base. This
allows services to solve their internal tasks, but also to be parts of bigger services (holonic
principle) [22].
The SRC framework is designed as an autonomous smart cyber-physical system, able
to analyze the situation, make decisions and plan its actions, as well as monitor the
execution of plans and results, predict the development of the situation, and communicate
with all participants [21]. It allows all new elements (smart services) to easily enter or
leave this digital ecosystem and provides the full authorized access to the Smart City
Knowledge Base. Besides the KB, the other core element of the Smart Resilient City is
the virtual forum of agents acting as the global city Demand and Resource model. Every agent on
this level represents a smart service of the city. We describe the SRC framework as a
large-scale networked application composed of the following functionally similar elements:

• “Task” (demand, order, request for services) incoming from any entity within the
system or external world;
• “Resource” as the particular entity or product of the smart city services (taxi, parking
spot, a table in the restaurant, etc.);
• “Data source” as the basic telemetry data (sensor, GPS, cloud data, etc.);
• “Software” to support platform operation.
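
As an illustration, the four element types above can be rendered as minimal data structures. This is a hypothetical Python sketch: the class names, fields and the matching rule are our assumptions, not part of the actual SRC platform.

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    """A demand, order or request entering the SRC platform."""
    task_id: str
    service: str          # e.g. "parking", "transport"
    time_slot: tuple      # (start, end) in abstract time units

@dataclass
class Resource:
    """A concrete entity or product offered by a smart city service."""
    resource_id: str
    service: str
    capacity: int = 1

@dataclass
class DataSource:
    """Basic telemetry feeding the platform (sensor, GPS, cloud data)."""
    source_id: str
    kind: str
    readings: list = field(default_factory=list)

def matches(task: Task, resource: Resource) -> bool:
    """Illustrative matching rule: a task and a resource belong together
    when they address the same service."""
    return task.service == resource.service

print(matches(Task("t1", "parking", (8, 9)), Resource("p42", "parking")))  # True
```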

The primary approach is to achieve the interaction of all tasks and resources. This
means that every problem solved by SRC can be described as a combination of demand
and resource interactions. Practically, we can simulate different city sectors as shown in
Fig. 1 using different simulation software for transport, energy, land use, environment,
or other segments.

Fig. 1. Smart city demand-resource model

Each technical component (building, street light, charging station, etc.) or user (citizen,
municipality, group of people) requires limited resources (energy, transport, parking slot,
land, etc.) in a given time interval t. We call these dynamic demand requirements; this
means we need to make a plan for all entities.
To solve this task, we use a multi-agent system where all requirements and resources
are represented by Demand Agents and Resource Agents, which can negotiate among
themselves. In multi-agent systems (MAS), we can organize negotiations among demand
agents through different modelling and simulation tools [23]. Each model (transportation,
energy, environment, etc.) plays the role of “a dynamical digital DR market place” with
limited time-varying resources. Different demand agents negotiate in each time interval
to capture requested resources.
Our approach to an SRC resembles a puzzle of pieces (urban areas) that can be
assembled into higher urban units such as districts or whole cities. Negotiation among
Demand agents within the city simulation model yields dynamic resource assignments,
represented by Resource agents, that offer the best possible service to each consumer.
If a consumer does not accept the assigned resources, it must change its demands, and
the negotiation is repeated under the new conditions.
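
The negotiation-and-relaxation cycle just described can be sketched as a toy allocation loop. Assumptions (not from the paper): each time slot has a fixed capacity, every demand agent states one preferred and one fallback slot, and rejected agents relax their demand exactly once.

```python
def allocate(demands, capacity_per_slot):
    """demands: list of (agent, preferred_slot, fallback_slot) tuples.
    Returns a plan mapping each satisfied agent to a time slot."""
    load, plan, rejected = {}, {}, []
    # First negotiation round: agents try to capture their preferred slot.
    for agent, slot, fallback in demands:
        if load.get(slot, 0) < capacity_per_slot:
            plan[agent] = slot
            load[slot] = load.get(slot, 0) + 1
        else:
            rejected.append((agent, fallback))
    # Relaxation round: rejected agents change their demand and renegotiate.
    for agent, fallback in rejected:
        if load.get(fallback, 0) < capacity_per_slot:
            plan[agent] = fallback
            load[fallback] = load.get(fallback, 0) + 1
    return plan

# Three demand agents compete for slot 8, which can hold only one of them.
print(allocate([("d1", 8, 9), ("d2", 8, 9), ("d3", 8, 10)], capacity_per_slot=1))
# {'d1': 8, 'd2': 9, 'd3': 10}
```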
The resulting approach is represented by a plan that acts as a user interface between
the aggregated demand agents assigned to the different smart city components and the
aggregated urban sustainability parameters (economic, environmental, and social). The
decision-makers, typically the municipality, specify the sustainability parameters (KPIs)
for the whole urban area. The demand and resource agents mutually negotiate with a
city simulation model to propose to each smart city component the comfort reduction
needed to fulfil the requested KPIs. We use the DR service architecture to combine
requests and resources, and a multi-agent system to create a work plan (satisfying all
the demands with limited resources), so that every match of a demand and a resource
has its own time slot. From this perspective, the SRC becomes a demand and resource
model enhanced by agents, satisfaction functions, bonuses, penalties and compensations.
This model and technology allow rebuilding a plan in real time: at unpredictable events,
emergency services, governmental services, and other services can be rescheduled. This
already gives a first but
essential profile for the SRC. If the rescheduling process can be performed automatically,
we say that this architecture is resilient by design (Fig. 2).

Fig. 2. Smart city as digital ecosystem

The second SRC profile can be a model of different “what if” scenarios of city
development. The MAS can provide hundreds of different variants according to the set
of different KPIs. Different systems on top of the proposed framework will provide
additional services (AI, knowledge bases, blockchain, and other instruments).

2.3 Ontology and Semantic Interoperability of the SRC

Currently, most Smart City concepts rely on creating one unique database (DB) to
provide access for city services [24]. Creating an ontology as a knowledge base on
top of such data sets is a new idea that has not yet achieved popularity.
This task requires formalizing the knowledge of all aspects of the SRC, enabling
simple access to this knowledge for different services, and supporting interaction within
digital platforms and ecosystems.
The city’s ontology-based model can specify main objects such as buildings, roads, bus
stops, traffic lights, energy sources, the environment, and others. These objects can have a
detailed description: a building can be a restaurant, a business centre or a residential building,
with many other properties stored as attributes (number of floors, date of construction,
etc.). The ontology can also describe people, their activities or requests, requirements,
or business processes.
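
A minimal way to picture such a model is a set of subject-predicate-object triples; a real deployment would use an RDF/OWL ontology such as Km4City, and every identifier below is invented for illustration.

```python
# Toy city ontology as subject-predicate-object triples (illustrative only).
triples = {
    ("Building_12", "is_a", "Restaurant"),
    ("Building_12", "floors", "3"),
    ("Building_12", "located_on", "Road_A"),
    ("BusStop_5", "located_on", "Road_A"),
    ("Citizen_7", "requests", "parking"),
}

def subjects(predicate, obj):
    """All subjects linked to `obj` through `predicate`."""
    return sorted(s for s, p, o in triples if p == predicate and o == obj)

print(subjects("located_on", "Road_A"))  # ['Building_12', 'BusStop_5']
```

New relationships (e.g. which objects share a road) then come from simple queries over the same triple store rather than from a fixed database schema.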
For example, we can take the Smart Parking Service as an automated service that
provides the following features:

• It is available to citizens via a mobile application supporting the search, reservation,
and payment of a car parking place within the city area.
• The mobile application is provided with an embedded interactive city map supporting
the selection, reservation, and payment of the chosen car parking place, visualizing the
shortest car route to the parking and informing parking security about the reservation
made and forthcoming visit.
• The mobile application supports the monitoring and planning of parking place
availability and reservations using an embedded interactive planner.
• Connecting the car parking service with the adjustable city ontology allows creating
relationships, based on existing properties, that did not exist before. In case of an
emergency or a full load of parking spaces, the system can analyze private parking
slots and offer their owners to rent out their places when they are not occupied by other
drivers. Thus, the system adds new knowledge about the possibility of parking where
it was previously unfeasible.
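
The overflow rule in the last bullet can be sketched as a lookup over private slots. This is a hedged simplification: occupancy is modelled as (start, end) busy intervals, and all slot names are invented.

```python
def find_parking(requested, public_free, private_slots):
    """requested: (start, end) window; public_free: list of free public slots;
    private_slots: {slot_id: list of busy (start, end) intervals}."""
    if public_free:                      # normal case: a public slot is free
        return ("public", public_free[0])
    start, end = requested
    # Emergency/full-load case: offer a private slot free in the window.
    for slot, busy in private_slots.items():
        if all(end <= b_start or start >= b_end for b_start, b_end in busy):
            return ("private_offer", slot)
    return ("none", None)

private = {"owner_1": [(7, 9)], "owner_2": [(8, 18)]}
print(find_parking((9, 11), public_free=[], private_slots=private))
# ('private_offer', 'owner_1')
```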

The ontology also leads to the possibility of changing the purpose of roads. Ontologies
can generate various solutions to problems: launching night buses or new routes,
changing an organization’s work schedule or location, and improving recreational areas
on the outskirts of the city (so that people do not flock to the centre).

3 Practical Realization
The development of the SRC framework is divided into three main steps. In the first step,
the basic services, with their own KPIs and data sets, are modelled. In the second step,
the smart services are connected and demonstrate cooperative work: the common KPIs
for the whole city are set up, a “what if” analysis is performed, and the first resilient
features can be shown.
The third step (not presented in this paper) is the implementation of the SRC platform for
creating a digital ecosystem of services. This step involves the full integration of all
services in one dynamic, adaptive, real-time digital ecosystem with resilient and
sustainable properties.

3.1 Urban Simulation Software


To achieve the cooperation between different actors with different KPIs and sometimes
contradictory aims, the Smart City simulation software for modeling, planning, and
strategic assessment of urban areas (CSS) was developed. CSS is a set of models at
different levels of abstraction and covers the first and the second steps of the development
process. Its conceptual architecture is shown in Fig. 3. It consists of several models:
Smart Grid, Energy Building, Urban mobility, Urban Micro Climate, and Semantic
Urban Morphology model. It allows performing conceptual “what if” simulations and
reactions to emergencies.
The general idea of CSS for modelling, planning and strategic assessment of urban
areas is to create the first model of the SRC by combining different simulation micro-
models. CSS’s primary purpose is to show the real situation in the city as a set of City
KPIs, check the system stability, and show the possible reactions of the city services to
different disturbances. The CSS development process is divided into the following
steps:

• Analyse every service for its own KPIs.
• Set up the KPIs for the whole SRC (service KPIs can be contradictory).
• Develop the different micromodels.
• Join the data sets of the micromodels and find their interdependencies.
• Develop the first version of the ontology.
• Join the different micromodels in one solution.
• Develop the user interface.

Fig. 3. Architecture of CSS

The CSS results can be visualized in two different ways: the SRC equalizer in Fig. 4
shows the economic, environmental, and social parameters assigned to each smart city
component, while Fig. 5 provides a visual comparison of the results produced by the
different models.
The combination of models enables the SRC simulation of critical “what-if” scenarios
and different city cases:

1. Strategic planning of green areas (modelling of the urban ecosystem together
with its optimized future evolution).
2. Recommendations for the number of apartments/residents in urban areas with
respect to transportation, energy, etc.
3. Recommendations for advanced building operation (for a new building, the category
- school, residential, etc. - can be recommended, with special emphasis on publicly
owned buildings).
4. Change of schedule of public transport (time tables, transport services).
5. Change from fuel to electric buses in public transport (environmental and
economic impact).
6. Change of traffic control strategy (green lines, environmental impact).
7. Optimization of parking slots (parking strategy, number and localization of slots,
operation, payments).
8. Identification of the main problems in urban areas at different time intervals
(holiday, morning).
9. Recommendation for active intervention during rush hours.
10. Recommendations for better reactions to unexpected events (accidents, disasters,
blackouts, crisis management, etc.).

Fig. 4. City equalizer

Fig. 5. The user interface as a combination of models

The strategic target is to build an urban virtual model playing the role of a digital
twin of the real city area, in which economic, environmental, and social parameters
are combined with common synergies among different sectors (transport, environment,
security, etc.) to be optimized. In the future, augmented reality connected with the CSS
described above and the 3D model will allow studying different “what-if” situations in
parallel in all sectors (e.g., transportation, energy, and the environment) in a unified
presentation tool.

Fig. 6. Protocols of the agents’ negotiation

3.2 Smart Grid as Part of CSS


Almost all cities are characterized by a continually growing need for energy resources,
which highlights a number of existing constraints (infrastructure, network, the difficulty
of predicting consumption, etc.). One of the micromodels developed for CSS is the Smart
Grid model, which finds the optimal balance between reasonable consumption and smart,
sufficient production (Fig. 6).

In this system, all suppliers and consumers of resources (gas, electricity, water) are
united in a single information field that allows them to plan and optimize the delivery of
resources at the optimal price in real time. Users and suppliers indicate their parameters
of resource consumption and production. After that, agents representing the network
objects begin negotiations which, through bidding and concessions, are completed by
reaching a consensus that suits all parties. If such a solution is not possible, software
agents make recommendations to consumers and suppliers on how to change the volume
of demand or supply. The recommendations are sent to the users of the system via the
user interface (UI). This communication dialogue continues until a proposal acceptable
to all players is formulated. Working with the system, users can also access various
statistics on consumption volumes, production and prices.
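
The bidding-and-concession dialogue can be caricatured as two price trajectories moving toward each other. The loop below, with fixed concession steps and hard limits, is only an illustration of the idea, not the DRN method actually used in the system.

```python
def negotiate(bid, ask, bid_limit, ask_limit, step=1.0, max_rounds=20):
    """Consumer raises its bid, supplier lowers its ask, until they cross
    or both hit their limits. Returns the consensus price, or None."""
    for _ in range(max_rounds):
        if bid >= ask:                       # consensus reached
            return round((bid + ask) / 2, 2)
        bid = min(bid + step, bid_limit)     # consumer concedes upward
        ask = max(ask - step, ask_limit)     # supplier concedes downward
    return None  # no deal: agents would receive a recommendation instead

print(negotiate(bid=10.0, ask=14.0, bid_limit=12.0, ask_limit=11.0))  # 12.0
print(negotiate(bid=10.0, ask=14.0, bid_limit=10.5, ask_limit=13.5))  # None
```

In the second call, the limits prevent the trajectories from ever crossing, which corresponds to the case where software agents must recommend a change in demand or supply volume.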
The DRN methodology was used to design and develop the Smart Grid multi-agent
model [25]. As stated in Sect. 2, the interactions between agents are negotiations only.
Based on a predefined ontology, the agents solve the task of resource and demand allocation.
Collaboration can be achieved at a higher level between services (different MA systems)
if their resources and demands are represented as entities of a common nature.

4 Conclusion

The presented concept and framework of the Smart Resilient City 5.0 provide a new
vision of the city as a digital platform and ecosystem of smart services. In this ecosystem,
agents of people, things, documents, robots, and other entities can directly negotiate
with each other over demands and resources and provide the best possible solution. A smart
environment of individuals, groups, or other entities is created, making self-organization
possible in a sustainable or, when needed, resilient way. The digital ecosystem of
smart services - an open, distributed system - can autonomously schedule the resources of
large-scale applications and provide services; it shows resilience properties by design.
The presented outlines show the basics of the system and application services of
the framework and demonstrate the concept on real-life case studies. The authors plan
to continue working on the software. The next challenge is the integration of all
services in one dynamic, adaptive, real-time digital ecosystem with resilient properties
and its implementation in the city of Prague.

Acknowledgment. This work was supported by the AI & Reasoning project CZ.02.1.01/0.0/0.0/
15_003/0000466, by the European Regional Development Fund and by the Technology Agency
of the Czech Republic (TACR), National Competence Center of Cybernetics and Artificial
Intelligence, TN01000024.

References
1. Meerow, S., Newell, J.P., Stults, M.: Defining urban resilience: a review. Landsc. Urban Plan.
147, 38–49 (2016)
2. Holling, C.S.: Resilience and stability of ecological systems. Annu. Rev. Ecol. Syst. 4, 1–23
(1973)
3. Chelleri, L.: From the “Resilient City” to urban resilience. a review essay on understanding
and integrating the resilience perspective for urban systems. Documents d’Anàlisi Geogràfica
58, 287–306 (2012)
4. Cutter, S.: Resilience to What? Resilience for Whom? Geogr. J. 182, 110–113 (2016)
5. Mehmood, A.: Of resilient places: planning for urban resilience. Eur. Plan. Stud. 24(2),
407–419 (2016)
6. Linkov, I., Bridges, T., Creutzig, F., Decker, J., Fox-Lent, C., Kröger, W., Lambert, J., Lever-
mann, A., Montreuil, B., Nathwani, J., Nyer, R., Renn, O., Scharte, B., Scheffler, A., Schreurs,
M., Clemen, T.: Changing the resilience paradigm. Nat. Climate Change 4, 407–409 (2014)
7. Welsh, M.: Resilience and responsibility: governing uncertainty in a complex world. Geogr.
J. 180, 15 (2014)
8. Elmqvist, T., Andersson, E., Frantzeskaki, N., McPhearson, T., Gaffney, O., Takeuchi, K.,
Folke, C.: Sustainability and resilience for transformation in the urban century. Nat. Sustain.
2 (2019)
9. Agudelo-Vera, C., Leduc, W.R.W.A., Mels, A.R., Rijnaarts, H.: Harvesting urban resource
towards more resilient cities. Resour. Conserv. Recycl. 64, 3–12 (2012)
10. Batty, M., Axhausen, K., Giannotti, F., Pozdnoukhov, A., Bazzani, A., Wachowicz, M.,
Ouzounis, G., Portugali, Y.: Smart cities of the future. Eur. Phys. J. Spec. Top. 214, 481–518
(2012)
11. Massei, M., Tremori, A.: Simulation of an urban environment by using intelligent agents
within asymmetric scenarios for assessing alternative command and control network-centric
maturity models. J. Defense Model. Simul. Appl. Methodol. Technol. 11, 137–153 (2013)
12. Brudermann, T., Yamagata, Y.: Behavioral aspects for agent-based models of resilient urban
systems. In: Proceedings of the International Conference on Dependable Systems and
Networks, pp. 1–7 (2013)
13. Mustapha, K., Mcheick, H., Mellouli, S.: Smart Cities and Resilience Plans: A Multi-Agent
Based Simulation for Extreme Event Rescuing (2016)
14. Rieger, C., Moore, K.L., Baldwin, T.L.: Resilient control systems: a multi-agent dynamic
systems perspective. In: International Conference on Electro Information Technology, p. 16
(2013)
15. Costin, A., Eastman, C.: Need for interoperability to enable seamless information exchanges
in smart and sustainable urban systems. J. Comput. Civ. Eng. 33, 04019008 (2019)
16. Ganzha, M., Paprzycki, M., Pawłowski, W., Szmeja, P., Wasielewska, K.: Semantic interop-
erability in the internet of things: an overview from the INTER-IoT perspective. J. Netw.
Comput. Appl. 81, 111–124 (2017)
17. Badii, C., Bellini, P., Cenni, D., Martelli, G., Nesi, P., Paolucci, M.: Km4City smart city
API: an integrated support for mobility services. In: IEEE International Conference on Smart
Computing, pp. 1–8 (2016)
18. Trucco, P., Petrenj, B., Bouchon, S., Dimauro, C.: Ontology-based approach to disruption
scenario generation for critical infrastructure systems. Int. J. Crit. Infrastruct. 12, 248 (2016)
19. Uribe-Pérez, N., Pous, C.: A novel communication system approach for a Smart City based
on the human nervous system. Future Gener. Comput. Syst. 76, 314–328 (2017)
20. Cavallaro, M., Asprone, D., Latora, V., Manfredi, G., Nicosia, V.: Assessment of urban
ecosystem resilience through hybrid social–physical complex networks. Comput. Aided Civ.
Infrastruct. Eng. 29, 608–625 (2014)
21. Svítek, M., Skobelev, P., Kozhevnikov, S.: Smart City 5.0 as an urban ecosystem of smart
services, service oriented, holonic and multi-agent manufacturing systems for industry of
the future. In: Borangiu, T., Trentesaux, D., Leitão, P., Giret Boggi, A. (eds.) Studies in
Computational Intelligence, vol. 853, pp. 426–438, Springer, Cham (2020)
22. Kozhevnikov, S., Skobelev, P., Pribyl, O., Svítek, M.: Development of resource-demand net-
works for smart cities 5.0. In: Mařík, V. et al. (eds.) Industrial Applications of Holonic and
Multi-Agent Systems. HoloMAS 2019, Lecture Notes in Computer Science, vol. 11710.
Springer, Cham (2019)
23. Rzevski, G., Skobelev, P.O.: Managing Complexity. WIT Press, Boston (2014)
24. https://golemio.cz/
25. Vittikh, V.A., Skobelev, P.O.: The method of conjugate interactions for resource distribution
control. Avtometriya 45(2), 78–87 (2009)
Ethics and Social Automation
in Industry 4.0
Decision-Making in Future Industrial Systems:
Is Ethics a New Performance Indicator?

Lamia Berrah1(B) and Damien Trentesaux2


1 LISTIC - Université Savoie Mont Blanc, Annecy, France
lamia.berrah@univ-smb.fr
2 LAMIH UMR CNRS 8201, Université Polytechnique Hauts-de-France, Valenciennes, France

damien.trentesaux@uphf.fr

Abstract. This study deals with ethical aspects of decision-making in the con-
text of future industrial systems such as depicted by the Industry 4.0 principles.
These systems involve a great number of interacting elements, with more or less
autonomy. In this sense, ethics may become an important means to ensure a long-
term viable joint integration of humans and artificial elements in future industrial
systems merged into the society. Two complementary views can be thus identified
to integrate ethics in such future industrial systems. The first view conventionally
defines ethics as a non-negotiable static set of conditions and rules to be met by
the considered systems throughout their lifecycle. The second view assumes that
ethics can be seen as a performance factor to which a KPI (Key Performance
Indicator) is associated and which can, therefore, be more or less directly mea-
sured and lead to improvement through time. Starting from an overall definition
of the concept of ethics, its conventional vision and its specifications regarding
future industrial systems, these two views are presented and discussed, leading to
the establishment of some properties for the definition of a generic framework
to handle ethics throughout decision-making processes. Concluding remarks and
prospects are finally presented.

Keywords: Ethics · Industrial systems · Key Performance Indicators (KPIs) ·


Decision-making · Industry 4.0

1 Introduction
The world is constantly changing, and the rate at which it changes is accelerating
with the increasing pace of technological breakthroughs, especially in the digital and
computational worlds. In the industrial sector, programs such as Industry 4.0 [1] are
looking for the right approach to integrate digital technologies in the industry. The
maturity of the systems regarding the use of these technologies is being assessed, leading
to the definition of readiness levels [2], indexes or roadmaps [3, 4] constituting thus points
of reference for digitalisation improvement.
Therefore, while a part of the handled information is well controlled with regard to the
systems’ transformation, another part involves risks and uncertainties. Thus, a set of
emerging expectations established by society, politicians and regulators is being imposed on

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021


T. Borangiu et al. (Eds.): SOHOMA 2020, SCI 952, pp. 231–245, 2021.
https://doi.org/10.1007/978-3-030-69373-2_16
industrialists and researchers when designing and controlling systems in order to cope
with this point. In that context, the risk is that ignoring these major societal expectations
that are rapidly emerging will lead to unsustainable and sterile contributions given, for
example, the usual inertia of the research world to make research topics evolve. From our
point of view, these expectations, relevant to the federative concept of sustainable devel-
opment, concern mainly: 1) the consideration of the environment and the limited amount
of hardly-renewable resources from our planet, 2) the insurance that every technological
development is useful and suitable to the human society.
This paper concerns the second point. It focuses, more specifically, on the notion
of ethics and its study in the context of future industrial systems, as fostered by the
concept of Industry 4.0, with a focus on ethical aspects that are relevant to decision-
making. Addressing ethics in future industrial systems is an urgent need. Indeed, the
rapid evolution of digital technologies in Industry 4.0, fostering the multiplication of
sensors and actuators (e.g. Internet of Things - IoT, mobile robotics) and decision and
learning abilities (e.g. Artificial Intelligence - AI) of digital or Cyber-Physical Systems
causes the emergence of a great number of new functionalities and potentialities,
along with high ethical stakes for the human decision-makers involved
in industrial systems.
Three factors put ethics at high stakes with regard to industrial decision-making, as
depicted in Fig. 1.

Fig. 1. Decision-making in future industrial systems: ethical risks

The first factor concerns the fact that the digitalised entities are intended to facilitate
the augmentation, the monitoring or even the replacement of humans, allowing new
possibilities in production control as well as new investigations regarding data analysis
to be enacted. Industrial practices already show some unforeseeable and questionable
situations due to this advent. The second and the third factors come from the fact that
two types of complexity have to be handled: an internal complexity (second factor) due
to the fact that these entities become more and more complex to understand and control,
and an external complexity (third factor) due to the fact that it will become more and
more complex to understand and control the interrelations among them and the human
society, their consequence and their possible diversion in an unpredictable environment.
Consequently, it is getting more and more important to focus on the ethical behaviour
of all the stakeholders involved in future industrial systems, with regards to the new
developments that have been defined or that will be defined in the future; in particular,
those to deal with digital technologies. Ethics is relatively well studied and deployed in
a deterministic universe with long-term, small progressive changes. Meanwhile, in
future industrial systems, characterised as introduced above by the rapid evolution of digital
technologies and the increasing internal and external complexities interlaced with the
human world, operating and engineering ethics remains a great challenge.
Adopting an information-processing point of view, ethics can be considered as an
evolving notion that implies multiple criteria and concerns different facets when making
decisions. Moreover, ethics can be progressively enriched in its deployment and
improved in its achievement, which is the purpose of KPIs (Key Performance Indicators)
[5]. This paper thus raises the question of whether or not to handle ethics as a KPI when
deciding. To illustrate the complexity of the question raised, two industrialists were
asked the same questions: “how does your company manage ethics?” and “what are
the relevant stakes?” Their testimonies are provided in Fig. 2 (their names have been
changed to preserve their anonymity).
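
To make the second option tangible: if ethics were handled as a KPI, one naive reading is a weighted aggregation of elementary ethical criteria into a single normalized score. The criteria, weights and [0, 1] scales below are purely illustrative assumptions, not a proposal from the paper.

```python
def ethics_kpi(scores, weights):
    """Weighted mean of elementary ethical criteria; scores lie in [0, 1]."""
    total = sum(weights.values())
    return round(sum(scores[c] * weights[c] for c in scores) / total, 3)

# Hypothetical assessment of one decision-making process.
scores = {"privacy": 0.9, "safety": 0.8, "fairness": 0.6}
weights = {"privacy": 2.0, "safety": 3.0, "fairness": 1.0}
print(ethics_kpi(scores, weights))  # 0.8
```

Such a score would allow tracking improvement over time, but it also illustrates the difficulty discussed in this paper: the choice of criteria and weights is itself an ethical decision.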
This paper suggests the establishment of some properties to define a generic frame-
work to handle ethics when making decisions in future industrial systems (FIS), and
especially in their control. Section 2 briefly presents some elements of the concept of
ethics, from a general point of view on the one hand, and with regard to FIS on the other.
Then, Sect. 3 discusses the two possible answers to the question raised and, based on
this analysis, proposes a preliminary architecture of a generic framework.

2 Operating Ethics in Future Industrial Systems


Initially, ethics concerned human achievement and was introduced as the set of moral
values associated with it. Its study falls within the field of philosophy and seeks to
dissociate “what is good” from “what is bad” [6]. Legal frameworks have since been
defined, associating with it a set of deontological rules to apply and conditions to respect.
Ethics has thus always been an integral part of any approach involving human interest,
its framework being constantly adapted and enriched in response to events related to
such an interest. While the concept of ethics was initially concerned with human
behaviour, its field of analysis has recently been broadened to take into account the
behaviour of highly autonomous systems in their operation and decision-making,
typically machines, robots and cars [7, 8]. From our perspective, ethical industrial
systems are industrial systems that are ethically designed on the one hand and ethically
used and supported on the other [9]. Moreover, ethics in industrial systems can be
considered as a notion that covers all the steps of the lifecycle of an industrial system
and concerns all the stakeholder decision-makers involved throughout this cycle, which
can therefore be summarised in three significant steps: design (a priori view, before
operations), use (during operations) and support (a posteriori view).
The three factors described in the introduction lead to various situations putting ethics
at risk for operators, managers, researchers and designers in future industrial systems
234 L. Berrah and D. Trentesaux

Fig. 2. Industrial testimonies: pros and cons about considering ethics as a KPI or not

[9]. A significant question is thus: how should ethics be handled when making decisions
with regard to future industrial systems, and especially their control? From our
perspective, two approaches can be adopted to answer this question. The first states that
ethics is a new criterion in decision-making, which implies considering ethics as a kind
of KPI; the second states that ethics cannot be just another decision criterion and is more
global. The following section studies these two approaches. Even if the question and the
approach are felt to be generic, the primary application field of this study concerns
future industrial systems.

3 Is Ethics Just Another KPI?


3.1 No, Ethics Cannot Be Just Another KPI
Under the assumption of a system operating in normal, routine conditions, leading to
well-known behaviours and consequences, one point of view consists of building ethics
on rules and norms to be applied. These rules and norms are applied prior to, or in place
of, any decision; in other words, any decision made is ethical as long as it complies with
them. This approach reassures the humans involved in the decision-making process
since it guarantees that each decision
Decision-Making in Future Industrial Systems 235

is made within a static, stable and well-established ethical context. Moreover, it offers
legality and liability boundaries, limiting or clearly explicating the legal responsibilities
in the case of accidents or injuries.
Dealing with artificial systems such as enterprises, organisations and productive
systems, several approaches close to ethics have already been proposed from this point
of view. Corporate Social Responsibility (CSR) [10], mentioned in the testimony of one
of the industrialists in Fig. 2, is one of them [11]. Standards have also been proposed,
leading to the assignment of definitions and conditions to fulfil [12]. Ethics has also
been approached through the quality facet, as something to be managed and measured,
identifying it as a kind of compliance with what needs to be [13]. In addition, ethics has
been deployed according to the major pillars of society, in coherence with the
sustainability paradigm [14, 15]. In this sense, environmental ethics has been introduced
for the relationship of human beings to nature [16] and, as far as we are concerned,
digital ethics currently deals with the use of digital technologies [17]. However, even if
it is the core activity of many researchers and practitioners, to the best of our knowledge
no specific habits have been established concerning ethics in future industrial systems,
even if attention is starting to be drawn in this direction by considering potential
symbioses between humans and machines [18]. As a consequence of the situation
described in the introduction, the issue of ethics in future industrial systems deserves to
be addressed.
Ethics can be handled in this way when the opposite of the factors presented in Fig. 1
holds, namely:

• Clear, definite rules are available;
• Scopes are known with certainty;
• Decisions are made independently from the “as-is” situation;
• Ethics is optimised “by design” in terms of liability.

Consequently, this view corresponds to the deontological paradigm, “where one
decides with the help of immutable ethical rules” [19]. Ethics therefore has to be
translated into a set of conditions, rules and parameters to check and apply. The only
measurements required concern checking compliance with the predefined set of rules
and norms. From this point of view, ethics cannot be considered as a KPI.

3.2 Yes, Ethics Is Just Another KPI


A second point of view states that, in the digitalised 4.0 context, considering the ethics
of industrial system design, use and support leads, in some instances, to dealing with
uncertainty regarding expected states and decisions made. This uncertainty may arise
during non-routine operations of the system, when exceptional or unplanned events
occur, or events whose characteristics are not fully known. Such cases may concern
situations where the control is exercised by human decision-makers as well as situations
where the system controls itself. Human decision-makers can thus be faced with too
many options, each with advantages and disadvantages yet to be evaluated in terms of
societal or environmental impacts through PIs (Performance Indicators). Consequently,
in this context, making the correct “ethical” decision is not trivial. In the

case of self-operating systems, the reaction to a new, unconsidered event is totally
unknown. In the first case, choosing the correct a posteriori approach and making the
“good” ethical decision will be challenging; in the second case, having the correct
a priori design will be decisive.
Aligned with this view, some arguments militate for considering ethics as a KPI, as
emphasised by one of the industrialists in Fig. 2. As a preliminary to this discussion, let
us recall the general definition of a PI as “a variable indicating the effectiveness and/or
efficiency of a part or whole of the process or system against a given norm/target or
plan” [20]. By definition, a PI - and a KPI when it is overall or major - provides a
performance expression, subscribes to the control loop principle and follows the “what
you measure is what you get” principle [21]. A PI thus involves an objective (expected
state) and a measurement (reached state); the performance expression is the result of
comparing the objective and the measurement. In a reactive control logic, such a
measurement leads to launching improvement actions. Figure 3 illustrates the principle
of considering a PI as a triplet (objective, measurement, action) [22].

Fig. 3. Performance indicator elements
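The triplet of Fig. 3 can be sketched in code. The following Python fragment is only a minimal illustration under our own assumptions, not part of the paper: the class name, the normalised-ratio performance expression and the threshold logic are illustrative choices, and the ethics-related KPI shown is a hypothetical example inspired by Table 1.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class PerformanceIndicator:
    """A PI as the triplet (objective, measurement, action) of Fig. 3."""
    name: str
    objective: float                  # expected state
    measure: Callable[[], float]      # returns the reached state
    action: Callable[[float], None]   # improvement action, given the gap

    def performance_expression(self) -> float:
        # Performance expression = comparison of objective and measurement;
        # a simple normalised ratio is one illustrative choice.
        return self.measure() / self.objective

    def control_loop_step(self) -> None:
        # Reactive control logic: a poor expression launches an action.
        reached = self.measure()
        if self.performance_expression() < 1.0:
            self.action(self.objective - reached)

# Hypothetical ethics-related KPI inspired by Table 1
operator_confidence = PerformanceIndicator(
    name="Security/Confidence of the operator",
    objective=0.9,                    # expected survey score
    measure=lambda: 0.6,              # reached survey score
    action=lambda gap: print(f"launch improvement action, gap = {gap:.2f}"),
)
operator_confidence.control_loop_step()  # prints: launch improvement action, gap = 0.30
```

In this reading, the performance expression quantifies how far the reached state is from the expected one, and the action closes the control loop.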

With this view, handling ethics through the PI vision drives the definition of the
objectives, measurements and actions associated with it. These three definitions are
described hereinafter.
Ethics Objectives: Is it possible to associate with ethics expected states to be achieved?
Ethics objectives subscribe to the general concept of performance as “the capability
to go where we want” [23]. Assigning ethics objectives is thus coherent with the idea of
achieving them: ethics is something that can be attained and acted upon.
Moreover, “the use of the term performance itself can come to mean ‘positive
progress’ in itself, without any qualifying adjective applied to the term. The meanings
of performance where performance is used to denote an ‘exploit’ or an ‘achievement’
are analogous to this” [24]. Ethics objectives also convey this idea of progress, i.e. the
objectives are part of a desire for a state better than the previous ones, with a notion of
maximum that makes little sense. As ethics is something that can be improved,
objectives can be associated with it.

In this sense, ethics objectives obey the respective conditions of effectiveness, efficiency
and effectivity [22], since they are achieved by seeking the best possible result
(effectiveness) with the best possible use of resources (efficiency). As for the effectivity
of ethics objectives, it is a matter of common sense, since effectivity (or relevance) by
definition concerns, in its broadest sense, the value of assigning objectives to the means
and actions implemented to achieve them as well as to the expected results. In essence,
ethics is part of this logic and even goes beyond it.
Furthermore, associating objectives with ethics means dealing with the SMART
principle [25]. It thus remains to discuss the variables and the values to achieve. The
variables concerned are those of the industrial system in the considered lifecycle step,
as ethics issues concern all or part of the system. Values and temporal horizons are then
assigned according to the corollary actions and the different semantics, such as
improvement, lack, emergency and risk, conveyed by the considered situation.
Finally, ethics objectives are assigned in the same way as the other performance criteria
of industrial systems; the only difference lies in the purpose of the objectives, which is
the ethics of industrial systems. However, ethics KPIs will have strong interactions with
the other KPIs of industrial systems, as discussed later in this section.

Ethics Measurements: Measurements follow the way the objectives have been
assigned, under the property that each objective is measurable, either quantitatively or
qualitatively.
Note that situations may arise in which a direct measurement cannot easily be
obtained. Approaches based on indirect measurements could then be used, relying on
aggregation mechanisms [26] that involve criteria interrelated with the given situation.
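As a hedged illustration of such an indirect measurement, the sketch below aggregates elementary measurements with a simple weighted mean; the criteria names, weights and values are purely illustrative assumptions, and [26] discusses richer aggregation operators.

```python
def aggregate(measurements, weights):
    """Weighted-mean aggregation of elementary measurements in [0, 1]."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(weights[c] * v for c, v in measurements.items())

# Illustrative indirect ethics measurement built from interrelated criteria
ethics_measure = aggregate(
    {"environmental impact": 0.7, "operator well-being": 0.5,
     "data-use compliance": 0.9},
    {"environmental impact": 0.3, "operator well-being": 0.4,
     "data-use compliance": 0.3},
)
# 0.3 * 0.7 + 0.4 * 0.5 + 0.3 * 0.9 = 0.68
```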

Ethics Actions: As seen for the objectives, improvement is always possible regarding
ethics enactment.
Within this logic, no immediate optimum in ethics can be defined; the idea of a
perfect ethical state is a goal constantly sought, leading to more than one possible action
to launch. Actions are associated with the assigned objectives, constituting an overall
action plan and satisfying the condition of bijection between objectives and actions,
according to the PI triplet vision depicted in Fig. 3. Obviously, the definition of an
action should handle the semantics of the corresponding objective. A typology of
actions (e.g. curative, preventive) will then be addressed.
As an illustration, Table 1 gives some cases that the authors have discussed with two
industrialists regarding PIs and ethics (the bearing manufacturer introduced in Fig. 2
and a kitchen and bathroom manufacturer). From these discussions, it is clear that both
are currently dealing with the digitalisation of their production and are encountering
situations for which they can still decide according to conventional industrial control
logic, but cannot decide from the ethics point of view.
More specifically, ethics becomes something to handle in a progressive way, by
making assumptions, analysing results and then concluding. Assigning objectives, i.e.
expected states, launching actions, obtaining measurements of the achieved results

Table 1. PIs and ethics: case studies in the digitalisation of industrial systems

Kitchens and bathrooms manufacturer

• Objective: reduction of unproductive times
  Conventional industrial control: development of a MES (Manufacturing Execution
  System) to control the OEE (Overall Equipment Effectiveness) in real time
  Unknown ethics decision-making and relevant PIs: well-being of an operator who
  knows he is being watched during labour time
  PI: Security/Confidence of the operator

• Objective: increase of customer satisfaction
  Conventional industrial control: process re-engineering and material/component
  change to develop new and smart products
  Unknown ethics decision-making and relevant PIs: the increased pollution degree
  within production systems is not taken into account and the origin of the material is
  not attested
  PIs: Environmental impact; Material origin

Bearing manufacturer

• Objective: increase of reactivity
  Conventional industrial control: development of a digital (AI-based) tactical
  decision-making system
  Unknown ethics decision-making and relevant PIs: the middle managers (engineers)
  feel as if they are “losing their job”; they also consider that unusual situations are
  not always correctly handled
  PIs: Ratio of compliance (effected missions/planned missions); Number of
  unsatisfactory handlings

• Objective: increase of the accuracy and speed of learning and operating processes
  Conventional industrial control: use of augmented reality
  Unknown ethics decision-making and relevant PIs: operators feel lonely, with few
  exchanges of points of view and discussions; there is a risk of loss of collective
  knowledge and collaborative work
  PIs: Social behaviour; Part of collective knowledge and individual knowledge

and then reacting will thus be the way of proceeding. In summary, ethics is associated
with KPIs under the following considerations:

• No clear, unique rule can be retained, and uncertainty in scope is observed.
• Measuring the reached situation is a necessary preliminary to deciding.
• Ethics is something that can be continuously measured and improved.

Lastly, considering ethics as a KPI amounts to adopting the consequentialist paradigm
of ethics, where, as manifested in utilitarianism, “one decides according to the possible
ethical consequences” [19]. Naturally, this vision is complementary to the first one,
each applying under specific conditions, as synthesised in the next section.

3.3 Synthesis
As a synthesis, from our perspective, ethics is sometimes a KPI and sometimes not.
The two complementary positions held by the two industrialists illustrate this duality of
the concept. Ethics in future industrial systems must therefore be approached using
both paradigms (consequentialism and deontology): it is not possible to adopt one of
them while ignoring the other.
In this sense, the novelty is that deontology, the classical approach in human society,
is no longer sufficient, because of the increasing unpredictability and complexity of the
interaction between the digital (cyber) world and the human one; this leaves room for
other paradigms, such as consequentialism. This is not neutral.
For example, the question arises for the autonomous car: are we sure, for every possible
situation met in an open environment, that deontological rules (e.g. the highway code)
will always lead the autonomous car to take the single optimal ethical decision or, at
least, the decision every human would have taken in that situation [8]? Because of the
factors specified in the introduction, ensuring that the answer is “yes” is highly difficult.
Conversely, consequentialism assumes that it is possible to quantify ethics, a point
debated for centuries by philosophers and others, and this is one of the main issues to
solve: how can one evaluate that a situation, a decision or an action is more ethical than
another? From our perspective, an accurate articulation of the two paradigms could be
an interesting approach.
As an illustration of this novelty in ethics handling, Table 2 contains several examples
in three application fields, including the one considered in this paper, indicating the
different points of view one can adopt in a given unconsidered situation in order to
behave ethically. It is worth noting that the healthcare context relates to the situation
encountered with the emergence of the Covid-19 pandemic.
In the case of a consequentialist approach, associating ethics with a KPI induces its
deployment across all the decisional levels of the system under consideration. This
deployment has to be carried out alongside the other KPIs considered, as practised in
conventional 3.0 PMSs (Performance Measurement Systems) [27]. However, in view
of the nature of the ethics criterion, it necessarily interacts with the other criteria usually
considered.
The entire decision-making process will be impacted by the deontological aspect of
ethics. Namely, it is not so much a matter of producing in accordance with a C-Q-D-E
(Cost-Quality-Delivery-Ethics) tetraptych as of integrating ethics into each considered
criterion. However, the deontological aspects involved may be based on more than one
single rule, leading to diversified strategies. The definition of the PMS will thus require
preliminary discussions of weights, interactions and preference policies, namely what
is compensatory and to what extent, what is a veto, etc.

Table 2. Deontology and Consequentialism: a comparative analysis of conflicting
situations

Healthcare - Context: an unknown coronavirus is spreading across the world

• Adopting the ethical paradigm
  Deontological view (ethics is not a KPI): one must apply full experimental protocols
  to test different solutions and treatments; one must not prescribe a medical treatment
  that has not been fully validated and tested by the scientific community
  Consequentialist view (ethics is a KPI): no medicine exists so far for this disease; if
  one evaluates all the alternatives, it seems that applying classical protocols will take
  time we do not have - everything must be tested, we are not in “normal situations”.
  If a medical treatment known for years for other diseases seems to work, while its
  side effects are known, it must be administered to the patients who agree

• Criticism of the other paradigm
  Deontological view: otherwise, patients are put at risk since the consequences of
  medication are not controlled; people will die because of this lack of knowledge!
  Blindly administering different medical treatments whose benefit/risk ratio is not
  clearly stated, just to see if they work, is criminal!
  Consequentialist view: getting the results of the experiments takes too long, people
  are dying! Waiting for the completion of rigorous experiments, as fostered by
  deontology, is criminal!

IoT technology - Context: a new IoT-based sensing technology of human physiological
factors (e.g. temperature, stress) is available

• Adopting the ethical paradigm
  Deontological view: the GDPR (General Data Protection Regulation) forbids the use
  of personal data to evaluate the working performance of a class of operators
  Consequentialist view: this technology enables the deployment of human-aware
  monitoring systems; if the operator is stressed, has an accident or gets sick, the
  system will anticipate the issue and enable a rapid response

• Criticism of the other paradigm
  Deontological view: this technology is too dangerous, with too many possible
  diversions; a consequentialist approach may lead to unethical decisions, e.g. the
  deployment of solutions where the operator is constantly monitored and his data
  stored, enabling managers to favour or disfavour one gender, for example, based on
  physiological factors
  Consequentialist view: if a technology could help save lives, it would be criminal not
  to use it because deontology asks questions and forbids any progressive view! Norms
  must evolve

Production - Context: a new lean approach, aiming to reduce cycle times, is deployed
in the company

• Adopting the ethical paradigm
  Deontological view: deontology is a key element of every application of the
  principles of lean management; it ensures that everything is done to preserve
  operators, and an improvement must not be made if it leads to firing workers
  Consequentialist view: it is important to balance short-term and long-term decisions;
  some short-term decisions may not be efficient over a longer time range, and it may
  be better for a company to fire a few people than to file for bankruptcy

• Criticism of the other paradigm
  Deontological view: applying a consequentialist paradigm to a lean process could
  help managers make money by firing operators
  Consequentialist view: it is not always possible to find an ethical rule or norm to
  apply at specific moments; sometimes no rules apply, or applying them leads to a
  worse situation

3.4 Proposal of a Generic Framework

As suggested, both the deontological and the consequentialist paradigms are required
for handling the ethics of decisions in future industrial systems, whatever the considered
lifecycle step - design, use or support - (even if the focus here has been on the support
step). From the synthesis of the previous section, it is obvious that the deontological
paradigm, even if relevant for a part, cannot totally handle the overall ethics aspects of
these systems. Indeed, general rules and principles can be applied in a normal operation
context and deal with structural needs, but cannot cover new, unconsidered situations
and all the consequences of the decisions made.
The proposal is therefore to approach ethics in a complementary manner. As the set
of possibilities at each step of the industrial system is unbounded, and since ethics
becomes not only a result or a condition to check but a process to build,

the idea is to subscribe to a continuous and progressive improvement philosophy in order
to define new adequate behaviours and to enrich the existing framework, adopting the
idea that it is not possible to ensure “optimal” ethics from scratch, but that ethics
considerations can be enhanced through iterative improvement processes. Following the
Deming wheel principle [28], such a philosophy consists of continuously proceeding
through the following steps: i) observing the system according to ethics expectations;
ii) planning the expected states of the system regarding the events that occurred;
iii) choosing the corollary actions and applying them; iv) checking the achieved results;
and finally v) reacting by enriching the ethical aspects and planning new expected states.
This is the methodological approach conveyed by PI tools. Assuming that ethics can be
seen as a KPI at some moments and not at others, it consequently deals either with a
continuous improvement approach or with a set of predefined rules. As discussed
before, the choice of the line to adopt depends on the nature of the event that occurred.
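The five steps above can be sketched as an iterative loop. The Python fragment below is only a schematic rendering under our own assumptions; the observation, planning, action-selection, application and checking functions are placeholders for the domain-specific logic.

```python
def ethics_improvement_cycle(observe, plan, choose_actions, apply, check,
                             max_iterations=10):
    """Iterate the Deming-like cycle i)-v) until the expected (ethical)
    state is reached or the iteration budget is exhausted."""
    for _ in range(max_iterations):
        observed = observe()              # i) observe w.r.t. ethics expectations
        expected = plan(observed)         # ii) plan expected states
        for action in choose_actions(observed, expected):
            apply(action)                 # iii) choose and apply corollary actions
        if check(observe(), expected):    # iv) check the achieved results
            return True                   # objectives reached: derive new rules
        # v) react: the next iteration enriches ethical aspects and re-plans
    return False
```

In a real setting, `check` would compare ethics KPI expressions with their objectives, and a successful cycle would feed the deontological rule base, as described below.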
From a methodological point of view, the use of KPIs to deal with ethics is envisaged
when a non-routine event occurs, leading to an unconsidered situation regarding the
concerned part of the system. The approach adopted is first to check the absence or
inadequacy of the existing rules. If this is the case, the corresponding KPI has to be
constructed according to the triplet introduced in Fig. 3. This implies the verification of
the following properties:

• An objective and a measurement can be associated with the variables of the system.
• An action is possible to improve its performance expression.
• Feedback on the reached results is possible, in terms of the establishment of new rules.

Ethics objectives will be considered until the situation is well controlled, i.e. the KPIs
return correct performance expressions. Rules and conditions will then translate the
obtained results: “conjunctural” ethics aspects, which have appeared temporarily, will
thus be replaced by structural ethics aspects, which will intrinsically take part in the
deontological ethics procedures, and achieved objectives will be continuously replaced
by new ones. Some avenues for approaching the ethics of future production systems are
given in the form of a generic framework, whose global architecture is described in Fig. 4.
A known situation means that a deontological rule exists for that situation. In that case,
the ethical decision made is based on rules extracted from the available database,
corresponding to the adequate handling of the known situation that occurred. Expert
systems and formal logic modelling approaches could be used in that context.
If the situation faced is unknown and puts ethics at risk (e.g. a threat, the breakdown
of a critical system, a cyber-attack, a major evolution in the environment, the application
of a new technology), meaning that no deontological rules apply, then a consequentialist
behaviour is triggered. The concerned variables are selected, and objectives as well as
actions are associated with them according to, respectively, the event that occurred, the
“as-is” situation and the expected one, as well as previously encountered similar
situations (when available).
The reached measurements will allow either the achievement of the expected state
or the redefinition of new objectives, in a continuous improvement logic. Indeed, as the

Fig. 4. A generic framework for ethical decision making in future industrial systems.

situation could be totally unknown, reaching the expected states could require several
iterative steps. In the end, the best ethical decisions to be made are defined according
to the analysis of the data provided by the ethics KPIs. These KPIs could benefit from
being associated with a digital twin of the industrial system, which can simulate and
evaluate different strategies from its current state. Some parts of the framework can be
automated, while others cannot (e.g. the design of alternative decisions in a
consequentialist behaviour); the presence of the human therefore remains compulsory.
It is also possible to systematically trigger the consequentialist behaviour, even if
deontological rules apply, to suggest improvements in both the triggering situation and
the ethical decision made.
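To make the routing between the two paths concrete, the following sketch separates them in code; it is our own illustration, and the rule base, the situation labels and the KPI-based scoring function are hypothetical stand-ins (the scores could, for instance, come from digital-twin simulations).

```python
RULES_DB = {  # deontological and legal rules database (toy content)
    "machine overheat": "shut down and notify maintenance",
}

def ethical_decision(situation, alternatives, ethics_kpi_score):
    """Known situation -> deontological path (rule lookup, expert-system style).
    Unknown situation -> consequentialist path: evaluate alternatives through
    ethics-related KPIs and retain the most ethical one."""
    rule = RULES_DB.get(situation)
    if rule is not None:
        return rule                  # known situation: apply the rule
    best = max(alternatives, key=ethics_kpi_score)
    RULES_DB[situation] = best       # a posteriori: enrich the rule set
    return best

scores = {"isolate the production cell": 0.9, "continue production": 0.2}
decision = ethical_decision("cyber-attack", list(scores), scores.get)
# -> "isolate the production cell", now stored as a rule for next time
```

Storing the chosen alternative back into the rule base mirrors the continuous improvement loop: a conjunctural, consequentialist decision becomes a structural, deontological rule.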
This framework is clearly a first attempt; it remains to be improved, implemented and
tested in various situations. For example, it could be interesting to augment the
application of a deontological rule with a consequentialist study when a certain degree
of freedom remains available for decisions after the rule has been applied. Another
situation worth studying is when it is not possible to evaluate all the consequences of a
decision from an ethical perspective; in that case, clustering techniques could be used
to find similar situations and evaluate the ethical degree of the decisions made then.

4 Conclusion and Prospects


The need to handle ethics in the context of future industrial systems may seem obvious
in view of the potential of new digital technologies on the one hand and the acceleration
of their use on the other. Even so, the complexity of such handling is also acknowledged,
leading to numerous issues in unknown situations related to the risks and uncertainties
of some of the decisions made.
This paper addressed the issue of whether or not to handle ethics as a KPI, subscribing
to a continuous improvement philosophy and dealing with the consequentialist paradigm
as a complement to the conventional deontological paradigm. The deontological view is
adapted to known situations, for which conventional rules-and-norms approaches can be
used. The consequentialist view relates to unknown situations where many solutions can
be considered; in this case, the idea is to associate ethics with objectives to achieve and
with measurements that provide information about the different options, enabling
reaction. In the first case, deontology bounds decisions as a set of constraints, while in
the second case ethics is a criterion to be integrated into the decision-making process.
This preliminary work on the subject and the definition of a unified framework will
be followed by a more in-depth definition of the ethics KPIs, their deployment and
their interactions with the other industrial KPIs, according to the different steps of the
future industrial system lifecycle. Integrating ethics as a KPI will certainly lead to a new
definition of PMSs. It may also give a broader sense to the effectivity condition of
industrial performance. Moreover, since it has been designed in a generic way, the
proposed framework could be extended to other areas such as health, transport or
logistics.

Acknowledgement. Parts of the research work presented in this paper are carried out in the context
of Surferlab, a joint research lab with Bombardier and Prosyst, partially funded by the European
Regional Development Fund (ERDF), Hauts-de-France. Other parts of the work presented in this
paper are performed in the framework of the HUMANISM ANR-17-CE10–0009 research project.

References
1. Kagermann, H., Wahlster, W., Helbig, J.: Recommendations for Implementing the Strategic
Initiative INDUSTRIE 4.0: Securing the future of German Manufacturing Industry. National
academy of science and engineering, Wirtschaft und Wissenschaft begleiten die Hightech-
Strategie. Final report of the Industrie 4.0 Working Group (2013)
2. Akdil, K.Y., Ustundag, A., Cevikcan, E.: Maturity and readiness model for industry 4.0 strat-
egy. In: Ustundag, A., Cevikcan, E. (eds.) Industry 4.0: Managing the Digital Transformation,
pp. 61–94. Springer International Publishing, Cham (2018)
3. Issa, A., Hatiboglu, B., Bildstein, A., Bauernhansl, T.: Industrie 4.0 roadmap: framework for
digital transformation based on the concepts of capability maturity and alignment. Procedia
CIRP. 72, 973–978 (2018)
4. Schuh, G., Gartzen, T., Rodenhauser, T., Marks A.: Promoting work-based Learning through
INDUSTRY 4.0. Procedia CIRP 32, 82–87 (2015)
5. ISO.: ISO 22400. Automation systems and integration - Key performance indicators (KPIs)
for manufacturing operations management (2015). https://www.iso.org/obp/ui/#iso:std:iso:
22400:-2:ed-1:v1:en

6. Morahan, M.: Ethics in management. IEEE Eng. Manage. Rev. 43(4), 23–25 (2015)
7. Nath, R., Sahu, V.: The problem of machine ethics in artificial intelligence. AI Soc. 35(1),
103–111 (2020)
8. Trentesaux, D., Rault, R., Caillaud, E., Huftier, A.: Ethics of autonomous intelligent systems
in the human society: cross views from science, law and science-fiction. In: Borangiu, T.,
Trentesaux, D., Leitao, P., Cardin, O., Lamouri, S. (eds.) Proceedings of the 10th SOHOMA
Workshop on Service Oriented, Holonic and Multi-Agent Manufacturing Systems for Industry
of the Future, Studies in Computational Intelligence. Springer, Paris (2020)
9. Trentesaux, D., Caillaud, E.: Ethical stakes of Industry 4.0. In: IFAC World Congress (2020)
10. Philip, R.: Corporate social reporting. Hum. Resour. Plan. 26(3), 10–13 (2003)
11. Goel, M., Ramanathan, P.E.: Business ethics and corporate social responsibility - is there a
dividing line? Procedia Econ. Finance 11, 49–59 (2014)
12. ISO.: ISO 26000 and the International Integrated Reporting <IR> Framework briefing
summary (2015). https://www.iso.org/files/live/sites/isoorg/files/store/en/PUB100402.pdf
13. Vinten, G.: Putting ethics into quality. The TQM Magazine 10(2), 89–94 (1998)
14. World commission on environment and development: Our common future. Oxford University
Press 13(4) (1987)
15. Purvis, B., Mao, Y., Robinson, D.: Three pillars of sustainability: in search of conceptual
origins. Sustain. Sci. 14(3), 681–695 (2018)
16. Palmer, C.: An overview of environmental ethics. In: Rolston, H., Light, A. (eds.) Environ-
mental Ethics: An anthology, pp. 15–37. Blackwell, Oxford, UK (2003)
17. Maggiolini, P.: A deep study on the concept of digital ethics. Revista de Administração de
Empresas 54(5), 585–591 (2014)
18. Longo, F., Padovano, A., Umbrello, S.: Value-oriented and ethical technology engineering in
industry 5.0: a human-centric perspective for the design of the factory of the future. Appl.
Sci. 10(12), 4182 (2020)
19. Bergmann, L.T., Schlicht, L., Meixner, C., König, P., Pipa, G., Boshammer, S., Stephan,
A.: Autonomous vehicles require socio-political acceptance: an empirical and philosophical
perspective on the problem of moral decision making. Front. Behav. Neurosci. 12 (2018)
20. Fortuin, L.: Performance indicators: Why, where and how? Eur. J. Oper. Res. 34(1), 1–9
(1988)
21. Kaplan, R., Norton, D.: The Balanced Scorecard: Measures that Drive Performance. Harvard
Bus. Rev. 83, 172 (1992)
22. Berrah, L., Clivillé, V., Foulloy, L.: Industrial Objectives and Industrial Performance. ISTE
Wiley, Hoboken (2018)
23. Lebas, M.: Performance measurement and performance management. Int. J. Prod. Econ.
41(1–3), 23–35 (1995)
24. Folan, P., Browne, J., Jagdev, H.: Performance: Its meaning and content for today’s business
research. Comput. Ind. 58(7), 605–620 (2007)
25. Doran, G.T.: There’s a SMART way to write management’s goals and objectives. Manag.
Rev. 70(11), 35–36 (1981)
26. Berrah, L., Mauris, G., Vernadat, F.: Information aggregation in industrial performance
measurement: rationales, issues and definitions. Int. J. Prod. Res. 42(20), 4271–4293 (2004)
27. Nudurupati, S.S., Bititci, U.S., Kumar, V., Chan, F.T.S.: State of the art literature review on
performance measurement. Comput. Ind. Eng. 60(2), 279–290 (2011)
28. Deming, W.E.: Out of the Crisis. MIT Press, Cambridge (1986)
Ethics of Autonomous Intelligent Systems
in the Human Society: Cross Views
from Science, Law and Science-Fiction

Damien Trentesaux1(B), Raphaël Rault2, Emmanuel Caillaud3, and Arnaud Huftier4


1 LAMIH UMR CNRS 8201, Université Polytechnique Hauts-de-France, Le Mont Houy,
59313 Valenciennes Cedex, France
damien.trentesaux@uphf.fr
2 Alter Via Avocats, 7 rue de l’Hôpital Militaire, 59800 Lille, France
rrault@alter-via.fr
3 ICUBE UMR, Université de Strasbourg, 3, rue de l’université, Strasbourg, France
caillaud@unistra.fr
4 Laboratoire DeScripto, Université Polytechnique Hauts-de-France, Le Mont Houy,
59313 Valenciennes Cedex, France
arnaud.huftier@uphf.fr

Abstract. The objective of this paper is to discuss issues and insights relevant
to the ethical behaviour of future autonomous intelligent systems immersed in the
human society. This discussion is carried out at the frontier of three domains: science,
as the means to imagine and design innovative technological solutions in the field
of autonomous artificial systems; law, as the means to control, forbid and promote
what from these technological solutions can or cannot be used in the human society;
and science-fiction, as the imaginary world from which scientists and lawyers,
consciously or not, draw their inspirations, fears and dreams, driving their decisions
and actions in the real world. Four issues are specifically discussed. The crossing
of these domains illustrates that addressing ethics in AIS is an urgent need, but
remains incomplete if addressed from a single discipline or domain point of view.

Keywords: Ethics · Autonomous systems · Artificial intelligence · Human · Science · Law · Science-fiction

1 Introduction
The context of this paper is relevant to the design and use of autonomous intelligent
systems (AIS) immersed in the human society, excluding military systems. Autonomous
robots and cobots in future industrial systems or autonomous cars in cities are illustrations
of such AIS. AIS are characterized by their ability to sense, decide and act (e.g., on the
physical world) [1]. They interact with other AIS and with humans. Artificial Intelligence
(AI) techniques enable them to learn and adapt to unforeseen events.
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021
T. Borangiu et al. (Eds.): SOHOMA 2020, SCI 952, pp. 246–261, 2021.
https://doi.org/10.1007/978-3-030-69373-2_17

On the one hand, the development of AIS is driven by several factors, among them
the will to compensate for human errors. For example, it is estimated in the USA that
94% of car accidents have a human cause [2]. This type of statistic encourages
researchers and industrialists to develop AIS capable of outperforming humans in
various fields.
On the other hand, AIS will interact with others (AIS, humans). They will also evolve
in open and unpredictable environments. This complicates the understanding and the
control of their behaviour. In addition, this behaviour can be emergent, in the sense
that it may not be possible to associate it explicitly with a statically programmed
computer code. As a consequence, risks related to their negative impact, and even their
potential dangerousness for the human society, are induced. Moreover, the development of
AIS may have a strong impact on human life, including work and working conditions in
our society. Consequently, the study of AIS behaviour, especially from an ethical point
of view, is gaining importance. We thus state here the principle that ethics, a concept
initially concerning humans, will also concern AIS.
Ethics is initially a field of study in philosophy. This paper does not intend to discuss
the concept of ethics; philosophers have been working on it for centuries. Meanwhile,
it is important to set its definition. In our work, we selected the one of Ricoeur who
contextualizes it as “the strive for the good life, with oneself and others, in just/fair
institutions” (in French: “Une vie bonne, avec et pour autrui, dans des institutions
justes”) [3]. Ethics is seen here as a federative concept encompassing social expectations
described in terms of safety, security, integrity, explicability, altruism, kindness, caring,
trustworthiness, benignity, etc. [4].
Ethics is by essence a multi-disciplinary field of research. Written by authors working
in three different domains, law, science and literature, this paper aims to discuss
issues and insights relevant to the ethical behaviour of future AIS immersed in the human
society, considering these three domains: science, as the means to imagine and design
innovative technological solutions in the field of AIS; law, as the means to control, forbid
and promote what from these technological solutions can or cannot be used in the human
society; and science-fiction (sci-fi), as the imaginary world from which scientists and lawyers,
consciously or not, draw their inspirations, fears and dreams, driving their decisions and
actions in the real world. Sci-fi is often the place for writers to push science and
technology to their limits and extrapolate the consequences. While science builds "paradigms"
[5], sci-fi creates images from these paradigms, thus establishing itself as “transposition
literature” [6]. Figure 1 depicts the context of the paper.

2 Are We Sure that AIS Will Behave Ethically?

This section must be seen as a starting point for the discussion. Basic answers (yes/no)
to this question are provided here through the prism of sci-fi. This will be used to point
out that obtaining in real life a positive answer to this question, which one
obviously hopes for, will require solving the major issues introduced in Sect. 3, issues
that remain largely unaddressed.

Fig. 1. The context of the paper

2.1 The Pessimistic Point of View

A large proportion of sci-fi production (writings, novels, games and films) transcribes
our fears and concerns (as human beings) about AIS whose autonomy may no longer
be controlled or even supervised. This loss of control would lead to a situation where
AIS would no longer act in the interest of humans or the environment. This fear was
expressed very early on, as soon as the term "Robot" appeared with the creation of the
play R. U. R. (Rossumovi univerzální roboti; Rossum's Universal Robots in English)
written by Karel Čapek in 1920: "the robots are not people. Mechanically they are more
perfect than we are; they have an enormously developed intelligence, but they have no
soul"1 . This lack of "soul" runs through sci-fi literature and in a way justifies "disasters"
as well as the impossibility to deal with ethical adaptability. A good example in this
regard is Brian Aldiss's Who Can Replace a Man?, where an AI adapts but lacks
flexibility, and thus exhibits a limited ethical behaviour, since it comes back endlessly
to "hence" and "therefore".
1 Karel Čapek, R.U.R. (tr. P. Silver & Nigel Playfair), New York: Doubleday, 1923, p. 17.

According to this pessimistic perception, a new intelligent species is identified, different
from the human species. This new species, built on silicon, electronics and informatics,
has objectives hardly compatible with those of the human society, which
leads to conflictual situations, human slavery or even wars. A large body of sci-fi
work addresses this subject: Kurt Vonnegut's Player Piano, M. Alexis M.'s
Siscie, Harlan Ellison's I Have No Mouth and I Must Scream, Bernard Wolfe's Limbo,
The Humanoids by Jack Williamson, Portrait du diable en chapeau melon by Serge
Brussolo, the Terminator, the Matrix, or the intelligent machines that the Butlerian Jihad
imagined by Frank Herbert in Dune led to destroy. One can also rely on Philip K. Dick
when he says the android is “a thing somehow generated to deceive us in a cruel way, to
cause us to think it to be one of ourselves” [7]. In other words, if one looks carefully at
Dick’s schizophrenic robots and part of Asimov’s tales, the deceits, the many disasters
as well as the ethical issues can be explained by the fact that in androids we rediscover
the human and the (psychological) conflicts that go with it. According to Aaron Barlow,
“the danger androids represent (…) grow from the humans who constructed them, not
from the androids themselves” [8].
These sci-fi works translate fears that are very much present today, including, for
example, the fear that certain professions (accountants, lawyers, general practitioners,
etc.) may disappear following the advent of AI and these AIS. Fears also concern
the outperforming of human beings for a growing set of specialized functions, in the
entertainment field first (e.g., strategy games, go, chess and, more recently, poker),
and potentially in others yet to come.
One also finds the expression of these fears in the technical and technological spheres,
where well-known personalities (Stephen Hawking, Bill Gates, Elon Musk) have for
several years been warning society against the possible excesses of AI, whether in its
civil or military applications. The famous "big red button" blocking deviant, unethical
or dangerous behaviour from an AIS, a fleet of AIS or an "electronic civilization" is
regularly mentioned by Elon Musk or Bill Gates. This big red button would be indispensable,
and justified from an ethical point of view, only as long as the intelligence of the AIS
remains outranked by human intelligence, which means that the AIS is judged as
an inferior creature created by humans, the latter also holding the right of life and
death over it.
This "fear" is also found in scientific spheres: the notion of the "safety bag" [9] plays
the role of a safeguard for AIS designed by researchers and industrialists. The aim is
to prevent the AIS from taking any action that does not respect deontological or legal
rules. Its behaviour is bounded by predefined safety zones. This approach can be seen
as a way to implement a kind of "big red button", but without going as far as the destruction
of the AIS. A classic example (with the human playing the role of the AIS) is the speed
limiter in a car: no matter what the driver does, the speed cannot exceed the limit he has
set himself. The underlying question is that of the confidence in the AIS's intelligence,
its acceptability [10] and trustworthiness [11]. Currently the level of trust in AIS is low
(would you be willing to entrust your children to an autonomous car?). This
is mainly because it is not currently feasible to prove a desired degree of behavioural
ethics (especially reliability and explicability) for AI.
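As a minimal sketch of the "safety bag" principle described above, one can picture a filter that vetoes any proposed action falling outside predefined safety bounds; the class name, action fields and numeric bounds below are illustrative assumptions, not taken from [9]:

```python
from dataclasses import dataclass

@dataclass
class SafetyBag:
    # Hypothetical bounds; a real system would derive these from
    # certified deontological or legal rules, not hard-coded constants.
    max_speed_kmh: float = 130.0
    min_obstacle_distance_m: float = 2.0

    def permits(self, action: dict) -> bool:
        """Return True only if the proposed action stays inside the safety zone."""
        return (action.get("speed_kmh", 0.0) <= self.max_speed_kmh
                and action.get("obstacle_distance_m", float("inf"))
                >= self.min_obstacle_distance_m)

    def filter(self, action: dict, fallback: dict) -> dict:
        # Like the speed limiter: the AIS proposes, the safety bag disposes.
        return action if self.permits(action) else fallback

bag = SafetyBag()
proposed = {"speed_kmh": 150.0, "obstacle_distance_m": 5.0}
safe_stop = {"speed_kmh": 0.0, "obstacle_distance_m": 5.0}
chosen = bag.filter(proposed, safe_stop)  # vetoed: exceeds the speed bound
```

Note that such a filter only bounds behaviour inside zones its designers anticipated; it does not, by itself, address the unforeseen situations discussed in Sect. 3.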

2.2 The Optimistic Point of View


The writings and contributions in science fiction that, on the other hand, highlight the
positive side of the emergence of AIS for human society are much rarer. One could
imagine that the reason is rather financial (the sensation of fear is always easier to sell)
or guided by sensationalist purposes [12]. Very optimistic writings are still rare and
often meet with a lack of understanding from readers [13]. However, even in the mid-
twentieth century, when the emerging sci-fi literature in this area was not as strongly guided
by these reasons, one notes very few works or writings where this positive side was
present. The fictional character of R. Daneel Olivaw imagined by Isaac Asimov and
Gabriel, the eponymous robot in Domingo Santos' Gabriel, historia de un robot (1962),
remain special cases, set apart by their ethical desire to save the human species. This
optimistic view is also illustrated through the needed convergence of the interests of the two
"species" (human and silicon-based). For example, several writings and novels,
e.g., Brown's Origin or Herbert and Anderson's Sandworms of Dune, describe, after a
phase where humans fear and suffer from robots, 1) the establishment of a community
of interest between the two "species"; or 2) the mutually beneficial integration (symbiosis) of
the two "species" (the "benevolent cyborg") in order to address common threats, or in order
to make these two "species" cohabitate better. Another type of convergence can be
mentioned: a human having developed the processing capabilities of an AI, which the status
of "mentat" in the world of Herbert's Dune illustrates perfectly. Finally, a specific vein
of sci-fi works highlights robots having some human aspirations, in particular a
return to nature. The robots Jenkins in Clifford D. Simak's City and Elmer in Cemetery
World are travellers, philosophers, historians…; a sort of robot-hobos who
reflect the aspirations towards a society where economic control is not the standard value.
These sci-fi works translate an optimistic view that is becoming reality with the
spread of recent technological innovations. For example, the concept of the augmented
human (e.g., the Operator 4.0 in Industry 4.0 [14]) and symbiotic systems [15], aligned
with the principles of "transhumanism", illustrate such a convergence in the industrial
sector. Their objective is to compensate for human deficiencies (or at least for what is
perceived as such).

3 It Will Be in Fact Complicated to Answer This Question


Somewhat Manichean, the sci-fi arguments discussed in the previous section have an
impact on the general public that should not be underestimated: even though their narrative
logic is designed to appeal to the widest possible audience, these writings and movies have a
strong media impact and thus affect a very large proportion of human beings,
themselves in a position to influence tomorrow's scientists, tomorrow's engineers
and tomorrow's decision-makers, in turn setting the policies of tomorrow [16–18]. Being
situated between the "two cultures" [19], the "humanist" and the "scientific" ones, sci-fi
allows us to reflect on the "schizoidism of contemporary society" (in French: "schizoïdie
de l'univers contemporain") [20].
Obviously, the actors involved in the construction of future AIS work to ensure that AIS
will behave ethically and hope that they will (cf. the concept of "safety bag" previously
discussed). Meanwhile, this Manichean view should not be the tree that hides the forest:
the situation is going to be more complex than it seems. Not only will it be hard to ensure
in an absolute manner that future AIS will behave ethically, but it may also be dangerous
if they behave too ethically. We discuss hereinafter four issues that illustrate this point
(cf. Fig. 2).

Fig. 2. The four issues.

3.1 Issue 1: Closed-Loop AIS and Human Decision-Making

The advent of AIS will logically lead to inextricable situations due to the constant closed-
loop interaction between humans and AIS (and fleets of AIS), one deciding and acting, the
other reacting consequently, and so on. Humans will learn through interactions with AIS
and AIS will learn through interactions with humans. This already exists: the concepts
of human-as-a-service and of micro-work assigned to humans by an AI are clear examples
where an AI, needing to learn how to discriminate elements in pictures, asks humans
to do the job, who in return tell the AI, knowingly or not, the reliable result of their
discrimination, e.g. in captchas. Sometimes captchas are provided to a user to check that
he is not a bot before accessing a website, while sometimes they are provided by an AI
to learn [21].
This interaction in a single, seamless, integrated world will lead to various perceptions,
reasonings and actions whose responsibilities are hardly assignable. The sequence
of decisions taken by an AIS may lead, because of this constant interaction, to
behaviours that are more or less ethical. An AIS could be forced by a human
to take a non-ethical decision (e.g., to kill people in the case of a non-avoidable accident).
Its actions could lead a human to behave unethically or to take risks to limit the
consequences of its actions (e.g., to bypass a security measure to avoid a hazardous situation
provoked by a defective AIS). This mutual interaction will blur the chain of responsibilities and
render expertise in the event of a problem or accident delicate: is an accident caused by
an autonomous car under its responsibility? Was it not the owner who did not respect
the maintenance logbook, or the designer of the algorithm who faultily coded a certain
behaviour? The work of Susan Calvin, a fictional robopsychologist in the
Robot series written by Isaac Asimov, consists in understanding such complex mutual
interactions. The aim is to solve paradoxical situations where a robot, though subject to the
laws of robotics designed to preserve humankind, behaves in a specific, deviant or
hazardous way.

3.2 Issue 2: Time and Information Horizons for AIS Decision-Making

AIS will take decisions based on their own experience and knowledge, acquired through
learning techniques (e.g., AI). Their decisions will evolve according to their experiences
and will be carried out according to temporal and informational horizons yet to be determined,
statically or dynamically. Moreover, the quality of such decisions will depend on the time
available to evaluate them, given the time required to react to events. Consequently, assuming
that it is possible to quantify (measure) an ethical behaviour, which remains an open question
[22], the behaviour of an AIS may be more or less ethical according to these horizons
(like the concepts of "local optimum" vs. "global optimum" in operations research). Long
treated by sci-fi, the subject is new to scientists and lawyers working on AIS behaviour.
This subject is necessarily multidisciplinary, at the interface between science, technology,
psychology, law and philosophy. It concerns not only the designers, operators, users
and maintainers of these AIS, but also the AIS themselves. In sci-fi, the Zeroth Law
in Asimov's vision of the future was designed to solve this kind of issue: at a short-term view, a
given behaviour is assessed very negatively (as it leads, for example, to several human
casualties), but on a larger time window (the centuries to come) it is evaluated very
positively, as it leads to the survival of the human species. The AI Winston imagined
by Dan Brown in his novel Origin doesn't hesitate to kill a character to serve a greater
purpose in the service of mankind, while the AI in Grant Sputore's movie I Am Mother
decides to educate a human in order to entrust her with the mission of recreating the
humanity that it has just destroyed.
Consequently, various paradoxes can be imagined, with Cornelian choices for the AIS
(e.g., the trolley dilemma) [2]. The issue is that one cannot assert in an absolute sense that
a given behaviour of an AIS is ethical, since this assertion is not unequivocal: its "degree
of ethicality" depends on the horizons and on the point of view taken. This is illustrated
in Federico D'Alessandro's movie Tau, where an AI changes its behaviour and adapts
its "program" according to the person with whom it interacts: it therefore changes its
ethical point of view. Outside the scope of this paper, there is also the "political" and
"philosophical" question of a behaviour that is ethical for some people but not for
others. This depends on the culture of a country or community. In particular, what is
written here is highly correlated with Western culture.
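As a toy illustration of this horizon dependence (the actions, per-period "ethical scores" and horizons below are entirely invented), the action that maximizes a cumulative score can flip as the evaluation window grows:

```python
# Each action yields a hypothetical "ethical score" per period.
# Action A is harmful at first but beneficial later; action B is the opposite.
scores = {
    "A": [-5, -5, 3, 3, 3, 3, 3, 3],  # short-term harm, long-term benefit
    "B": [2, 2, 0, 0, 0, 0, 0, 0],    # immediate benefit, no lasting effect
}

def best_action(horizon: int) -> str:
    """Pick the action with the highest cumulative score over `horizon` periods."""
    return max(scores, key=lambda a: sum(scores[a][:horizon]))

short_term = best_action(2)  # "B": A totals -10, B totals 4
long_term = best_action(8)   # "A": A totals 8, B totals 4
```

The same reversal underlies the Zeroth Law examples above: an assessment over months condemns what an assessment over centuries endorses.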

3.3 Issue 3: Difficulty to Develop AIS Morality in Unpredicted Situations

In the human society, ensuring morality is often driven by rules to comply with. The
main religions (the Ten Commandments), sci-fi (the laws of Asimov's robotics), the legal
world (criminal laws), and the scientific world as well (the concept of "safety bag", or the
safety integrity levels "SIL" that ensure a minimum level of operational safety) widely apply
such a principle of "laws".
Such principles can be extrapolated to AIS. Meanwhile, AIS will face unforeseen
situations: how can one be sure that the behaviour of an AIS remains ethical? The "robustness"
of the analysis of the ethical dimension takes on its full meaning here: the issue is how to ensure
that all decisions made by an AIS, especially those that were not imagined by its designers
yet are applied by this AIS, are ethical.
In that context, there are moral and robust "safeguards" in the human mind that
apply to every decision one takes: an alarm that lights up when one is about to cross
a line, alerting us to a possibly unethical behaviour or helping us realize
that we are crossing a moral barrier. Examples of such safeguards are "common
sense" (before acting) and the "bad conscience" (after acting). Though not completely reliable,
or even always sensible, they allow us to learn and apply as much as possible an ethical
behaviour, even in unforeseen situations, based on a set of beliefs to comply with and
to modify with time and experience. This is a fundamental aspect of human education.
A "poorly educated" child is capable of trampling on a neighbour's vegetable garden
to get his ball back. This can be extrapolated to an unethical AIS whose behaviour
can be associated with that of a "badly" educated child. This point is clearly described
by the amoral proverb "the end justifies the means": even if the "end" is ethical (in
the sense of moral and deontological), the means imagined and used by the AIS to reach
it may not be so in unpredicted situations: with time, an AI may construct its own ethical
logic system, its own mental world, its own morality, where the ethical rules imposed by its
creator are translated into series of decisions and actions the creator never imagined.
For example, the AI imagined by Antoine Bello in Ada develops a strategy of decisions
and actions with the objective of complying with the (legal) rule no. 1 that governs it,
that of maximizing the profits of its owner's business. It therefore discovers that it
can lie and violate several American laws and principles to reach its goals, while it has not
been explicitly programmed to do so (this behaviour emerges): it has not been properly
"educated" (voluntarily or involuntarily) and it does not "think" it is behaving unethically.

3.4 Issue 4: Risks of Skill and Emotional Slaveries

The theme of the enslavement of man by a tyrannical machine is often studied in sci-fi. Beyond
this risk of slavery in the primary sense of the term (deprivation of liberty), two more
insidious and subtle risks of slavery must be considered by the researchers who work on
future AIS, obviously expected to behave ethically.
The first one is induced by the loss of the skills and knowledge that humans transfer
to AIS as time passes. This topic is already discussed, but not specifically regarding
ethics, in the field of human-machine cooperation [23]. The authors of [24] showed that
navigational aids impair spatial memory by dividing attention, rather than by selective
interference with verbal working memory. In a few years, with the autonomous car, humans
will lose their ability to drive (the concept of a driving licence will disappear). Developing
a sense of direction will no longer be required. The human will thus be subject to a "skill
slavery" by the autonomous car, as he will be completely dependent on a transportation
ability he no longer has. Generally speaking, AIS, regardless of their degree of ethical
behaviour, will absorb more and more of the knowledge and skills that human beings will
forget (the brain helps us forget information that we do not understand or no longer need):
the more time passes, the more AIS will know how to make, transport, care for, cultivate,
etc., and the less the human will remember how. This is clearly illustrated by Pierre Boulle
in his Planet of the Apes [25], if one considers the analogy between apes and AI: the apes
learnt the skills the humans were forgetting. The issue is thus: what knowledge and skills
do we accept to lose? What power do we accept to give up? Are we going to be outclassed
by ethical AIS? Will they still need us? In this line, one could easily arrive at what we
could call the "tyranny of benevolence", as illustrated by some of Asimov's tales, or
in Jack Williamson's With Folded Hands (later rewritten as The Humanoids), where the Prime
Directive of any android ("To Serve and Obey, and to Guard Men from Harm") leads
to depriving men and women of their free will, creating a new kind of danger: "I found
something worse than war and crime and want and death… Utter futility" [26], p. 188.
One can identify a second kind of insidious slavery: the risk of developing
an emotional dependence. One can easily imagine that, in an industrial system, the
presence of a companion AIS may relieve an operator, who may stress and misbehave if
his companion AIS is not available or cannot help him. A step further: a human being
may be captivated, hypnotized by the kindness and caring of an AIS (cf. the concepts of
the altruistic robot [27] and the emotional robot [28]). One can imagine that if an autonomous
car takes care of everything, passengers afraid of driving may relax and get a positive
feeling from the transportation experience. This risk can again be pushed further.
For example, a driver falls in love with his autonomous car in Courtois' Suréquipée.
A policeman is seduced by the AI Ada and develops a "bad conscience" because he
feels like he is cheating on his wife, even if only virtually. If an AIS finally acquires the
knowledge that an operator loses at the same time, an ethical behaviour on its part would
be to protect him in spite of himself, which would lead to preventing him from acting
because of his lack of knowledge. One may even imagine an AIS having such an
aura that it unintentionally creates a kind of cult welcoming men and women in search
of an ideal, much like a willing digital god.
In order to limit these two kinds of slavery, would it be useful to voluntarily limit
the ethical behaviour of an AIS, to prevent it from being too altruistic (kind, caring)?
One can even imagine that a balance in the sharing of skills and knowledge between
humans and AIS would be the most desirable ethical situation. The idea would be to let
humans maintain their level of skills and knowledge, at the risk of making mistakes the
AIS would not have made, but enabling them to keep learning and remain autonomous. In
the end, too much ethics kills ethics. A new paradox…

4 Examples
In this section, we detail application domains where ethical questions are crucial for AIS
and their designers. For each, the reader will find several references to the four issues
introduced above.

First of all, surgical robots. They help surgeons to be more precise and enable
medical acts to be performed remotely. At the current stage of their development, they are
neither autonomous nor intelligent, and one can hardly imagine they will soon become so.
Indeed, how can a human trust an AIS to practice surgery, or at least to interact
with his body [11]? But the robot, even under control, acts under the responsibility of the
surgeon, in reaction to his command, the surgeon adapting his command according to the
reaction of the AIS, and so on. If there is a faulty reaction, even the responsibility of the
researchers who designed it may be engaged. Meanwhile, if in the near future a surgical
robot operates with a precision, a speed and a vision that no human will ever reach,
wouldn't we prefer to enable it to work sometimes autonomously, under the control of a
surgeon?
Transportation is obviously a flagship application of AIS. Autonomous cars are
being developed, but they are not alone: autonomous boats, planes and trains are also under
development [29]. Ethical issues are raised by the development of these systems, but
no consensual solution has yet been found, while the first autonomous cars, with various
degrees of autonomy, are already tested on roads and marketed. From our perspective, it
will not be possible to remotely control or supervise all the autonomous vehicles; a
certain level of autonomy will thus necessarily be left to them, since they will evolve and react
in real time in open environments. Moreover, numerous AIS will be merged into fleets, each
of them interacting with many humans. This will generate complex situations where
the diagnosis of responsibility and the identification of decision chains will be hard to carry out.
Future industrial systems, based on Industry 4.0 technologies, are a critical application
field of AIS, especially through the development of cyber-physical-human production
systems and human-in-the-loop cyber-physical production systems [30], which foster
the use of AIS, be they products, robots or production resources, connected to the Internet
through sensors and actuators [31]. These AIS interact, interoperate and cooperate with
humans (e.g., cobots). How to ensure that the welfare and privacy of operators will be
preserved, or that their jobs will not be suppressed without compensation? How to avoid
the diversion of monitoring data?
Social robotics is also a critical application field of AIS that raises ethical questions. Having
a social robot help elderly people is one option to allow them to be monitored and to
maintain social interactions when the resources allocated to their care are decreasing
or diverted to other issues (pandemics). Can one let a robot take care of children?
Are AIS a good substitute for human relations? Can a human depend (physically or
emotionally) on AIS? In such situations, what kind of relationship will be constructed?

5 Working on the Ethical Behaviour of AIS: Where Are We?

The four issues and the various application fields of AIS described in the previous section
clearly highlight the complexity and the intrinsically multidisciplinary nature of the debate
to be constructed. We describe hereinafter a few lines of thought, contributions and
reviews from different spheres (legal, legislative, academic, social…) that nourish the
discussion.
256 D. Trentesaux et al.

First of all, the legal sphere is at the forefront of these topics. Lawyers have long been
debating AI and AIS. However, no real consensus emerges [32–35].
In a very interesting article, [36] describes not only the legal vacuum that AI potentially
generates in the event of damage to property or persons, but also the inadequacy of the
founding principles (some of them unwritten, such as “there can be no damage without
liability”) that led to the construction of all the French and European legislation. One of
the main problems it raises is designating who is liable in the event of damage caused
by an AI (the designer? the integrator? etc.) [4]. Will a judge apply the theory of "the
equivalence of causes", sanctioning each of the actors involved in the damage equally
(including the researcher!), or will the judge instead seek the root cause of the damage?
Is the civil liability regime the most relevant tool, knowing that the autonomy of AIS
will render them more and more independent of humans? For Glaser,
“The Paris Court of First Instance rules that the algorithms, their combination and the
data provided are indeed the result of human will” (translated from French) [36]. The
potentially incorporeal nature of AI complicates matters a little more, as legislators are
used to pointing the finger at whoever they consider to be responsible. However, one
avenue that he considers interesting is that of an article of the French Civil Code which
provides that “in the event of damage caused by the defect of a product incorporated
into another, the producer of the component part and the person who carried out the
incorporation are jointly and severally liable" (translated from French). Yet, it must still
be possible to treat an AI integrated into a cobot, to take an illustrative example, as a
standalone product: for a standalone product, a defect that led to an accident must be
diagnosed according to the state of scientific and technical knowledge at the moment
the product was marketed. This is technically feasible if the algorithms are all
deterministic and explainable, but what happens if the AI of the cobot has learned by
itself and applied a wrong decision? He concludes his
article by noting that the legislator has not yet grasped the whole issue and is still too
reluctant in the face of the advent of AI. In his view, the real trigger for the legislator's
questioning will come when the first judges find themselves unable to decide in the
current state of laws and rules. To speed up the evolution of mentalities in the legal
sphere, mock trials are organized. For example, the mock trial of the "pile-up of the
century" tested the current legal arsenal of lawyers in a futuristic context set in 2041
where, following the triggering of an emergency stop in an autonomous car, a gigantic
accident takes place. The purpose of the trial was then to determine responsibilities.
The idea of mock trials is becoming widespread, as they make it possible to test the
behaviour of legal actors facing new situations, to identify the limits of current
legislation and to imagine how responsibilities could be shared.
This debate also takes place in the political and legislative spheres. For example, the
European Parliament Resolution of 16 February 2017 envisages the definition of civil
law rules on robotics based on the status of "electronic person" [37]. The Villani report,
"Making sense of Artificial Intelligence", issued in March 2018 by the French
politician and mathematician Cédric Villani, is dedicated to the need to incorporate
ethics in the development of AI. One of the challenges for the ethics of AI and AIS is the
transparency of the algorithms, which are currently opaque to the public, and sometimes
even to the people who designed the AI. The Villani report recommends the creation of
Ethics of Autonomous Intelligent Systems in the Human Society 257

a body of sworn public experts that could act as an auditing body and be called upon
in the context of judicial litigation. Incorporating ethics in the development
of AI means that ethics must be present from the very beginning of the design of the AI.
It would then be a question of ethics by design, echoing Article 25 of the GDPR on
data protection by design. To this end, the report stresses the need to raise awareness
among, and teach, researchers and producers in the field of AI and ethics from the
beginning of their training. Universities should integrate ethics courses into their
scientific curricula and AI courses into their humanities curricula (at the time of
writing, some have started, such as the Université Polytechnique Hauts-de-France).
Data protection rules are also proposed. The idea is to carry out non-discrimination
impact assessments,
in the same vein as those provided for in the GDPR. More recently, the ethical issues
of robotics and AI were once again discussed within the European Parliament. This
resulted in a resolution of 12 February 2019 on a comprehensive European industrial
policy on AI and robotics. Many aspects highlighted in the Villani report are found in
this resolution, which demonstrates a certain political and legislative consensus at the
European level on the way forward. Thus, according to this resolution, the deployment
of an AI must necessarily be ethical from its design. What, then, are the ethical values
of the European Union with which the actors of AI shall comply? The resolution
emphasizes principles such as justice, human dignity, equality, non-discrimination,
informed consent, privacy and family life, protection of personal data,
non-stigmatization, transparency, individual responsibility and accountability.
The debate also takes place in the social and societal spheres. Let us mention, for example,
the European Parliament's public consultation on "Robotics and Artificial Intelligence"
of July 2017 and the development of a "Charter and Ethics codes for robotics engineers".
The media have also taken up the subject: many newspaper stories have been written
about recent cases in which partially autonomous cars were involved in accidents,
accentuating a sense of concern in public opinion.
The debate is also being conducted in academic spheres [38, 39]. The basic question
is that of the qualification of the intelligence of AIS and robots: what is an AIS?
Classically, the law distinguishes different categories to identify legal or juridical
personalities (mainly: objects, goods, human beings and sensitive living beings such as
animals). At first glance, several options are available: an AIS is either a thing, or an
animal, or something else that has yet to be defined. This debate is increasingly animated
in the legal academic sphere, where two currents of thought can currently be identified.
The first is opposed to the idea of treating robots legally as animals [40]. In this logic,
and aligned with the report [37] or the most avant-garde work of [41], some consider
that an AIS owns a legal personality, signifying the recognition of a new species alongside
the human species. The second, as in the work of [42], considers rather that the existing
legal arsenal is sufficient, for example by applying the laws about pet owners to the
owners of AIS and robots. From our point of view, this debate is far from over. Moreover,
its outcome depends on the countries where the laws are applied. The scientific academic
sphere, which had fallen behind on this theme, perhaps considering that bringing up a
sci-fi subject is not serious, is starting to structure itself. Examples of activities include
projects and initiatives such as RoboEthics [43] and the MIT Moral Machine Initiative. The IEEE, which

regularly addresses the subject in its publications [44], created in June 2020 an
international technical committee on the subject: the IEEE IES TC on Technology Ethics and
Society. The USA is highly involved in this theme [45], but other countries are also working on
it (cf. the ANR EthicAA project in France). The discussions focus on the establishment
of behavioural rules, the use of deep learning, the modelling of ethical behaviours, the
definition of the status of AI (weak AI, highly specialized in one function, versus strong AI,
generalist and capable of copying human intelligence and its ability to adapt to
unforeseen problems), etc. The study of paradoxes such as the trolley case [46]
often triggers research activities in this area. An innovative current of thought emanates
from the scientific sphere working on emotional robots [47], notably in contact with the
elderly or children [28]. Almost all the robots currently built for public demonstrations
are designed with an emotional dimension (child face, soft artificial
skin, colours…): humans develop empathy more easily towards such technical
beings, which can be used to artificially increase confidence in them and to
prepare society for the future integration of AIS, which in turn raises ethical questions
at this level as well.
Whatever the sphere in which this debate takes place, a first consensus concerns the
required involvement of all stakeholders during the lifecycle of the AIS, from the
designers to the end users (insofar as one exists) [48]. A second important and consensual
point concerns the urgent need to establish administrative and political
regulation systems, see for example [35]. This could ideally be set at an international
level, under the auspices of international organizations above individual states. The idea would be
to oblige stakeholders engaged in the development of AIS, and especially researchers,
to comply with a set of constraints, standards and regulations in order to protect
populations or to limit side effects (the disappearance of trades in particular).

6 Conclusion

The arrival of AIS will profoundly change our society. Even if the ethics of the human
being has been studied for centuries by philosophers, the ethics of AIS is only now taking
shape; it remains insufficiently addressed and must be studied through a multidisciplinary
prism. From our perspective, it is important that each researcher working on AIS
evaluates the ethical impact of his or her research activity. The authors thus advocate,
for example, the development of the concept of an "Ethical Lifecycle Assessment" for AIS,
analogous to what already exists for environmental aspects.

Acknowledgement. The work described in this chapter was conducted under the auspices of the
project “Law of robots and other human avatars” funded by the IDEX Strasbourg Université et
Cité and in the framework of the joint laboratory “SurferLab” founded by Bombardier, Prosyst and
the Université Polytechnique Hauts-de-France. This Joint Laboratory is supported by the CNRS,
the European Union (ERDF) and the Hauts-de-France region. Parts of the work are also carried
out in the context of the HUMANISM No ANR-17-CE10-0009 research program, funded by
the French ANR "Agence Nationale de la Recherche". The authors would like to warmly thank
Bérangère Kieken, Fabien Bruniau and Sébastien Caudrelier for discussions that nourished this
paper. Finally, the authors testify that no AI was used or mishandled in the writing of this chapter.

References
1. Trentesaux, D., Karnouskos, S.: Ethical behaviour aspects of autonomous intelligent cyber-
physical systems. In: Service Oriented, Holonic and Multi-agent Manufacturing Systems for
Industry of the Future. Studies in Computational Intelligence, vol. 853, pp. 55–71. Springer,
Cham (2020)
2. Jenkins, R.: Autonomous vehicle ethics and laws: toward an overlapping consensus. New
America (2016)
3. Ricoeur, P.: Soi-même comme un autre. Seuil (1990)
4. Trentesaux, D., Rault, R.: Ethical behaviour of autonomous non-military cyber-physical
systems. In: 19th International Conference on Complex Systems: Control and Modeling
Problems, Samara (2017)
5. Kuhn, T.S.: The Structure of Scientific Revolutions. University of Chicago Press, Chicago
(1970)
6. Stolze, P.: La Science-Fiction: littérature d’images et non d’idées. In: Nicot, S. (ed.) Les
Univers de la Science-Fiction - Essais, pp. 183–202. Galaxies (1998)
7. Dick, P.K.: Man, android and machine. In: Nicholls, P. (ed.) Science Fiction At Large. Harper
& Row, New York (1976)
8. Barlow, A.: Philip K. Dick’s androids: victimized victimizers. In: Kerman, J.B. (ed.)
Retrofitting Blade Runner. The University of Wisconsin Press, Madison (1997)
9. Arnold, T., Scheutz, M.: The “big red button” is too late: an alternative model for the ethical
evaluation of AI systems. Ethics Inf. Technol. 20, 59–69 (2018)
10. Karnouskos, S.: Self-driving car acceptance and the role of ethics. IEEE Trans. Eng. Manag.
1–14 (2018). https://doi.org/10.1109/TEM.2018.2877307
11. Rajaonah, B., Sarraipa, J.: Trustworthiness-based automatic function allocation in future
humans-machines organizations. In: 2018 IEEE 22nd International Conference on Intelligent
Engineering Systems (INES), pp. 371–376 (2018). https://doi.org/10.1109/INES.2018.8523876
12. Kirby, D.: The future is now: diegetic prototypes and the role of popular films in generating
real-world technological development. Soc. Stud. Sci. 40, 41–70 (2010)
13. Alexandre, L., Besnier, J.-M.: Les robots font-ils l’amour? Le transhumanisme en 12
questions, Dunod (2018)
14. Romero, D., Bernus, P., Noran, O., Stahre, J., Fast-Berglund, Å.: The operator 4.0: human
cyber-physical systems & adaptive automation towards human-automation symbiosis work
systems. In: IFIP Advances in Information and Communication Technology, pp. 677–686.
Springer, Cham (2016)
15. Longo, F., Padovano, A., Umbrello, S.: Value-oriented and ethical technology engineering in
industry 5.0: a human-centric perspective for the design of the factory of the future. Appl.
Sci. 10, 4182 (2020). https://doi.org/10.3390/app10124182
16. Schwarz, J.O.: The ‘narrative turn’ in developing foresight: assessing how cultural products
can assist organisations in detecting trends. Technol. Forecast. Soc. Chang. 90, 510–513
(2015). https://doi.org/10.1016/j.techfore.2014.02.024
17. Bina, O., Mateus, S., Pereira, L., Caffa, A.: The future imagined: exploring fiction as a means
of reflecting on today’s grand societal challenges and tomorrow’s options. Futures 86, 166–184
(2017). https://doi.org/10.1016/j.futures.2016.05.009
18. Anderson, S.L.: Asimov’s “three laws of robotics” and machine metaethics. AI Soc. 22,
477–493 (2008). https://doi.org/10.1007/s00146-007-0094-5
19. Snow, C.P.: The Two Cultures: And a Second Look. Cambridge University Press, Cambridge
(1964)

20. Hottois, G.: SF ou l’ambiguïté d’une littérature vraiment contemporaine. In: Science-fiction
et fiction spéculative, Editions de l’Université de Bruxelles (1985)
21. Tubaro, P., Casilli, A.A.: Micro-work, artificial intelligence and the automotive industry. J.
Ind. Bus. Econ. 46, 333–345 (2019)
22. Berrah, L., Trentesaux, D.: Decision-making in future industrial systems: is ethics a new
performance indicator? In: 10th SOHOMA Workshop on Service Oriented, Holonic and
Multi-Agent Manufacturing Systems for Industry of the Future. Studies in Computational
Intelligence, vol. 952, 1–2 October, Paris. Springer, Cham (2020)
23. Pacaux-Lemoine, M.-P., Trentesaux, D.: Ethical risks of human-machine symbiosis in indus-
try 4.0: insights from the human-machine cooperation approach. IFAC-PapersOnLine 52,
19–24 (2019). https://doi.org/10.1016/j.ifacol.2019.12.077
24. Gardony, A.L., Brunyé, T.T., Mahoney, C.R., Taylor, H.A.: How navigational aids impair
spatial memory: evidence for divided attention. Spatial Cogn. Comput. 13, 319–350 (2013).
https://doi.org/10.1080/13875868.2013.792821
25. Huftier, A.: Pierre Boulle: présentation. ReS Futurae, Revue d’études sur la science-fiction
(2015). https://doi.org/10.4000/resf.781
26. Williamson, J.: The Best of Jack Williamson. Ballantine, New York (1978)
27. Billingsley, R., Billingsley, J., Gärdenfors, P., Peppas, P., Prade, H., Skillicorn, D., Williams,
M.-A.: The altruistic robot: do what I want, not just what I say. In: Moral, S., Pivert, O., Marín,
N. (eds.) Scalable Uncertainty Management, pp. 149–162. Springer, Cham (2017)
28. Wu, Y.-H., Pino, M., Boesflug, S., de Sant’Anna, M., Legouverneur, G., Cristancho, V.,
Kerhervé, H., Rigaud, A.-S.: Robots émotionnels pour les personnes souffrant de maladie
d’Alzheimer en institution. NPG Neurologie Psychiatrie Gériatrie 14, 194–200 (2014). https://doi.org/10.1016/j.npg.2014.01.005
29. Trentesaux, D., Dahyot, R., Ouedraogo, A., Arenas, D., Lefebvre, S., Schön, W., Lussier, B.,
Chéritel, H.: The autonomous train. In: 2018 13th Annual Conference on System of Systems
Engineering (SoSE), pp. 514–520 (2018). https://doi.org/10.1109/SYSOSE.2018.8428771
30. Gaham, M., Bouzouia, B., Achour, N.: Human-in-the-loop cyber-physical production sys-
tems control (HiLCP2sC): a multi-objective interactive framework proposal. In: Service
Orientation in Holonic and Multi-agent Manufacturing, pp. 315–325. Springer, Cham (2015)
31. Rüßmann, M., Lorenz, M., Gerbert, P., Waldner, M., Justus, J., Engel, P., Harnisch, M.:
Industry 4.0: The Future of Productivity and Growth in Manufacturing Industries (2015)
32. Rault, R., Trentesaux, D.: Artificial intelligence, autonomous systems and robotics: legal
innovations, service orientation in Holonic and multi-agent manufacturing. In: Borangiu, T.,
et al. (eds.) Studies in Computational Intelligence, vol. 762, pp. 1–9. Springer, Cham (2018)
33. Palmerini, E., Bertolini, A., Battaglia, F., Koops, B.-J., Carnevale, A., Salvini, P.: RoboLaw:
towards a European framework for robotics regulation. Robot. Auton. Syst. 86, 78–85 (2016).
https://doi.org/10.1016/j.robot.2016.08.026
34. Nagenborg, M., Capurro, R., Weber, J., Pingel, C.: Ethical regulations on robotics in Europe.
AI Soc. 22, 349–366 (2007). https://doi.org/10.1007/s00146-007-0153-y
35. Dreier, T., Döhmann, I.S.: Legal aspects of service robotics. Poiesis Prax. 9, 201–217 (2012).
https://doi.org/10.1007/s10202-012-0115-4
36. Glaser, P.: Intelligence artificielle et responsabilité: un système juridique inadapté? Bulletin
Rapide Droit des Affaires (BRDA), pp. 19–22 (2018)
37. Delvaux, M.: Civil law rules on robotics, European Parliament Legislative initiative procedure
2015/2103 (2016)
38. Marty, A.: Legal and ethical considerations in the era of autonomous robots, University of St.
Gallen, Zurich, Switzerland (2017)
39. Barfield, W.: Liability for autonomous and artificially intelligent robots. Paladyn J. Behav.
Robot. 9, 193–203 (2018). https://doi.org/10.1515/pjbr-2018-0018

40. Johnson, D.G., Verdicchio, M.: Why robots should not be treated like animals. Ethics Inf.
Technol. 20, 291–301 (2018). https://doi.org/10.1007/s10676-018-9481-5
41. Bensoussan, A., Bensoussan, J.: Droit des robots, Larcier, Bruxelles (2015)
42. Nevejans, N., Hauser, J., Ganascia, J.-G.: Traité de droit et d’éthique de la robotique civile.
Les Etudes Hospitalières édition, Bordeaux (2017)
43. Alsegier, R.A.: Roboethics: sharing our world with humanlike robots. IEEE Potentials 35,
24–28 (2016). https://doi.org/10.1109/MPOT.2014.2364491
44. Allen, C., Wallach, W., Smit, I.: Why machine ethics? IEEE Intell. Syst. 21, 12–17 (2006).
https://doi.org/10.1109/MIS.2006.83
45. Anderson, M., Anderson, S.L.: GenEth: a general ethical dilemma analyzer. Paladyn J. Behav.
Robot. 9, 337–357 (2018). https://doi.org/10.1515/pjbr-2018-0024
46. Bergmann, L.T., Schlicht, L., Meixner, C., König, P., Pipa, G., Boshammer, S., Stephan,
A.: Autonomous vehicles require socio-political acceptance - an empirical and philosophical
perspective on the problem of moral decision making. Front. Behav. Neurosci. 12, 31 (2018).
https://doi.org/10.3389/fnbeh.2018.00031
47. Norman, D.A.: Emotional Design: Why We Love (or Hate) Everyday Things. Basic Books,
New York (2005)
48. Trentesaux, D., Rault, R.: Designing ethical cyber-physical industrial systems. IFAC-
PapersOnLine 50, 14934–14939 (2017). https://doi.org/10.1016/j.ifacol.2017.08.2543
Analysis of New Job Profiles for the Factory
of the Future

Lucas Sakurada1, Carla A. S. Geraldes1(B), Florbela P. Fernandes1, Joseane Pontes2, and Paulo Leitão1
1 Research Centre in Digitalization and Intelligent Robotics (CeDRI), Instituto Politécnico de
Bragança, Campus de Santa Apolónia, 5300-253 Bragança, Portugal
{lsakurada,carlag,fflor,pleitao}@ipb.pt
2 Universidade Tecnológica Federal do Paraná (UTFPR), Campus Ponta Grossa, Paraná, Brazil
joseane@utfpr.edu.br

Abstract. Industry 4.0 is promoting the digitisation of the manufacturing sector
towards smart products, machines, processes and factories. The adoption of the
disruptive technologies associated with this industrial revolution will re-shape
the manufacturing environment, decreasing low-skilled activities and increasing
high-skilled ones, with the complexity and number of new job profiles expected
to grow. In this context, this paper analyses the literature and recruitment
repositories to identify the new job profiles in the factory of the future (FoF)
across six industrial technological sectors, namely Collaborative Robotics
(Cobots), Additive Manufacturing (AM), Mechatronics and Machine Automation
(MMA), Data Analytics (DA), Cybersecurity (CS) and Human-Machine Interface
(HMI). The performed analysis allowed the compilation of a catalogue of 100 new
job profiles, which were characterised and analysed in terms of technical and
soft skills, type and level of profile, and demand frequency.

Keywords: Job profile · Factory of the future · Digital skills · ICT · Automation

1 Introduction
Occupations and job profiles are studied in several areas, e.g., economics, sociology,
history and management, which shows their social and economic relevance to the job
market in general. The topic is thus prominent and constantly requires updating, since
occupations and work profiles have evolved continuously since prehistory. The
evolution of job profiles is linked to several important factors, such as social changes
focused on human behaviour, changes in policies and legislation, recessions, and new
technologies and means of communication. Such factors were present in the several
industrial revolutions and changed professional profiles across each era, as illustrated
in Table 1, which summarizes for each revolution the disruptive technologies, means
of communication and transportation,
c The Author(s), under exclusive license to Springer Nature Switzerland AG 2021
T. Borangiu et al. (Eds.): SOHOMA 2020, SCI 952, pp. 262–273, 2021.
https://doi.org/10.1007/978-3-030-69373-2_18
Analysis of New Job Profiles for the Factory of the Future 263

Table 1. Characterization of the different industrial revolutions.

(columns, left to right: 1st, 2nd, 3rd and 4th industrial revolution)

Approximated dates: 1750 | 1870 | 1973 | 2013
Localization: UK | USA/Germany | USA | Germany
Main disruptive technologies: Steam machine | Electricity | Robotics and Information and Communications Technology (ICT) | Cyber-physical Systems, Internet of Things and Artificial Intelligence
Means of communication: Telegraph | Telephone | Commercial Internet | Real-time Internet
Means of transportation: Train | Automobile | Airplane | Autonomous vehicle
Process type: Artisanal | Mass production | Mass/customized production | Customized production
Examples of job profiles: Ice cutter, lamplighter, stoker, horse-whip manufacturer | Telegraph operator, factory book reader, knocker-upper, coil changer, telephonist, typist | Factory operator, computer technician, robotics technician, maintenance technician, microelectronics technician | What will be the relevant job profiles?

and types of production. Some job profiles are relevant to their respective industrial
revolution, emerging to face the market challenges of the time, while others do not
survive into the subsequent revolution. For example, the lamplighter was in high demand
during the 1st industrial revolution, but with the emergence of electricity and light bulbs
in the 2nd industrial revolution this job profile became extinct. Other job profiles remain
in the next industrial revolution or need to be adapted; e.g., the robotics technician
appeared in the 3rd industrial revolution but will need to expand their skills to adapt to
the digital characteristics required by the 4th industrial revolution.
It is thus clear that the defining characteristics of each industrial revolution
directly influence the job market scenario, professions, careers, job profiles, types of
profiles, and in particular the skills workers need to carry out their responsibilities
and duties according to the demanded requirements. Several reports show that
75 to 375 million people around the world may change their professional category by
2030 due to the new job market scenario [1], and that 8–9% of the 2.66 billion workforce
will have new occupations by 2030 [2].
This situation is most evident in pandemic periods, such as that of COVID-19, when
there is a greater demand for technology and digital resources to mitigate the effects of
physical distancing. According to [3], COVID-19 is the most serious health crisis the
world has faced in this century, with a strong impact on the world job market, notably
the loss of 195 million jobs. At the same time, in the United States from February to
March 2020, the COVID-19 pandemic increased the demand for some digitisation-focused
professionals, e.g., by approximately 20% for cybersecurity engineers and 12% for net
developers [4, 5]. Also illustrative is the 775% increase in demand for cloud services,
reported by Microsoft, in regions where physical distancing has had the greatest
impact [6]. As a result, millions of people may need to acquire new digital skills, while
others will need to change careers and improve their skills to adapt to the new job
market reality. In this context, there is a need to
264 L. Sakurada et al.

understand the challenges and trends concerning new job profiles, in order to help
employers and employees pursue the up-skilling initiatives that match their individual
needs.
Having this in mind, this work aims to identify the new job profiles for the FoF
across six technological sectors, namely Collaborative Robotics (Cobots), Additive
Manufacturing (AM), Mechatronics and Machine Automation (MMA), Data Analytics
(DA), Cybersecurity (CS) and Human-Machine Interface (HMI), that emerge with
the introduction of digitisation in the context of the 4th industrial revolution. For this
purpose, data were extracted and analysed from different information sources, namely
the technical and scientific literature and recruitment repositories, using proper data
analytics techniques and feedback from experts. This analysis allowed the
characterization of the requirements for the specialized training of the current workforce
in terms of technical and soft skills, and type and level of profile.
The rest of the paper is organized as follows: Section 2 describes the methodology
used to identify the new job profiles and Sect. 3 summarizes the preliminary catalogue
of 100 new job profiles for the six target sectors. Section 4 provides a characterization of
the new job profiles, particularly analysing the distribution per sector, type of profile and
level of profile, as well as identifying the most relevant soft skills for each type of profile
and the technical skills per sector. Section 5 rounds up the paper with conclusions and
points out future work.

2 Methodology
As previously described, Industry 4.0 is re-shaping the FoF and contributing to the
emergence of new jobs, or new profiles, with skills and competences associated with
information and communications technology (ICT) and emergent automation
technologies. Under the scope of the FIT4FoF project (https://www.fit4fof.eu/), the
definition of new job profiles will assist in informing education and training requirements
for the current workforce, allowing professionals around the world to adapt and develop
skills based on the FoF requirements, particularly in the six sectors mentioned above.
The adopted methodology to identify the new job profiles, illustrated in Fig. 1, follows
an iterative approach that comprises three distinct phases: collection and analysis of the
data to identify at least 100 job profiles; consolidation of the characterization of the
identified job profiles; and identification of relationships with technological trends and
relevant skills. Furthermore, this methodology considers the use of automatic data
analysis techniques, such as text mining, as well as feedback from experts in each of
the addressed areas.

[Fig. 1 shows the flow from the literature review and the recruitment repositories (e.g., GlassDoor) through data analysis to a catalogue of new job profiles, whose characterization is refined with input from experts and related to a list of technological trends and a list of relevant new skills.]
Fig. 1. Methodology to identify new job profiles for the factory of the future.

The first phase comprises the analysis of different data sources, namely a literature
review and an analysis of recruitment repositories, which should be mapped with the
results of the gaps in technical and soft skills described in [7]. The literature review
consists of a detailed systematic review of reports from consultancy companies or orga-
nizational entities to extract a list of job profiles that reflects the recent tendencies in this
field. The analysis of the recruitment repositories uses advanced data analytics tech-
niques combined with natural language processing to complement the identification of
new job profiles, taking into consideration the actual demand by the job market in each
one of the six sectors.
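To make the repository analysis step more concrete, the following minimal Python sketch illustrates one simple form such processing could take: counting, across a set of postings, how many postings mention each term of a skill lexicon. The lexicon and the sample postings below are purely illustrative assumptions; the actual analysis described here relies on far richer vocabularies and natural language processing than plain word-boundary matching.

```python
import re
from collections import Counter

# Hypothetical mini skill lexicon; a real analysis would use a much larger
# vocabulary and a proper NLP pipeline (tokenisation, lemmatisation, n-grams).
SKILL_LEXICON = ["python", "machine learning", "plc", "ros",
                 "3d printing", "data analytics", "teamwork"]

def extract_skills(posting: str) -> set:
    """Return the lexicon skills mentioned in a single job-posting text."""
    text = posting.lower()
    return {skill for skill in SKILL_LEXICON
            if re.search(r"\b" + re.escape(skill) + r"\b", text)}

def skill_demand(postings: list) -> Counter:
    """Count in how many postings each skill appears (demand frequency)."""
    demand = Counter()
    for posting in postings:
        demand.update(extract_skills(posting))
    return demand

# Illustrative postings, not taken from any real repository.
postings = [
    "Cobot integrator: ROS experience, PLC programming, teamwork required.",
    "Data scientist for the smart factory: Python, machine learning, data analytics.",
    "Additive manufacturing technician: 3D printing and teamwork skills.",
]
demand = skill_demand(postings)
# e.g. demand["teamwork"] == 2, demand["ros"] == 1
```

Sorting the resulting counter by frequency gives a first, rough view of the per-skill demand that the later characterization of the job profiles refers to.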
The analysis of these two data sources allows the compilation of a list of 100 new job
profiles across the six industrial areas, each characterized by a short description, a list
of relevant soft skills, a list of technical skills, and the type and level of the profile.
The second phase aims to consolidate the characterization of the new job profiles,
namely by refining the list of soft and technical skills through the feedback collected
from experts and stakeholders, i.e., professionals with expertise in at least one of the six
target sectors. Finally, and performed in parallel, the third phase analyses the catalogue
of new job profiles from a perspective that identifies their relationship with technological
trends and emergent skills. This analysis identifies the skills that most impact the job
profiles, supporting stakeholders in preparing their skills agenda and training their
workforce for these new profiles.
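The recruitment-repository analysis described above can be sketched in a few lines. The snippet below is a toy illustration, not the project's actual pipeline: the posting titles and the `normalise` helper are invented for the example, and it simply counts how often normalised job titles appear, as a rough proxy for job-market demand.

```python
from collections import Counter
import re

# Hypothetical sample of job-posting titles, standing in for a larger
# crawl of a recruitment repository such as GlassDoor.
postings = [
    "Senior Data Scientist - predictive maintenance",
    "IoT Engineer for smart factory solutions",
    "Data Scientist, industrial analytics",
    "AR/VR Immersive Content Developer",
]

def normalise(title: str) -> str:
    """Lower-case a posting title, strip seniority qualifiers and free-text tails."""
    title = title.lower()
    title = re.sub(r"\b(senior|junior|lead)\b", "", title)
    title = re.split(r"[-,]", title)[0]  # keep only the part before '-' or ','
    return " ".join(title.split())       # collapse leftover whitespace

# Count how often each normalised profile appears, approximating the
# actual demand for that profile in the job market.
demand = Counter(normalise(p) for p in postings)
print(demand.most_common(2))
```

In a real pipeline the counting step would be preceded by crawling and by more robust natural language processing, but the aggregation idea is the same.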

3 Catalogue of New Job Profiles


The catalogue of the new job profiles across the six technological sectors related to
the FoF was obtained by first performing a literature review, followed by an analysis
of recruitment repositories (e.g., www.glassdoor.com). The catalogue of 100 new job
profiles, compiled after an iterative process, is presented per sector in Table 2. At this
stage, it is important to note that a new job profile can be either a completely new job
that did not exist in the past, or an existing job with a profile enhanced
by emergent skills.
266 L. Sakurada et al.

Table 2. Catalogue of identified new job profiles

Sectors Job Profiles


AM (1) 3D printer technician, (2) Nanotechnology engineer, (3) Advanced materials
specialist, (4) Modeling engineer, (5) 3D model expert, (6) Modular engineering
expert, (7) Micrometallurgy engineer, (8) Modeling and microcenography
specialist, (9) Parametric designer, (10) Digital construction and manufacturing
engineer, (11) Digital fabrication sculptors, (12) Bio-microfabrication engineer,
(13) Biotech products and processes expert, (14) Multi-sensor data fusion expert,
(15) Customer experience specialist, (16) Customer relationship management
specialist, and (17) Mimicry and biomimicry engineer
Cobots (18) Intelligent robotics expert, (19) Artificial body programmer, (20) Drones
engineering expert, (21) Drone route developer, (22) Drone route technician, (23)
Cobots expert, (24) Industrial cobots developer, (25) Cobots technician, (26)
Cobots responsible, (27) Humanoid expert and (28) I4.0 project manager
DA (29) Data analyst, (30) Big data analyst, (31) Data scientist, (32) Data analytics
consultant, (33) Data analytics manager, (34) AI engineer, (35) ML engineer, (36)
Data engineer, (37) Quantum systems engineer, (38) Cloud services engineer, (39)
Risks analyst, (40) Cloud services manager, (41) Digital marketing manager, (42)
Business analyst, (43) Business Intelligence developer, (44) Industrial process data
analytics engineer, (45) Route logistics specialist, (46) Benchmarking metrics
manager, (47) Predictive maintenance expert, (48) Data infrastructure architect,
(49) Digital Twin architect, (50) Smart grids specialist, (51) Circular economy
specialist, (52) Industry 4.0 architect, (53) Chief digital architect, (54) Digital
development specialist, (55) Business chief developer and (56) Industrial process
optimizer
CS (57) CS architect, (58) CS specialist, (59) CS manager, (60) Vulnerability manager,
(61) Threat landscape analyst, (62) Forensics analyst, (63) Malware analyst, (64)
Defensive security technician, (65) Cyber internal auditor, (66) Security
incident-handling designer, (67) Security monitoring specialist, (68) Data
detective, (69) Data protection officer, (70) Data security administrator, (71)
Blockchain expert and (72) Test engineer
MMA (73) CPS architect, (74) Smart sensors developer, (75) Smart clothes expert, (76)
IoT engineer, (77) IoT solution technician, (78) Real time systems expert, (79)
Digital systems integrator, (80) I4.0 PLC programmer, (81) Machine decision
supervisor, (82) Smart factory designer, (83) Factory operation assessment expert,
(84) Condition monitoring expert, (85) Resource-efficient intralogistics engineer,
(86) Scheduling and planning expert and (87) System engineer
HMI (88) Operator 4.0, (89) Augmented operator, (90) Smarter operator, (91) Digital
worker, (92) Collaborative operator, (93) Virtual and augmented reality developer,
(94) Factory virtual system designer, (95) VR technician, (96) AR/VR Immersive
content developer, (97) Extended reality architect, (98) Extended reality software
engineer, (99) Industrial UI designer and (100) Industrial UX designer

The most important literature used in this review is listed in [8–20], from which
the majority of the 100 job profiles were identified. For instance, 10 different job pro-
files covering different sectors were retrieved from the Catálogo de Perfís Profesionais
de Futuro report [15], namely Advanced materials specialist, Intelligent robotics expert,
Drones engineering expert, Smart grids specialist, Cybersecurity specialist, Blockchain
expert, Real time systems expert, Extended reality architect, Circular economy special-
ist and Customer experience specialist. Some job profiles are identified in more than
one reference, e.g., Big data analyst [12, 18], Factory virtual system
designer [8, 9, 12] or IoT solution technician [8, 10].
The job profiles were classified according to the type of profile, considering
the following five categories (definitions adapted from www.dictionary.com):

• Architect: a person professionally engaged in the design and conception of a certain
idea, system or product, who must be innovative and skilled in different fields.
• Developer: a person who develops or innovates, with creative thinking and
specialised in some subject.
• Engineer: a person trained and skilled in the design, construction and use of engines
or machines, or in any of various branches of engineering.
• Specialist: a person who has special skills or knowledge in some particular field.
• Technician: a person who is trained or skilled in the technicalities of a subject.

In the same manner, the new job profiles are also classified into three levels: opera-
tional, tactical and strategical. These levels relate to the scope of the decision
making over time: if decisions have a short-term impact and concern individual
employees/units, the job profile is categorized as operational; if decisions persist over
time and influence the performance of the plant as a whole, the profile is classified as
tactical or strategical, with strategical decisions having the longer-lasting influence.
Additionally, each job profile has a set of soft skills and a list of technical skills
that represent the requirements for the job position. For simplicity of representation,
the jobs shown in Table 2 are associated with a single sector, but the majority of them
are relevant to more than one sector: for example, several profiles related to Cobots
are also related to MMA, and several job profiles in DA are also related to CS.

4 Characterisation of the New Job Profiles


In this section, the characterization of the 100 identified new job profiles is presented
and discussed. Figure 2 illustrates the categorisation of the job profiles, from which
several relevant aspects emerge. Regarding the type of profile, the most represented
category, with approximately 42% of the job profiles, is "Specialist", while the
least represented is "Developer", with only 11% of the jobs in the catalogue. The
remaining categories comprise 19% of the jobs labelled as "Engineer", 16% as "Tech-
nician" and 12% as "Architect".

Fig. 2. Categorisation of the new job profiles.

Looking at the level of the profiles, the majority of the new job profiles may be consid-
ered "Tactical" jobs (62%), 28% "Operational" jobs, and only 10%
"Strategical" jobs. These percentages are consistent with the observed distribution of the
types of profiles, since the majority of them are tactical-level job positions, a
minority are strategic-level jobs, and only the "Technician" type of
profile is categorised as an "Operational"-level job position.
Considering the distribution of the new jobs across the six industrial sectors
included in this study, approximately 28% of the listed jobs are in Data Analytics
and 16% in Cybersecurity; together, these two areas comprise 44% of the new
job profiles, revealing the "value of data" in the FoF. A further 39% of the identified new
job profiles are distributed across the Mechatronics/Machine Automation, Collaborative
Robotics and Human-Machine Interface areas. Finally, 17% of the jobs are related to the
Additive Manufacturing sector. This categorisation reveals the importance of the listed
new job profiles for smart factories in the context of Industry 4.0.
Additionally, a deeper analysis was performed to identify whether each job pro-
file is really a "new job", and consequently has a "new profile", or an "existing
job" with a "new profile". The result of this classification is illustrated in the scatter
diagrams of Fig. 3, where each number corresponds to a specific job profile listed in
Table 2 and the colour specifies the type of profile.

Fig. 3. Dispersion of new job profiles: left) industrial sector and type of profile, and right) type
and level of profile.

The analysis of both diagrams shows the dispersion of the job profiles included
in the catalogue and brings together all the performed categorisation: 64% of the job
profiles were considered "New Job/New Profile", while 36% of the jobs in the catalogue
are existing job positions with a new profile. As an example, sixteen job profiles are
labelled as "Technician" and are therefore categorised as "Operational"-level job
profiles. Other types of profiles were also considered to be of "Operational" level, e.g.,
the "(72) Test engineer" position demanded by the cybersecurity sector is considered
an "existing job" with a "new profile" and categorised as an "Engineer" profile type.
A similar analysis can be performed for all the job profiles included in the catalogue.
In summary, it is important to point out that although the majority of the jobs identi-
fied in the catalogue are new job profiles needed by employers, there are also existing
job positions that will have new profiles, thus requiring new skills and competencies.
Another aspect that can be highlighted is that the FoF will require more workers
with specific competencies, since a significant number of the analysed job positions
(72%) were labelled as tactical or strategic level, where several specific skills and/or
competencies may be mandatory.
With the aim of emphasizing the most relevant skills for the identified new job pro-
files, the soft and technical skills required for each job profile were also identified.
Figure 4 illustrates, as a network graph, the relationship between the required soft skills
and the different types of job profiles.

Fig. 4. Required soft skills per type of profile.

Our analysis revealed that some of the listed soft skills are cross-cutting among the
considered types of profiles. For example, "critical/analytical thinking", "team work",
"capacity to adapt to new situations", and "communication skills" are required in all
the considered types of profile, although some skills are more often required by
employers than others. The "creativity", "communication skills", "leadership",
"problem-solving", and "team work" soft skills are of great importance for the "special-
ist", "architect", "developer", and "engineer" types of profile. For a "tech-
nician" job profile, the set of the most demanded soft skills is quite different, because
this is an operational-level job profile: skills such as "team work", "capacity to adapt
to new situations", "continuous learning", and "continuous skill development" are
more often required.
A similar analysis was also conducted to understand the relationship between the
most often demanded technical skills and each one of the six industrial sectors included
in this study. Figure 5 illustrates the most required technical skills for each target sector.

Fig. 5. Required technical skills per industrial sector.

Taking into account the technical skills demanded in the new job pro-
files, a large number of different technical skills was found, since the job profiles cover
six different technical sectors. However, it is possible to highlight some cross-cutting
skills, as well as some of the most relevant technical skills for each of the six studied
industrial sectors.
We may point out that some technical skills, such as "scheduling", "smart sensors",
"IoT", "ML", "programming", "AI", "digital skills", "virtual reality", "augmented real-
ity", "optimisation", "simulation", "statistics", and "communication networks", may
be considered cross-cutting skills, since they are demanded in the job announcements
of the different industrial sectors. For example, "AI" is a required skill in job positions
across all the industrial sectors, and "digital skills" is a necessary skill in the AM, HMI,
and Cobots sectors. Additionally, it is also possible to observe the relevance of each
skill for a specific industrial sector: in the network graph shown in Fig. 5, the greater
the thickness of a line, the greater the relevance of that skill for the sector. For example,
considering the HMI industrial sector, "augmented reality", "virtual reality" and
"digital skills", together with "AI" and "programming", appear as the most relevant
technical skills, since they were frequently required in the job profiles of this sector.
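The network graphs of Figs. 4 and 5 can be viewed as weighted bipartite graphs between skills and sectors, where the edge weight (the line thickness) is the number of job profiles in a sector demanding a skill. The sketch below shows one way to build such edge weights and to flag cross-cutting skills; the sector/skill lists are invented for illustration and are not the study's actual data.

```python
from collections import defaultdict

# Hypothetical excerpt of the catalogue: (sector, technical skills demanded
# by one job profile in that sector). Counts are illustrative only.
profiles = [
    ("HMI", ["augmented reality", "virtual reality", "AI"]),
    ("HMI", ["virtual reality", "digital skills", "programming"]),
    ("DA",  ["AI", "ML", "statistics"]),
    ("AM",  ["3D modelling", "AI", "digital skills"]),
]

# Edge weight = number of job profiles in the sector demanding the skill.
edges = defaultdict(int)
for sector, skills in profiles:
    for skill in skills:
        edges[(sector, skill)] += 1

# A skill is "cross-cutting" when it is demanded by several sectors.
sectors_per_skill = defaultdict(set)
for (sector, skill) in edges:
    sectors_per_skill[skill].add(sector)

cross_cutting = sorted(s for s, secs in sectors_per_skill.items() if len(secs) >= 3)
print(cross_cutting)
```

With this toy sample, "AI" is the only skill spanning three sectors; on the full catalogue the same tally reproduces the cross-cutting skills discussed above.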

5 Conclusions
Across several industrial revolutions, job profiles have evolved to face disruptive tech-
nological changes. At present, in the fourth industrial revolution, the introduction
of Industry 4.0 principles and technologies is re-shaping workforce profiles, with a
noticeable decrease in the demand for low-skilled activities and an increase in high-skilled
activities. This change will make a significant number of existing job profiles
obsolete, while new job profiles will emerge.
This paper aims to identify the new job profiles for the FoF across six industrial
technological sectors, namely Cobots, AM, MMA, DA, CS and HMI. The performed
analysis allowed the compilation of a catalogue of 100 new job profiles, which were
characterized and analysed in terms of technical and soft skills, and type and level of
profile. The characterization of these new job profiles enabled an analysis of their
distribution by type and level of profile, as well as by industrial sector. A deeper
analysis led to conclusions about the relevance of soft and technical skills for these
new job profiles, particularly the most relevant soft skills per type of profile and the
most relevant technical skills per industrial sector.
It is also important to note that the developed analysis can answer the question of
what job profiles the future holds in the FoF field. This information may play a crucial
role in supporting companies' managers and stakeholders in deciding which upskilling
initiatives their workforce should attend, according to the needs, particularities and
goals of their organization. In fact, having identified the relationship between new job
profiles and relevant skills, decision-makers can look at the positioning of relevant
skills in the desired type of profile and sector, and select the proper training programme
topics.
Future work will be devoted to analysing the relationship of the new job profiles
with technological trends and with the demand over time found in job recruitment
repositories.

Acknowledgement

This work is part of the FIT4FoF project that has received funding from the
European Union’s Horizon 2020 research and innovation programme under grant
agreement n. 820701.

References
1. Manyika, J., Lund, S., Chui, M., Bughin, J., Woetzel, J., Batra, P., Sanghvi, R., Saurabh, K.:
Jobs Lost, Jobs Gained: Workforce Transitions in a Time of Automation. McKinsey Global
Institute, December 2017
2. Lin, S.J.: Technological adaptation, cities, and new work. Rev. Econ. Stat. 93(2), 554–574
(2011)
3. Fine, D., Klier, J., Mahajan, D., Raabe, N., Schubert, J., Singh, N., Ungur, S.: How to rebuild
and reimagine jobs amid the coronavirus crisis. McKinsey & Company, April 2020
4. Perry, T.S.: Tech Jobs in the Time of COVID: cybersecurity job openings explode, while the
job market gets tougher for Web developers and Ruby experts. IEEE Spectrum, April, 2020
5. Lund, S., Ellingrud, K., Hancock, B., Manyika, J.: COVID-19 and jobs: Monitoring the US
impact on people and places. McKinsey Global Institute, April 2020
6. Verbist, N.: How COVID-19 is accelerating digitalization. https://hello.global.ntt/en-us/
insights/blog/how-covid-19-is-accelerating-digitalization. Accessed 20 May 2020
7. Leitão, P., Geraldes, C., Fernandes, P., Badikyan, H.: Analysis of the workforce skills for the
factories of the future. In: Proceedings of the 3rd IEEE International Conference on Industrial
Cyber-Physical Systems (ICPS 2020), pp. 353–358 (2020)
8. Ras, E., Wild, F., Stahl, C., Baudet, A.: Bridging the skills gap of workers in industry 4.0
by human performance augmentation tools: challenges and roadmap. In: Proceedings of
the 10th International Conference on PErvasive Technologies Related to Assistive Environ-
ments, pp. 428–432 (2017)
9. Kaji, J., Hurley, B., Devan, P., Bhat, R., Khan, A., Gangopadhyay, N., Tharakan, A.G.: Tech-
nology, Media and Telecommunications Predictions 2020. Deloitte Report (2019)
10. McAfee, A., Brynjolfsson, E.: The Second Machine Age: Work, Progress, and Prosperity in
a Time of Brilliant Technologies. W. W. Norton & Co. (2016)
11. Tseng, M.-L., Tan, R.R., Chiu, A.S.F., Chien, C.-F., Kuo, T.C.: Circular economy meets
industry 4.0: can big data drive industrial symbiosis? Resour. Conserv. Recycl. 131, 146–
147 (2018)
12. Mechanical Engineering Industry Association, “Industrie 4.0 in practice-Solutions for indus-
trial applications”, report (2016)
13. Olivan, A.D., Ser, J., Galar, D., Sierra, B.: Data fusion and machine learning for industrial
prognosis: trends and perspectives towards Industry 4.0. Inf. Fus. 50, 92-111 (2019)
14. Ruppert, T., Jaskó, S., Holczinger, T., Abonyi, J.: Enabling technologies for operator 4.0: a
survey. Appl. Sci. 8(9), 1650 (2018)
15. de Galicia, X.: Catálogo de Perfís Profesionais de Futuro. Technical report (2019)
16. ManuFuture High level group. Manufuture vision 2030, report (2018)
17. Basco, A.I., Beliz, G., Coatz, D., Garnero, P.: Industria 4.0: fabricando el futuro (2018)
18. Queiroz, J, Leitão, P., Barbosa, J., Oliveira, E.: Distributing intelligence among cloud, fog
and edge in industrial cyber-physical systems. In: Proceedings of the 16th International Con-
ference on Informatics in Control, Automation and Robotics (ICINCO 2019), vol. 1, pp.
447–454 (2019)
19. Mabkhot, M.M., Al-Ahmari, A.M., Salah, B., Alkhalefah, H.: Requirements of the Smart
Factory System: A Survey and Perspective. Machines 6(2), 23 (2018)
20. Benesova, A., Tupa, J.: Requirements for education and qualification of people in industry
4.0. Procedia Manuf. 11, 2195–2202 (2017)
Evaluation Methods of Ergonomics Constraints
in Manufacturing Operations for a Sustainable
Job Balancing in Industry 4.0

Nicolas Murcia1,2 , Abdelmoula Mohafid1 , and Olivier Cardin1(B)


1 LS2N, UMR CNRS 6004, Université de Nantes, IUT de Nantes, 44 470 Carquefou, France
{nicolas.murcia,abdelmoula.mohafid,olivier.cardin}@ls2n.fr
2 AIRBUS, 60 Rue Anatole France, 44550 Montoir-de-Bretagne, France

Abstract. Over the years, human factors have become increasingly decisive in
the organization of the production process in the manufacturing industry. This article
gives an overview of how ergonomics is integrated into the complete job-scheduling
optimization process, focusing specifically on the collection of ergonomic
data. A large variety of tools and methods have been developed to assess physi-
cal and psychosocial risks in a working environment. We review the
principal methods described in the literature, grouped into three main categories:
observational, self-evaluation and direct measurement. This large diversity of eval-
uation methods is directly linked with the flexibility health experts require
to analyze various field situations precisely. Most of the ergonomic-based
job-scheduling applications reviewed use a different method, which makes
it difficult to compare directly the efficiency of the subsequent optimization.

Keywords: Human factors · Ergonomics · Industry 4.0 · Human resource
planning · Ergonomic assessment methods

1 Introduction
Following the societal goal of Industry 4.0 to introduce human factors into man-
ufacturing process management, safety at work has become an even more important
concern for the manufacturing industry in order to improve performance and sustain-
ability. Over the last twenty years, the evolution of manufacturing processes has
increased the risk of work-related disease for workers, partially due to the transition
towards lean management, which contributes to the intensification of work and the
reduction of cycle times in industry [1]. Workers in manufacturing industries usually
perform repetitive tasks that expose them to an intense physical workload that can
induce work-related disorders such as musculoskeletal disorders [2]. Musculoskeletal
disorders (MSDs) are injuries and disorders that affect the soft tissues of the human
body (i.e., muscles, nerves, tendons, ligaments, etc.) and restrict the body's
movement [3].
MSDs have a huge social impact: a third of European workers from every activity
sector currently suffer from MSDs, which affect about 45 million workers in Europe
[4]. In Europe, MSD risks are amplified by the "Ageing Workforce" phenomenon [5],
i.e., the disparity between the proportion of workers aged 50 or more and workers
aged 25 or younger; the first category is currently double the size of the second.
MSDs have deleterious effects on workers' quality of life and are the main cause of
sick leave and lost work days [6]. The cost of the lost productivity due to MSDs is
estimated to be close to 2% of the gross domestic product in Europe [5]. The main
risk factors for MSDs are biomechanical constraints, but it is widely accepted that
occupational diseases at work are caused by multifactorial constraints [7].

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021
T. Borangiu et al. (Eds.): SOHOMA 2020, SCI 952, pp. 274–285, 2021.
https://doi.org/10.1007/978-3-030-69373-2_19
The arduousness of a task is the perceived effort required to perform it; it is the main
ergonomic datum used to evaluate the physical characteristics of a job. To evaluate
this arduousness and the associated physical risk, three classes of methods exist:
observational methods, self-evaluation questionnaires and direct measurement methods
[7]. The choice of ergonomic assessment method is based on the purpose of the
evaluation, the characteristics of the work to be assessed and the resources available
for collecting and analysing the data. In order to improve their societal sustainability,
managers of manufacturing organizations work with ergonomists to find solutions to
these ergonomic problems and reduce physical and psychological risks. In the
literature, most ergonomic evaluation processes follow the steps presented in Fig. 1.

Fig. 1. Example of ergonomic evaluation process

Before taking any ergonomic action, the first step is to identify physical risks such
as bad posture, heavy weight loads or the repetitive aspect of tasks that could cause
MSDs for the workers. This identification can be done by registering the complaints
of workers who report an arduous situation during the operations they perform.
Important physical risks can also be revealed by an increase in occupational
diseases or by significant absenteeism in the manufacturing plant or, more precisely, at
the workstation. These identifications lead to an ergonomic evaluation carried out with
the help of a health expert, who selects a measurement method in order to expose the
physical risks and evaluate the risk level of the identified situation. Once the evaluation
has been made, the objective is to find a solution that respects a budget and reduces
the risks for the workers. The proposed solution is often an improvement of the
workstation design or the addition of a technical solution, for example an exoskeleton
to relieve the worker when carrying a load. The ergonomic evaluation ends with the
feedback from the worker and the evaluation of the method used to improve the
working situation.
Today, many technical solutions involving a modification of the workstation are
either already used in practice or too expensive to be achievable in the manufacturing
industry. In both cases, risks remain, as no solution is 100% efficient. In this context,
we are interested in the methods developed by managers after this full ergonomic
phase to further reduce the physical risks for the workers.
In this paper, we describe the global process of ergonomic-based job balancing
found in the literature. We then compare the different methods used to gather data
on the various ergonomic risks, classified into three sub-categories: observational
methods, self-evaluation methods and measurement-based methods. Finally, we discuss
these methods and their use in ergonomic-based job balancing.

2 Ergonomic-Based Job Balancing Process in Literature

Studies and investigations have shown that performing repetitive movements at work
is one of the principal risk factors for occupational disease, along with awkward postures
and heavy weight lifting [8]. The first instances of job rotation appeared in 1975 with
the development of the new Toyota production system and the first use of lean
management [9]. The job-rotation idea came from workers who wanted more flexibility
in the time window of their breaks: whenever someone wanted to take a break, another
worker filled the gap at the workstation. Ergonomists and managers in the manufacturing
industry then started to develop job rotation as a solution to reduce the repetitive strain
on workers. Most ergonomics-based job balancing approaches in the literature, and
their applications, follow a similar process.
For this process, we identified three different phases, highlighted in Fig. 2. The first
is built around the identification and measurement of the ergonomic risk factors. The
first important choice is the selection of the ergonomic risk factors used in the
optimization; for example, it can be centred on physical risks such as postural risks,
or on psychosocial risks like job satisfaction. Either way, these ergonomic risks are
measured with the help of a health expert, following a precise assessment method
selected beforehand. Once the ergonomic risk variables are defined and associated
with a value representing the risk level, the ergonomic-based optimization problem can
be defined. Given that ergonomic measures are often qualitative, the data may require
adaptation in order to fit into an optimization problem: for example, a colour-code
measure can be transformed into a score [10], or an exposure level into an injury risk
[11]. Job-assignment optimization problems are mainly formulated as line balancing
or job-rotation problems [12–14]. The distribution of the physical risk among the
production operators is the most common objective in these formulations; some studies
also consider the economic aspect, leading to a multi-objective formulation combining
economic and ergonomic variables [12].
Fig. 2. Overview of an ergonomic-based job-balancing process

Once the problem has been formulated, the remaining phase is to solve it, mostly by
means of a solver or a meta-heuristic, and to evaluate the result of the optimization.
Evaluating the result is not trivial, because ergonomic variables are hard to quantify,
but it remains possible. The outcome can also be compared with the initial situation and
linked with the feedback given by the production operators to assess the optimization
method. In the next sections we take a deeper look into phase 1, the evaluation of
ergonomic risks at work, and review the different ergonomic assessment methods.
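As a toy illustration of phases 2 and 3, the sketch below first maps colour-coded ergonomic ratings to numeric scores (in the spirit of the colour-to-score transformation cited above) and then assigns tasks so that the cumulative physical risk is spread across workers. The greedy rule stands in for the solvers and meta-heuristics used in the literature, and all task data and score values are invented for the example.

```python
# Hypothetical mapping from a colour-code risk rating to a numeric score.
COLOUR_SCORE = {"green": 1, "yellow": 3, "red": 9}

# Invented tasks with their colour ratings, and two workers to rotate.
tasks = {"T1": "red", "T2": "yellow", "T3": "green", "T4": "red", "T5": "yellow"}
workers = ["W1", "W2"]

# Greedy balancing: assign the riskiest tasks first, each one to the
# worker with the lowest cumulative risk exposure so far.
load = {w: 0 for w in workers}
assignment = {w: [] for w in workers}
for task, colour in sorted(tasks.items(), key=lambda kv: -COLOUR_SCORE[kv[1]]):
    w = min(workers, key=load.get)       # least-loaded worker (ties -> W1)
    assignment[w].append(task)
    load[w] += COLOUR_SCORE[colour]

print(load)  # cumulative risk exposure per worker
```

A real job-rotation model would add constraints (worker skills, rotation periods, economic objectives) and be solved exactly or with a meta-heuristic, but the risk-spreading objective is the same.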

3 Observational Methods
Observational methods are commonly used to evaluate physical risks. Applied by health
experts, they consist of examining the work process and evaluating the risk factors for
the worker according to a checklist or a grid. The duration of exposure, the intensity of
a task, repetitive exposure to a risk and uncomfortable postures are evaluated.
These methods are commonly used by ergonomists to find dangerous situations.
In the manufacturing industry, and especially in the automotive industry, observational
methods are widely used to assess the ergonomic constraints of each workstation individ-
ually. The result is a risk level for a given criterion, expressed quantitatively with a
score or qualitatively with a colour code: red represents an important physical risk,
yellow a moderate risk and green a safe situation. There are more than 30 observational
methods [15], with notable differences in the parts of the body evaluated and the
thresholds for the different criteria [16]. Some of the most commonly used methods
are presented in Table 1.
278 N. Murcia et al.

Table 1. Comparison of the most common observational risk assessment methods in the literature

Method | Description of the method | Criteria measured | Studies using the method
RULA: Rapid Upper Limb Assessment [17] | Survey method for the inspection of work-related upper limb disorders, specific to repetitive tasks | Posture of the neck, trunk and legs, muscle use, repetition and force | [18, 19]
REBA: Rapid Entire Body Assessment [20] | Assessment method for strained posture on the whole body (trunk, neck, legs, upper arms, lower arms and wrists) | Posture of the body, intensity, movement, repetition and coupling | [21]
OWAS: Ovako Working posture Analysis System [22] | Method for analysing the effect of strained posture on the whole body | Posture of the body | [23]
OCRA: Occupational Repetitive Actions [24] | Method for assessing the effect of workload on the upper body | Posture, weight lifted and vibration | [25]
EWAS: European Assembly Worksheet | Screening tool for the impact of workload on the whole body | Posture, intensity of the action, movement and manual handling | [26]
Revised NIOSH Lifting Equation [27] | Observation method for analysing the physical and psychological impact of weight lifting on the body | Posture and weight handling | [19, 28]
QEC: Quick Exposure Check | Observation method for the whole body, linked with the worker's evaluation | Posture and intensity of the action performed | [29]
SI: Strain Index | Method to analyse the risk of a task on the wrist and hand | Position of the hand, repetition, duration and intensity of the action | [30]

These observation-based assessments of physical risks mostly measure the risk associated with postures and with the intensity of a task, on the whole body or on specific body regions. However, these methods often provide different results and are not directly comparable, largely because the outcome depends on the observer and because there are no defined standards for ergonomic measures [16, 17, 31]. It is indeed impossible for an observation method to be optimal for all purposes, and the parameters of the measure affect the selection of the best method [16]. More advanced methods, including video analysis of tasks, have been developed to improve the accuracy of ergonomic assessment. Video analysis allows ergonomists to review a given working situation multiple times and to reflect with the worker on the arduousness of that situation.
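Among the methods in Table 1, the Revised NIOSH Lifting Equation is one of the few with a closed-form score: a Recommended Weight Limit (RWL) obtained by multiplying a 23 kg load constant by penalty multipliers, and a Lifting Index (load/RWL) whose value above 1 flags an elevated risk. The sketch below implements the metric form of the distance-based multipliers; the frequency (FM) and coupling (CM) multipliers come from published lookup tables and are assumed here to be supplied by the caller.

```python
def niosh_rwl(h_cm, v_cm, d_cm, a_deg, fm, cm):
    """Recommended Weight Limit (kg), metric form of the revised
    NIOSH lifting equation.

    h_cm: horizontal hand-load distance, v_cm: vertical hand height,
    d_cm: vertical travel distance, a_deg: asymmetry angle (degrees).
    fm, cm: frequency and coupling multipliers from the NIOSH tables.
    """
    LC = 23.0                                    # load constant, kg
    HM = min(1.0, 25.0 / max(h_cm, 25.0))        # horizontal multiplier
    VM = 1.0 - 0.003 * abs(v_cm - 75.0)          # vertical multiplier
    DM = min(1.0, 0.82 + 4.5 / max(d_cm, 25.0))  # distance multiplier
    AM = 1.0 - 0.0032 * a_deg                    # asymmetry multiplier
    return LC * HM * VM * DM * AM * fm * cm

def lifting_index(load_kg, rwl_kg):
    """LI > 1 indicates an increased risk of lifting-related low-back injury."""
    return load_kg / rwl_kg
```

Under ideal conditions (H = 25 cm, V = 75 cm, D = 25 cm, no asymmetry, FM = CM = 1), the RWL equals the 23 kg load constant; any deviation from these conditions lowers it.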
Evaluation Methods of Ergonomics Constraints in Manufacturing Operations 279

4 Self-evaluation Methods
Self-evaluation methods have been developed to collect data directly from workers by asking questions about their health and their perception of physical risks at work. These methods are mostly used in studies evaluating risk factors at work and their impact on the subjects' health: during an evaluation of ergonomic risks at work, a self-evaluation questionnaire can identify the different risk factors and their impact on the workers' health.

Table 2. Studies using self-evaluation methods

Reference | Description of the self-evaluation method | Population sample | MSD risks highlighted
[36] | Evaluation of the worker population's age, length of service, height, BMI, job satisfaction and perceived strain | Workers in the automotive industry (n = 1700) | Job-induced strain and age
[37] | Questionnaire measuring pain/discomfort at work during the last week, its intensity and its interference with the ability to work | Nursing personnel (n = 113) | Number of highest-risk tasks per hour, number of overweight patients and nurse status
[32] | Questionnaire on musculoskeletal symptoms for each body part, according to age and gender | Randomly selected workers from New Zealand (n = 3003) | Localization of the MSD risk on the body
[38] | Standardized Nordic questionnaire used to compute the prevalence of musculoskeletal discomforts | Workers in the meat processing industry (n = 174) | Physical, psychological and individual indicator variables
[39] | Nordic musculoskeletal questionnaire used to determine MSD symptoms for each body region | Female workers in a hazelnut factory (n = 162) | Employment duration and results on the Nordic questionnaire

These ergonomic risks include physical risks such as incorrect posture, high force exertion, repetitive movement and heavy weight lifting [31]. Personal information such as age, gender or height is also requested in a self-evaluation questionnaire in order to highlight the links between this individual information and musculoskeletal disorders [32]. Self-evaluation questionnaires can also assess organizational
and psychosocial risk factors [33, 34]. The advantages of self-evaluation methods are the possibility of surveying a large population and of gathering data over time.
In most articles using self-evaluation methods during an ergonomic investigation in the manufacturing industry, the questionnaires are used as a starting point to identify the prevalence of MSD symptoms in the studied population. However, the reliability of these surveys might be altered by the feelings and out-of-work activities of respondents and by possible misinterpretation of the questions [35]. Some examples of studies using self-evaluation methods are detailed in Table 2.
One of the most frequently used questionnaires is the Nordic Musculoskeletal Questionnaire, which collects data on the pain experienced by workers over the last seven days and the last 12 months for each body part, and relates it to their personal information [40]. The Karasek questionnaire is another self-evaluation method, aiming to measure stress at work by collecting data about the psychosocial aspects of the job [41].
Self-evaluation questionnaires are also used in large-scale medical cohort studies in order to evaluate the prevalence of MSDs in the general population and the associated risk factors [42].
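As an illustration of how such questionnaire data is typically aggregated, the sketch below computes the per-region symptom prevalence from Nordic-style answers; the data encoding (one set of painful body regions per respondent) is an assumption made for the example, not a standard format.

```python
from collections import Counter

# Hypothetical encoding: for each respondent, the body regions with pain
# reported over the last 12 months (empty set = no symptoms).
responses = [
    {"neck", "lower_back"},
    {"lower_back"},
    {"neck", "shoulder", "wrist"},
    set(),
]

def prevalence_by_region(responses):
    """Fraction of respondents reporting symptoms, per body region."""
    counts = Counter(region for answer in responses for region in answer)
    n = len(responses)
    return {region: count / n for region, count in counts.items()}

print(prevalence_by_region(responses)["lower_back"])  # 0.5
```

The same aggregation, broken down by age or gender as in [32], is what allows such studies to relate personal information to MSD symptoms.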

5 Measurement-Based Methods

5.1 Direct Measurement Methods

Direct measurement methods consist of attaching sensors to the subject's body segments to measure exposure variables at work [7]. The tools used include electromyography, accelerometers and force measurement devices; more recently, motion capture technology has been used to assess the exposure constraints of a subject performing a technical operation.
Whereas the two other families of methods measure a subjective interpretation of the physical risks, direct measurement methods give an objective assessment of the physical exposure. However, this physical exposure is assessed without taking the operator's feelings into account, which makes it impossible to normalize a given physical exposure across all workers. These direct measurement methods were originally developed to measure athletes' capacities.
However, these methods are rarely applied in the manufacturing industry because they are too costly to put into practice and it is almost impossible to gather data from a large population. They are also inconvenient in practice because they collect a huge amount of data that is difficult to process in a short time, and that is often insufficient to establish a link between the measurement and the possible physical risk for the operator. In fact, this high precision is often not needed to select a technical solution for reducing the physical risk at a workstation.
Other inconveniences of these methods are the difficulty of gathering data over a long period and the possible bias that workers wearing the device might not perform the task during the experiment as they would in practice.
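To give a concrete idea of the data-reduction step these sensor streams require, the sketch below condenses a raw accelerometer trace into per-window root-mean-square values, a simple intensity summary; the windowing scheme is an illustrative assumption, not a method from the cited literature.

```python
import math

def window_rms(samples, window):
    """RMS of a sensor signal over non-overlapping windows of fixed size,
    reducing a long raw trace to a short list of intensity values."""
    out = []
    for start in range(0, len(samples) - window + 1, window):
        chunk = samples[start:start + window]
        out.append(math.sqrt(sum(x * x for x in chunk) / window))
    return out

# A 4-sample trace reduced to two windows of intensity ~0.707 each
print(window_rms([0.0, 1.0, 0.0, -1.0], window=2))
```

Even this trivial reduction divides the data volume by the window size, which hints at why processing full-resolution recordings for a whole shift, let alone a whole workforce, quickly becomes impractical.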

5.2 Virtual Measurement Methods


Traditionally, ergonomic evaluation in industry is a reactive tool used to identify areas of improvement of the working situation. However, a new trend is emerging: instead of developing a solution to fix a physical risk for the production operator, the objective is to take the ergonomic constraints into account during the design of the workstation. Using numerical simulation and virtual reality tools, it becomes possible to forecast a possible physical risk for the production operator directly during the design phase of the workstation [43]. Industries are interested in integrating ergonomic constraints directly into the design phase, as it costs less than applying corrective actions to ease the physical constraints of a workstation afterwards [44]. Motion capture is a promising tool for measuring ergonomic constraints. Sensor-based motion capture methods for assessing ergonomic constraints started to be developed some 15 years ago; however, they have not bloomed in the manufacturing industry because the equipment is expensive and considered impractical for manufacturing operations [45]. More recently, sensor-less methods have been developed and have attracted the attention of researchers; an example is the Microsoft Kinect, a tool derived from the gaming sector [46].
Motion capture technologies are an important aid for capturing and tracking the exact actions performed during a manufacturing operation. These measures can be processed to virtually simulate a digital twin of the production operator's physical situation in a virtual working environment. This simulation can determine at-risk situations according to predetermined criteria and can be used to evaluate candidate modifications of the working environment. As this technology matures, we can expect an increasing interest from manufacturing companies in the following years.
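As a minimal illustration of how motion-capture output feeds such an evaluation, the sketch below derives a joint angle from three tracked 3-D keypoints (e.g. shoulder, elbow and wrist, as a Kinect-style skeleton provides); posture scores in methods like RULA or REBA are then read off from such angles. The choice of keypoints is an assumption made for the example.

```python
import math

def joint_angle(a, b, c):
    """Angle at joint b (degrees) formed by keypoints a-b-c,
    e.g. shoulder-elbow-wrist for elbow flexion."""
    v1 = [a[i] - b[i] for i in range(3)]
    v2 = [c[i] - b[i] for i in range(3)]
    dot = sum(x * y for x, y in zip(v1, v2))
    norm = math.dist(a, b) * math.dist(c, b)
    cos_t = max(-1.0, min(1.0, dot / norm))  # clamp rounding noise
    return math.degrees(math.acos(cos_t))

# A fully extended arm: the three keypoints are collinear (angle near 180)
print(joint_angle((0, 0, 0), (0.3, 0, 0), (0.6, 0, 0)))
```

Applied frame by frame to a recorded operation, such angles form the time series from which at-risk postures can be flagged against predetermined thresholds.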

6 Discussion
MSDs develop over a lifetime; proof of physical risk reduction through job balancing is hard to obtain because it requires a long-term study. However, the production worker's perception is a good indicator of the benefits of the scheduling methods and can at least show the efficiency of these optimization methods over the short and medium term.
In the literature, over a hundred different methods have been identified for measuring ergonomic risks [15]; in practice this number tends to grow, because health experts can adapt these methods to their exact situation. Furthermore, these methods often produce different results for the same situation [16]. Some methods aim to measure the severity of the physical risks, whereas others measure a discomfort value or even the work-related pain over a given period. This large disparity in ergonomic measurement methods is reflected in the data used by the different ergonomic-based job-balancing methods existing in the literature [12], and it makes it hard to compare the enumerated solutions directly. However, the advantage of having a wide range of ergonomic risk assessment methods is that health experts can choose the method best fitting their needs and the constraints of the environment in which the measure is taken. This gives the opportunity to obtain more accurate data for a possible optimization taking ergonomic-based data into account.

In the manufacturing industry, these methods are not often exploited, and mostly at an experimental scale. Paradoxically, within the fourth industrial revolution, the most used ergonomic assessment methods currently seem to be self-reported questionnaires and observational methods, which are the more traditional ones. This gap with technological development can be explained by the high cost and time consumption of the most recent measurement-based tools. The trend can also be explained by the fact that ergonomic data is hard to process and use in mathematical models, and greatly increases the complexity of ergonomic-based optimization algorithms.
To sum up, Industry 4.0 intends to take ergonomic-based criteria into account in real-time factory management, but the whole process is not ready yet. Managers would like an integrated tool solving the problem in an automated manner, but the metrics are not standardized, the technology is not ready and the models are too complex. Therefore, a conclusion is that, at least in a transient phase, the focus should be brought to decision support tools helping to assign tasks to operators based on an evaluation of their past constraints.

7 Conclusion
This paper proposed an overview of the different tools used to perform ergonomic assessments in the manufacturing industry. Over the years, many tools have been developed, covering a wide range of ergonomic assessment methods. In the field, an ergonomic assessment is often expensive because it requires time and the expertise of ergonomists. Hence, it is essential to have a good knowledge of the environment in order to determine the scope of the study and the potential risks. This large range of tools explains the diversity of the ergonomic data measurement methods used in ergonomic-based job balancing [12]. However, the complexity of line-balancing algorithms taking such constraints into account generally prevents managers from integrating such tools in their workshops. Therefore, decision support tools should be developed in order to cope with this current technological deadlock.

References
1. Koukoulaki, T.: The impact of lean production on musculoskeletal and psychosocial risks: an
examination of sociotechnical trends over 20 years. Appl. Ergon. 45, 198–212 (2014)
2. Antwi-Afari, M.F., Li, H., Edwards, D.J., Pärn, E.A., Seo, J., Wong, A.Y.L.: Biomechanical
analysis of risk factors for work-related musculoskeletal disorders during repetitive lifting
task in construction workers. Autom. Constr. 83, 41–47 (2017)
3. Bernard, B.P., Putz-Anderson, V.: Musculoskeletal disorders and workplace factors; a critical
review of epidemiologic evidence for work-related musculoskeletal disorders of the neck,
upper extremity, and low back, U.S. Department of Health and Human Services (1997)
4. Parot-Schinkel, E., Descatha, A., Ha, C., Petit, A., Leclerc, A., Roquelaure, Y.: Prevalence
of multisite musculoskeletal symptoms: a French cross-sectional working population-based
study. BMC Musculoskelet. Disord. 13, 122 (2012)
5. Bevan, S.: Economic impact of musculoskeletal disorders (MSDs) on work in Europe. Best
Pract. Res. Clin. Rheumatol. 29, 356–373 (2015)
6. Roux, C.H.: Impact of musculoskeletal disorders on quality of life: an inception cohort study. Ann. Rheum. Dis. 64, 606–611 (2005)
7. David, G.C.: Ergonomic methods for assessing exposure to risk factors for work-related
musculoskeletal disorders. Occup. Med. 55, 190–199 (2005)
8. Van Tulder, M., Malmivaara, A., Koes, B.: Repetitive strain injury. Lancet 369, 1815–1822
(2007)
9. Muramatsu, R., Miyazaki, H., Ishii, K.: A successful application of job enlargement/enrichment at Toyota. IIE Trans. 19, 451–459 (1987)
10. Moussavi, S.E., Zare, M., Mahdjoub, M., Grunder, O.: Balancing high operator’s workload
through a new job rotation approach: application to an automotive assembly line. Int. J. Ind.
Ergon. 71, 136–144 (2019)
11. Sobhani, A., Wahab, M.I.M., Neumann, W.P.: Incorporating human factors-related perfor-
mance variation in optimizing a serial system. Eur. J. Oper. Res. 257, 69–83 (2017)
12. Otto, A., Battaïa, O.: Reducing physical ergonomic risks at assembly lines by line balancing
and job rotation: a survey. Comput. Ind. Eng. 111, 467–480 (2017)
13. Padula, R.S., Comper, M.L.C., Sparer, E.H., Dennerlein, J.T.: Job rotation designed to prevent
musculoskeletal disorders and control risk in manufacturing industries: a systematic review.
Appl. Ergon. 58, 386–397 (2017)
14. Grosse, E.H., Calzavara, M., Glock, C.H., Sgarbossa, F.: Incorporating human factors into
decision support models for production and logistics: current state of research. IFAC-PapersOnLine 50, 6900–6905 (2017)
15. Takala, E.-P., Pehkonen, I., Forsman, M., Hansson, G.-Å., Mathiassen, S.E., Neumann, W.P.,
Sjøgaard, G., Veiersted, K.B., Westgaard, R.H., Winkel, J.: Systematic evaluation of observa-
tional methods assessing biomechanical exposures at work. Scand. J. Work Environ. Health.
36, 3–24 (2010)
16. Chiasson, M.-È., Imbeau, D., Aubry, K., Delisle, A.: Comparing the results of eight methods
used to evaluate risk factors associated with musculoskeletal disorders. Int. J. Ind. Ergon. 42,
478–488 (2012)
17. McAtamney, L., Nigel Corlett, E.: RULA: a survey method for the investigation of work-
related upper limb disorders. Appl. Ergon. 24, 91–99 (1993)
18. Jaturanonda, C., Nanthavanij, S.: Heuristic Procedure for Two-Criterion Assembly Line Bal-
ancing Problem (2007). https://www.researchgate.net/publication/228366470_Heuristic_Pro
cedure_for_Two-Criterion_Assembly_Line_Balancing_Problem
19. Bautista, J., Alfaro-Pozo, R., Batalla-García, C.: Maximizing comfort in assembly lines with
temporal, spatial and ergonomic attributes. Int. J. Comput. Intell. Syst. 9, 788–799 (2016)
20. Hignett, S., McAtamney, L.: Rapid Entire Body Assessment (REBA). Appl. Ergon. 31, 201–
205 (2000)
21. Yoon, S.-Y., Ko, J., Jung, M.-C.: A model for developing job rotation schedules that elimi-
nate sequential high workloads and minimize between-worker variability in cumulative daily
workloads: application to automotive assembly lines. Appl. Ergon. 55, 8–15 (2016)
22. Karhu, O., Kansi, P., Kuorinka, I.: Correcting working postures in industry: a practical method
for analysis. Appl. Ergon. 8, 199–201 (1977)
23. Hellig, T., Mertens, A., Brandl, C.: The interaction effect of working postures on muscle
activity and subjective discomfort during static working postures and its correlation with
OWAS. Int. J. Ind. Ergon. 68, 25–33 (2018)
24. Occhipinti, E.: OCRA: a concise index for the assessment of exposure to repetitive movements
of the upper limbs. Ergonomics 41, 1290–1311 (1998)
25. Boenzi, F., Digiesi, S., Facchini, F., Mummolo, G.: Ergonomic improvement through job rota-
tions in repetitive manual tasks in case of limited specialization and differentiated ergonomic
requirements. IFAC-PapersOnLine 49, 1667–1672 (2016)
26. Otto, A., Scholl, A.: Reducing ergonomic risks by job rotation scheduling. OR Spectr. 35, 711–733 (2013)
27. Garg, A., Boda, S., Hegmann, K.T., et al.: The NIOSH Lifting Equation and Low-Back Pain,
Part 1, Human Factors, vol. 23 (2014)
28. Otto, A., Scholl, A.: Incorporating ergonomic risks into assembly line balancing. Eur. J. Oper. Res. 212, 277–286 (2011)
29. Li, G., Buckle, P.: A practical method for the assessment of work-related musculoskeletal risks
- quick exposure check (QEC). In: Proceedings of the Human Factors Ergonomics Society
Meeting, vol. 42, pp. 1351–1355 (1998)
30. Moore, J.S., Garg, A.: The strain index: a proposed method to analyze jobs for risk of distal
upper extremity disorders. Am. Ind. Hyg Assoc. J. 56(5), 443–458 (1995). https://doi.org/10.
1080/15428119591016863
31. Yildirim, Y., Gunay, S., Karadibak, D.: Identifying factors associated with low back pain
among employees working at a package producing industry. J. Back. Musculoskelet. Rehabil.
27, 25–32 (2014)
32. Widanarko, B., Legg, S., Devereux, J., Stevenson, M.: Interaction between physical and
psychosocial risk factors on the presence of neck/shoulder symptoms and its consequences.
Ergonomics 58, 1507–1518 (2015)
33. Abubakar, M.I., Wang, Q.: Key human factors and their effects on human centered assembly
performance. Int. J. Ind. Ergon. 69, 48–57 (2019)
34. Bugajska, J., Żołnierczyk-Zreda, D., J˛edryka-Góral, A., Gasik, R., Hildt-Ciupińska, K.,
Malińska, M., Bedyńska, S.: Psychological factors at work and musculoskeletal disorders: a
one year prospective study. Rheumatol. Int. 33, 2975–2983 (2013)
35. Barrero, L.H., Katz, J.N., Dennerlein, J.T.: Validity of self-reported mechanical demands
for occupational epidemiologic research of musculoskeletal disorders. Scandinavian J. Work
Environ. Health 35, 245–260 (2009)
36. Landau, K., Rademacher, H., Meschke, H., Winter, G., Schaub, K., Grasmueck, M., Moelbert,
I., Sommer, M., Schulze, J.: Musculoskeletal disorders in assembly jobs in the automotive
industry with special reference to age management aspects. Int. J. Ind. Ergon. 38, 561–576
(2008)
37. Menzel, N.N., Brooks, S.M., Bernard, T.E., Nelson, A.: The physical workload of nursing
personnel: Association with musculoskeletal discomfort. Int. J. Nurs. Stud. 41, 859–867
(2004)
38. Márquez Gómez, M.: Prediction of work-related musculoskeletal discomfort in the meat
processing industry using statistical models. Int. J. Ind. Ergon. 75, 102876 (2020)
39. Acaröz Candan, S., Sahin, U.K., Akoğlu, S.: The investigation of work-related musculoskele-
tal disorders among female workers in a hazelnut factory: Prevalence, working posture,
work-related and psychosocial factors. Int. J. Ind. Ergon. 74, 102838 (2019)
40. Kuorinka, I., Jonsson, B., Kilbom, A., Vinterberg, H., Biering-Sørensen, F., Andersson,
G., Jørgensen, K.: Standardised Nordic questionnaires for the analysis of musculoskeletal
symptoms. Appl. Ergon. 18, 233–237 (1987)
41. Karasek, R., Brisson, C., Kawakami, N., Houtman, I., Bongers, P., Amick, B.: The Job
Content Questionnaire (JCQ): An instrument for internationally comparative assessments
of psychosocial job characteristics. J. Occup. Health. Psychol. 3, 322–355 (1998)
42. Zins, M., Goldberg, M., the CONSTANCES team: The French CONSTANCES population-based cohort: design, inclusion and follow-up. Eur. J. Epidemiol. 30, 1317–1328 (2015)
43. Micheli, G.J.L., Marzorati, L.M.: Beyond OCRA: predictive UL-WMSD risk assessment for
safe assembly design. Int. J. Ind. Ergon. 65, 74–83 (2018)
44. Hu, B., Ma, L., Zhang, W., Salvendy, G., Chablat, D., Bennis, F.: Predicting real-world
ergonomic measurements by simulation in a virtual environment. Int. J. Ind. Ergon. 41, 64–71
(2011)
45. Oyekan, J., Prabhu, V., Tiwari, A., Baskaran, V., Burgess, M., McNally, R.: Remote real-
time collaboration through synchronous exchange of digitised human–workpiece interactions.
Future Gen. Comput. Syst. 67, 83–93 (2017)
46. Bortolini, M., Faccio, M., Gamberi, M., Pilati, F.: Motion Analysis System (MAS) for pro-
duction and ergonomics assessment in the manufacturing processes. Comput. Ind. Eng. 139,
105485 (2020)
Toward a Social Holonic Manufacturing Systems
Architecture Based on Industry 4.0 Assets

Etienne Valette1,2(B) , Hind Bril El-Haouzi1,2 , and Guillaume Demesure1,2


1 UMR 7039, Université de Lorraine, CRAN, Campus Sciences, BP 70239,
54506 Vandœuvre-lès-Nancy cedex, France
{etienne.valette,hind.el-haouzi,
guillaume.demesure}@univ-lorraine.fr
2 CNRS, CRAN, UMR7039, Vandœuvre-lès-Nancy cedex, France

Abstract. For the last decade, the question of anthropocentric approaches has made its way into research, and fully techno-centred approaches have been questioned. The integration of social relationships between the components of systems has already been identified as a crucial issue for the future development of reference architectures. However, current research lacks a global approach based both on the consideration of the human as an integrated agent of the system and on the use of social concepts to characterize inter-agent relationships. The purpose of this paper is to offer an overview of these aspects as considered in manufacturing control architectures, and to outline some guidelines for revising the PROSA reference architecture.

Keywords: Industry of the future · MAS · HMS · CPS · IoT · Social approach ·
Human integration

1 Introduction
Over twenty years have now passed since the proposition of the PROSA architecture for Holonic Manufacturing Systems (HMS) [1]. Driven by the constant evolution of the market and of technology (especially Information and Communication Technologies, ICT), many agile, adaptive and reconfigurable architectures for HMS control have emerged [2–9]. These architectures use various approaches, such as centralized, decentralized, hybrid or product-driven ones.
They have been classified by Cardin et al. [10] as "generic", "multi-agent oriented", "holonic architectures' extensions", "service and cloud oriented" or "dynamic". According to the authors, data processing has also been integrated into these systems, although some issues were insufficiently tackled, among them: the ability to adapt to unplanned issues, sustainability, data warehousing, and integration of the human in the loop. In short, these architectures are still mainly techno-centred. Therefore, they are difficult to implement within actual industrial systems, and their adaptation to future ones might not be obvious.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021


T. Borangiu et al. (Eds.): SOHOMA 2020, SCI 952, pp. 286–295, 2021.
https://doi.org/10.1007/978-3-030-69373-2_20

Hence, future work should aim to answer the two following questions:

1. How to make today's reference architectures suitable for concrete implementation within real industrial systems?
2. How can these architectures take advantage of the new concepts introduced by paradigms such as Industry 4.0?

Most researchers of the Intelligent Manufacturing System (IMS) community agree that, to answer these questions, the lack of consideration of human aspects in manufacturing control architectures must be addressed. Indeed, concrete industrial systems have to deal with legacy systems, where multiple aspects of human factors need to be considered, e.g. cooperation support between human or technological entities, development of human-inspired social interactions, and integration of humans at different decisional levels. Cooperation between human and technological entities has been widely studied, especially in the air traffic, car/train driving and robotics domains. Concerning human-machine cooperation, many efforts have been made to balance human-machine interactions, improve human-machine communication, and understand human behaviour [11].
In any case, human integration in manufacturing systems is a challenging issue [12]. Cooperation must be established between several types of entities/agents of different natures, such as intelligent products, intelligent resources, or humans. Because of these differences of nature, cooperation between agents can be achieved through different approaches. Our idea is that a social approach to describing the relationships between the different agents (human or artefact) of a system might ease the future design and implementation of human-adapted Holonic and Multi-Agent System (MAS) control architectures in real systems, as well as their comprehension by human agents.
We believe that integrating human feedback in the loop of industrial systems, along with the other agents, is at the heart of the Industry 4.0 initiative. As a matter of fact, Industry 4.0 is today taken as the background for almost all current research and publications concerning the architectural design of these systems. For this reason, we have chosen to investigate an approach based on the notions of Cyber-Physical Systems (CPS) and the Internet of Things (IoT), which are essential to Industry 4.0, applied to MAS.
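The idea of a social description of inter-agent relationships can be sketched as a minimal data model in which agents of different natures (human, resource, product) are linked by typed, symmetric relations. All class and relation names below are hypothetical illustrations, not elements of PROSA or of any cited architecture.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    """A system agent, human or artefact, holding its social relations."""
    name: str
    nature: str  # "human", "resource" or "product"
    relations: dict = field(default_factory=dict)  # peer name -> relation type

    def relate(self, other, relation):
        # Social relations are declared symmetrically between the two agents
        self.relations[other.name] = relation
        other.relations[self.name] = relation

operator = Agent("operator_1", "human")
robot = Agent("robot_A", "resource")
operator.relate(robot, "cooperation")
print(operator.relations)  # {'robot_A': 'cooperation'}
```

Such a uniform model treats the human operator as an agent like any other, which is precisely the shift the social approach argues for.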

2 Industry 4.0 and Its Main Assets

2.1 Industry 4.0

Industry 4.0 [13] burst out as a technological vision in 2011 during the Hanover Fair, promoting the effort Germany was making towards the computerization of industry. In 2013, the Final Report of the Working Group Industrie 4.0 [13] was submitted, identifying several keys to successful implementation. Although the concept was born in Germany, many national initiatives are currently led across the world, notably the USA's "National Network for Manufacturing Innovation" (NNMI), the United Kingdom's "High Value Manufacturing Catapult" (HVMC), the South Korean "Manufacturing Industry Innovation 3.0 Strategy", the French "Industry of the Future" and the Chinese "Made in China 2025" [14]. The innovative vision and concepts that all these
initiatives brought have marked the beginning of what is now considered the fourth industrial revolution, simply referred to as "Industry 4.0".
Today’s literature is mainly focused on the technological aspects and developments
that will be needed to support the transition to future industrial systems. Artificial Intel-
ligence and all its aspects (neural networks, Big Data, data mining & refining, Deep
Learning, etc.) might be the most widely known ones. These new technologies have
brought out the two new paradigms of Internet of Things (IoT) and of Cyber-Physical
Systems (CPS). In what follows, we will focus on these two frameworks that have already
been widely studied for the last 15 years. Our focus will be on the integration of human
factors that is at the heart of Industry 4.0’s considerations.

2.2 CPS and IoT

The CPS paradigm is commonly recognized as the main pillar of Industry 4.0. Since the popularity of this concept is rather recent in the scientific world (it was first enunciated by Lee in 2006 [15]), and because of the wide range of its potential applications despite standardization attempts, its definition and limits are still fuzzy. In fact, the term CPS is often associated with that of the Internet of Things (IoT), which appeared in the 2000s [16]. Hence, IoT is the older concept, the term CPS only appearing about seven years later.
For the Internet of Things, Madakam et al. [17] gave the following definition: "an open and comprehensive network of intelligent objects that have the capacity to auto-organize, share information, data, resources, reacting and acting in face of situations and changes in the environment". In this definition, the IoT is clearly considered as a link between the physical objects within a system.
Cyber-Physical Systems were defined by Lee [15] as "physical and engineered systems whose operations are monitored, coordinated, controlled and integrated by a computing and communication core. This intimate coupling between the cyber and physical will be manifested from the nano-world to large-scale wide-area systems of systems. And at multiple time-scales". Here, the CPS concept is related to the notion of "coupling" between physical and computational objects, and this is the definition we will stick to.
Considering the previous definitions, Bagheri and Lee's conception seems adequate to represent CPS and IoT as we see them: the IoT links objects through horizontal connectivity and synchronization, while CPS use cloud and sensor connections to link physical objects to their digital twins through vertical connectivity and synchronization (Fig. 1) [18].
El Haouzi [19], as well as Bordel and Alcarria [20], stated that the definition and use of the terms CPS and IoT differ depending on the scientific community (mechatronic engineering uses the CPS term) or the geographical area considered (America: CPS; Europe and Asia: IoT). Hence, it is sometimes difficult to fully grasp an author's purpose, as their understanding and use of these terms may be unclear. For this reason, we will not draw any dichotomy between CPS and IoT based on the previous elements in Subsect. 2.3 and Sect. 3.

Fig. 1. CPS and IoT [18]

2.3 The Industrial Aspect

In the previous subsection, we presented CPS and IoT as pillars of Industry 4.0. However, their industrial application is not obvious. The concepts of the Industrial Internet of Things (IIoT), the Industrial Internet (II) and Cyber-Physical Production Systems (CPPS) have emerged as implementations of IoT and CPS within industrial contexts. Schneider [21] stated that the IoT can be seen as divided into two main subsets: the Consumer IoT (CIoT) and the Industrial IoT (IIoT). The CIoT concerns the connectivity of things around humans, while the IIoT is exclusively concerned with the connectivity around industrial things.
The IIoT is defined as "A system comprising networked smart objects, cyber-physical assets, associated generic information technologies and optional cloud or edge computing platforms, which enable real-time, intelligent, and autonomous access, collection, analysis, communications, and exchange of process, product and/or service information, within the industrial environment, so as to optimize overall production value". The IIoT is thus a fully techno-centred application of the IoT's concepts within the restricted area of an industrial system. This approach is confirmed in the publication of Boyes et al. [22], where the functions of the IIoT are defined as "to monitor, collect, exchange, and analyse information so as to enable them to change their own behaviour, or else instruct other devices to do so, without human intervention".
Hence, authors like Schneider, Boyes or Gilchrist [21–23] put forward a dichotomy between "human" and "thing" connectivity, humans being only considered as the "customers" of connected industrial things. These reasonings differentiate the human operator from the other agents and accentuate the lack of human-oriented considerations during the system's design. This leads to systems where human agents might face physical or mental overload, lowered situational awareness, etc., perturbing the completion of their tasks and the global system itself. These issues are at the very basis of the "Magic human" phenomenon [24] and are incompatible with Industry 4.0's considerations.
On the other side, Monostori’s development of CPS [25] presents the CPPS as an
interconnection of cooperative elements and subsystems “in situation-dependent ways,
on and across all levels of production, from processes through machines up to produc-
tion and logistics networks” that would be the communication’s enabler and support
290 E. Valette et al.

between “humans, machines and products alike”. In this conception, the human’s integra-
tion into systems is implicit: human-machine symbiosis is even enunciated as one of
the future R&D challenges for CPPS. However, this must be considered carefully, for
too strong a dependence of humans on the system could raise important issues [26]. In the
next section, we study some of the main approaches that have been initiated for
human integration.

3 Human Integration
3.1 Human’s Current Consideration

A little while before the advent of Industry 4.0, the lack of consideration of human
factors in the development of CPS led Wang [27] to promote the concept of the Cyber-
Physical Social System (CPSS). With CPSSs, he asserts the importance of integrating
human factors within systems. In order to achieve this integration, physiological,
psychological, social and mental spaces are considered along with the cyber and physical
spaces [28, 29].
This concern has been shared by Schirner et al. [30] and Pirvu et al. [31], who have
respectively worked on the concepts of Human-In-The-Loop Cyber-Physical Systems
(HITL-CPS) and Anthropocentric Cyber-Physical Systems (A-CPS). A later devel-
opment of these concepts is Cimini et al.’s Social Human-In-The-Loop
Cyber-Physical Production System (Social-HITL-CPPS) [32].
A HITL-CPS consists of an embedded system enhancing a human being’s ability to
interact with his physical environment, while an A-CPS is defined as a reference architecture
integrating three components: physical, computational/cyber and human. The purpose
of these works is the integration of human factors into systems (mainly, but not exclusively,
industrial ones).
But even if the need to consider elements such as physiological, psychological, social and
mental aspects is recognized, a clear distinction is made between humans and the “things”
constitutive of the system. The human is considered as a stranger needing to be integrated
within the system through interfaces, and thus remains distinct from other agents. These
approaches put forward technological developments to link the human to the system.
Concerning the Social HITL-CPPS, Cimini et al. [32] define humans as agents fully
integrated into the system. The authors identify the interpretation of human agents’
behaviour and their coordination with other agents as the two main challenges in the
integration of humans into social environments (and not only manufacturing ones). To
answer these challenges, a three-layer architecture has been proposed. This architecture
connects, on the one hand, human users to the cyber part through user interfaces and, on
the other hand, physical parts (i.e. non-human agents and the environment) to the cyber
part through a network.
In all these approaches, human integration is achieved through human-machine or
human-system interfaces. Hence, they can be considered techno-centred approaches
to human integration. While the term “social” is used here as a keyword to mark the
human-centred considerations of the authors, it can also refer to a completely different
conception.
Toward a Social Holonic Manufacturing Systems Architecture 291

3.2 The Social Approach


In order to structure the IoT despite the growing number of objects composing it, Atzori
et al. [33] crossed Social Network Service (SNS) patterns with the IoT concept,
initiating the concept of the Social Internet of Things (SIoT). Inspired by the Social Web
of Things (SWoT) [34], the SIoT is built on the transposition of humans’ relationships
in society to intelligent objects in Multi-Agent Systems (MAS). According to Mala [35],
this paradigm can be defined as a “social network of intelligent objects bounded by
social relationships”. From the work of Fiske [36], the five following relationships are
established: Parental Object Relationship (POR), Ownership Object Relationship (OOR),
Co-Work Object Relationship (C-WOR), Social Object Relationship (SOR) and
Co-Location Object Relationship (C-LOR) (Table 1).

Table 1. Social relationships [33]
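The five relationship types above lend themselves to a small data model. The sketch below is purely illustrative (the class, attribute and object names are our assumptions, not part of the SIoT specification) and treats relationships as symmetric:

```python
from dataclasses import dataclass, field
from enum import Enum

class Relationship(Enum):
    """The five SIoT social relationships from Atzori et al. [33]."""
    POR = "parental object relationship"
    OOR = "ownership object relationship"
    C_WOR = "co-work object relationship"
    SOR = "social object relationship"
    C_LOR = "co-location object relationship"

@dataclass
class SmartObject:
    name: str
    relations: dict = field(default_factory=dict)  # peer name -> Relationship

    def relate(self, other: "SmartObject", kind: Relationship) -> None:
        # Relationships are treated as symmetric in this sketch.
        self.relations[other.name] = kind
        other.relations[self.name] = kind

    def peers(self, kind: Relationship) -> list:
        # Service/resource discovery restricted to one relationship type.
        return [n for n, k in self.relations.items() if k is kind]

robot = SmartObject("robot-1")
conveyor = SmartObject("conveyor-A")
robot.relate(conveyor, Relationship.C_WOR)
print(robot.peers(Relationship.C_WOR))  # ['conveyor-A']
```

Such a registry is what lets an object restrict service discovery to, say, its co-workers only, which is the structuring role the SIoT assigns to relationships.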

In this vision, social relationships are established and exploited among things, but
not among their owners. The supporting architecture, which enables object-object interactions
and the discovery of services and resources in order to relieve humans from any intervention,
concretely excludes the human. Nevertheless, social relationships are an interesting way to set
up a better integration of the human holon into the holonic architecture, and to facilitate
the system’s acceptance by human operators.
So far, we have detailed, on the one hand, techno-centred approaches for human inte-
gration within manufacturing systems and, on the other hand, social approaches for the
integration of artefact agents. Our idea consists in exploiting the social concepts enun-
ciated by Atzori et al. [33] and associating them with the holonic reference architecture
PROSA for human integration.

4 Future Directions Toward a Social Holonic System


Today, the relevance of holonic architectures for manufacturing systems is commonly
recognised. Based on the supporting literature, we propose an evolution of the approach
previously presented in this paper. To do so, we rely on PROSA, a recognized reference
architecture for holonic manufacturing systems released by Van Brussel et al. in 1998 [1].
PROSA was the first attempt to define a holonic control architecture dedicated to an
industrial system. It was conceived on a holonic modular basis, structured as a bottom-up
aggregation, and based on three main interconnected basic holons - Resource, Product
and Order - plus an optional, higher-level Staff holon.
The holons are coordinated only through the sharing of different data sets: process
knowledge is shared between Product and Resource holons, execution knowledge
between Order and Resource holons, and production knowledge between Product and
Order holons.
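These three knowledge exchanges can be illustrated with a minimal sketch (the class names, helper functions and the furniture example are our own illustrative assumptions; PROSA itself prescribes no implementation):

```python
from dataclasses import dataclass, field

@dataclass
class ProductHolon:
    name: str
    process_plan: list = field(default_factory=list)  # ordered process steps

@dataclass
class ResourceHolon:
    name: str
    capabilities: set = field(default_factory=set)

@dataclass
class OrderHolon:
    name: str
    product: ProductHolon = None
    state: str = "pending"

# Process knowledge (Product <-> Resource): which resource can do which step.
def feasible_resources(product, resources):
    return {step: [r.name for r in resources if step in r.capabilities]
            for step in product.process_plan}

# Execution knowledge (Order <-> Resource): progress of execution on a resource.
def start_step(order, resource, step):
    assert step in resource.capabilities, "resource cannot perform this step"
    order.state = f"{step}@{resource.name}"

# Production knowledge (Order <-> Product): where the order stands in its plan.
def remaining_steps(order):
    done = order.state.split("@")[0]
    plan = order.product.process_plan
    return plan[plan.index(done) + 1:] if done in plan else plan

chair = ProductHolon("chair", ["cut", "assemble"])
saw = ResourceHolon("saw", {"cut"})
cell = ResourceHolon("cell-1", {"assemble"})
order = OrderHolon("order-7", chair)
start_step(order, saw, "cut")
print(remaining_steps(order))  # ['assemble']
```

The point of the sketch is that each pair of holon types shares only the data set PROSA assigns to it; no holon needs a global view.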
Some recent revisions of this architecture have emerged in order to better integrate
human factors. This is especially the case in the work of Leuvennink, Kruger and
Basson [37], where two approaches for human integration within HMS are detailed
with the PROSA model as a basis. The first one, called Interface Holon Architecture
(IHA), considers the worker to be outside of the holon; the worker is treated as an
interchangeable tool used by the holon to perform various tasks. The second one, called
Worker Holon Architecture (WHA), considers the worker as part of the holon, but this
integration relies only on technical features (the worker’s interfacing with the system,
the system’s protocols, etc.).
Another interpretation has been proposed by Valckenaers and Van Brussel in the book
Design for the Unexpected: From Holonic Manufacturing Systems towards a Humane
Mechatronics Society [38]. The authors show the importance of humans for the agility
of future production systems and the shift from holonic production systems to a more
humane mechatronics society. Hence, they propose an extension of their ARTI holonic
systems architecture with a new holon called the e-Person. This holon would be an
aggregation of the roles that can be played by the human in his environment: a resource,
a decision maker for an activity or for other resources, a part of an activity, etc.
In our opinion, heading toward a social holonic system implies the exploration of
three axes:

1. Consideration and integration of human holons along with artefact ones. Scientific
challenges will then arise concerning the way to model these holons - human or
artefact - the consideration of physical, energetic or data transformations, as well
as the consideration of human factors (physiological, psychological, etc.).
2. Consideration of the relationships between these holons as more than data exchanges,
with other forms of relationships able to govern the holarchy. For example, we can
cite the service-providing or customer-supplier relationships usually found in Service
Oriented Architectures (SOA) [39], the symbiosis relationships proposed by Monostori
et al. [25], or other forms of social organization such as Fiske’s Communal Sharing,
Authority Ranking, Equality Matching and Market Pricing [36]. These will have to be
defined and formalized in order to be tested on experimental CPS-based platforms.
The nature of a relationship would, for example, give information about shared
objectives, requirements, results, states, or actions.

3. Consideration of a recursive aspect of the holonic structure that goes beyond the
simple composition/decomposition of a holon into other ones, and that takes the
notion of social relationships into account. The same holon can belong to several
different holons depending on the nature of the social relationships between them
(an operator can belong to work shift X AND to workshop Y). This raises the issue
of formalizing these relationships within the framework of HMS and of specifying
their impact (for example on the nature of the information shared, trust, etc.).
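The multi-membership described in axis 3 can be sketched as follows (a toy illustration under our own naming assumptions; the relationship labels loosely reuse Fiske’s categories [36]):

```python
from collections import defaultdict

class SocialHolarchy:
    """Registry in which a holon may belong to several parent holons,
    each membership qualified by the nature of the social relationship."""
    def __init__(self):
        self.memberships = defaultdict(set)  # child -> {(parent, relation)}

    def join(self, child, parent, relation):
        self.memberships[child].add((parent, relation))

    def parents(self, child, relation=None):
        # With relation=None, return all parents; otherwise filter by the
        # kind of social relationship that grounds the membership.
        return {p for p, r in self.memberships[child] if relation in (None, r)}

h = SocialHolarchy()
h.join("operator-42", "shift-X", "authority-ranking")
h.join("operator-42", "workshop-Y", "co-location")
print(sorted(h.parents("operator-42")))  # ['shift-X', 'workshop-Y']
```

The design choice is that a membership is a (parent, relation) pair rather than a bare parent, which is what allows the same holon to sit in several holarchies at once without ambiguity.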

5 Conclusion

In 40 years, industrial systems have evolved significantly. Research has focused on
system automation, integration, centralization, decentralization, human integration,
socialization, etc. Considering past research and industrial evolution, we believe that
future development should be directed toward the elaboration of hybrid holonic archi-
tectures based on social relationships, allowing the human to be no longer considered as
a disturbance factor but as an agent fully integrated into the system. For this reason, we
will direct our future research to the notion of social relationships: their definition,
characterization and implementation into reference architectures. A lot of work remains
to be done, starting with the exploitation of the three directions presented above and
their testing through simulation or real models on our research platforms.

References
1. Brussel, H.V., Wyns, J., Valckenaers, P., Bongaerts, L., Peeters, P.: Reference architecture for
holonic manufacturing systems: PROSA. Comput. Ind. 37(3), 255–274 (1998)
2. Morel, G., Panetto, H., Zaremba, M., Mayer, F.: Manufacturing enterprise control and man-
agement system engineering: paradigms and open issues. Annu. Rev. Control 27(2), 199–209
(2003). https://doi.org/10.1016/j.arcontrol.2003.09.003
3. Leitão, P., Restivo, F.: ADACOR: a holonic architecture for agile and adaptive manufacturing
control. Comput. Ind. 57(2), 121–130 (2006). https://doi.org/10.1016/j.compind.2005.05.005
4. Verstraete, P., Germain, B.S., Valckenaers, P., Brussel, H.V., Belle, J.V., Hadeli, N.A.:
Engineering manufacturing control systems using PROSA and delegate MAS. Int. J.
Agent-Oriented Softw. Eng. 2(1), 62 (2008). https://doi.org/10.1504/IJAOSE.2008.016800
5. Pujo, P., Broissin, N., Ounnar, F.: PROSIS: An isoarchic structure for HMS control. Eng.
Appl. Artif. Intell. 22(7), 1034–1045 (2009). https://doi.org/10.1016/j.engappai.2009.01.011
6. Le Mortellec, A., Clarhaut, J., Sallez, Y., Berger, T., Trentesaux, D.: Embedded holonic fault
diagnosis of complex transportation systems. Eng. Appl. Artif. Intell. 26(1), 227–240 (2013).
https://doi.org/10.1016/j.engappai.2012.09.008
7. Pach, C., Berger, T., Bonte, T., Trentesaux, D.: ORCA-FMS: a dynamic architecture for the
optimized and reactive control of flexible manufacturing scheduling. Comput. Ind. 65(4),
706–720 (2014). https://doi.org/10.1016/j.compind.2014.02.005
8. Barbosa, J., Leitão, P., Adam, E., Trentesaux, D.: Dynamic self-organization in holonic multi-
agent manufacturing systems: the ADACOR evolution. Comput. Ind. 66, 99–111 (2015).
https://doi.org/10.1016/j.compind.2014.10.011

9. Jimenez, J.-F., Bekrar, A., Zambrano-Rey, G., Trentesaux, D., Leitão, P.: Pollux: a dynamic
hybrid control architecture for flexible job shop systems. Int. J. Prod. Res. 55(15), 4229–4247
(2017). https://doi.org/10.1080/00207543.2016.1218087
10. Cardin, O., Derigent, W., Trentesaux, D.: Contribution des Architectures de Contrôle
Holoniques à l’Ind. 4.0, p. 9 (2018). https://hal.archives-ouvertes.fr/hal-01985716/document
11. Flemisch, F., Abbink, D., Itoh, M., Pacaux-Lemoine, M.-P., Weßel, G.: Shared control is the
sharp end of cooperation: towards a common framework of joint action, shared control and
human machine cooperation. IFAC-Papers 49(19), 72–77 (2016). https://doi.org/10.1016/j.
ifacol.2016.10.464
12. Valette, E., El-Haouzi, H.B., Demesure, G., Bou, V.: Toward an anthropocentric approach for
hybrid control architectures: case of a furniture factory. arXiv:1812.10395 (2018)
13. Acatech, Securing the future of German manufacturing industry: Recommendations for imple-
menting the strategic initiative INDUSTRIE 4.0 - Final report of the Industrie 4.0 Working
Group, German Academy of Science and Engineering, Germany, April 2013
14. Bidet-Mayer, T.: L’industrie du futur à travers le monde, Synthèses Fabr., no. 4, March 2016
15. Lee, E.A.: Cyber-Physical Systems - Are Computing Foundations Adequate? p. 10 (2006)
16. Ashton, K.: That “Internet of Things” Thing, RFID J. 1 (2009)
17. Madakam, S., Ramaswamy, R., Tripathi, S.: Internet of Things (IoT): a literature review. J.
Comput. Commun. 03(5), 164–173 (2015). https://doi.org/10.4236/jcc.2015.35021
18. Bagheri, B., Lee, J.: Big future for cyber-physical manufacturing systems, Design
World (2015). https://www.designworldonline.com/big-future-for-cyber-physical-manufactu
ring-systems/
19. El Haouzi, H.B.: Contribution à la conception et à l’évaluation des architectures de pilotage
des systèmes de production adaptables : vers une approche anthropocentrée pour la simulation
et le pilotage, Habilitation à diriger des recherches, Université de Lorraine (2017)
20. Bordel, B., Alcarria, R., Robles, T., Martín, D.: Cyber–physical systems: extending pervasive
sensing from control theory to the Internet of Things. Pervasive Mob. Comput. 40, 156–184
(2017). https://doi.org/10.1016/j.pmcj.2017.06.011
21. Schneider, S.: The Industrial Internet of Things (IIoT): applications and taxonomy. In: Geng,
H. (ed.) Internet of Things and Data Analytics Handbook, pp. 41–81. Wiley, Hoboken (2016)
22. Boyes, H., Hallaq, B., Cunningham, J., Watson, T.: The Industrial Internet of Things (IIoT): an
analysis framework. Comput. Ind. 101, 1–2 (2018). https://doi.org/10.1016/j.compind.2018.
04.015
23. Gilchrist, A.: IIoT Reference Architecture, in Industry 4.0, pp. 65-86. Apress, Berkeley (2016)
24. Trentesaux, D., Millot, P.: A human-centred design to break the myth of the “Magic Human” in
intelligent manufacturing systems. In: Borangiu, T., Trentesaux, D., Thomas, A., McFarlane,
D. (eds.) Service Orientation in Holonic and Multi-agent Manufacturing, vol. 640, pp. 103–
113. Springer, Cham (2016)
25. Monostori, L.: Cyber-physical production systems: roots, expectations and R&D challenges.
Procedia CIRP 17, 9–13 (2014). https://doi.org/10.1016/j.procir.2014.03.115
26. Pacaux-Lemoine, M.-P., Trentesaux, D.: Ethical risks of human-machine symbiosis in Indus-
try 4.0: insights from the human-machine cooperation approach. IFAC-Pap. 52(19), 19–24
(2019). https://doi.org/10.1016/j.ifacol.2019.12.077
27. Wang, F.-Y.: The emergence of intelligent enterprises: from CPS to CPSS. IEEE Intell. Syst.
25(4), 85–88 (2010). https://doi.org/10.1109/MIS.2010.104
28. Liu, Z., Yang, D., Wen, D., Zhang, W., Mao, W.: Cyber-physical-social systems for command
and control. IEEE Intell. Syst. 26(4), 92–96 (2011). https://doi.org/10.1109/MIS.2011.69
29. Shi, X., Zhuge, H.: Cyber physical socio ecology. Concurr. Comput. Pract. Exp. 23(9), 972–
984 (2011). https://doi.org/10.1002/cpe.1625
30. Schirner, G., Erdogmus, D., Chowdhury, K., Padir, T.: The Future of Human-in-the-Loop
Cyber-Physical Systems, p. 10, January 2013

31. Pirvu, B.-C., Zamfirescu, C.-B., Gorecky, D.: Engineering insights from an anthropocentric
cyber-physical system: a case study for an assembly station. Mechatronics 34, 147–159 (2016).
https://doi.org/10.1016/j.mechatronics.2015.08.010
32. Cimini, C., Pirola, F., Pinto, R., Cavalieri, S.: A human-in-the-loop manufacturing control
architecture for the next generation of production systems. J. Manuf. Syst. 54, 258–271 (2020).
https://doi.org/10.1016/j.jmsy.2020.01.002
33. Atzori, L., Iera, A., Morabito, G.: SIoT: giving a Social Structure to the Internet of Things.
IEEE Commun. Lett. 15(11), 1193–1195 (2011). https://doi.org/10.1109/LCOMM.2011.090
911.111340
34. Guinard, D., Fischer, M., Trifa, V.: Sharing using social networks in a composable Web of
Things. In: 2010 8th IEEE International Conference on Pervasive Computing and Communi-
cations Workshops (PERCOM Workshops), Mannheim, Germany, March 2010, pp. 702–707
(2010). https://doi.org/10.1109/PERCOMW.2010.5470524
35. Mala, D.J. (ed.): Integrating the Internet of Things Into Software Engineering Practices. IGI
Global (2019)
36. Fiske, A.P.: The four elementary forms of sociality: framework for a unified theory of social
relations. Psychol. Rev. 99(4), 689–723 (1992)
37. Leuvennink, J., Kruger, K., Basson, A.: Architectures for human worker integration in holonic
manufacturing systems. In: Borangiu, T., Trentesaux, D., Thomas, A., Cavalieri, S. (eds.)
Service Orientation in Holonic and Multi-agent Manufacturing, vol. 803, pp. 133–144. Springer,
Cham (2019)
38. Valckenaers, P., Brussel, H.V.: Design for the Unexpected: From Holonic Manufacturing
Systems towards a Humane Mechatronics Society. Butterworth-Heinemann (2015)
39. Indriago, C., Cardin, O., Rakoto, N., Castagna, P., Chacòn, E.: H2CM: a holonic architecture
for flexible hybrid control systems. Comput. Ind. 77, 15–28 (2016). https://doi.org/10.1016/j.
compind.2015.12.005
New Organizations Based on Human
Factors Integration in Industry 4.0
Interfacing with Humans in Factories
of the Future: Holonic Interface Services
for Ambient Intelligence Environments

Dale Sparrow, Nicole Taylor, Karel Kruger(B), Anton Basson, and Anriëtte Bekker

Department of Mechanical and Mechatronic Engineering,
Stellenbosch University, Stellenbosch 7600, South Africa
kkruger@sun.ac.za

Abstract. Human roles in manufacturing are changing along with developments
in Industry 4.0. Integrating human workers into their shifting Industry 4.0 work
environment is important and by no means easy. This paper proposes a method of
interfacing with workers in factories of the future that facilitates their interaction
with the digital management systems and machines around them. Holonic interface
services are discussed as promising means to realize ambient intelligence environ-
ments, which are comprised of intelligent interfaces supported by computing and
embedded networking technology. An architecture that manages communication
through available interfacing services is presented as part of a digital adminis-
tration shell for integrating a human worker into an Industry 4.0 environment. A
manufacturing process case study is also presented to demonstrate the interfacing
component of the architecture and the effectiveness of holonic interface services
in an ambient intelligence environment.

Keywords: Human interfaces · Holonic systems · Industry 4.0 · Factory of the
future · Manufacturing · Ambient intelligence environment

1 Introduction
Industry 4.0 (I4.0) is a revolution in which developments in information and communi-
cation technology (ICT) are used to integrate and organize assets in a value chain. The
intended benefits of I4.0 are robustness, agility, and continuous improvement through
data analytics and prediction. I4.0 research has focused on digital and robotic assets
due to the digital nature of ICT [1–3]. Not much attention has been given to the human
aspect of I4.0, although many authors state the importance of designing human-centric
I4.0 systems [4].
Human workers are still unmatched in dexterity, flexibility, intelligence, and diversity
[5, 6]. The human role in manufacturing is increasing as a decision maker and strategist
and decreasing as a laborer, but problems still exist with the smooth integration of
human workers and their digital factory environment [7]. The objective of this paper is
to propose a method of interfacing with workers in factories of the future that facilitates
their interaction with digital management systems and machines.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021


T. Borangiu et al. (Eds.): SOHOMA 2020, SCI 952, pp. 299–312, 2021.
https://doi.org/10.1007/978-3-030-69373-2_21
300 D. Sparrow et al.

In this paper, brief background on humans in I4.0 and trends in interface development
is given. The concepts of holonic interface services are discussed as promising means
to realize ambient intelligence environments (AmIEs). An architecture that manages
communication through the available interfacing services is then presented as part of a
digital administration shell for a human worker. Lastly, a case study is used to demonstrate
the interfacing component of the architecture and the effective use of holonic interface
services in an AmIE.

2 Humans in Industry 4.0

Exploring the matter of human integration, Rey, Carvalho and Trentesaux [8] report that
careful consideration needs to be given to the difference in how artificial systems and
humans interact. Pacaux-Lemoine et al. [9] discuss a human-centred approach to the
design of intelligent manufacturing systems - pointing out that modern manufacturing
systems must have human awareness, while keeping human decision making in the loop
at different levels of automation.
Peruzzini, Grandi and Pellicciari [10] identify that the integration of human inter-
faces needs a human-centred design along with human factors engineering. Advanced
interfacing technologies are then considered as a key enabler for the I4.0 vision [11].
Many interfaces, however, restrict the user physically by requiring them to be in a specific
location or to wear cumbersome equipment, which negatively affects the flexibility,
dexterity, and mobility that humans have.
While identifying the new roles humans play in modern manufacturing systems, the
concept of Operator 4.0 has emerged [12]. Romero et al. [13] identified eight augmenta-
tions for I4.0 operators. Requirements for interfacing with humans may be extracted from
development of the Smarter Operator, an Operator 4.0 typology in which the operator
uses an intelligent personal assistant [14].
Humans are expected to play a larger part in decision making and problem solving
and the balance of ability, authority, control, and responsibility is becoming more crit-
ical and complex [15, 16]. Frameworks and models that try to address balancing these
complexities at higher level have been created such as the human-centred approach for
intelligent manufacturing systems [9]. These frameworks, require platforms that allow
for flexible and robust connections between interchangeable components - especially
those between the digital components and humans, which this paper explores in detail.

3 Human-Machine Interfacing Technology

The earliest applicable interface to a data processing machine would be Babbage’s
Analytical Engine, designed in the 1830s, where the interface was the physical manipulation
of cams, clutches, and other mechanical components. Ever since then, computing density, ergonomics,
technology, and hardware have improved to serve two purposes:

• Increase the bandwidth of information flow between the human and the machine.
• Give the operator more physical, creative, and mental freedom.
Interfacing with Humans in Factories of the Future 301

Using fine motor skills, voice commands, and gesture detection improves the amount
of information humans can send to machines; hence, the invention of the keyboard,
mouse, game-pad, and now haptic gloves and language processors. Screens allowed
machines to utilize the highest bandwidth information delivery to humans – their eyes
– and improved with the introduction of virtual reality technology accompanied by the
use of speech synthesis and haptic feedback.
Up until the rise of I4.0, interface design was focused on being specific to a task or
machine. This allowed designers to tighten their scope and accommodate the machine’s
limits and optimize it to the human user, since the amount of communication between
the two was limited to the task they were cooperating on. I4.0 brings new challenges in
interface design as entities in an I4.0 environment are expected to work in a changing
environment, on changing products and services, while optimizing their processes.
Flemisch et al. [15] discussed the importance of balancing the four cornerstones of
human-machine cooperation: ability, authority, control and responsibility. This requires
that the available options of communication between human and machine be flexible
and adaptable depending on the situation. A bottleneck of information flow often arises
between humans and machines, which is likely to worsen with the increased complexity
of accommodating flexibility and adaptability.
As with other human-in-the-loop frameworks, this higher-level problem can only be
addressed when information is presented in more ergonomic ways and multiple channels
are available for capturing data from, and delivering data to, the human. This is crucial
for supporting real-time human decision making and achieving a successful balance of
ability, authority, control, and responsibility.
The Internet of Things, modern wearable interfaces, tablets, phones and smart envi-
ronments (e.g. with connected screens, projectors, cameras, lights, etc.) offer the required
flexibility and redundancy. However, configuring these interfaces to support human-in-
the-loop cyber-physical systems (HiLCPS) remains a challenge.
To address this challenge, humans must be supplemented by a digital administration
shell – a concept similar to a personal assistant in the form of a software robot, or softbot,
as developed by Rabelo, Romero and Zambiasi [14] that will elevate the human to CPS
level. The digital administration shell will need to select and use various interfaces, as
required by the location, pose, current activity, and attributes of a worker. To accommo-
date this functionality and information within a monolithic system could easily result in
unmanageable complexity. This paper thus proposes the use of holonic design principles
to achieve the required flexibility, reconfigurability, self-adaptation, and distribution that
will be needed.
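As an illustration of the kind of context-based interface selection a digital administration shell would perform, the sketch below ranks available interface services for a worker. The scoring rules, attribute names and devices are our own assumptions, not part of the architecture presented here:

```python
def select_interface(interfaces, worker):
    """Pick the best-scoring interface service for a worker's context
    (location, current activity); a naive illustrative heuristic."""
    def score(iface):
        s = 0
        if iface.get("location") in (None, worker["location"]):
            s += 2  # reachable from the worker's position (None = carried along)
        if worker["hands_busy"] and iface.get("modality") == "audio":
            s += 3  # prefer hands-free channels during manual work
        if iface.get("personal"):
            s += 1  # personal devices get a small priority
        return s
    return max(interfaces, key=score)

worker = {"location": "cell-3", "hands_busy": True}
interfaces = [
    {"name": "wall-screen", "location": "cell-1", "modality": "visual"},
    {"name": "smartwatch", "personal": True, "modality": "visual"},
    {"name": "headset", "personal": True, "modality": "audio"},
]
print(select_interface(interfaces, worker)["name"])  # headset
```

A real administration shell would of course replace this flat scoring with richer context awareness, but the separation between interface descriptions and the selection policy is the point the holonic approach argues for.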

4 Ambient Intelligence Environments and Modern Interfacing Technologies

This section briefly describes the concept of ambient intelligence environments (AmIEs)
and how these may support the freedom of movement, creativity, and personalization
that is demanded from modern human-machine interfaces (HMIs). Furthermore, HMIs
are categorized into personal and environmental interfaces.

4.1 Ambient Intelligence Environments


Ubiquitous computing was envisioned as the unobtrusive, human-centric integration
of numerous computers pervasively into the environment [17, 18]. Computers were
proposed to be interconnected in an omnipresent network, “know” about the environment
they are embedded in (i.e. exhibit context awareness) and by some means influence this
environment and a human’s interpretation of it. The concept of ubiquitous computing
echoes through the similar concepts of pervasive computing and, as used in this paper,
AmIEs [19].
According to the Information Society Technologies Advisory Group vision state-
ment, an AmIE is a surrounding in which humans live and work, comprised of intelligent
interfaces supported by computing and embedded networking technology [20, 21]. The
AmIE should recognize the presence of humans, be aware of their unique characteristics,
and adapt to the specific needs of users.

4.2 Personal and Environmental Interfaces


Interfaces to humans can occur in two forms – as personal interfaces or environmental
interfaces, as shown in Fig. 1. Environmental interfaces will form the greater part of the
AmIE, while personal interfaces will be dedicated to a single human and provide more
specialized functions.

Fig. 1. Examples of environmental and personal interfaces

Personal interfaces are maintained by devices that belong to a specific human, for
either a given activity or general use, and can provide other systems or humans a direct
means of interfacing with that human. These interfaces can be customized and optimized
to fit the specific user and could be directly accessible by their digital administration
shell. Some examples of personal interfaces are those encountered in smart watches,
tablets, heart rate monitors, eye tracking devices, and cell phones.
Environmental interfaces, on the other hand, do not belong to a specific human.
Instead, environmental interfaces are used to gather data from, or present data to, a spec-
ified environment humans are in. Examples of environmental interfaces are closed-circuit
television cameras, digital displays, floor path lights and speakers. Table 1 highlights
identified differences in how personal and environmental interfaces would be used, but
does not explicitly segregate their functions.

Table 1. Environmental vs. personal interfaces

General environment information vs. specialized personal information: environmental
interfaces can project or augment how the environment is perceived through generalized
means, but are not personalized; personal interfaces offer specialized and personalized
information to the user, with access to more senses.

Collaborative vs. dedicated interaction: information rendered in the environment is
available to all collaborating humans at the same time; wearable interfaces can deliver
instructions to a specific worker and can be dedicated to a specific function (in
collaborative tasks, a specific human can be addressed).

Location-specific information vs. location-independent connection: environmental
information can be focused on a specific area or object to reduce information clutter;
personal interfaces can deliver information regardless of the location or visibility of an
object (location-specific information can also be shown with personalization).

Low interruption vs. high attention: workers can choose when to look at information
displayed in the environment and when to switch focus; interfaces that a worker is
wearing, or that augment his reality, can demand attention and ensure information is
noticed.

Implicit vs. mainly explicit communication: the environment can modify itself to
communicate a certain “atmosphere” to any human (for example, red lighting means
danger, or a flashing panel means pay attention); personal interfaces use high attention
and high bandwidth to convey information for understanding and explicit responses,
such as acknowledging receipt of instructions.

5 Realizing AmIEs Through Holonic Interface Services

This section describes the development of an AmIE through the use of holonic design
principles. The AmIE aims to improve user freedom, flexibility in communication,
and optimized information delivery. Furthermore, the use of holonic design principles
can support scalability, robustness, and self-organization of the interface components.
The section is structured according to the distinction made with regard to
information flow in an AmIE: information flowing to, and from, the human.

5.1 Information to the Human: Semiotic Services

Conveying information to human workers can be achieved through many channels, for
example by word of mouth, using mobile phones or even flashing lights. This section
describes a means of delivering information to the human with flexibility and robustness
through what will be called semiotic services.

Multimodal Semiosis. Semiosis is the production and communication of meaning
through different modalities of signs. Modality refers to the form of media in which
the sign is presented. Signs, in the sense of semiotics, are not just pictorials and sym-
bols, but sounds, words, lights, or any stimulation through human senses that represent
some meaning to the human [22].
This multimodal interaction enables optimization and robustness of data delivery
since it allows equivalent information to be presented through different channels [1, 23].
For example, multimodal interaction could be achieved at a workstation by providing
instructions to a worker via a tablet (through text or sound) and an overhead projector
(by highlighting relevant areas of the workspace).
While screens, numerical displays, lights, and speakers are widely available tech-
nologies that deliver information to the human senses, these technologies are limited to
a single modality and close vicinity. Smart glasses and head-mounted displays are a form
of visually augmented reality that display computer generated scenes, and have been
demonstrated to facilitate training, stock management and maintenance [24, 25].
The World Wide Web Consortium (W3C) standards for multimodal media applica-
tions were created to consolidate information delivery to humans on different devices.
These standards could form the basis for expanding this ontology to work for other
environmental and personal interface types in manufacturing, and not just screen-based
media [26]. Other promising technology in this regard is the Resource Description
Framework along with graph databases that can provide rich descriptions and com-
binations of interface services to achieve specific goals, similar to the IoT-Lite Ontology
and Manufacturing’s Semantics Ontology (MASON) [27–29].
Holons Providing Semiotic Services. It is expected that I4.0 environments should be
capable of multimodal semiosis through the integration and utilization of interfacing
technologies. Should these environments be represented as holonic systems, these inter-
faces would be integrated as Interface Resource Holons providing a semiotic service.
Each Interface Holon will be specialized in its particular modality to optimize and
personalize the delivery of a requested piece of information.
Apart from the Resource Holon responsibilities presented by Valckenaers and Van
Brussel [30], holons providing semiotic services should also perform responsibilities
pertaining to:

• Owning, managing, and controlling its physical rendering component (e.g. screen,
speaker, projector, etc.); and
• Optimizing the information delivery using knowledge of its modalities, human
information processing, the type of data given to it, and needs of the targeted human.
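
As an illustration, these two responsibilities could be sketched as a minimal Python class; all class, method, and modality names below are assumptions made for illustration and are not part of the cited holonic architecture.

```python
import time

class SemioticServiceHolon:
    """Illustrative Interface Resource Holon offering one semiotic modality."""

    def __init__(self, modality, render_fn):
        self.modality = modality    # e.g. "text", "audio", "projection"
        self.render_fn = render_fn  # the owned physical rendering component
        self.history = []           # delivery log kept by the holon

    def advertise(self):
        # Advertise the service so that other holons can discover it
        return {"service": "semiotic", "modality": self.modality}

    def deliver(self, message, target):
        # Personalize delivery for the targeted human, then render it
        rendered = self.render_fn("[{}] {}".format(target, message))
        self.history.append((time.time(), target, message))
        return rendered

# Hypothetical text holon whose "screen" is a simple uppercase renderer
screen = SemioticServiceHolon("text", render_fn=str.upper)
screen.deliver("Start step 3", "worker-1")
```

In a fuller implementation, the `deliver` method would encapsulate the modality-specific optimization knowledge described above.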

Various Interface Resource Holons providing different modalities to deliver
information will provide robustness and redundancy. In a manufacturing environment, this
can offer improvement in human situational awareness, safety, and overall connectivity
with the factory’s digital environment.
Optimizing Information Delivery to Humans. Models have been developed to aid in
optimizing information delivery, minimizing errors, and providing an enjoyable expe-
rience to humans using interfaces. Wickens [31] provided models on how humans bias
information they seek, or are presented with, as well as how selective attention affects
the processing of that information.
Interfacing with Humans in Factories of the Future 305

Models covering the cognition of information – which processing centres should not be
used for two data streams simultaneously, which processing centres are suited to which
type of information, and the bandwidth limitations of each sensor/processing centre pair
– are well documented in texts such as Human Factors in Engineering and Design [32],
Cybersemiotics [33], various other research from psychology, linguistics and graphic
design, and works such as those by Kahneman (author of Thinking, Fast and Slow [34]).
This means that
a digital administration shell, or similar digital system accompanying the worker, can
make decisions on which available services to choose based on its worker and the
situation.

5.2 Information from the Human: Observation Services


Since humans have no means of digitizing and communicating their own state in real-
time, they require dedicated systems to take up this responsibility. Pose, position, tem-
perature, and heart rate are examples of variables that may be useful for systems working
with, and around, the human. Observation services are proposed to digitize variables
like these, providing the human (through their administration shell) or other external
holons in the system this information through subscription.
Measurement and Fusion of State Variables. Obtaining accurate measurements of
dynamic systems with complex behaviour, such as human workers in manufacturing
environments, is a challenge. As such, measurement values should be considered along
with associated confidence and timestamps. Confidence, as a representation of uncer-
tainty, can be expressed in different ways – as standard deviation, intervals, or accuracy
and precision pairs. Subscribing to multiple observation services for the same variable
can provide a more accurate estimate if the data is fused properly. This also allows redun-
dancy and scalability of measured state variables, as well as data fusion when multiple
sources are available. The human may be given control over captured data when an
inaccuracy is identified or when the data can be presented better. It is suspected that this
information is, in most cases, not subjective (heart rate, temperature, position, or the tool
the worker has equipped).
Research into emotion detection, psychological state and action prediction may require
more user influence and response to accurately digitise these values.
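
As one concrete way to fuse such measurements, the sketch below applies inverse-variance weighting to value-timestamp-confidence tuples, with confidence expressed as a standard deviation; this is a standard estimator chosen for illustration, not necessarily the fusion method used in the architecture described here.

```python
def fuse(measurements):
    """Fuse (value, timestamp, std_dev) tuples by inverse-variance weighting.

    The fused standard deviation is smaller than any individual one,
    reflecting the higher confidence gained from multiple sources.
    """
    weights = [1.0 / (std ** 2) for (_, _, std) in measurements]
    total = sum(weights)
    value = sum(w * v for w, (v, _, _) in zip(weights, measurements)) / total
    timestamp = max(t for (_, t, _) in measurements)  # newest contributing sample
    return value, timestamp, (1.0 / total) ** 0.5

# Two position estimates for the same worker, e.g. a camera and a wearable tag
fused = fuse([(2.0, 100.0, 0.5), (2.4, 101.0, 1.0)])
```

The more confident source (smaller standard deviation) dominates the fused estimate, while redundancy is preserved if either source fails.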
Holons to Provide Observation Services. Within a holonic system, it is assumed that
physical sensors will be part of a holon that advertises and performs observation services.
Other holons in the system can then obtain and use the information from these obser-
vation services. Beyond the general Resource Holon responsibilities, holons providing
observation services must fulfil the following specific responsibilities:

• Own and manage its physical sensor components; and


• Refine and interpret the sensor data to provide its observations in a value-timestamp-
confidence tuple.
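
A minimal holon fulfilling these two responsibilities might look as follows; the moving-average refinement, the confidence heuristic, and all names are illustrative assumptions rather than prescribed design.

```python
from collections import deque
from statistics import mean, stdev

class ObservationServiceHolon:
    """Owns a physical sensor and publishes refined observations as
    value-timestamp-confidence tuples to its subscribers."""

    def __init__(self, variable, read_sensor, window=5):
        self.variable = variable        # e.g. "heart_rate"
        self.read_sensor = read_sensor  # the owned physical sensor component
        self.buffer = deque(maxlen=window)
        self.subscribers = []

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def sample(self, timestamp):
        # Refine raw readings with a moving average; use the spread of
        # the window as a (crude) confidence value
        self.buffer.append(self.read_sensor())
        value = mean(self.buffer)
        confidence = stdev(self.buffer) if len(self.buffer) > 1 else float("inf")
        observation = (value, timestamp, confidence)
        for callback in self.subscribers:
            callback(self.variable, observation)
        return observation

# Hypothetical heart-rate sensor replayed from recorded readings
readings = iter([60, 62, 64])
holon = ObservationServiceHolon("heart_rate", read_sensor=lambda: next(readings))
```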

5.3 Interacting with the AmIE Services


An architecture for a digital administration shell to integrate human workers into I4.0
manufacturing environments was developed by Sparrow, Kruger and Basson [35]. The
administration shell has an internal Execution component, responsible for interfacing
with the human. It consists of sub-components – the Observer, Informer and State
Blackboard – as shown in Fig. 2. The Observer and Informer components are responsible
for subscribing to, and managing the use of, interface services. Observed and derived
state variables of the human are written to the State Blackboard (SBB) that acts as the
single source of truth for any internal or external component that may want to know
something about the human.

Fig. 2. Internal structure of the Execution component

The State Blackboard. The SBB serves as a synchronous, single source of truth on the
Human Resource Holon’s (HRH) current state. The human’s current physical, mental,
and biological state is updated by a modular component called the Observer, discussed
next. The SBB also reflects the state of the world of interest (WOI), as specified by
the Activity-Resource-Type-Instance (ARTI) architecture [36]. This ensures any critical
data for execution and safety monitoring will be available to the components of the HRH
and can be communicated to external holons.
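
As a sketch, the SBB can be pictured as a lock-protected dictionary keyed by state variable; the class and the example variable values are illustrative assumptions.

```python
import threading

class StateBlackboard:
    """Single, synchronous source of truth for the holon's current state."""

    def __init__(self):
        self._state = {}
        self._lock = threading.Lock()

    def write(self, variable, value, timestamp):
        # Called by the Observer (and other internal components)
        with self._lock:
            self._state[variable] = (value, timestamp)

    def read(self, variable):
        # Read a single state variable, or None if it was never observed
        with self._lock:
            return self._state.get(variable)

    def snapshot(self):
        # Consistent copy for safety monitoring or external holons
        with self._lock:
            return dict(self._state)

sbb = StateBlackboard()
sbb.write("worker_location", "station-2", 12.0)
sbb.write("heart_rate", 72, 12.5)
```

The lock keeps readers from observing a half-updated state, matching the "single source of truth" role described above.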
The Observer. The Observer is responsible for gathering information on the human
from any available observation services. When a particular variable, say position or
heart rate, needs to be known, the Observer finds and subscribes to services that can
provide this information.
The Informer. The Informer serves to deliver information to the human using available
semiotic services. It can make decisions (based on the human’s observed state) on which
combination of semiotic services would be most effective. Any request to communicate
information to the human, from internal or external components, asks the Informer
to deliver the data. This ensures that message filtering and prioritization to available
semiotic services can be intelligently handled.
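
The Informer's selection logic can be sketched as simple rules over the observed state, such as preferring audio when the worker's hands are busy (as in the resin-mixing situation of Sect. 6.3); the rule set, modality names, and state variable names are illustrative assumptions.

```python
def choose_semiotic_services(available, worker_state):
    """Pick the semiotic service combination most likely to be effective.

    available:    set of modality names advertised by Interface Holons
    worker_state: observed state variables, as read from the SBB
    """
    # Worker cannot touch or look at a screen while the hands are busy
    if worker_state.get("hands_busy") and "audio" in available:
        return ["audio"]
    # Otherwise prefer visual modalities, in parallel when both exist
    chosen = [m for m in ("text", "projection") if m in available]
    return chosen or sorted(available)

services = choose_semiotic_services({"text", "audio", "projection"},
                                    {"hands_busy": True})
```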

6 Demonstration Through a Case Study


The developed architecture for the integration of human workers into an I4.0 man-
ufacturing environment provides a human worker with a digital administration shell.
With the use of this administration shell, the human is effectively elevated to Resource
Holon status, becoming a Human Resource Holon [35, 36]. The Human Resource Holon
Administration Shell (HRH-AS) would make use of observation services to detect the
state of the human, make process decisions and then make use of semiotic services to
give updated instructions to the user. This section details the successful use of such
holonic interface services in a case study.

6.1 Case Study Description

To demonstrate the interfacing component of the HRH-AS, as well as the effectiveness
of holonic interface services, this case study had two objectives:

• Show that interfaces can be dynamically accessed based on their capabilities; and
• Show it is possible to make smart decisions on which interface services are chosen.

The case study required that a worker be guided through the steps of a composite layup
activity. The overseeing HRH-AS decided when to give the worker the next instruction or
correct the worker if a mistake was detected. However, the HRH-AS did not possess any
knowledge on the use of the interfaces available to its corresponding worker. The HRH-
AS subscribed to available Interface Holons in the environment that provided semiotic
services, which allowed it to communicate instructions through different channels and
modalities. Similarly, the HRH-AS required certain state variables of the human and
subscribed to the observation services of Interface Holons in the environment to obtain
the required information.
The case study utilized two types of semiotic services: workstation projection, which
provided a visual overlay on the surface of the workstation to indicate instructions and
offer guidance; and text notifications, which delivered information as text to the human.
Furthermore, the case study required the HRH-AS to observe multiple state variables
from the human worker – the observed variables are listed in Table 2.

Table 2. State variables observed in the case study

Variable                                Description
Worker pose                             The position of the worker's hands and face while
                                        performing an activity
Worker location                         The physical location of the worker within the
                                        workstation
Work area state                         The state of components within the worker's
                                        workspace
Worker's response to a question         If a question is posed to the worker, the response
                                        is observed and noted on the SBB
Error state and associated information  Notifications obtained from the worker that
                                        something is wrong during the activity and their
                                        report on the error

6.2 Implemented Holonic Interface Services


The interface services were developed to follow the I4.0 principles of interoperability,
decentralization, and modularity, along with the service-oriented nature of the proposed
interfaces. Three Interface Holons that were implemented in this case study will be dis-
cussed, namely: the Pose Observer Service Holon (POSH), Tablet Holon and Workstation
Projector Holon.
The POSH provides the service that gathers the “worker pose” state variable. Only
the internal structure of the POSH will be discussed, as the other Interface Holons
have a similar internal structure. Figure 3 shows a simplified internal architecture of the
POSH that was divided into an administration shell component and physical resource
component. The service subscriber component, shown in the green block, formed part
of the activity execution handling component of the HRH-AS.

Fig. 3. POSH internal architecture

The Tablet and Workstation Projector Holons use Erlang for their administration
shell. The administration shell component for the POSH was written in JavaScript and
ran on a Node.js server. Network components could connect to the administration shell as
either a worker or a client. A worker for the POSH could be any HTML5 enabled device
with camera capability. When a device connected as a worker, the worker code was
served by the administration shell and the device became the sensor (physical resource)
component of the POSH.
When a network component connected as a client, it opened a service subscription
contract for providing the information. This information could, for example, be pose or
location information. The rate at which the subscriber desires the information formed
part of the contract. Multiple subscribers could obtain data from the POSH (limited by
the hardware it is run on).
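
The contract mechanism can be sketched as follows: each subscriber states a desired update rate, and the holon pushes data no faster than that rate (here driven by logical timestamps rather than a real network); the class and field names are hypothetical.

```python
class SubscriptionContract:
    """A client's subscription to an observation service at a desired rate."""

    def __init__(self, deliver_to, rate_hz):
        self.deliver_to = deliver_to  # client callback
        self.period = 1.0 / rate_hz   # seconds between deliveries
        self.next_due = 0.0

    def offer(self, now, observation):
        # Push the observation only once the client's period has elapsed
        if now >= self.next_due:
            self.deliver_to(observation)
            self.next_due = now + self.period
            return True
        return False

received = []
contract = SubscriptionContract(received.append, rate_hz=2.0)  # 2 updates/s
for t in (0.0, 0.1, 0.5, 0.9, 1.0):  # logical timestamps in seconds
    contract.offer(t, ("worker_pose", t))
```

One contract per subscriber lets the holon serve multiple clients at different rates, limited only by the hardware it runs on.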
Another Interface Holon, maintaining a personal interface in the form of a tablet,
hosted the semiotic and observation services listed in Table 3. A semiotic service, through
an environmental interface, was provided by the Workstation Projector Holon, which
allowed the rendering of text, images, and workstation overlays.

Table 3. Interfacing capabilities of the Tablet Holon

Semiotic capabilities   Observation capabilities
Text                    Binary answer
Image                   Process error button
Schedule display        Schedule change
Audio                   Microphone

6.3 Activity Execution with Interface Services

Figure 4 shows three different actions that demonstrate the developed interface services
in the activity workflow. In Fig. 4 (a) a question was rendered, asking the worker if the
activity could start. Due to the tablet offering combined services of text rendering and
binary answer observation, the tablet was chosen to render the question and observe the
answer. Although not developed here, the tablet’s text to speech capability and speech
recognition capability could automatically have been substituted for, or used in parallel
to, the text rendering and button observation depending on the state of the worker. As an
example, if the worker was busy mixing a pot of resin, and would not be able to touch
the screen, the HRH-AS would request the audio modality be used instead.
When the user selected START on the dialog shown in Fig. 4 (a), the observation was
noted on the SBB by a process execution component of the HRH-AS and it requested
that the workstation projection service render the first instruction as an overlay (shown
in Fig. 4 (b) and (c)). The HRH-AS subscribed to the pose observation service and the
workstation state observation service, which made the “worker pose” and “work area
state” state variables available on the SBB. The HRH-AS could make decisions, based on
this information, on how to render instructions. Both the Tablet Holon and Workstation
Projector Holon were chosen to display the textual part of the work instruction.

Fig. 4. Three actions of a case study activity shown with their interface services in action

7 Conclusions and Future Work

Though significant research effort towards I4.0 has focused on connecting
machines and digital systems, further human-centred research is still required.
Considering the integration of humans into I4.0 environments, the roles of operators are
likely to become more dynamic and decision-oriented. Operators in factories of the future
require freedom from laborious tasks, flexibility in communication, and personalized and
optimized information delivery.
The paper proposed the development of ambient intelligence environments (AmIEs)
in order to facilitate worker freedom and multimodal interfacing. The use of holonic
design principles can support the realization of AmIEs with inherent scalability,
robustness, and self-organization of the interface components.
In order to achieve the holonic AmIEs, the paper categorized information flow to
the human, envisioned through holonic semiotic services, and from the human, envi-
sioned through holonic observation services. In a case study, a Human Resource Holon
Administration Shell (HRH-AS) is shown to make effective use of the holonic interface
services in an AmIE to make decisions regarding interfacing with its corresponding
human worker. The HRH-AS could communicate to the human using semiotic services,
and could obtain state information of the human worker and their workspace using
observation services, through interface services in the AmIE.
Future work will focus on the further development of the HRH-AS as a mechanism
to integrate humans in factories of the future, along with the full integration of Interface
Holon services with the HRH-AS to create, and fully utilize, AmIEs. Promising future
research into structuring Interface Holon services using the Web Ontology Language
and Resource Description Framework technologies along with graph databases is also
planned. The planned research will also consider issues pertaining to human acceptability
of the proposed tools and will explore mechanisms to improve the trust that humans may
have in such systems.

Acknowledgements. Funding from the National Research Foundation (NRF) through the South
African National Antarctic Programme (SANAP Grant No.110737) is thankfully recognized.

References
1. Baheti, R., Gill, H.: Cyber-physical systems. In: The Impact of Control Technology,
pp. 161–166 (2011)
2. Geissbauer, R., Vedso, J., Schrauf, S.: Industry 4.0: Building the Digital Enterprise. PwC
Global Industry 4.0 Survey Report (2016)
3. Schroeder, G.N., Steinmetz, C., Pereira, C.E., Espindola, D.B.: Digital Twin data mod-
eling with AutomationML and a communication methodology for data exchange. IFAC-
PapersOnLine. 49(30), 12–17 (2016)
4. Burns, M., Manganelli, J., Wollman, D., Laurids Boring, R., Gilbert, S., Griffor, E., Lee, Y.C.,
Nathan-Roberts, D., Smith-Jackson, T.: Elaborating the human aspect of the nist framework
for cyber-physical systems. Proc. Hum. Factors Ergon. Soc. Annu. Meet. 62(1), 450–454
(2018)
5. Rother, M.: Toyota Kata: Managing People for Improvement Adaptiveness and Superior
Results. McGraw-Hill Education, New York (2010)
6. Loveday, S.: BMW Comments On Tesla Model 3 Production Woes. https://insideevs.com/
bmw-comments-on-tesla-model-3-production-woes/
7. Rauch, E., Linder, C., Dallasega, P.: Anthropocentric perspective of production before and
within Industry 4.0. Comput. Ind. Eng. 139, p. 105644 (2020)
8. Rey, G. Z., Carvalho, M., Trentesaux, D.: Cooperation models between humans and artifi-
cial self-organizing systems: motivations, issues and perspectives. In: Proceedings of the 6th
International Symposium on Resilient Control Systems, ISRCS 2013, pp. 156–161 (2013)
9. Pacaux-Lemoine, M.P., Trentesaux, D., Rey, G.Z., Millot, P.: Designing intelligent manufac-
turing systems through human-machine cooperation principles: a human-centred approach.
Comput. Ind. Eng. 111, 581–595 (2017)
10. Peruzzini, M., Grandi, F., Pellicciari, M.: Exploring the Potential of Operator 4.0 Interface
and Monitoring. Comput. Ind. Eng. 139, p. 105600 (2019)
11. Posada, J., Toro, C., Barandiaran, I., Oyarzun, D., Stricker, D., De Amicis, R., Pinto, E. B.,
Eisert, P., Döllner, J., Vallarino, I.: Visual computing as a key enabling technology for industrie
4.0 and industrial internet. IEEE Comput. Graph. Appl. 35(2), 26–40 (2015)
12. Romero, D., Bernus, P., Noran, O., Stahre, J., Fast-Berglund, A.: The operator 4.0: human
cyber-physical systems and adaptive automation towards human-automation symbiosis work
systems. In: Proceedings of the International Federation for Information Processing on
Advances in Production Management Systems, pp. 677–686 (2016)
13. Romero, D., Stahre, J., Wuest, T., Noran, O., Bernus, P., Fast-Berglund, A., Gorecky, D.:
Towards an operator 4.0 typology: a human-centric perspective on the fourth industrial rev-
olution technologies. In: Proceedings of the International Conference on Computers and
Industrial Engineering vol. 46, pp. 1–11 (2016)
14. Rabelo, R.J., Romero, D., Zambiasi, S.P.: Softbots supporting the operator 4.0 at smart factory
Environments. Adv. Prod. Manage. Syst. 2, 456–464 (2018)
15. Flemisch, F., Heesen, M., Hesse, T., Kelsch, J., Schieben, A., Beller, J.: Towards a dynamic
balance between humans and automation: authority, ability, responsibility and control in
shared and cooperative control situations. Cogn. Technol. Work 14(1), 3–18 (2012)
16. Jirgl, M., Bradac, Z., Fiedler, P.: Human-in-the-Loop issue in context of the cyber physical
systems. Int. Fed. Autom. Control PapersOnLine 51(6), 225–230 (2018)
17. Weiser, M.: The computer for the 21st century. Sci. Am. 265(3), 94–104 (1991)
18. Weiser, M., Gold, R., Brown, J.S.: The origins of ubiquitous computing research at PARC in
the late 1980s. IBM Syst. J. 38(4), 693–696 (1999)
19. Friedewald, M., Raabe, O.: Ubiquitous computing: an overview of technology impacts.
Telematics Inform. 28, 55–65 (2011)
20. IST Advisory Group (ISTAG): Ambient Intelligence: From Vision to Reality, ISTAG Draft
Consolidated Report (2003)
21. Riva, G., Vatalaro, F., Davide, F., Alcañiz, M.: Ambient Intelligence. IOS Press, Amsterdam
(2005)
22. Bains, P.: The Primacy of Semiosis: An Ontology of Relations. University of Toronto Press,
Toronto (2006)
23. Thiran, J., Marques, F., Bourlard, H.: Multimodal Signal Processing: Theory and Applications
for Human-Computer Interaction. Academic Press, San Diego (2010)
24. Peden, R.G., Mercer, R., Tatham, A.J.: The use of head-mounted display eyeglasses for
teaching surgical skills: a prospective randomized study. Int. J. Surg. 34, 169–173 (2016)
25. Quint, F., Loch, F.: Using smart glasses to document maintenance processes. In: Weisbecker,
A., Burmester, M., Schmidt, A. (eds.) Humans and Computers 2015 - Workshop, pp. 203–208.
De Gruyter Oldenbourg, Stuttgart (2015)
26. W3C Multimodal Interaction Working Group, https://www.w3.org/
27. Abbas, A., Privat, G.: Bridging property graphs and rdf for iot information management.
In: Proceedings of the International Workshop on Scalable Semantic Web Knowledge Base
Systems, pp. 77–92 (2018)
28. Bermudez-Edo, M., Elsaleh, T., Barnaghi, P., Taylor, K.: IoT-lite ontology: a lightweight
semantic model for the internet of things. In: Proceedings of the IEEE Conferences on Ubiq-
uitous Intelligence and Computing, Advanced and Trusted Computing, Scalable Computing
and Communications, Cloud and Big Data Computing, Internet of People, and Smart World
Congress, pp. 90–97 (2016)
29. Lemaignan, S., Siadat, A., Dantan, J., Siemenenko, A.: MASON: a proposal for an ontology
of manufacturing Domain. In: Proceedings of the IEEE Workshop on Distributed Intelligent
Systems: Collective Intelligence and its Applications, pp. 195–200 (2006)
30. Valckenaers, P., Van Brussel, H.: Design for the Unexpected: From Holonic Manufacturing
Systems towards a Humane Mechatronics Society. Butterworth-Heinemann, Waltham (2016)
31. Wickens, C.: Multiple Resources and Mental Workload. Hum. Factors 50(3), 449–455 (2008)
32. Sanders, M.S., McCormick, E.J.: Human Factors in Engineering and Design, 7th edn.
McGraw-Hill, New York (1993)
33. Brier, S.: Cybersemiotics. University of Toronto Press, Toronto (2008)
34. Kahneman, D.: Thinking Fast and Slow. Penguin Books, London (2012)
35. Sparrow, D. E., Kruger, K., Basson, A. H.: An architecture for the integration of human
workers into an industry 4.0 environment. Submitted to the International Journal of Production
Research (2020)
36. Valckenaers, P.: ARTI reference architecture – PROSA revisited. In: Borangiu, T., Trente-
saux, D., Thomas, A., Cavalieri, S. (eds.) Service Orientation in Holonic and Multi-Agent
Manufacturing. SOHOMA 2018. Studies in Computational Intelligence, vol. 803, pp. 1–19.
Springer, Cham (2019)
A Benchmarking Platform for Human-Machine
Cooperation in Cyber-Physical Manufacturing
Systems

Quentin Berdal1(B) , Marie-Pierre Pacaux-Lemoine1 , Thérèse Bonte1 ,


Damien Trentesaux1 , and Christine Chauvin2
1 LAMIH - CNRS UMR 8201, Université Polytechnique Hauts-de-France, Valenciennes, France
{quentin.berdal,marie-pierre.lemoine,therese.bonte,
damien.trentesaux}@uphf.fr
2 Lab-STICC - CNRS UMR 6285, University of South Brittany, Lorient, France

christine.chauvin@univ-ubs.fr

Abstract. The research community is producing a tremendous amount of work
with the sole purpose of preparing for the next industrial revolution. In the context
of the application of cyber-physical systems to production systems, a variety
of technologies and architectures integrating humans and machines are studied.
Results are evaluated using platforms to identify the best solutions. However,
most platforms are designed to test one approach in a specific domain and the
human integration remains difficult to evaluate. As cooperation between humans
and machines becomes an increasing concern, a global benchmarking platform
becomes necessary. Such a platform is specified in this paper. An
illustration is detailed, based on the concept of digital twin.

Keywords: Human-Machine cooperation · Benchmarking · Intelligent
manufacturing systems · Cyber-physical system · Manufacturing

1 Introduction
The fourth industrial revolution is, on one side, the opportunity to rework the foundations
of production systems, leading to heterogeneous development based on an increasing set
of new technologies and the development of models and methods aiming to integrate the
human in future production systems. In this paper, we consider the case of cyber-physical
manufacturing systems (CPMS) [1] interacting with humans, as a class of Cyber-Physical
Production Systems [2]. Cyber-Physical Systems (CPSs) are “systems of collaborating
computational entities which are in intensive connection with the surrounding physical
world and its on-going processes, providing and using, at the same time, data-accessing
and data-processing services available on the internet” [3]. Cyber-Physical Production
Systems “consist of autonomous and cooperative elements and sub-systems that are
getting into connection with each other in situation dependent ways, on and across all
levels of production, from processes through machines up to production and logistics
networks” [3].

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021


T. Borangiu et al. (Eds.): SOHOMA 2020, SCI 952, pp. 313–326, 2021.
https://doi.org/10.1007/978-3-030-69373-2_22
314 Q. Berdal et al.

On the other side, when it comes to testing these new ideas, one must design and use
a platform to demonstrate or benchmark the results. Demonstration platforms are made
to show the feasibility of an idea and the benefits compared to a reference approach
while benchmarking platforms are made to compare contribution with others. Referring
to the work of Trentesaux et al. [4], “benchmarking is comparing the output of different
systems for a given set of input data in order to improve the system’s performance”.
Benchmarking in CPMS development is a critical aspect since research in CPMS has
still not led to the development of highly mature off-the-shelf solutions to be applied in
real industrial systems.
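
Following this definition, a benchmark run reduces to feeding the same input data to each candidate system and comparing a performance indicator on the outputs. The sketch below uses toy job lists and total completion time as the indicator; the stand-in "control strategies" and all names are illustrative assumptions.

```python
def benchmark(systems, scenarios, indicator):
    """Compare systems on identical input data sets.

    systems:   mapping of name -> callable(scenario) producing an output
    scenarios: list of input data sets shared by all systems
    indicator: callable(output) -> performance value (lower is better here)
    """
    return {name: [indicator(system(s)) for s in scenarios]
            for name, system in systems.items()}

def total_completion_time(schedule):
    """Toy indicator: sum of job completion times for a sequence of durations."""
    elapsed, total = 0, 0
    for duration in schedule:
        elapsed += duration
        total += elapsed
    return total

# Two stand-in "control strategies" that order the same jobs differently
shortest_first = lambda jobs: sorted(jobs)
longest_first = lambda jobs: sorted(jobs, reverse=True)
scores = benchmark({"SPT": shortest_first, "LPT": longest_first},
                   [[3, 1, 2]], total_completion_time)
```

Sharing the scenario set is what makes the comparison fair; adding human participants, as discussed below, is precisely what breaks this simple replay model.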
Developing a benchmarking platform is a common approach in system engineering
to evaluate contributions, and this also holds true in production engineering and for
CPMS. The scope of these platforms can differ depending on the research topic and the
technologies involved. Two main families of platforms are observed: 1) the development
of a technology and 2) the integration of a contribution in a system.
The development of a specific technology is generally accompanied by a subject-centric
platform, designed as a proof of concept [5] or designed to compare the results with
and without a suggested contribution [6]. This usually means that the scope of the
platform is limited to the scope of the technology. For example, cobotic systems are
focused on human-machine cooperation and as such, most developments are forgetting
the manufacturing system itself [7, 8]. On the positive side, these contributions usually
provide some of the latest developments in the application domain. However, some
technologies, such as Big Data, require strong simplifications, for example the use of a
data generator [9]. This may cause serious issues with the integration of humans, as these
generators will hardly reproduce the behaviour of human activity in the context of CPMS.
In [10, 11] a variety of advanced sensors are integrated in a platform and tested for both
sensor and control architecture development, but the platform remains very specific.
When addressing the integration of a contribution in a system, the scope of the plat-
form is usually quite large by design, as it is important to represent the variety of
interactions and events. “Integration” can be technology-to-technology, human-to-
technology or human-to-human. A typical example concerns the challenge of commu-
nication in a system where multiple communication protocols exist [12]. The platform
presented in [12] tries to represent at best what is expected regarding the communi-
cations in Industry 4.0. In [4] the benchmarking platform is used in the development
of self-organizing entities and most of the work focuses on the control of the system.
Such research tends to model systems as realistically as possible, since simplifications
may hinder the evaluation of the contribution.
Let us detail two illustrative benchmarks that will be further exploited along this
paper. First, multiple platforms have been developed [4, 6, 13, 14], cf. Fig. 1, and are based
on the same academic but realistic production cells. These platforms are accompanied
by both an emulator and a simulator [14] developed in-house. The simulator was used
itself in the benchmarking project Bench4Star. This platform embeds the principle of
autonomous entities with autonomous shuttles, using the principle of intelligent products
and potential fields to schedule the production. Since a digitalized system such as the
simulator uses a time reference (clock signal) to trigger the system’s evolution, it is
possible to alter the evolution speed of the system. In a real-time scenario, the frequency
of the clock signal is adapted to the processing timestep such that the perceived evolution
speed corresponds to that of the real system. However, it is possible to obtain results
faster by using a higher frequency clock signal. If this principle is used on a new instance
initialized with the current state of the system, this provides a projection of the system’s
future state, but interaction with the human becomes unfeasible. Furthermore, if some
parameters are modified during the initialization, this principle can be used to perform
virtual commissioning [15], which is the simulation of a system modification before
physical implementation.
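
The variable-speed clock and projection principle can be sketched as follows: a speed factor scales how long the simulator waits per timestep, and stepping a copy of the current state without waiting yields the projection of a future state. The toy shuttle dynamics and all names are illustrative assumptions, not the actual simulator of the platform.

```python
class SimulationClock:
    """Logical clock whose speed factor decouples simulated from real time."""

    def __init__(self, timestep=0.1, speed=1.0):
        self.timestep = timestep  # simulated seconds advanced per tick
        self.speed = speed        # >1.0 runs faster than real time
        self.sim_time = 0.0

    def tick(self):
        self.sim_time += self.timestep
        return self.timestep / self.speed  # wall-clock seconds to wait

def project(state, step_fn, steps, timestep=0.1):
    """Run a copy of the current state forward to forecast a future state."""
    projected = dict(state)  # new instance initialized with the current state
    for _ in range(steps):
        projected = step_fn(projected, timestep)
    return projected

# Toy dynamics: a shuttle moving at constant speed along a conveyor
def step(state, dt):
    return {"position": state["position"] + state["velocity"] * dt,
            "velocity": state["velocity"]}

future = project({"position": 0.0, "velocity": 2.0}, step, steps=10)
```

Because the projection waits for no wall-clock time at all, it cannot include the human in the loop, which matches the limitation noted above.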

Fig. 1. S.MART flexible cell of Valenciennes (left), its simulator (centre) and SUCRé project
AGVs (right)

The second platform was developed during the SUCRé project [16, 17], in which a
fleet of ground robots is autonomous or remotely operated in the context of emergency
response. The study focused on Human-Machine cooperation through system analysis
and adaptation of autonomy levels. The fleet of ground robots can thus be considered
as automated guided vehicles (AGVs) that allow human operator interaction at
three degrees, selected at the operator's discretion. The availability of video feedback
from the robots enables the human operator to monitor the environment and track the
robots independently of the environment itself. The fleet of AGVs is tasked with logistic
operations, which can be summarized as taking load X from point A and delivering it to
point B in the open environment of a flexible cell. This platform was first adapted
and applied to another project [18], then to the manufacturing context of
the ANR HUMANISM project.
Linked to the previously introduced types of platform, some publications point out
the importance of human interactions with the system and encourage such developments,
requiring a benchmarking system able to take the human factor into consideration,
for example [19, 20]. Related to the human interaction aspect, multiple contributions are
now gathered around the term Operator 4.0 [21]. The technologies at the disposal of human
operators are studied and developed in order to improve the skills and communications
of human operators. From those studies, new opportunities are observed regarding human
integration and work organisation. Meanwhile, few contributions have actually had the
opportunity to test their ideas at a global scale. This aspect is important, as the idea of
Operator 4.0 leads to a new perception of the human operator's place in the manufacturing system.
From this short overview, one can note that most of the time (the SUCRé platform
being an exception), benchmarking platforms developed in research are limited to spe-
cific applications and are not designed to be used in other contexts. Moreover, these
platforms hardly consider the evaluation of the quality and the consistency of the inte-
gration of the human (supervisor, operator, maintainer…) in future industrial systems.
316 Q. Berdal et al.

As a consequence, the aim of this paper is to suggest and specify a reusable benchmarking
platform aimed at evaluating the integration of the human in future industrial
systems, with a focus on CPMS. The re-use of the SUCRé platform
described above prompted the authors to propose a more global approach to
design human-aware CPMS benchmarking platforms. In the context of our research, we
took the initiative to specify a benchmarking platform in which both emerging
Industry 4.0 technologies and the human can be integrated and evaluated together.

2 Human-Machine Cooperation in CPMS: Specifying a Benchmarking Platform
Benchmarking platforms offer standardized systems on which one can plug a technology
supported by the platform and which will provide data upon execution.
The following list of requirements covers the general use case of a CPMS that a benchmarking
platform must represent.
First of all, one must take care of how new technologies will be integrated into
the platform, to ease the process and enable full exploitation of the platform. In addition,
multiple scenarios must be planned to cover possible events and to provide data on all
aspects deemed important, as in [4]. It is important for any benchmarking platform to
ensure reproducibility of results, by identifying all the data and input parameters used
for each obtained result, and by saving the executables and their relevant code in databases.
If the human is involved, reproducibility is hardly possible, but in that case it is important
to record the decisions taken and the relevant context, including complementary
measures of the human behaviour (mouse clicks, eye movements, etc.).
Considering human aspects, multiple levels of cooperation must be addressed: typically,
the strategic, tactical and operational ones [22]. Work is performed differently
depending on the level, and this aspect is generally of interest when working on Human-Machine
systems. In addition, a CPMS can be handled by multiple operators working at
different levels, which also brings human-to-human interactions. Our specification concerns
for the moment only one operator, whatever the level of cooperation. Furthermore, work
on Human-Machine cooperation brings another requirement: the study of interactions
requires building a model for each actor (human or artificial decisional entity in the CPMS)
[23] of the system. If the creation of, for example, a digital twin implies the availability
of actors' behavioural models, these models must be transparent and modifiable.
A lack of transparency ("black box" effect) would complicate the study process and
the reproducibility, and an unmodifiable model would simply prevent any exploitation of
the resulting analysis. In addition, the system must consider possible changes in the
actors' interactions, depending on the situation.
From an experimental point of view, experiment teams must generate a set of scenarios
and collect data, as suggested above, targeting the elaboration of key performance
indicators (KPI). The platform must enable the recording of these KPI in all possible
situations, so that one can identify the pros and cons of a contribution. Each scenario is defined
in terms of objectives and perturbations (with/without). Objectives and measured KPI
can be effectiveness-oriented (time, quantity, quality…) or efficiency-oriented (cost,
energy, scrap, GHG and pollutant emissions…). Efficiency is a complex criterion, as it
may include multiple factors depending on the domain and company policy. Furthermore,
the inclusion of human operators brings the need to measure data not relevant to
the technical system itself (e.g., stress, situation awareness, mental workload, etc.). It
is however important to note that not all relevant data from humans are obtainable
from the platform itself; pre- and post-experimentation questionnaires may be required
to collect subjective data. Obviously, the platform must be designed so that at least
objective data can be collected in any situation. In addition, the
platform must support the integration of any new sensor an experiment team needs to
record relevant data.
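As a minimal sketch of this requirement (all field names are illustrative assumptions), a scenario can be described by its objectives and perturbations, and each run can record both effectiveness- and efficiency-oriented KPI:

```python
from dataclasses import dataclass, field

@dataclass
class Scenario:
    name: str
    objectives: dict                                   # e.g. {"completion_time_s": 3600}
    perturbations: list = field(default_factory=list)  # empty list = nominal ("without") run

@dataclass
class RunRecord:
    scenario: Scenario
    kpis: dict = field(default_factory=dict)

    def record(self, kpi, value):
        self.kpis[kpi] = value

    def gap(self, kpi):
        """Positive gap = objective missed (e.g. production took longer than targeted)."""
        return self.kpis[kpi] - self.scenario.objectives[kpi]

run = RunRecord(Scenario("nominal", {"completion_time_s": 3600}))
run.record("completion_time_s", 3725)   # effectiveness-oriented KPI
run.record("energy_kWh", 41.8)          # efficiency-oriented KPI
```

Storing such records alongside the input parameters of each run is what makes results comparable across contributions.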
If desired in the scenarios, the platform must enable the occurrence of perturbations
(pre-defined, for example for reproducibility purposes, or not). Typical perturbations
concern internal or external events, affecting either resources (e.g., failure of a component) or
control (e.g., wrong execution of a command, urgent order). The evaluation of a failure's
impact is important when working with humans, in order to evaluate possible chaotic
behaviours in industrial systems (e.g., the butterfly effect). It is also important to monitor
human behaviour in the face of the complexity of the unexpected. It is worth noting
that in automatic control it is compulsory to perturb a system in order to identify it correctly,
with sufficient information and data. This principle applies here, so perturbations are of
great importance to understand how and why humans decide and act.
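A pre-defined perturbation schedule, replayed identically across runs for reproducibility, could look like this (the event tuples and every name are illustrative assumptions):

```python
class PerturbationSchedule:
    """Pre-defined perturbations injected at fixed simulation times,
    so that every run of a scenario sees the same sequence of events."""
    def __init__(self, events):
        # events: (sim_time_s, kind, target) -- internal or external, affecting
        # resources (e.g. a component failure) or control (e.g. an urgent order)
        self._pending = sorted(events)

    def due(self, sim_time_s):
        """Return and consume every perturbation whose time has come."""
        fired = [e for e in self._pending if e[0] <= sim_time_s]
        del self._pending[:len(fired)]
        return fired

sched = PerturbationSchedule([(300.0, "urgent_order", "product_X"),
                              (120.0, "failure", "robot_2")])
```

The control loop would poll `due()` at each timestep and apply the returned events to the emulated cell.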
Simulation technologies are important when considering such complex socio-technical
systems, and the platform will benefit from them. Indeed, simulations
are used to select the best course of action depending on the current situation. The
complexity of the industrial system and its causal chain makes it difficult to use off-line
optimization algorithms in real time. Without such simulations, it becomes difficult to
understand the causality of a decision or an action on the industrial system. Typically,
two types of simulation can be developed: an emulation of the industrial system, to avoid
using a real one and to speed up tests, and a simulation used as a forecasting tool to test
different strategies before choosing one. Obviously, a platform can merge these two types
of simulation, or can even hybridize simulated elements and real ones. In this context,
the use of a digital twin is suggested, as depicted in Fig. 2.

Fig. 2. Global architecture for a human-aware CPMS benchmarking system



A digital twin is "a virtual and computerized counterpart of a physical system that
can be used to simulate it for various purposes, exploiting a real-time synchronization of
the sensed data coming from the field" [24]. This digital twin integrates a digital shadow,
as a simulation instance of the industrial system (real or emulated), and a control system
of the shadow to test different strategies before applying one. Ideally, a switch would
render the target industrial system "transparent", meaning that the control system
should not "know" whether it is connected through an interface to an emulator of the industrial
system, to the real industrial system itself, or to a hybrid micro-world merging both
emulation and real equipment. A major benefit of using a digital twin is the accessibility
of data and control decisions related to the CPMS. The very principle of a digital shadow
is based on a precise model of the considered system, updated in real time. This proves
helpful when considering the extraction of KPI and the integration of new elements
or technologies, such as different control architectures and algorithms.
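The digital shadow principle, a model kept synchronized with sensed data and forked to test a strategy before applying it, can be sketched as follows (the `step` function and the state shape are assumptions for illustration):

```python
import copy

class DigitalShadow:
    """Precise model of the considered system, updated in (near) real time."""
    def __init__(self, state):
        self.state = state

    def sync(self, sensed):
        # real-time synchronization of the sensed data coming from the field
        self.state.update(sensed)

    def project(self, step, horizon_steps):
        """Run a forked instance faster than real time to test a strategy;
        the live, synchronized state is left untouched."""
        future = copy.deepcopy(self.state)
        for _ in range(horizon_steps):
            future = step(future)
        return future

shadow = DigitalShadow({"queue_len": 5})
drain = lambda s: {"queue_len": max(0, s["queue_len"] - 1)}
```

The deep copy is what keeps the projection from disturbing the live shadow, mirroring the "transparent switch" idea above.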
Cloud technology becomes more important with the globalisation of the system
[25]. The Cloud offers an infrastructure that grants all actors of the production access to
the connected elements, wherever they are. This includes actors on site but also those
in remote locations, enabling global communication. The organisation of work may
be impacted by Cloud technology in the sense that every human operator can operate
parts of the system that are not necessarily physically linked. It is thus important to take
into consideration the possibility of a global and interconnected production system, in
which the human operator may not be physically present. For a benchmarking platform,
this translates into a network-enabled architecture.
The "horizontal integration" (the industrial system as an element of a more global
system) is also an important factor, as manufacturing can be influenced by its environment.
For example, logistics and manufacturing are tightly coupled in real-life situations,
while most platforms do not consider this aspect. The platform must thus be able to
accept combinations of complex systems for those working on this aspect or, at least,
offer the possibility to inject data from an external source.
The next section presents a platform designed according to these specifications.

3 A Benchmarking Platform for the HUMANISM Project


As introduced, our research team aims to develop a reusable benchmarking platform to
study and relatively evaluate the integration of the human in CPMS. This platform is
intended to be used first for the HUMANISM ANR project (ANR-17-CE10-0009). In
the context of the HUMANISM project, the human works mainly at the tactical (supervision,
launch of products, energy monitoring, etc.) and operational levels (allocation
of robots to products, supply orders, etc.) and has to choose the level of automation for
some equipment that can behave autonomously or not, depending on his/her awareness
of the situation, his/her mental workload and the occurrence of unexpected perturbing
events (in the project, three are considered: varying global available energy thresholds,
robot malfunctions and urgent orders). The entire system is evaluated in terms of two
objective KPIs: production completion time and overall energy consumption.

To reach this objective, the work started with what we had at our disposal: two standalone
platforms previously built for other projects, described in the introduction (see Fig. 1).
Each platform targeted a different part of the system: one for logistics and one
for production.
The platform has been elaborated from the specifications described in the previous
section; the resulting platform is depicted in Fig. 3. It is built on a digital twin
integrating a digital shadow of an industrial system (in our experiments, the S.MART
flexible cell of Valenciennes using Arezzo [14]), updated and used when
required by the human to test strategies. Such a digital twin handles information about
the S.MART cell and a means to control it, and is thus closely related to multiple other
technologies involved in Industry 4.0. Control is achieved using potential fields
emitted by robots to attract the shuttles conveying products.
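The attraction principle can be illustrated with a toy one-dimensional model; the field shape (intensity decaying with distance) and every name here are illustrative assumptions, not the actual control law of the cell:

```python
def field_intensity(emission, distance, decay=1.0):
    # a robot's field gets weaker as the distance to the shuttle grows
    return emission / (1.0 + decay * distance)

def attracting_robot(shuttle_pos, robots):
    """Pick the robot whose potential field is strongest at the shuttle's position.

    robots: {name: (position, emission)}. The emission can encode availability:
    a busy robot lowers its emission and stops attracting shuttles.
    """
    return max(robots, key=lambda name: field_intensity(
        robots[name][1], abs(robots[name][0] - shuttle_pos)))

robots = {"R1": (0.0, 10.0), "R2": (8.0, 10.0)}
```

Lowering a robot's emission to zero, for instance, models a busy or failed resource that shuttles should no longer head towards.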

Fig. 3. The proposed architecture in HUMANISM

The target system is a micro-world, defined as a hybridization of an Arezzo emulator
of the cell (robots, shuttles conveying products on a 1D conveying system), used
in real time, and real entities, here ground robots used as AGVs in charge of virtually
supplying the robots; virtual and real worlds are thus merged. Since our
AGV platform uses physical ground robots that are supposed to interact with
the cell, it was decided to use a projector to link the two worlds, cf. Fig. 4. To make
the system more tangible, a wooden structure with a ratio corresponding to the physical
production platform was designed. The volume of this structure helps to visualise
the production cell, and the projection screen on top offers a good visualisation of the
cell state, enabling video feedback for the human "as if" he/she were looking at the
real cell.
Since Arezzo is an emulator, its behaviour is close to that of the operating
part of the cell. It simulates the operative parts, for which each order and low-level
command (bytes) must be set through networks, otherwise nothing "happens" in the emulator.
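By way of illustration only (the frame layout, port and function names are assumptions, not Arezzo's actual protocol), setting such a low-level command through the network could look like:

```python
import socket
import struct

def send_command(host, port, device_id, command_code, value):
    """Send one low-level command frame to the emulator over TCP.

    Hypothetical frame layout: 1 byte device id, 1 byte command code,
    2 bytes value, big-endian. Without such frames arriving over the
    network, nothing "happens" in the emulator.
    """
    frame = struct.pack(">BBH", device_id, command_code, value)
    with socket.create_connection((host, port), timeout=2.0) as sock:
        sock.sendall(frame)
```

This byte-level interface is precisely what makes the emulator interchangeable with the real operative part from the control system's point of view.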

Fig. 4. The physical reconstruction of the cell using the Arezzo emulator

In the digital twin, a shadow of the emulator, running off-line and used on demand from
the initial real-time emulator, projects the system state into the near future.
Figure 5 depicts the overall technical implementation of the platform. The ground
robots' legacy control system was developed in C++. The experiment team is tasked
with validating the AGVs' physical position before allowing commands such as
loading or unloading products. The Arezzo emulator of the S.MART cell was developed
using NetLogo and interfaced using Java. As all data and interactions are digitalised, a
server running with the platform and exporting part of the interactions is added. As an
example, the production and logistic planning is performed using a web page. The server
sends updates to the interface using only raw data, leaving the graphical representation
and data exploitation to the remote client. A benefit of such an architecture, apart from
the decreased processing load on the platform, is that human operators working
on the platform are not bound to the physical location of the platform, and any device
with network capability can be used. In addition, user-specific interfaces and interaction
means can be considered without impacting the existing ones. However, in our case, only
the experiment team has full remote access to the platform, with the ability to interact
with most of the system's entities. This network-enabled architecture reproduces the
capability of cloud platforms, even if it is quite limited in this case. Furthermore, the
communication is designed to use standard protocols and embeds on initialization a list
of available functionalities. This enables the insertion of new elements and the dynamic
linking to new functionalities.
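The raw-data update and the capability announcement made on initialization can be sketched as plain JSON messages (the message shapes are illustrative assumptions, not the platform's actual protocol):

```python
import json

def hello_message(entity, functions):
    """Sent once on initialization: the entity announces its available
    functionalities, enabling dynamic linking of newly inserted elements."""
    return json.dumps({"type": "hello", "entity": entity, "functions": functions})

def update_message(entity, data):
    """Server-side push: raw data only; the graphical representation and
    data exploitation are left to the remote client."""
    return json.dumps({"type": "update", "entity": entity, "data": data})
```

A web client receiving the `hello` message can then build its interface dynamically from the announced function list, which is what decouples the interfaces from the digital twin.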
As the user interface is quite independent from the digital twin, the experiment
team does not need to modify the digital twin itself unless a very specific new function
needs to be integrated. This tends to separate the requirements of the experiment from
the benchmarking platform itself, avoiding the development of new versions of the
platform for each new research project. This principle eases work sharing between
research teams, the reproducibility of research and the expansion of the platform toward
new requirements.

Fig. 5. Technical implementation of the platform

Figure 6 shows, for illustration purposes, the planning interface for the human
supervisor (tactical level). The planning is displayed using a web browser and, as
such, is completely independent of the system. The platform itself is not affected by
changes in the interfaces, as the server only serves the interfaces and the data on demand.

Fig. 6. The planning interface for the human supervisor

Work on human factors also benefits from the accessibility of the system, and it
completes the "vertical integration". Every part of the platform, whether related to the
operational, tactical or strategic level of operation, can cooperate with a
human operator through a set of available functions and data feeds. This can be observed
in Fig. 6, where one of the remote interfaces gathers all functions related to the tactical
level of production and logistic planning. The available functionalities are up to now
basic, such as adding a product to the production queue or changing the launching order.
For that purpose, the interface provides a graphical representation of product ordering and
real-time system capabilities in order to help the human operator. All functionalities of
the system are exposed and can be exploited from any connected application.
In the current configuration for our research in HUMANISM, the human operator
is in charge of both production and logistic supervision, operating mainly at the tactical
and operational levels. The human operator has direct control over most actors of
the system, be they the conveying shuttles or the AGVs, and can adapt their autonomy
depending on the situation. For example, in a nominal situation, the human operator can
grant high autonomy to the AGVs in order to benefit from their pathfinding algorithms,
while taking back control when a problem such as a physical obstacle occurs. However, the
system is not limited to this configuration, and one may think of others to test on
our platform.
As specified, the key data to record in this situation are the following: objective
data (e.g., events relevant to production) linked to the desired KPI (production and energy
in the HUMANISM project) and subjective data (e.g., relevant to the interaction with the
human operator, or a measure of his/her mental workload completed by face movement
tracking). Concerning objective data, the energy consumption and activity
times are collected for each individual piece of equipment (robot, shuttle…) over time. The
results are then completed by the logging of achieved products and the historization of every
major event (such as a machine breakdown). Regarding the human-system interaction, every
action triggered by the operator results in a log entry covering the element he/she
interacted with, the communication that occurred in the case of remote interfaces, and the
action implemented, in such a way that the chain of events can be replayed later. These data are
then completed by video recordings of both the human operator (front and shoulder views)
and the physical platform (from two angles), complemented by face tracking of the human
operator. This is important in order to know, for each situation, which element attracted
the attention of the human operator and which action was decided and implemented. In
the current state of the platform, some sensors such as the camera and face tracking systems
are decoupled from the system so as to be reusable for other developments in the future, as
specified. Experiments including human operators generally require a full experimental
protocol, including formatted teaching and explanations, monitoring of the experiment,
and an additional questionnaire at the end to gather the perception of the human. This
protocol has been developed [26] but is not presented in this paper.
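The replayable operator-action log described above can be sketched as follows (the entry fields and names are illustrative assumptions):

```python
import time

class ActionLog:
    """Append-only log of operator actions: the element interacted with,
    the communication channel used, and the action implemented, so that
    the chain of events can be replayed later."""
    def __init__(self):
        self.entries = []

    def log(self, element, channel, action, t=None):
        self.entries.append({"t": time.time() if t is None else t,
                             "element": element,
                             "channel": channel,
                             "action": action})

    def replay(self, apply):
        """Re-apply the recorded actions in chronological order."""
        for entry in sorted(self.entries, key=lambda e: e["t"]):
            apply(entry["element"], entry["action"])
```

Replaying such a log against the emulator, together with the recorded input parameters, is what approaches reproducibility when a human is in the loop.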
To summarize, the use of the suggested platform in the context of the ANR
HUMANISM project was specified as provided in Table 1.
This platform has been successfully used to run a set of experiments with more
than 20 participants; these experiments lasted nearly 30 h. The results are still under
expert analysis, but it is worth noting that the participants were able to point out specific
advanced cooperation needs with the digital twin and suggested increasing the autonomy
of the control system at the operational level.

Table 1. HUMANISM instance of the proposed specifications.

Benchmark platform specification | HUMANISM application
Target system | Hybrid: Arezzo emulator (S.MART cell) / real (ground robot AGVs)
Levels | Tactical and operational
Digital shadow | Arezzo emulator
Input for the human | Set of products to realize, global energy threshold to comply with
Human decisions/actions | Tactical level: product order and prioritization, resupply orders. Operational level: product allocation to robots (modification of target robot or potential field attractivity), level of automation of ground robots (automatic/manual with joystick)
Objective KPI | Effectiveness: production completion time. Efficiency: overall energy consumption and respect of the energy consumption limit
Subjective KPI | Situation awareness, mental workload
Perturbations | S.MART robot malfunction, obstacle on AGV path, modified overall available energy threshold

4 Discussion

Working with an emulator of the cell rather than the real physical system simplifies
the benchmarking process and will ease the future application on the real cell. It also
avoids the use of the real system, limiting costs and risks (misuse) before real applications.
Our platform offers multiple advantages related to the use of in-house software:
every element of the platform is customizable and open. This helps the realisation of
experiments through the accessibility of every element. The quick deployment of new
elements in the platform and the possibility to alter the behaviour of every entity offer a
great variety of possible scenarios and answer part of the requirements. The integration
of the human in experimentation is facilitated, but the development of specific cooperation
modes remains to be done, depending on the case study and on what is expected to
be evaluated.
While some points still need improvement, our platform follows the requirements
listed. Through further development, we expect that such a platform could become one of the
first affordable human-aware CPMS benchmarking systems available to the research
community. Collaboration with other research centres is possible, such that our platform
could become a resource centre for researchers. From our perspective, even the
specifications introduced are helpful for researchers to avoid the development of costly
(money, human resources and time) but non-reusable platforms. Until then, the principle
used is easily exploitable by those who already own digital platforms and wish to
develop a more complete, generic platform.

5 Conclusion
As the cooperation between humans and machines in CPMS becomes an
increasing concern, a global benchmarking platform becomes necessary. Such a platform
was specified in this paper, and an illustration based on the concept of digital twin
was detailed. The work presented is intended to be useful for researchers aiming to
develop reusable platforms involving the human. Indeed, many research laboratories
already have at their disposal most of the resources used in our example, and multiple
digital platforms are publicly available. This makes our design principle affordable and
customizable to match most research topics relevant to the human in CPMS.
The next focus in the development of the platform is the design of a common
workspace to enable more elaborate human-machine cooperation processes. This is
the most important development prospect for our platform. Further efforts must also be
made in defining a more generic interaction model between entities.

Acknowledgements. This work was carried out in the context of the HUMANISM ANR-17-
CE10-0009 research program, funded by the ANR "Agence Nationale de la Recherche", and by
the SUCRé project. The work presented in this paper is also partly funded by the Regional Council
of the French Region "Hauts-de-France" and supported by the GRAISYHM program. The authors
gratefully acknowledge these institutions.

References
1. Jakovljevic, Z., Majstorovic, V., Stojadinovic, S., Zivkovic, S., Gligorijevic, N., Pajic, M.:
Cyber-physical manufacturing systems (CPMS). In: Majstorovic, V. and Jakovljevic, Z. (eds.)
Proceedings of 5th International Conference on Advanced Manufacturing Engineering and
Technologies, pp. 199–214. Springer International Publishing, Cham (2017). https://doi.org/
10.1007/978-3-319-56430-2_14
2. Cardin, O.: Classification of cyber-physical production systems applications: proposition of
an analysis framework. Comput. Ind. 104, 11–21 (2019). https://doi.org/10.1016/j.compind.
2018.10.002
3. Monostori, L.: Cyber-physical Production Systems: roots expectations and R&D challenges.
Procedia CIRP 17, 9–13 (2014). https://doi.org/10.1016/j.procir.2014.03.115
4. Trentesaux, D., Pach, C., Bekrar, A., Sallez, Y., Berger, T., Bonte, T., Leitão, P., Barbosa,
J.: Benchmarking flexible job-shop scheduling and control systems. Control Eng. Pract. 21,
1204–1225 (2013)
5. Cardin, O., Castagna, P., Couedel, D., Plot, C., Launay, J., Allanic, N., Madec, Y., Jegouzo,
S.: Energy-aware resources in digital twin: the case of injection moulding machines. In:
International Workshop on Service Orientation in Holonic and Multi-Agent Manufacturing.
Springer series in computational intelligence, pp. 183–194. Springer (2019)
6. Barbosa, J., Leitão, P., Adam, E., Trentesaux, D.: Dynamic self-organization in holonic multi-
agent manufacturing systems: the ADACOR evolution. Comput. Ind. 66, 99–111 (2015).
https://doi.org/10.1016/j.compind.2014.10.011
7. Cherubini, A., Passama, R., Crosnier, A., Lasnier, A., Fraisse, P.: Collaborative manufacturing
with physical human–robot interaction. Robot. Comput. Integr. Manuf. 40, 1–13 (2016)
8. Heo, Y.J., Kim, D., Lee, W., Kim, H., Park, J., Chung, W.K.: Collision detection for industrial
collaborative robots: a deep learning approach. IEEE Robot. Autom. Lett. 4, 740–746 (2019)
A Benchmarking Platform for Human-Machine Cooperation in CPMS 325

9. Han, R., Jia, Z., Gao, W., Tian, X., Wang, L.: Benchmarking big data systems: state-of-the-art
and future directions. ArXiv150601494 Cs. (2015)
10. Mezgebe, T.T., El Haouzi, H.B., Demesure, G., Pannequin, R., Thomas, A.: A negotiation sce-
nario using an agent-based modelling approach to deal with dynamic scheduling. In: Service
Orientation in Holonic and Multi-Agent Manufacturing. Springer Studies in Computational
Intelligence, pp. 381–391. Springer (2018)
11. Zimmermann, E., El Haouzi, H.B., Thomas, P., Pannequin, R., Noyel, M., Thomas, A.: A
case study of intelligent manufacturing control based on multi-agents system to deal with
batching and sequencing on rework context. In: Service Orientation in Holonic and Multi-
Agent Manufacturing. Springer Studies in Computational Intelligence, pp. 63–75. Springer
(2018)
12. André, P., Azzi, F., Cardin, O.: Heterogeneous communication middleware for digital twin
based cyber manufacturing systems. In: International Workshop on Service Orientation in
Holonic and Multi-Agent Manufacturing. Springer Studies in Computational Intelligence,
pp. 146–157. Springer (2019)
13. Pach, C., Berger, T., Sallez, Y., Bonte, T., Adam, E., Trentesaux, D.: Reactive and energy-
aware scheduling of flexible manufacturing systems using potential fields. Comput. Ind. 65,
434–448 (2014). https://doi.org/10.1016/j.compind.2013.11.008
14. Berger, T., Deneux, D., Bonte, T., Cocquebert, E., Trentesaux, D.: Arezzo-flexible manufac-
turing system: A generic flexible manufacturing system shop floor emulator approach for
high-level control virtual commissioning. Concurr. Eng. 23, 333–342 (2015). https://doi.org/
10.1177/1063293X15591609
15. Hoffmann, P., Schumann, R., Maksoud, T.M., Premier, G.C.: Virtual commissioning of man-
ufacturing systems a review and new approaches for simplification. In: ECMS, pp. 175–181.
Kuala Lumpur, Malaysia (2010)
16. Habib, L., Pacaux-Lemoine, M.-P., Millot, P.: Adaptation of the level of automation according
to the type of cooperative partner. In: 2017 IEEE International Conference on Systems, Man,
and Cybernetics (SMC), pp. 864–869. IEEE (2017)
17. Habib, L., Pacaux-Lemoine, M.-P., Millot, P.: Human-robots team cooperation in crisis man-
agement mission. In: 2018 IEEE International Conference on Systems, Man, and Cybernetics
(SMC), pp. 3219–3224. IEEE (2018)
18. Pacaux-Lemoine, M.-P., Habib, L., Sciacca, N., Carlson, T.: Emulated haptic shared control
for brain-computer interfaces improves human-robot cooperation (2020). https://ieeexplore.
ieee.org/Xplore/home.jsp. Accessed 15 May 2020
19. Trentesaux, D., Millot, P.: A human-centred design to break the myth of the “Magic Human”
in intelligent manufacturing systems. In: Service Orientation in Holonic and Multi-Agent
Manufacturing. Springer Studies in Computational Intelligence, pp. 103–113. Springer series
in computational intelligence (2016). https://doi.org/10.1007/978-3-319-30337-6_10
20. Sarter, N.B., Woods, D.D.: How in the world did we ever get into that mode? mode error and
awareness in supervisory control. Hum. Factors 37, 5–19 (1995). https://doi.org/10.1518/001
872095779049516
21. Romero, D., Stahre, J., Wuest, T., Noran, O., Bernus, P., Fast-Berglund, Å., Gorecky,
D.: Towards an operator 4.0 typology: a human-centric perspective on the fourth industrial
revolution technologies. In: Proceedings of the International Conference on Computers and
Industrial Engineering (CIE46), Tianjin, China, pp. 29–31 (2016)
22. Pacaux-Lemoine, M.-P., Flemisch, F.: Layers of shared and cooperative control, assistance,
and automation. Cogn. Technol. Work 21, 579–591 (2019)
23. Pacaux-Lemoine, M.-P., Trentesaux, D., Rey, G.Z., Millot, P.: Designing intelligent manufac-
turing systems through human-machine cooperation principles: a human-centered approach.
Comput. Ind. Eng. 111, 581–595 (2017). https://doi.org/10.1016/j.cie.2017.05.014

24. Negri, E., Fumagalli, L., Macchi, M.: A review of the roles of digital twin in CPS-based
production systems. Procedia Manuf. 11, 939–948 (2017). https://doi.org/10.1016/j.promfg.
2017.07.198
25. Drăgoicea, M., Borangiu, T.: A service science knowledge environment in the cloud. IFAC
Proc. 45, 1702–1707 (2012). https://doi.org/10.3182/20120523-3-RO-2023.00438
26. Pacaux-Lemoine, M.-P., Berdal, Q., Guérin, C., Rauffet, P., Chauvin, C., Trentesaux, D.: Eval-
uation of the cognitive work analysis methodology to design cooperative human- intelligent
manufacturing system interactions in industry 4.0. Submitted to CTW (2020)
Human-Machine Cooperation with Autonomous
CPS in the Context of Industry 4.0:
A Literature Review

Corentin Gely(B), Damien Trentesaux, Marie-Pierre Pacaux-Lemoine, and Olivier Sénéchal

LAMIH UMR CNRS 8201, Université Polytechnique Hauts-de France, Le Mont Houy,
59313 Valenciennes Cedex, France
Corentin.Gely@uphf.fr

Abstract. The aim of this paper is to study to what extent the current state of
the art in human-machine cooperation can be applied or adapted to the emerging
context of Industry 4.0, where the "machines" to cooperate with are autonomous
cyber-physical systems. A review of 20 papers has been carried out. A discussion
is provided, pointing out the advances and limits of the existing state of the art
applied in the context of autonomous cyber-physical systems. An illustration in
the domain of the maintenance phase of an autonomous cyber-physical system
is provided to explain the conclusions of our review.

Keywords: Cyber-physical systems · Industry 4.0 · Autonomous systems ·
Human-machine cooperation · Literature review

1 Introduction

Humans are standing on the threshold of another industrial revolution, one that will
fundamentally change the way we live, work, and communicate with each other [1]. This
fourth revolution is unfolding through the new technologies derived from the emergence
of the Internet of Things and from technologies enabling the creation of a virtual world
(virtual reality, augmented reality…) [2]. This flow of new technologies is becoming a
fundamental paradigm for humanity as well as for Industry 4.0 [3]. Industry 4.0 does
not just bring new technological advancements; it brings a new vision of how factories
should operate when manufacturing products, providing services, managing assets, and
doing business in general [4].
A fundamental representation of this revolution is the concept of the cyber-physical
system (CPS) [5]. By integrating the physical and cyber domains, a CPS provides functions
that call for a physical response while also supporting the digital representation of
information [3]. Similarly, it can be defined as a computer system controlling physical
entities using sensors and actuators, with intelligence provided by software and data [5].
Such systems are becoming more intelligent because they are able to understand and
change [7]. They become more and more autonomous, to the point where these

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021


T. Borangiu et al. (Eds.): SOHOMA 2020, SCI 952, pp. 327–342, 2021.
https://doi.org/10.1007/978-3-030-69373-2_23
intelligent systems that merge the digital and physical worlds are capable of performing
tasks without any external control [8]. CPS can be found in every industrial sector. New
technologies and methods make CPS smarter, in the sense that they can perceive their
environment: through ad-hoc sensors, they can "see", "hear", "smell" [9]. Through
information and communication, artificial intelligence, and decision-making technologies,
they can also interact with their environment (other CPS or humans), learn, decide, and
act on the physical world, becoming more and more autonomous over time. Their autonomy
renders them self-driven in an open environment, with strategies that even we humans
may not comprehend [8]. Consequently, the interactions engaged by an autonomous CPS
with its environment become more and more difficult to understand and to optimize,
both from a "machine" and a "human" point of view.
With this notion of autonomy, researchers and industrialists are conceiving new technical
advances enabling autonomous CPS and humans to work jointly or to cooperate,
which relates to the concept of Human-Machine Cooperation (HMC) [10]. Cooperation,
as a specific kind of interaction, comes from the Latin words "co" (together) and
"operatio" (work, activity) and means "working together" or "the action or process of
working together" [11]. It can mainly be characterized as a situation where two actors
(typically, a human and a CPS in this paper) strive towards goals while having to interact
with each other during a task/operation because of shared procedures or resources;
to deal with this interference, both actors must facilitate each other's tasks for the sake of
their interaction [12]. To achieve such cooperation, one actor must know what the other
is doing in order to cooperate correctly with him/her/it [13]. The important notions
resulting from this definition are goals, tasks/operations, and interferences.
Focusing on HMC, the emergence of such autonomous CPS has raised many questions
and problems: How can people predict an autonomous CPS's behaviour? What will
their relationship with humans be? To what extent can they be considered to be at the
same decision level as humans? For now, there are far more questions than answers. In this
paper, we address the following research question: to what extent does the current HMC
literature suggest accurate models and methods enabling humans and an autonomous
CPS to cooperate properly?
More precisely, this paper aims to study to what extent the current state-of-the-art
in HMC can be applied or adapted in the emerging context of Industry 4.0, where the
"machines" are such autonomous CPS. The review of the state-of-the-art is structured and
a discussion is proposed, pointing out the advances and limits of HMC when applied
to the context of autonomous CPS. An illustration in the maintenance phase of an
autonomous CPS (for example, an autonomous train) is provided at the end of the
paper to illustrate the conclusions of our review. This review is indeed an important
step in our work on the design of an effective and consistent HMC system within the
specific context of the autonomous train, seen here as a kind of autonomous CPS.
The outline of this chapter is as follows: humans cooperating with autonomous CPS
are specified in Sect. 2. Three typologies characterizing human-machine cooperation
are proposed in Sect. 3. Section 4 contains the literature review, based on these
typologies. Section 5 details a case study dealing with the maintenance of an
autonomous train, as an autonomous CPS interacting with a human operator. This
case study is followed by a conclusion summarizing the main points of our works along
with some prospects.

2 Human and Autonomous CPS: Specifying the Needs for an Appropriate Cooperation
CPS are becoming smarter and more complex with new AI technologies, learning
new knowledge without any guidance from a supervisor or a designer; in a word, more
autonomous. This impacts the way HMC must be approached. In this section, we specify
the needs for an appropriate cooperation. A first tentative set of specifications was
listed in [14]. It was elaborated considering that CPS are acquiring skills increasingly
similar to human ones. In our work, we thus considered the following seven expectations,
inspired by and improved from [14]. These specifications will be used hereinafter as
criteria to evaluate the applicability of state-of-the-art contributions in HMC in the
context of autonomous CPS:

1. Any actor (human or autonomous CPS) can trigger a cooperation need with others
to reach his/her/its sub-goal as and when needed. The cooperation need depends on
the expertise of the other actors with whom he/she/it is cooperating.
2. The cooperation must depend on the level of expertise of other actors, implying a
need to adapt exchanges, requests, and tasks accordingly.
3. The cooperation must allow an actor to assume that the other actors with whom
he/she/it is cooperating may not be able to answer requests, may not react in due
time, or may even provide false information in good faith; he/she/it may be overloaded
with tasks or focused on other tasks. If possible, an actor should be able, through this
cooperation, to monitor and understand the activities of the other actors.
4. The cooperation must facilitate the appropriation of the context between the actor
initiating the need and the one he/she/it is soliciting who/which is not necessarily
fully aware of this context. The cooperation scenario must then take care of the
situation awareness of other actors also known as team-Situation Awareness [13].
5. Cooperation must alert the actor that others’ decisions may change for similar past
contexts because of learning effects.
6. The need for cooperation and the activities that compose a cooperation process may
evolve according to the context that generated the need for cooperation, as well as
their knowledge of each other. The sharing and trading of the tasks must be dynamic.
7. The same principle applies to the different kinds of cooperation needs. These must be
articulated according to different levels of activity [1], ranging from the strategic to the
operational level, depending on the current situation. Indeed, expectations, stakes,
and constraints are not the same across these levels; they should be handled in
different manners and must evolve through a dynamic interaction.

3 Typologies Suggested

Since the HMC domain is wide and has been studied for years in various application
fields, it proved necessary to organize our review. The review of the state-of-the-art
is structured according to three typologies that we developed and used to position
and to analyze the different contributions: the first one relates to the type of cooperation
chosen, the second, the design method chosen, and the third one, the interaction model
used. The construction of these typologies has been done through firstly a top-down
approach based on the study of major publications and reviews in the field, consoli-
dated secondly with a complementary bottom-up approach through queries on HMC in
Elsevier, Springer, and IEEE databases.
Concerning the first typology, we share the point of view of [1] and [15], who identified
two different types of cooperation as restricted applications of the HMC principles
according to time horizons: the operational level for the short run, the tactical level
above it for achieving intermediate objectives, and the strategic level at the top. These
types of cooperation are "shared and cooperative guidance and control", where tasks
are exchanged depending on each actor's competence at both the tactical and operational
levels, and "shared control", which is restricted to the operational level, see Fig. 1.
"Shared and cooperative guidance and control" is defined as a "trading of authority,
of missions and goals", while "shared control" is defined as a "trading of authority
during an operation" [1].

Fig. 1. Positioning main cooperation approaches relevant to HMC: the activity levels of
the human-machine system range from strategic (e.g. navigation) through tactical (e.g.
guidance) to operational (e.g. control); "shared & cooperative guidance and control"
spans the tactical and operational levels, while "shared control" is restricted to the
operational level.

Concerning the second typology, several types of design methods in the field of
HMC were identified in our literature review and will be used to position and
analyze the contributions. These are: the human-machine balanced approach, a type of
cooperation based on knowledge of the human and machine actors and of their needs
for cooperation to reach a common goal [16]; the actor-centered design, which is
gaining importance [14]; and the Human-System Integration Design, where the human
and the system are fully integrated, as illustrated by the concept of Operator 4.0 [17].
With the human-machine balanced approach, the HMC is designed for each actor so
that he/she/it is given the information needed to properly achieve the mission while
managing the interference with the other actor through a common work space [16]. The
actor-centered design allows each actor to know the behaviour of its interlocutor,
regardless of whether it is a machine or a human [14]; it was created for the interaction
between two actors, two peers that need to cooperate on each mission. The Human-
System Integration Design is based on the emergence of new technologies giving more
and more information and options to the operators [17].
Concerning the third typology, several types of models and tools were identified
during our review to model the interactions between actors. In the studied literature, one
mainly finds game theory [18], the fuzzy cognitive model [19], and Know-How/Know-
How-to-Cooperate (KH/KHC) [16]. Cooperative game theory is an interaction model
often used for situations where two or more individuals cooperate towards a common
goal while sharing a common resource. The fuzzy model is an interaction model
based on the complexity and diversity of the information an actor can give, especially
when the flow of information is expressed in natural language, where the information is
not '1' or '0' but 'fuzzy'. KH/KHC is an interaction model used mostly with a human-
machine balanced approach, combining the operational skills of each actor with the
cooperation skills that allow them to know the other's behaviour.
Figure 2 summarizes our typologies and the different types identified in the literature.

Fig. 2. The three typologies of HMC proposed and their types identified in the literature:
cooperation type (shared & cooperative guidance and control; shared control), design
type (actor-centered design; human-machine balanced approach; human-system
integration design), and interaction model type (cooperative game theory; fuzzy model;
know-how/know-how-to-cooperate).

4 Literature Review

4.1 Method

The protocol used to construct our review follows the method detailed in [20]. The
major sources of information used to identify the papers eligible for this review
were the following scholarly databases: Elsevier, Springer, and IEEE. The queries used
are provided in Table 1. 20 resulting papers were then identified.

Table 1. Queries used in our review.

4.2 Choice of the Criteria Used to Evaluate the Pertinence of a Contribution

The seven specifications were listed to ensure that cooperation with an autonomous
CPS is effective and consistent. Each of them is essential to create a cooperation able
to fully exploit the competence of each actor. Consequently, the criteria were aligned
with these specifications, introduced in Sect. 2. Using criteria 1 through 7, the
contributions were evaluated according to the following scale: 0/+/++/+++, interpreted
as follows:

• 0: The article does not focus on the criterion and/or the approach does not suit the criterion,
• +: The article partially (but insufficiently) deals with the criterion and the approach
partially suits the criterion,
• ++: The article deals with the criterion and the approach suits the criterion well,
• +++: The article addresses the criterion well and the approach perfectly suits the
criterion.
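The grading protocol above can be sketched as a small script. The numeric mapping of the symbolic scale and the aggregation by summation are our own assumptions for illustration, not part of the published protocol; the two sample rows are taken from Table 2.

```python
# Hypothetical helper: turn the symbolic 0/+/++/+++ grades into numbers
# so papers can be compared on aggregate fit with the seven criteria.
SCALE = {"0": 0, "+": 1, "++": 2, "+++": 3}

def aggregate(grades):
    """Sum the seven per-criterion grades of one reviewed paper."""
    return sum(SCALE[g] for g in grades)

# Two illustrative rows from Table 2 (references [14] and [17]).
papers = {
    "[14]": ["+++", "++", "+++", "+++", "++", "++", "+"],
    "[17]": ["++", "++", "++", "0", "0", "0", "0"],
}
ranking = sorted(papers, key=lambda p: aggregate(papers[p]), reverse=True)
print(ranking)  # -> ['[14]', '[17]']
```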

4.3 Results

Table 2 contains the results of our analysis. For each of the seven criteria, this table
shows the strong points of each positioned contribution:

1. Cooperation Triggering
2. Exchange of behaviour models concerning their expertise
3. Exchange of behaviour model concerning their activities
4. Team Situation Awareness
5. Adaptation of cooperation based on feedback
6. Dynamic sharing and trading of tasks
7. Dynamic exchange of information at each level of activity

Table 2. Reviewed papers. For each reference, X marks indicate the cooperation type
(Shared Control; Shared & Cooperative Guidance and Control), the design type
(Actor-Centered Design; Human-machine balanced approach; Human-System Integration
Design), and the interaction model type (Cooperative Game Theory; Fuzzy Model;
KH/KHC), followed by the grades for criteria 1–7.
1 X X X ++ ++ ++ ++ + ++ ++
21 X X ++ ++ + ++ + ++ ++
22 X X X ++ + + ++ + ++ +
23 X X X ++ ++ + ++ + ++ +
17 X ++ ++ ++ 0 0 0 0
24 X ++ ++ ++ 0 ++ 0 0
25 X ++ ++ ++ 0 ++ 0 0
15 X X ++ ++ ++ + ++ + +
26 X X +++ ++ ++ + ++ + +
16 X X ++ ++ ++ ++ ++ + ++
27 X X ++ ++ ++ + ++ + +
28 X + + + + + ++ ++
2 X + + + + + ++ ++
18 X + + + + + ++ ++
29 X + + + + + ++ ++
30 X + + + + + ++ ++
31 X ++ ++ ++ ++ ++ ++ 0
32 X ++ ++ ++ ++ ++ ++ 0
33 X +++ ++ ++ +++ ++ ++ 0
14 X +++ ++ +++ +++ ++ ++ +
19 X +++ + + 0 + 0 ++
34 X ++ ++ ++ 0 + 0 ++

Describing all these references in detail is impossible in this chapter. Nevertheless,
the following paragraphs contain some key elements extracted from the references listed
in this table; they are to be considered illustrative of our analysis.
Pacaux-Lemoine et al. explained in their article the need for a human-machine balanced
approach using KH/KHC [16]. From their point of view, such an approach and
model ease the dialogue through easy solicitation and a behaviour model of each interlocutor
(criteria 1 and 3), offer adaptability to human experience (criterion 5) and
collaboration on sharable goals (criterion 2), while mainly focusing on the situation
awareness of both human and machine (criterion 4) and the exchange of tasks between
individuals in the interest of a sharable goal (criterion 6). However, it seems
difficult to obtain a correct exchange of information (criterion 7) in a common work space
facing the need for negotiation between a human actor and a machine actor, especially
when the human actor can only 'accept' or 'impose' because an explicit explanation
cannot be made.
Flemisch et al. adopted shared control and shared & cooperative guidance and
control, easing the dialogue through a simple solicitation and a behaviour model of each
interlocutor (criteria 1, 2) as well as a dynamic interaction (criterion 6) [1]. To properly
share and adapt to the other actor, the cooperation allows one actor to know
the other's activities (criterion 4) as well as to reuse the feedback from previous
experience (criterion 5). However, such shared control has limits concerning
the exchange of information (criteria 3 and 7); when shared control is extended, though,
it makes the trading of authority possible at all levels of activity.
Romero et al. described the human-system integration design as an approach allowing
easy solicitation (criterion 1), knowledge of the behaviour model (criterion 3), and of
the experience and expertise of each actor (criteria 2, 5), as each actor works in
symbiosis, supporting the other in every task [17]. However, since both actors work
together all the time, the human-system integration design is not prepared for an
actor responding improperly to a solicitation from the other actor (criterion 4); and
since the machine is supporting the human, a complex dynamic exchange of information
is not needed in this design, while an autonomous CPS would surely need it to cooperate
with a human (criteria 6 and 7).
Gely et al. defined an actor-centred design as a solution for the exchange of behaviour
models (criteria 2 and 3), enabling solicitation by any actor (criterion 1) and a dynamic
communication concerning each of their tasks (criterion 6), so that the cooperation can
adapt to the goals, missions, and experience of each actor [14]. The actor-centred design
allows an easy appropriation of the situation, so that any actor can act to help the other
whenever he/she/it needs the cooperation (criteria 4 and 5). Its weaknesses are its
inability to manage non-sharable goals and, consequently, its inability to adapt and
enable negotiation between the two actors concerning a goal, a plan, or even their
different experiences (criterion 7).
Ballagi et al. focused on a different model of interaction with a human-machine
balanced approach: the fuzzy model of interaction [26]. From their point of view, such
an interaction model makes solicitation easy (criterion 1) as well as the exchange
of behaviour models (criterion 3). It allows a dynamic flow of information (criterion 6),
meaning that the fuzzy model not only handles the nominal solicitation need but can
also adapt to human fuzzy logic when the machine dynamically interacts using
the natural language of the human actor. However, this approach hardly considers
criteria 2, 4 and 5, since the fuzzy model does not enable an adaptation of the cooperation
based on the expertise of the human actor or on his/her feedback, nor does it propose
a strategy to make an actor properly aware of the situation of the other actor.
Söffker et al. explained in their paper how cooperative game theory is used and how it
can be applied to a human-machine situation with sharable and non-sharable goals (criteria
6 and 7) [30]. With this model of interaction, the cooperation dynamically
exchanges information between both actors so that a negotiation can be carried out and
a solution proper to both of them is reached. However, game theory has some
weaknesses, due to: 1) the need to integrate every event that could occur into a complex
mathematical model; 2) the need to design a global model of HMC on which solicitation
can be based; and 3) its lack of adaptability to different behaviours and its inability to
ensure actors' awareness as feedback (criteria 1, 2, 3, 4 and 5).

4.4 Discussion
The previous section highlighted the strong and weak points of the 20 identified
papers. Several elements can be retained from our review; they are presented according
to the three proposed typologies.

Cooperation Types
Shared and cooperative guidance and control is mostly used when the intervention
of both actors aims at a common goal. In the context of the seven specifications
introduced, this approach has the advantage of enabling a dynamic interaction, exchanging
information concerning missions and tasks and making the trading of authority
possible. However, this type makes it hard to ensure a correct negotiation concerning
each other's strategies, as the 'navigation' has already been planned.
Since shared control assumes a change of control depending on the task, in the
context of the seven specifications introduced this approach has the advantage of being
able to solicit both actors while also giving the other actor situation awareness. Before
the extended version of shared control [35], it had the drawback of not enabling dynamic
exchanges of information across the different levels of activity; with this extended
version, shared control no longer has this problem, sharing and trading information
concerning goals, strategies, and tasks.

Design Types
A human-machine balanced approach is adopted when priority is allocated to the human
during HMC. In the context of the seven specifications introduced, this approach has
the advantage of adapting to the human operator's expertise and experience, enabling the
machine to know the behaviour model of each human it is collaborating with. However,
this approach hardly ensures a correct dynamic exchange of information in a common
work space facing the need for negotiation between a human actor and a machine actor,
where 'acceptance' and 'imposition' are the only options when an explicit explanation
based on operational information cannot be made [16].

Human-system integration design allows a human and a machine to become a single
integrated entity, using each other's senses to complete their information, transforming
the human, for example, into an Operator 4.0. In the context of the seven specifications
introduced, this approach has the advantage of creating a symbiotic relationship where
both actors can interact based on expertise, feedback, and each other's behavioural models.
However, it has the drawback of not being able to manage a situation where one
actor is not aware of the other actor's situation, or where the dynamic exchange of
information becomes as complex as needed for cooperation with an autonomous CPS in a
close relationship. Moreover, the integration of the human and the autonomous CPS may
rarely be the final target, since the idea is, on the contrary, to detach the autonomous CPS
from the human, rendering it autonomous so that it can evolve and sometimes cooperate
with others, not only with the same human.
Actor-centred design is a design type where there is not specifically a human and a
machine but two actors that must be able to cooperate, and thus to communicate
easily, as neither can accept or impose a decision on the other within the scope of
a sharable goal: they are both 'peers' working together. In the context of the seven
introduced specifications, this approach has the advantage of enabling the exchange of
behavioural models. It is also able to adapt those models depending on expertise and
feedback, facilitating solicitations from any actor, followed by a communication according
to each of their goals, strategies, and tasks in the most fluid and natural way for
each actor. However, non-sharable goals can sometimes be faced in the concerned
context, and in that situation the cooperation cannot function when actors do not
share the same goals, so a decision must be made by a supervisor of both actors to solve
this dilemma.

Interaction Model Types
All these models bring interesting features to support cooperation: game theory is a
formal mathematical model facilitating iterative interactions and the optimization of
cooperation strategies; fuzzy models can deal with the transfer of vague information;
and KH/KHC proposes a modelling of what an actor knows and of how actors can
communicate it.
In the context of the seven specifications, the game theory approach has the advantage of
being able to ensure cooperation with sharable and non-sharable goals. One of its main
drawbacks, however, is the complexity of the mathematical model, in which
every possibility must be anticipated and modelled, implying a strong need for machine
learning. Another drawback is that the cooperation model assumes that an actor is
aware of the situation of the other actor as well as of his/her/its expertise, feedback, and
availability. Social interaction can most of the time be evaluated using a fuzzy model,
owing to its nature; yet fuzzy models hardly model the actors' awareness, experience,
and feedback.
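A minimal sketch of the kind of vague-information handling a fuzzy model provides, assuming hypothetical triangular membership functions and duration ranges of our own choosing (not from the reviewed papers):

```python
def triangular(x, a, b, c):
    """Triangular membership function: degree to which x belongs to a
    fuzzy set peaking at b and vanishing outside [a, c]."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# Hypothetical fuzzy sets for the vague statement "the task is taking long".
def describe_duration(minutes):
    degrees = {
        "normal": triangular(minutes, 0, 10, 20),
        "long": triangular(minutes, 15, 25, 35),
        "abnormal": triangular(minutes, 30, 45, 60),
    }
    return max(degrees, key=degrees.get)

print(describe_duration(12))  # -> normal
print(describe_duration(27))  # -> long
```

The overlap between the sets is the point: a duration of 18 minutes is partly "normal" and partly "long", which is exactly the graded, natural-language information a crisp '0'/'1' model cannot carry.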
Finally, KH/KHC considers multiple cooperation contexts when actors have a com-
mon goal but may have different sub-goals or have to cooperate in various situations,
each in a given time window where a common work space allows the interference to
be dealt with. It allows different actors to cooperate and negotiate to find a proper and
quick solution.

4.5 Synthesis

Considering the seven specifications, one can conclude that each of the introduced types
has its advantages and disadvantages in the context of autonomous CPS cooperating
with humans, and no contribution fully complies with these specifications. Nevertheless,
from our perspective:

• The most interesting cooperation type is the extended version of shared control, mainly
because of its adaptability in sharing authority and easing negotiation concerning goals,
missions, and tasks in evolving contexts implying several activity levels.
• The most interesting design type is actor-centred design, mainly because of its
symmetrical design for cooperation and communication, its adaptability in modelling
each actor's behaviour, and its adaptability to the experience of each actor to find a
solution elaborated between 'peers', based on each level of activity and on feedback.
• The most interesting interaction type is cooperative game theory, since it facilitates
the negotiation of non-sharable goals and the usage of the different skills of each actor,
easing the optimization of their interaction depending on their specific goals.

Most of the contributions in HMC adopted a point of view where the cooperation was
designed mainly to satisfy the goals of the human and to adapt to his/her needs, which led
to the predominance of human-centric cooperation systems. By combining shared
control and actor-centred design, the cooperation may become a truly symmetric
collaboration allowing a two-way flow of information and an exchange of goals, missions,
and tasks among actors, whether autonomous CPS or humans.
As can be seen in human-human interaction, not all interactions can be dealt
with 'easily', especially when actors have different goals; the HMC must therefore be
prepared for situations where actors do not share their goals, the difficulty to cooperate
leading to deadlock situations. This is why there is a need to include interaction modelling
approaches such as cooperative game theory, more than KH/KHC or fuzzy models. By
combining game theory studies with other types of cooperation such as shared control,
traded control, or actor-centred design, the resulting cooperation would allow a
mathematical approach to modelling the interaction between an autonomous CPS and a
human for any goal during a task. Such an HMC would consider any type of interacting
actors (human-human, human-machine, machine-machine), as a global model for any
autonomous CPS acting as a peer to the human actor.
Consequently, from this review we intend to adopt an actor-centred design method
with a shared cooperation approach coupled with game theory models in our development
of HMC with Autonomous CPS in the context of Industry 4.0. For illustration purposes,
the next section details a case study where these choices are illustrated on an application
we are working on.

5 Case Study

Our research work concerns the development of a specific autonomous CPS in the
transportation domain: the autonomous train. The autonomous train has to cooperate with
different profiles of actors, artificial (other autonomous trains, railways, infrastructure,
etc.) or human (customer, maintainer, onboard crew, fleet operator, etc.). Our
case study concerns a specific moment in the exploitation of this autonomous train:
its maintenance. Because of the increasing financial penalties incurred when a
fleet of trains is insufficiently available, maintenance is becoming a crucial moment in
the lifecycle. Moreover, considering the importance of sustainability aspects, the surveillance
of energy performance for ecological reasons, as well as the surveillance of passenger
comfort depending on the functioning of the HVAC (heating, ventilation, and
air-conditioning) system, are gaining importance.
During such maintenance, the autonomous train must be able to inform, alert, and
cooperate with the different actors about the maintenance operations that are needed
or could be done, aside from classical planned systematic maintenance activities. The
cooperation must be prepared to generate different alarms concerning the different root
causes of a problem. The alarms concern the onboard personnel as well as the fleet
supervisor; they may lead to the solicitation of an expert on the detected problem,
exchanging specific information with each interlocutor depending on his/her expertise. Since
it is autonomous, the train is able to diagnose its sub-systems and evaluate whether it can
fulfil its missions [36]. If it cannot fulfil them immediately or within a given time window, a
maintenance process is triggered.
The process illustrated in this case study is the following: the health monitoring
of an autonomous train detects that the performance indicator measuring the energy
efficiency of one of its pieces of equipment (a door) goes beyond predetermined thresholds
and is still deteriorating. This event triggers the need for cooperation from the autonomous
train, meaning that a solicitation is made by the train asking for collaboration on its
maintenance (this corresponds to specification #1).
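The triggering logic just described can be sketched as follows. The function name, the sliding window, and the sample values are assumptions for illustration, not the train's actual health-monitoring implementation:

```python
# Minimal sketch: a solicitation is raised only when the energy-efficiency
# indicator has crossed its threshold AND the recent trend shows it is
# still deteriorating.
def needs_maintenance(history, threshold, window=3):
    """history: energy-efficiency samples, newest last (higher = better)."""
    if len(history) < window or history[-1] >= threshold:
        return False
    recent = history[-window:]
    # Strictly decreasing over the window means it is still deteriorating.
    return all(b < a for a, b in zip(recent, recent[1:]))

door_efficiency = [0.92, 0.88, 0.81, 0.74]  # illustrative samples
if needs_maintenance(door_efficiency, threshold=0.85):
    print("solicit cooperation: door maintenance (specification #1)")
```

Requiring both conditions avoids soliciting the human for a transient dip, which matters given specification #3 (the other actor may be overloaded).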
After having completed a self-diagnosis, the autonomous train cooperates with the
fleet supervisor to allocate a maintenance dock (tactical level) able to carry out the
maintenance tasks. When this is done, the train starts a second cooperation process with a
qualified maintenance operator (operational level) to realize the maintenance operations.
The model of this process (a GRAI net), depicted in Fig. 3, was built using the GRAI
design methodology [37].
To illustrate more precisely the case study, we focus on the second cooperation
process. A possible corresponding cooperation scenario is depicted in Fig. 4.
During this scenario, the cooperation allows the autonomous train to remind the
human actor of the maintenance tasks (situation awareness) as well as to detect the
abnormal duration of the operation the human is carrying out, so that it can ask the
operator whether there is any problem (specifications #2, #3 and #4). The cooperation
process also allows the train to give its own estimation of the problem based on its
experience: taking into account which operation was previously performed to fix a
problem, it can check whether that operation was successful by recalculating its energy
efficiency, updating its own knowledge for future use (specification #5). In this scenario,
the dynamic flow of information can be supported by text-to-speech software allowing
an exchange of the goals, missions, and operations needed (specifications #6 and #7).
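The abnormal-duration detection mentioned in this scenario can be reduced to comparing the elapsed time of the ongoing operation with the durations of past similar operations. A hedged sketch; the mean-times-factor criterion and all values are our assumptions, as the paper does not specify the actual detection rule:

```python
def duration_is_abnormal(elapsed, past_durations, factor=1.5):
    """Flag an ongoing maintenance operation whose elapsed time exceeds
    `factor` times the mean duration of past similar operations, so the
    train can ask the operator whether there is a problem."""
    if not past_durations:
        return False  # no history, nothing to compare against
    mean = sum(past_durations) / len(past_durations)
    return elapsed > factor * mean

# Past door maintenance took 10-14 min; 28 min elapsed is flagged.
print(duration_is_abnormal(28.0, [10.0, 12.0, 14.0]))  # True
print(duration_is_abnormal(15.0, [10.0, 12.0, 14.0]))  # False
```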
Human-Machine Cooperation with Autonomous CPS in the Context of Industry 4.0 339

Fig. 3. GRAI net of the maintenance process

Fig. 4. An illustrative cooperation scenario during a maintenance operation with an autonomous train.

The HMC is under development. As discussed, its development will be based on an actor-centered design method with a shared cooperation approach. The interaction model will be based on game theory.
340 C. Gely et al.

6 Conclusion
Autonomous CPS will evolve jointly with humans. This prompted us to address the issue of
human-machine cooperation between humans and autonomous CPS. This paper presented
the results of a review made on this topic. For that purpose, a set of seven specifications
was constructed, from which seven criteria were used to analyze the state of the
art in the field of human-machine cooperation. The 20 contributions reviewed were
selected according to several keywords, addressing the cooperation approach, the design
approach, and the modelling of the interaction. To illustrate our work, a case study
focusing on the maintenance operations of an autonomous train has been provided. From
our review, it is clear that the existing literature does not yet meet the needs expressed
in terms of these specifications. Indeed, in the future, autonomous CPS will have to be
considered in the same way as a human operator when cooperating with humans, increasing
the need for symmetrical cooperation between actors, whether human or artificial.

Acknowledgement. The work described in this chapter was conducted in the framework of the
joint laboratory "SurferLab" founded by Bombardier, Prosyst and the Université Polytechnique
Hauts-de-France. This Joint Laboratory is supported by the CNRS and financed from ERDF
funds. The authors would like to thank the CNRS, the European Union, and the Hauts-de-France
region for their support. Parts of the work are also carried out in the context of the HUMANISM
no. ANR-17-CE10-0009 research program, funded by the French ANR “Agence Nationale de la
Recherche”.

References
1. Flemisch, F., Abbink, D.A., Itoh, M., Pacaux-Lemoine, M.-P., Weßel, G.: Joining the blunt
and the pointy end of the spear: towards a common framework of joint action, human–machine
cooperation, cooperative guidance and control, shared, traded and supervisory control. Cogn.
Tech. Work 21, 555–568 (2019). https://doi.org/10.1007/s10111-019-00576-1
2. Lin, W.S., Zhao, H.V., Liu, K.J.R.: A game theoretic framework for incentive-based peer-to-
peer live-streaming social networks. In: 2008 IEEE International Conference on Acoustics,
Speech and Signal Processing, pp. 2141–2144, IEEE, Las Vegas (2008). https://doi.org/10.
1109/ICASSP.2008.4518066.
3. Cogliati, D., Falchetto, M., Pau, D., Roveri, M., Viscardi, G.: Intelligent cyber-physical sys-
tems for industry 4.0. In: 2018 First International Conference on AI for Industries (AI4I).
pp. 19–22. IEEE, Laguna Hills, USA (2018). https://doi.org/10.1109/AI4I.2018.8665681
4. Terziyan, V., Gryshko, S., Golovianko, M.: Patented intelligence: cloning human decision
models for Industry 4.0. J. Manuf. Syst. 48, 204–217 (2018). https://doi.org/10.1016/j.jmsy.
2018.04.019.
5. Dang, T., Merieux, C., Pizel, J., Deulet, N.: On the road to industry 4.0: a fieldbus architec-
ture to acquire specific smart instrumentation data in existing industrial plant for predictive
maintenance. In: 2018 IEEE 27th International Symposium on Industrial Electronics (ISIE),
pp. 854–859. IEEE, Cairns, Australia (2018)
6. Ratliff, L.J.: Incentivizing Efficiency in Societal-Scale Cyber-Physical Systems (2015).
https://escholarship.org/uc/item/6ck1z3x3
7. Oks, S.J., Fritzsche, A., Möslein, K.M.: Engineering industrial cyber-physical systems: an
application map based method. Procedia CIRP 72, 456–461 (2018). https://doi.org/10.1016/
j.procir.2018.03.126

8. Bekey, G.A.: Autonomous Robots: From Biological Inspiration to Implementation and Control. MIT Press, Cambridge, MA (2005)
9. Lin, H.-T.: Implementing smart homes with open source solutions. Int. J. Smart Home 7, 8
(2013)
10. Pacaux-Lemoine, M.-P., Berdal, Q., Enjalbert, S., Trentesaux, D.: Towards human-based
industrial cyber-physical systems. In: 2018 IEEE Industrial Cyber-Physical Systems (ICPS),
pp. 615–620. IEEE, St. Petersburg (2018). https://doi.org/10.1109/ICPHYS.2018.8390776
11. Aarts, B., Chalker, S., Weiner, E.: The Oxford Dictionary of English Grammar, OUP Oxford
(2014)
12. Hoc, J.-M., Lemoine, M.-P.: Cognitive evaluation of human-human and human-machine coop-
eration modes in air traffic control. The Int. J. Aviat. Psychol. 8, 1–32 (1998). https://doi.org/
10.1207/s15327108ijap0801_1
13. Millot, P., Pacaux-Lemoine, M.-P.: A common work space for a mutual enrichment of human-
machine cooperation and team-situation awareness. IFAC Proc. Volumes 46, 387–394 (2013).
https://doi.org/10.3182/20130811-5-US-2037.00061
14. Gely, C., Trentesaux, D., Le Mortellec, A.: Maintenance of the autonomous train: a human-
machine cooperation framework. In: Müller, B. and Meyer, G. (eds.) Towards User-Centric
Transport in Europe 2: Enablers of Inclusive, Seamless and Sustainable Mobility, pp. 135–148.
Springer, Cham (2020). https://doi.org/10.1007/978-3-030-38028-1_10
15. Trentesaux, D., Millot, P.: A human-centred design to break the myth of the “magic human” in
intelligent manufacturing systems. In: Borangiu, T., Trentesaux, D., Thomas, A., and McFar-
lane, D. (eds.) Service Orientation in Holonic and Multi-Agent Manufacturing. pp. 103–113,
Springer SCI, Cham (2016). https://doi.org/10.1007/978-3-319-30337-6_10
16. Pacaux-Lemoine, M.P., Debernard, S.: Common work space for human–machine cooperation
in air traffic control. Control Eng. Pract. 10, 571–576 (2002). https://doi.org/10.1016/S0967-
0661(01)00060-0
17. Romero, D., Bernus, P., Noran, O., Stahre, J., Fast-Berglund, Å.: The operator 4.0: human
cyber-physical systems and adaptive automation towards human-automation symbiosis work
systems. In: Nääs, I. et al. (eds.) Advances in Production Management Systems. Initiatives
for a Sustainable World, pp. 677–686. Springer, Cham (2016). https://doi.org/10.1007/978-
3-319-51133-7_80
18. Lozano, S., Moreno, P., Adenso-Díaz, B., Algaba, E.: Cooperative game theory approach to
allocating benefits of horizontal cooperation. Eur. J. Oper. Res. 229, 444–452 (2013). https://
doi.org/10.1016/j.ejor.2013.02.034
19. MacVicar-Whelan, P.J.: Fuzzy sets for man-machine interaction. Int. J. Man-Machine Stud.
8, 687–697 (1976). https://doi.org/10.1016/S0020-7373(76)80030-2
20. Schulze, M., Nehler, H., Ottosson, M., Thollander, P.: Energy management in industry – a
systematic review of previous findings and an integrative conceptual framework. J. Cleaner
Prod. 112, 3692–3708 (2016). https://doi.org/10.1016/j.jclepro.2015.06.060
21. Muslim, H., Itoh, M.: The effects of system functional limitations on driver performance
and safety when sharing the steering control during lane-change. In: 2017 IEEE International
Conference on Systems, Man, and Cybernetics (SMC), pp. 135–140. IEEE, Banff, AB (2017).
https://doi.org/10.1109/SMC.2017.8122591
22. Pacaux-Lemoine, M.-P., Vanderhaegen, F.: Towards levels of cooperation. In: 2013 IEEE
International Conference on Systems, Man, and Cybernetics, pp. 291–296, IEEE, Manchester
(2013). https://doi.org/10.1109/SMC.2013.56
23. Mars, F., Deroo, M., Hoc, J.-M.: Analysis of human-machine cooperation when driving with
different degrees of haptic shared control. IEEE Trans. Haptics 7, 324–333 (2014). https://
doi.org/10.1109/TOH.2013.2295095

24. Zolotová, I., Papcun, P., Kajáti, E., Miškuf, M., Mocnej, J.: Smart and cognitive solutions for
operator 4.0: laboratory H-CPPS case studies. Comput. Ind. Eng. 105471 (2018). https://doi.org/10.1016/j.cie.2018.10.032
25. Gammieri, L., Schumann, M., Pelliccia, L., Di Gironimo, G., Klimant, P.: Coupling of a
redundant manipulator with a virtual reality environment to enhance human-robot cooperation.
Procedia CIRP 62, 618–623 (2017). https://doi.org/10.1016/j.procir.2016.06.056
26. Ballagi, Á., Kóczy, L.T., Pozna, C.: Man-machine cooperation without explicit communica-
tion. In: 2010 World Automation Congress, pp. 1–6 (2010)
27. Agah, A., Tanie, K.: Human-machine interaction through an intelligent user interface based
on contention architecture. In: Proceedings 5th IEEE International Workshop on Robot
and Human Communications, pp. 537–542. IEEE, Tsukuba (1996). https://doi.org/10.1109/
ROMAN.1996.568894
28. Xie, F., Liu, F.-M., Yang, R.-R., Lu, R.: Game-based incentive mechanisms for cooperation in
P2P networks. In: 2008 4th International Conference on Natural Computation, pp. 498–501,
IEEE, Jinan, Shandong, China (2008). https://doi.org/10.1109/ICNC.2008.100
29. Requejo, R.J., Camacho, J.: Evolution of cooperation mediated by limiting resources: con-
necting resource based models and evolutionary game theory. J. Theor. Biol. 272, 35–41
(2011). https://doi.org/10.1016/j.jtbi.2010.12.005
30. Söffker, D., Langer, M., Hasselberg, A., Flesch, G.: Modeling of cooperative human-machine-
human systems based on game theory. In: Proceedings of the 2012 IEEE 16th International
Conference on Computer Supported Cooperative Work in Design (CSCWD), pp. 274–281
(2012). https://doi.org/10.1109/CSCWD.2012.6221830
31. Fong, T., Nourbakhsh, I., Kunz, C., Fluckiger, L., Schreiner, J., Ambrose, R., Burridge, R.,
Simmons, R., Hiatt, L., Schultz, A., Trafton, J.G., Bugajska, M., Scholtz, J.: The peer-to-
peer human-robot interaction project. In: Space 2005, American Institute of Aeronautics and
Astronautics, Long Beach, California (2005). https://doi.org/10.2514/6.2005-6750
32. Dias, M.B., Kannan, B., Browning, B., Jones, E.G., Argall, B., Dias, M.F., Zinck, M., Veloso,
M.M., Stentz, A.J.: Sliding autonomy for peer-to-peer human-robot teams. In: Proceedings
of the International Conference on Intelligent Autonomous Systems, pp. 332–341 (2008)
33. Kaupp, T., Makarenko, A., Durrant-Whyte, H.: Human–robot communication for collabo-
rative decision making - a probabilistic approach. Robot. Autonomous Syst. 58, 444–456
(2010). https://doi.org/10.1016/j.robot.2010.02.003
34. Nukuzuma, A., Yamada, K., Harada, N., Ishimaru, K., Furukawa, H.: Decision support to
realize intelligent cooperative interactions. In: Proceedings of 1995 IEEE International Con-
ference on Fuzzy Systems, vol. 2, pp. 837–842 (1995). https://doi.org/10.1109/FUZZY.1995.
409780
35. Pacaux-Lemoine, M.-P., Itoh, M.: Towards vertical and horizontal extension of shared control
concept. In: 2015 IEEE International Conference on Systems, Man, and Cybernetics,
pp. 3086–3091. IEEE, Kowloon Tong, Hong Kong (2015). https://doi.org/10.1109/SMC.2015.536
36. Sénéchal, O.: Performance indicators nomenclatures for decision making in sustainable con-
ditions based maintenance. IFAC-PapersOnLine 51, 1137–1142 (2018). https://doi.org/10.
1016/j.ifacol.2018.08.438
37. Chen, D., Doumeingts, G.: The GRAI-GIM reference model, architecture and methodol-
ogy. In: Bernus, P., Nemes, L., Williams, T.J. (eds.) Architectures for Enterprise Integration.
pp. 102–126, Springer US, Boston (1996). https://doi.org/10.1007/978-0-387-34941-1_7
Simulation on RFID Interactive Tabletop
of Working Conditions in Industry 4.0

Nicolas Vispi1(B), Yoann Lebrun2, Sophie Lepreux3, Sondès Chaabane3, and Christophe Kolski3
1 ARACT Hauts-de-France, Lille, France
n.vispi@anact.fr
2 CCI Grand Hainaut, Serre Numérique, Valenciennes, France
y.lebrun@serre-numerique.fr
3 LAMIH-UMR CNRS 8201, Univ. Polytechnique Hauts-de-France, Valenciennes, France

{sophie.lepreux,sondes.chaabane,christophe.kolski}@uphf.fr

Abstract. In a changing world of permanent international competition, Industry
4.0 must be able to adapt and to transform itself, especially from an organizational
point of view. In the case of a reorganization or major change, having the means for
the design, training and appropriation of future working conditions becomes an
asset. For this purpose, a simulator on an interactive tabletop with RFID sensors is
proposed. This tabletop is associated with tangible objects equipped with RFID
tags. The simulator allows Industry 4.0 stakeholders to project themselves into
future work situations. These situations can be simulated collectively, around the
interactive tabletop, in relation to scenarios that can be replayed. Principles of
implementation in Industry 4.0 are described. Different research perspectives are
also highlighted.

Keywords: Simulation · Organization · Interactive tabletop · Tangible object · RFID · Industry 4.0 · Future working conditions · Training · Design

1 Introduction
The era of Industry 4.0 and digitalization makes companies evolve in a competitive
technology-driven world. The digital environment is based on two trends: massive dematerialization
and the interconnection of everything with everything. The world has entered a
new era of data and virtualization [12]. Both French and global industries are concerned
by this digital transformation. As a result, this transformation impacts uses, behaviours,
activities and work modes [11]. The role of the human operator is no longer to perform
dangerous and arduous tasks. He or she becomes a pilot of machines via mobile terminals and
dedicated user interfaces. The human operator is involved in decision-making processes
and executive operations in collaboration with other operators, machines and physical
systems. It is therefore essential to take the human factor into consideration and to provide
human operators, as well as other categories of stakeholders, with training and
means to adapt to the new working conditions. For this purpose, different methods are
possible, such as paper/cardboard simulations or virtual reality [25, 30].

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021
T. Borangiu et al. (Eds.): SOHOMA 2020, SCI 952, pp. 343–354, 2021.
https://doi.org/10.1007/978-3-030-69373-2_24

The simulation tool proposed within the framework of this research is based on a
set of advances in the field of Human-Computer Interaction on large horizontal surfaces
[24], in tangible interaction [6, 14] and in relation to serious games [9]. This simulation
support is developed on an interactive tabletop associated with tangible objects. It follows
various research studies conducted on the TangiSense tabletop, which is equipped with RFID
technology [8, 13, 16, 18–20, 22]. This new simulation approach aims to encourage the
involvement of the stakeholders of future organizations and work situations in relation to
a set of scenarios, through a playful and interactive approach. The aim is to improve
the design, training, and appropriation of future working conditions.
This paper begins with a state of the art on the simulation of new organizations and
future working conditions, with design and transformation objectives. The approach
proposed for the simulation on interactive tabletop is then described. Next, the simula-
tion tool developed on the TangiSense interactive tabletop is explained and illustrated.
Principles and examples of implementation in Industry 4.0 are also presented. The paper
ends with a conclusion and research perspectives.

2 State of the Art on Simulation of Future Working Conditions


Present working conditions are the result of past transformation projects, whether tech-
nical, architectural, or organizational. In fact, present projects will define future working
conditions. In this logic, designers, decision-makers, managers and operators point out
the importance of anticipating new organizations in a safe way [3], regardless of the type
of simulation envisaged: retrospective, reflexive, projective or prospective [7].
However, several studies emphasize the lack of involvement of the human factor
in the design of systems and highlight its crucial role to master and support the digital
transformation in the context of Industry 4.0 [2]. Neglecting the human factor in a
complex system such as Industry 4.0 will have a negative impact on its performance and
its ability to operate safely. This creates the risk of a gap with the Industry 4.0
concept [15]. For this reason, researchers make efforts to involve the human factor, placing
the human at the centre of system design approaches [10, 29]. To meet this objective,
our contribution proposes a simulation-based approach to design systems and to enable
operators to project themselves into their new work environment and organization.
While simulation was initially based on traditional supports (2D plans or volumetric
models), we are witnessing a democratization of simulation tools: 3D modelling and
virtual reality (head-mounted displays or immersive spaces). Virtual reality, for example,
holds great promise in terms of immersion quality, data accuracy, coupling and data
capture (postural, eye tracking). However, feedback from experience shows moderate use.
Indeed, these simulation tools are mainly used by large companies because of the human
(modelling, preparation, animation) and financial costs of these technologies, but also
because of their techno-dependent aspect.
Mainly envisaged within the framework of architectural projects, the development
of an organizational simulation [4] using classic supports such as paper, cardboard and
other physical objects, has made it possible to extend the use of the simulation of future
working conditions to other purposes, such as management or skills development. This
has also made it possible to envisage broader prospects for deployment, particularly
among small and medium-sized enterprises.

3 Proposal: Simulation on Interactive Tabletop


The proposal takes the form of a simulator developed on an interactive tabletop that
allows interaction with tangible objects. These objects are inert in the sense that they do
not move on their own or change their appearance; they are referred to as active because
they are detected by the tabletop. This combination of tabletop and objects makes it
possible to involve different users in a participative and interactive approach and thus to
support activity simulations. On this subject, studies show that a horizontal tabletop
type of support favours the involvement of several people interacting simultaneously around
the tabletop with the help of the objects, possibly in a collaborative way [21].

Fig. 1. Global view of the principle of simulation on interactive tabletop showing different
possible uses of the simulator

The simulation of activity (concerning work or usage), commonly used in ergonomics,
aims to help individuals (agents, users) to project themselves into the activity that will
be theirs in the future environment. The simulator facilitates this simulation and,
in particular, the exchanges between the actors of the project (operators, designers,
project managers, decision-makers, partners, etc.). This interactive simulation allows,
over iterations and possible modifications of the design choices, to lead to the validation
of these design choices by the actors. This validation takes place when the simulation

of the activity is considered acceptable with regard to the stakes of the project, taking
into account a set of criteria such as working or usage conditions, quality requirements,
service to the user, etc. This tool is a promising support to: (1) represent the prescription
scenarios (workspace, equipment, …) integrating technical aspects that are difficult to
represent on paper models, and (2) represent their possible evolutions. It also becomes possible
to: (3) allow the different actors to make modifications to the prescription scenarios
represented on the tabletop, and (4) record the scenarios in order to (5) replay them later
or in the presence of other actors.
Interaction with the interactive tabletop through tangible objects is intended to facilitate
appropriation. The objects used can be of different shapes and must make it possible to represent
in a playful and transitional way the workspaces to be arranged (reception, loading areas,
office, etc.) and the equipment to be positioned (personal computers, machine tools, robots,
etc.), and to simulate the activity of the user(s) through avatar objects. The surface of
the tabletop can be used to display permanent information (partitions, loading deck) as
well as certain technical constraints (for instance buried pipes).
Figure 1 shows three possible stages of simulator use; these steps can be used cycli-
cally to follow the method suggested by Van Belleghem [30], according to the central
arrow or autonomously according to the needs and degree of progress in innovation.
The Simulation stage allows users to project themselves into new activities, according
to the scenarios foreseen in future developments. If needed, these new activities may be
compared with the current ones. The traces of the simulation are recorded. The resulting
recordings are used in the Design phase to show decision-makers the effects on users of
the different options proposed in the Simulation phase. Finally, the Integration activity
stage allows users to become familiar with the new work situations that have been chosen.

4 Realization of a First Version of the Simulator on TangiSense Tabletop
A simulator has been developed on the TangiSense (version 2) interactive tabletop
(manufactured by the RFIDées company, www.rfidees.com), which is equipped with RFID
technology. The simulator is written with JADE, a FIPA (Foundation for Intelligent
Physical Agents) compliant framework implemented in Java and used to simplify the
deployment of multi-agent applications [5]. This tool allows different agents to be created
and activated, and the messages exchanged between them to be processed [17]. In addition,
specific behaviours can be developed and applied to agents depending on the users around
the tabletop. For instance, it is possible to develop behaviours allowing agents (associated
with tangible objects) to act according to their location on the tabletop or to human actions on them.
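As a rough illustration of this agent-and-behaviour idea, the sketch below wraps a tangible object in a toy agent whose registered behaviour fires when the tabletop reports the tag in a given zone. It is written in plain Python for brevity; the actual simulator implements this with JADE behaviours in Java, and the class, zone and tag names here are invented:

```python
class ObjectAgent:
    """Toy stand-in for a JADE agent bound to one RFID-tagged object.

    A behaviour is a callable triggered when the object's tag is
    detected in the zone it is registered for."""

    def __init__(self, tag_id):
        self.tag_id = tag_id
        self.behaviours = {}   # zone name -> callable

    def add_behaviour(self, zone, action):
        self.behaviours[zone] = action

    def on_position(self, zone):
        """Called by the tabletop when the tag is detected in a zone."""
        action = self.behaviours.get(zone)
        return action(self.tag_id) if action else None

# An avatar agent that reacts when it reaches a (hypothetical) loading area.
avatar = ObjectAgent("tag-42")
avatar.add_behaviour("loading_area", lambda tag: f"{tag} entered loading area")
print(avatar.on_position("loading_area"))  # tag-42 entered loading area
print(avatar.on_position("office"))        # None (no behaviour registered)
```

In the real tool, the same dispatch would be handled by JADE's behaviour scheduling and message passing rather than a direct method call.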
This simulator allows the participation of future users in the design phase of a project
(e.g. reorganization of a production plant). The objective is to discuss the different
options and to test the scenarios. These situations are highlighted thanks to the field
experience of the users and their know-how. In addition, the simulator allows decisions to
be modified until the simulated activity is considered acceptable by the participants. The
goal is to satisfy the project’s issues (working or usage conditions, quality requirements,
service rendered, etc.). By bringing all the players together around the same project,
Simulation on RFID Interactive Tabletop of Working Conditions in Industry 4.0 347

simulation helps to avoid design errors. These errors can have serious consequences in
terms of finances and working conditions.
However, the tools usually available do not easily allow this interaction with future users.
Reading plans is not easy. Mock-ups are often costly in terms of time for their manufacture
and modification. Different software tools exist, but they also require a significant
amount of time for configuration and technical knowledge (often sub-contracted to external
service providers). Moreover, they allow only limited interactivity during the simulation
phases with the users.
Therefore, this first version of the simulator must satisfy several requirements. It
must be possible to represent and collectively modify the prescription elements in the
form of scenarios. It must allow employees to act on the tabletop with avatars
(taking the form of figurines), which allow the actors of the simulation to transpose
themselves by playing the scenarios [4]. They also allow them to imagine themselves in
a situation by moving the object, and to produce data. It is necessary to allow digital
interactions through the visualization of processes and flows. In addition, the simulator
must allow a scenario to be replayed with modified characteristic data (e.g. distances
travelled, number of tasks to be performed, etc.) so that the scenarios can be compared. This
comparison can be performed through tools integrated into the simulator, such as the
possibility of replaying the simulation, but also statistical analysis tools in the form of
bubble graphs, curve graphs, etc.
Figure 2(a) shows the tabletop with the simulator on which various objects equipped
with RFID tags are placed.

Fig. 2. (a) Principle of use of the simulator on tabletop with tangible objects placed on the table
and handled manually by users, (b) simulator screen page with available functionalities

The simulator combines tangible items (the objects) and virtual items (the
tabletop screen). The objects represent the avatars (characters) and the
equipment (e.g. desks or machines). When a tangible object is moved by a user, the
display is adapted to show the distance travelled under this object thanks to the scale
integrated into the plan. In addition, while the object is being moved, traces in the form
of dots and/or dashes are displayed under the tangible object.
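The distance displayed under a moved object can be obtained by accumulating the straight-line distance between successive detected positions and converting it with the plan's scale. A sketch under that assumption (the simulator's actual computation is not detailed in the text, and the scale value is invented):

```python
import math

def travelled_distance(positions, metres_per_pixel):
    """Sum the straight-line segments between successive tag positions
    (in tabletop pixels) and convert to metres using the plan scale."""
    total_px = sum(math.dist(a, b) for a, b in zip(positions, positions[1:]))
    return total_px * metres_per_pixel

# An avatar moved along three points on a plan scaled at 5 cm per pixel.
path = [(0, 0), (30, 40), (30, 100)]       # pixel coordinates
print(round(travelled_distance(path, 0.05), 2))  # 5.5 (metres: 50 px + 60 px)
```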
Figure 2(b) shows the simulator menu, consisting of the following modes:

• Config: allows configuring objects equipped with an RFID tag so that they are recognized
in the simulation tool. This mode is composed of two parts (see Fig. 3), allowing
any object equipped with an RFID tag to be configured when it is placed on one of the
definition areas:

– Definition of facilities: allows the object to be associated with an image. In addition to
being associated with an image, this object can have functionalities in the simulation.
It can have the role of tracking this image (in the form of a shadow) or the role
of a stamp, in order to replicate this image as many times as desired. Indeed, it is
useful to define a stamp-type object if an image (for example a type of machine)
must be represented several times in the simulation.
– Character definition: allows the tangible object to be associated with a character. Two
types of characters are considered: internal characters (e.g. employees of a factory)
and external characters (e.g. a visitor, a delivery person, a technician from an external
company). After placing an object in one of the areas, it is also possible to assign
a colour to it. This colour will be used in the simulation (for example to draw the
movements of the avatar) and can serve to distinguish the roles of the
characters (for example, blue for the production manager, green for the worker,
yellow for the quality specialist, etc.).

Fig. 3. Simulator setup menu

• Play: allows a simulation to be played according to a plan and a prescriptive scenario.
• Replay: allows a simulation to be replayed so that different users can exchange views on its content.
• Plan: allows a plan (in digital image format) to be selected and loaded into the simulation, and its scale to be set or modified.
• Save: allows the simulation to be saved in order to replay it later or to export it to a USB storage medium.
• Open: allows a previously saved simulation to be opened.
• Reset: allows the simulation tool to be reset by deleting all plans, backup folders, images, instructions, etc.
• Load: allows graphic elements (images, plans) to be imported into the simulation tool library; this function also allows backups to be imported (from USB storage media).
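The Config mode described above amounts to a mapping from RFID tag identifiers to either a facility definition (an image, optionally usable as a stamp) or a character definition (internal/external type plus a trace colour). A hypothetical sketch of such a registry; the names and structure are ours, not the tool's actual data model:

```python
class TagConfig:
    """Illustrative registry mirroring the simulator's Config mode."""

    def __init__(self):
        self.facilities = {}   # tag id -> {"image": ..., "stamp": bool}
        self.characters = {}   # tag id -> {"type": ..., "colour": ...}

    def define_facility(self, tag, image, stamp=False):
        # A stamp-type object can replicate its image several times.
        self.facilities[tag] = {"image": image, "stamp": stamp}

    def define_character(self, tag, kind, colour):
        # Internal (e.g. employee) vs external (e.g. visitor) characters.
        assert kind in ("internal", "external")
        self.characters[tag] = {"type": kind, "colour": colour}

cfg = TagConfig()
cfg.define_facility("tag-07", "milling_machine.png", stamp=True)
cfg.define_character("tag-42", "internal", "green")   # e.g. a worker
print(cfg.facilities["tag-07"]["stamp"])   # True: replicable as a stamp
print(cfg.characters["tag-42"]["colour"])  # green
```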

Note that other tangible objects are predefined to interact with the simulation tool, for
example: to access the main menu, to select and delete virtual elements (for example to
delete a storage tank), to move virtual elements (for example to move a drilling station),
or to manage a simulation (play, stop, fast forward, save, etc.).
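The Play, Replay, Save and Open modes suggest that a simulation is recorded as timestamped object movements and serialized for later replay. A hedged sketch of such a trace; the event format is an assumption, not the tool's actual file format:

```python
import json

class SimulationTrace:
    """Illustrative record of tag movements for replay and saving."""

    def __init__(self):
        self.events = []

    def record(self, t, tag, x, y):
        # One event per detected movement of a tagged object.
        self.events.append({"t": t, "tag": tag, "x": x, "y": y})

    def save(self):
        # Serialize, as the Save mode would (e.g. to a USB medium).
        return json.dumps(self.events)

    @classmethod
    def load(cls, data):
        trace = cls()
        trace.events = json.loads(data)
        return trace

    def replay(self):
        """Return events in chronological order, as Replay mode would."""
        return sorted(self.events, key=lambda e: e["t"])

trace = SimulationTrace()
trace.record(2.0, "tag-42", 30, 40)
trace.record(0.5, "tag-42", 0, 0)
restored = SimulationTrace.load(trace.save())
print([e["x"] for e in restored.replay()])  # [0, 30]
```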
The design possibilities are therefore very extensive and make the simulation tool
adaptable to a very large number of situations, such as the organizational situations
in Industry 4.0 illustrated in the following section.

5 Implementation in Industry 4.0

In this section, we present the deployment of the simulator within a regional
scheme in the Hauts-de-France region, France. Since 2015, Aract Hauts-de-France and
the Regional Agency for Innovation and Development have been working together as
part of regional schemes concerning the Industries of the future. Indeed, considering the gap
with other countries such as Germany or South Korea, these schemes aim at financing
the modernization of the production apparatus of small and medium-sized enterprises as
well as intermediate-sized establishments. For this purpose, about a hundred consultants
specializing in different areas (robotics, information systems, lean management, etc.)
have been referenced.
We are here in a diffusionist model (in the sense of [27]), a technological "push"
putting modernization as an end in itself. But these technical and technological devel-
opments call for more global transformations as they necessarily modify the tasks, the
way of working, the organization or the required skills. Moreover, depending on their
culture and degree of acculturation, companies do not apprehend, conduct or prepare
these transitions in the same way. These are all pitfalls that justify the need to anticipate
and foresee these changes in their technical and technological as well as organizational,
social and human dimensions.
In this context, using simulation tools appears as a way to support these
transitions in their technical, organizational and human dimensions. This is how the
simulator deployment presented here was planned; it should be noted that this deployment
follows several earlier experiments.
Concerning the methodology, the deployment first required a selection of consultants
interested in testing the simulator and integrating simulation approaches into their normal
way of providing support. The selection was made among those referenced by the regional
agency, through a call for expressions of interest.
The second stage consisted in training the consultants in the use of the simulator
and the simulation approach. We deliberately chose consultants from different fields
of expertise (lean management, architecture, industrial performance and ergonomics) in
order to assess their capacity for appropriation.
The third stage was to support companies previously identified among those integrating
the Industries of the future regional schemes. These technical supports were
provided by Aract. Finally, several seminars were held to draw lessons for a larger-scale
deployment in the region. These seminars brought together the consultants, the chief
executive officer of each of the involved companies, Aract and the Hauts-de-France region.
Table 1 identifies and characterizes the companies and consultant profiles that par-
ticipated in the deployment of the simulator. Figure 4 illustrates a simulation sequence
involving several stakeholders in one of the companies.

Table 1. Participating companies and consultants.

Fig. 4. Simulation sequence in one of the companies

The simulation approach was deployed at both retrospective and projective levels.
For example, company #3, involved in the construction of a new building, replayed
an entire month of production on the simulator. This allowed it to identify structural elements and
to enrich the specifications of its future building. It also led to organizational decisions
Simulation on RFID Interactive Tabletop of Working Conditions in Industry 4.0 351

that have been implemented in the current situation. As another example, in company #1,
employees modelled a new preparation process that was played out on the simulator before
being tested in real conditions.
Regarding results, beyond the impacts on the projects themselves, effects have been
observed at three levels: the company level, the human level and the regional scheme itself.
At the company level, simulation has a structuring function, even in small companies
that are often bound by production objectives and lack dedicated time for project management.
Still at the company level, because it offers a secure and reassuring framework
(the right to make mistakes, the possibility to restart, etc.), simulation favours experimentation. It
encourages stakeholders to carry out full-scale experimentation of the methods tested on the
simulator.
At the human level, this quote from the head of company #1 summarizes the
effects we observed: “working on the simulator allowed very discreet personalities to
open up. The playful side engages everyone’s participation.” In addition to the playful
aspect, the use of tangible objects relates the employees to their experiences and to real
work situations, facilitating representation. It allows them to express their point of view
more easily. We also noticed the decisive role that simulation plays in developing working
collectives. The case of company #3 highlighted inter-individual tensions between
employees and management at the beginning of the intervention. Firstly, the sequences
around the simulator united employees and managers around a common project. Secondly,
simulation focused discussions on real work, going beyond individual matters, and made it
possible for employees among themselves (as well as between employees and managers) to
share common issues. Finally, it brought each person’s activity to light, so that everyone could
measure their interdependence and therefore the need for dialogue to identify acceptable
compromises. Eventually, the effects go beyond the project itself to feed professional
dialogue and contribute to the development of collectives.
At the regional scheme level, previously described as a diffusionist model, we were
able to observe, through the process deployed (training of consultants, technical support
and learning seminars), the emergence of a collective dynamic that highlights the pitfalls
of the diffusionist approach.
Thus, the training and technical support of the consultants helped develop
their skills regarding the social and organizational dimensions that determine whether
Industry 4.0 projects succeed or fail. By bringing together the entire chain of actors (funders,
company actors and consultants), the seminars became a space and time of expression that did
not exist before. These sessions nourished collective learning and produced useful feedback for
public action, in particular by questioning the current diffusionist model.
In summary, deploying simulation methods through this simulator improved Industry
4.0 projects through a better consideration of their organizational, social and human dimensions.
There are also collateral effects that go beyond the projects themselves: encouraging
experimentation, and developing collective work and (horizontal and vertical) working
relationships. All of these dimensions contribute to overall performance and to the quality of
life at work [1].
Compared to other tools, the interactive tabletop allows several actors to interact at the
same time, whereas virtual reality generally immerses one person at a time. Although
immersing several actors is technically possible, it still does not allow multiple simultaneous interactions.

Still in comparison to virtual reality, the tabletop is a non-expert tool: it does not require
any software development skills, so it can be mobilized quickly, which supports wider
dissemination. Finally, the tabletop allows the visual representation of processes,
organizations and organization charts, while virtual reality is limited to spatial environments
(buildings, workstations).
To go further, we observed a better appropriation of the new environments and
organizations among the employees who participated in simulation sequences. This
suggests prospects for using the tabletop in professional training processes, particularly
in work-study programmes, apprenticeships or on-the-job training. The tabletop can be
considered:

• In a preparatory way before a situation is set up. In particular when these situations
raise questions of costs (e.g. operations on products with high added value or high
profitability requirements), safety (e.g. crossing of vehicle/pedestrian flows, access to
dangerous areas) or seasonality (e.g. manufacture of products specific to the holiday
season in the agri-food industry).
• For reflective thinking after a situation has taken place. The tabletop helps the trainee to verbalize
his/her activity, to understand the choices he/she made, to self-evaluate, and so
on. Such possibilities allow transforming working experience into competences.

6 Conclusion

The simulation and appropriation of future working conditions respond to an important need
in many areas. Coupling interactive tabletops and tangible objects, as presented in this
paper, provides new solutions to this need. By combining digital and analogical
technologies, the proposed simulator has allowed a large-scale deployment of simulation
by giving VSEs and SMEs low-cost access. Using tangible objects has increased the
capacity for decision support through the collection of digital data.
Using such a simulator is valuable for supporting companies and enterprises
in their transformation towards Industry 4.0. This evolution will focus not only
on the dematerialization of practices but also on the development of new human
organizations. For future studies, three perspectives are identified. The first one concerns the
proposal of a global transformation methodology based on fundamental concepts of
Industry 4.0; this methodology will help to master and stabilize the transformation. The
second perspective concerns the use of simulation to identify generic characteristics of the
so-called operator 4.0 or smart operator 4.0 [11, 28]. In the literature, possible
human-factor barriers that may prevent a successful digital transformation have been
identified (e.g. the lack of standardized instructions for using digital tools, the lack of training
tools…) [23, 26]. Finally, it would be interesting to provide enterprises and companies
with a set of simulation-based solutions to overcome these barriers.

Acknowledgements. The authors would like to thank Marie-Christine Lenain, who was at the
initiative of the simulator. They also thank Julian Alvarez, who contributed to the Serious Game
aspects of the simulator, as well as Laurent Van Belleghem for his active participation in the
Integratic seminars. Finally, they particularly thank ANR, Anact, Lionel Buissière of
Hauts-de-France Innovation Développement, and all the consultants and companies involved in the
action. This paper is dedicated to our colleague and friend Prof. Christian Tahon.

References
1. Anact: Agir sur la qualité de vie au travail, coordonné par Pelletier, J. (2017)
2. Angelopoulou, A., Mykoniatis, K., Boyapati, N.R.: Industry 4.0: the use of simulation for
human reliability assessment. Procedia Manuf. 42, 296–301 (2020)
3. Barcellini, F., Van Belleghem, L., Daniellou, F.: Design projects as opportunities for the
development of activities. In: Falzon, P. (ed.) Constructive Ergonomics, pp. 150–163.
Taylor and Francis, New York (2014)
4. Barcellini, F., Van Belleghem, L.: Organizational simulation: issues for ergonomics and for
teaching of ergonomics’ action. In: Proceedings of 11th International Symposium on Human
Factors in Organizational Design and Management (ODAM), pp. 885–890 (2014)
5. Bellifemine, F.L., Caire, G., Greenwood, D.: Developing Multi-Agent Systems with JADE,
Wiley (2007)
6. Blackwell, A.F., Fitzmaurice, G. Holmquist, L.E., Ishii, H., Ullmer, B.: Tangible user inter-
faces in context and theory. In: CHI 2007 Extended Abstracts on Human Factors in Computing
Systems, pp. 2817–2820, New York, ACM (2007)
7. Bobillier Chaumon, M-E., Rouat, J., Laneyrie, E., Cuvillier, B.: De l’activité DE simulation
à l’activité EN simulation : simuler pour stimuler, Activités, 15–1 (2018)
8. Bouabid, A., Lepreux, S., Kolski, C.: Design and evaluation of distributed user interfaces
between tangible tabletops. Univ. Access Inf. Soc. 18(4), 801–819 (2019)
9. Djaouti, D., Alvarez, J., Jessel, J.P.: Classifying serious games: the G/P/S model. In: Hand-
book of Research on Improving Learning and Motivation Through Educational Games:
Multidisciplinary Approaches, pp. 118–136. IGI Global (2011)
10. Fantini, P, Pinzone, M, Taisch, M.: Placing the operator at the centre of Industry 4.0 design:
modelling and assessing human activities within cyber-physical systems. Comput. Ind. Eng.
vol. 139 (2020)
11. Gazzaneo, L., Padovano, A., Umbrello, S.: Designing smart operator 4.0 for human values:
a value sensitive design approach. Procedia Manuf. 42, 219–226 (2020)
12. Guideline Industrie 4.0: Guiding principles for the implementation of Industrie 4.0 in small
and medium sized businesses (2015). https://industrie40.vdma.org/en/viewer/-/v2article/ren
der/15540546
13. Havrez, C., Lepreux, S., Lebrun, Y., Haudegond, S., Ethuin, P., Kolski, C.: A Design Model
for Tangible Interaction: Case Study in Waste Sorting, pp. 373–378. IFAC/IFIP/IFORS/IEA
Symposium on Analysis, Design and Evaluation of Human-Machine System, Kyoto, Japan
(2016)
14. Ishii, H., Ullmer, B.: Tangible Bits: towards seamless interfaces between people, bits and
atoms. In CHI 1997 Conference Proceedings, Atlanta, Georgia, USA, March 22–27, ACM
(1997)
15. Kinzel, H.: Industry 4.0 – Where does this leave the human factor? J. Urban Culture Res. 15,
70–83 (2017)

16. Kubicki, S., Lebrun, Y., Lepreux, S., Adam, E., Kolski, C., Mandiau, R.: Simulation in contexts
involving an interactive table and tangible objects. Simul. Model. Pract. Theory 31, 116–131
(2013)
17. Lebrun, Y., Adam, E., Kubicki, S., Mandiau, R.: A multi-agent system approach for interactive
table using RFID. In: 8th International Conference on Practical Applications of Agents and
Multi-Agent Systems (PAAMS 2010), pp. 125–134. Springer (2010)
18. Lebrun, Y., Adam, E., Mandiau, R., Kolski, C.: A model for managing interactions between
tangible and virtual agents on an RFID interactive tabletop: case study in traffic simulation.
J. Comput. Syst. Sci. 81, 585–598 (2015)
19. Lebrun, Y., Lepreux, S., Haudegond, S., Kolski, C., Mandiau, R.: Management of distributed
rfid surfaces: a cooking assistant for ambient computing in kitchen. In: 5th International
Conference on Ambient Systems, Networks and Technologies, ANT-2014 (June 2–5, 2014,
Hasselt, Belgium), Procedia Computer Science 32, pp. 21–28, Elsevier (2014)
20. Lepreux, S., Alvarez, J., Havrez, C., Lebrun, Y., Ethuin, P., Kolski, C.: Jeu sérieux pour le tri
des déchets sur table interactive avec objets tangibles : mise en œuvre et évaluation. Ergo’IA
’18, Proceedings of the 16th Ergo’IA "Ergonomie et Informatique Avancée" Conference (3–5
October), ACM, Bidart, France (2018)
21. Manches, A, O’Malley, C., Benford, S.: Physical manipulation: evaluating the potential
for tangible designs. In: Proceedings of the 3rd International Conference on Tangible and
Embedded Interaction 2009, Cambridge, UK, February 16–18, ACM, pp. 77–84 (2009)
22. Merrad, W., Habib, L., Héloir, A., Kolski, C., Krüger, A.: Tangible tabletops and dual reality
for crisis management: case study with mobile robots and dynamic tangible objects. In: ANT
2019 The 10th International Conference on Ambient Systems, Networks and Technologies
(April 29–May 2, 2019), Leuven, Belgium (2019)
23. Mikulic I., Stefanic A.: The adoption of modern technology specific to industry 4.0 by human
factor. In: Proceedings of the 29th DAAAM International Symposium, pp. 941–946, DAAAM
International, Vienna, Austria (2018)
24. Müller-Tomfelde, C.: Tabletops - Horizontal Interactive Displays, Springer (2010)
25. Pastré, P.: Apprendre par la simulation - De l’analyse du travail aux apprentissages
professionnels. Octarès Editions, Toulouse (2005)
26. Polet, P., Vanderhaegen, F., Wieringa, P.: Theory of safety related violation of system barriers.
Cogn. Tech. Work 4(3), 171–179 (2002)
27. Rogers, E.M.: Diffusion of Innovations. Free Press, 3rd edition (1983)
28. Romero, D., Noran, O., Stahre, J., Bernus, P., Fast-Berglund, Å.: Towards a human-centred
reference architecture for next generation balanced automation systems: human-automation
symbiosis. In: IFIP Advance Information Communication Technology (2015)
29. Sætren, G.B., Hogenboom, S., Laumann, K.: A study of a technological development process:
human factors—the forgotten factors? Cogn Tech Work 18, 595–611 (2016)
30. Van Belleghem, L.: Simulation organisationnelle: innovation ergonomique pour innovation
sociale. In: Dessaigne, M-F., Pueyo, V., Béguin, P. (eds.) Innovation et Travail: Sens et valeurs
du changement, Actes du 47ème Congrès de la SELF, 5–7 September, Lyon (2012)
Multi-agent Simulation of Occupant Behaviour
Impact on Building Energy Consumption

Habtamu Tkubet Ebuy1,2(B), Hind Bril El Haouzi1, Rémi Pannequin1,
and Riad Benelmir2
1 UMR 7039, Université de Lorraine, CRAN, Campus Sciences, BP 70239,
54506 Vandœuvre-lès-Nancy cedex, France
habtamu-tkubet.EBUY@univ-lorraine.fr
2 Faculté des Sciences et Technologies, Université de Lorraine, LERMAB,
Vandoeuvre, France

Abstract. Building energy consumption and environmental emissions are significantly
influenced by end-users, and building energy simulation tools are used to
optimize building performance. Currently, most simulation tools consider
oversimplified behaviour and thus contribute to the gap between predicted and actual
energy consumption. Building energy performance, however, also depends on dynamic
occupant behaviours, which these tools fail to capture. To overcome this, developing a
co-simulation platform is an effective approach for integrating occupant behaviour
modelling, based on multi-agent simulation, with building energy simulation tools. The
co-simulation process is conducted in the Building Control Virtual Testbed (BCVTB), a
virtual simulation coupling tool that integrates the two separate simulations on a
time-step basis. This method is applied to a case study of a multi-occupant office
building within an engineering school in France. The results show the applicability
and relevance of the developed platform.

Keywords: Occupant behaviour · Co-simulation · Multi-agent · Energy
simulation · Behaviour modelling

1 Introduction

The world is accelerating towards a severe energy crisis due to high energy demand
compared to energy resources [1]. The energy crisis and sustainability have become
increasingly crucial topics among academia and industry. With the rising demands for
energy use and future concern of scarce energy resources, the need for energy efficiency
is growing steadily [2]. The energy challenge is one of the most significant issues of
today’s society, and governments worldwide must adopt strategic policies to confront it.
According to the International Energy Agency, residential and commercial buildings
consume more than 40% of total primary energy and release 33% of carbon dioxide
emissions worldwide [3, 4]. Two-thirds of the total energy used by buildings is consumed
within households for heating, cooling, and lighting. The building sector is
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021


T. Borangiu et al. (Eds.): SOHOMA 2020, SCI 952, pp. 355–364, 2021.
https://doi.org/10.1007/978-3-030-69373-2_25

the one with the greatest potential and the lowest cost for carbon dioxide reduction, and has
been identified as a key potential contributor to bringing down energy consumption and
greenhouse emissions. To be able to optimize building energy performance, researchers
need a way to evaluate it.
need a way to evaluate it.
Traditionally, building energy consumption is analysed with simulation tools, an
effective approach for estimating building performance. Almost all simulation
tools consider static building features when determining building energy performance [5].
For example, the occupants, plugged-in electric equipment, thermostat set-point and
lighting in an office follow static patterns, without considering the dynamic interactions
with the building systems, occupants and environmental factors (weather conditions)
through which occupants mitigate their discomfort.
Delzendeh et al. [6] show that occupants influence building energy use directly and indirectly
by manipulating lights, equipment, the thermostat, shades, domestic hot water (DHW)
and operable windows to maintain their comfort, as depicted in Fig. 1.

Fig. 1. Occupant type of activities influencing building energy performance.

The main challenge of estimating building energy consumption using simulation
tools is accounting for the dynamic and stochastic behaviour of end users and
evaluating the influence of multiple occupants. This is indeed a complex endeavour, since
buildings host different occupants with different perceptions, beliefs and actions [7]. To
address this, multi-agent-based simulation has been developed to represent dynamic
occupant behaviour.
This modelling and evaluation tool is part of a larger project, FORBOIS2,
which will provide environmental sensor data (temperature, carbon dioxide, energy
consumption, humidity) and thermographic images (heat sources and losses) for validating
the simulation results. In this paper, we focus on a simple case to validate the approach.
The rest of the paper is structured as follows: Sect. 2 describes the state of the art
on occupant behaviour modelling. Section 3 presents the modelling elements of the
multi-agent systems and implementation of the co-simulation environment. Section 4

presents the application case and finally, the conclusions section explains the advantage
and limitations of the proposed approach.

2 State of the Art

Over the last decades, researchers have emphasized the influence of occupant behaviour on
building energy consumption. For example, Gilani [8] reports that the electricity consumption
of an office deviated by 30% from what was simulated. Experimentally measured building
energy consumption has demonstrated a large variation range, by a factor of two
to five, between buildings with the same function located in the same weather region.
Gilani and O’Brien performed an experimental investigation on one university residence room
over two academic years with different occupants and observed a 20% variation in energy
consumption between the two years. Hong et al. [9] have shown that the impact
of occupant behaviour on building energy use can reach up to 300%. Hani [10] studied
energy auditing of an office; the analysis revealed higher energy consumption during
non-working hours than during working hours. The use of daylighting systems or intelligent
lighting control, by reducing artificial lighting, decreases the building’s electricity use
by about 15% [11]. Commonly, building simulation programs assume fixed schedules
for typical days to represent the dynamic occupant behaviour (occupancy,
lighting) driving energy use in buildings.
In recent years, researchers have considered the impact of occupants’ behaviour in the
simulation tools used to simulate building performance. For example, Rijal et al. [12] modelled
the probability of window opening in terms of operative indoor/outdoor temperature.
Jacob et al. [13, 14] integrated multi-agent-based occupant behaviour modelling into
a residential building simulation tool. Dynamic occupant behaviour models have been
developed as a functional mock-up interface (FMI) for co-simulation with EnergyPlus
[15]. Mengda et al. [16] developed a dynamic occupant behaviour modelling framework
to improve building energy simulation; an agent-based occupant behaviour model is
used for validation. However, these researchers focused on group-level behaviour models,
mainly used for the implementation process rather than for performance analysis, and then
proposed requirements for future occupant behaviour models to be used in the design
of energy simulation tools. This paper describes individual-level occupant behaviour
models developed independently and coupled to the building simulation.
Buildings have been portrayed as complex systems involving several kinds of dynamically
interacting, nonlinear components. The aforementioned driving parameters can be
modelled and simulated with dynamic thermal building simulation tools. However, due to
the complexity and stochastic nature of this type of model, it is hard to fully capture the
influence of occupant behaviour through dynamic energy performance simulation [15, 16].
Therefore, a way is needed to couple the occupant behaviour with the building simulation.
Hong et al. [17] indicate that co-simulation allows a more realistic and robust
representation of occupant behaviour; it couples two or more simulation tools,
offering a data-exchange environment between subsystems. Co-simulation offers a flexible
solution that allows considering network behaviour and the physical energy system
state at the same time, and also enables large-scale system assessment

[18]. Therefore, co-simulation takes occupant behaviour into consideration in energy
performance simulation and predicts performance more precisely. The aim of this paper is to
propose a co-simulation platform coupling occupant behaviour modelling in the
well-known BRAHMS, a multi-agent-based simulation environment, with the EnergyPlus
building energy simulation tool, and to validate this platform, as a first step towards dealing
with complex occupant behaviour, by considering a simple scenario.

3 Development of the Co-simulation Framework


Modelling dynamic occupant behaviour creates situations in the simulation that are closer to
what happens in the real life of office occupancy. In order to examine these interactions
and express them clearly, it is necessary to model and include dynamic occupant behaviour
in energy simulations.
These behaviours are not activated at regular times; they depend only on the values
of certain environmental factors. When the physical state of the environment exceeds
a threshold value, it induces a psychological effect on the users; this leads the
occupants to perform activities to adjust their environment.
The dynamic occupant behaviour is modelled and simulated in BRAHMS, a multi-agent-based
simulation platform. BRAHMS is a descriptive language that can be used to record
and simulate the causal relations of occupants’ behaviour. Moreover, it is a fully fledged
multi-agent, rule-based, activity programming language [19].
This multi-agent environment uses a belief-desire-intention (BDI) approach, with an
emphasis on the agents’ belief base. It also makes a clear distinction between facts and
beliefs, which is useful for modelling the rational action of humans. Agents can have needs,
can perform activities based on those needs, and can communicate for the fulfilment
of various needs. The outside context and the internal psychological state are perceived by
the agent through a thought frame, which becomes its need [19, 20]. This need is carried out
by a work frame through primitive or composite activities.
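This fact/belief distinction can be sketched in a few lines of Python; the classes and attribute names below are illustrative analogues for explanation only, not BRAHMS constructs:

```python
class Environment:
    """Holds facts: the actual state of the world."""
    def __init__(self):
        self.facts = {"illuminance_lux": 600.0}

class Occupant:
    """Holds beliefs, which may lag behind or differ from the facts."""
    def __init__(self):
        self.beliefs = {}

    def perceive(self, env):
        # Perception copies a fact into the belief base; until perception
        # happens, fact and belief can diverge.
        self.beliefs["illuminance_lux"] = env.facts["illuminance_lux"]

env = Environment()
agent = Occupant()
env.facts["illuminance_lux"] = 150.0   # the room darkens (a fact)
assert "illuminance_lux" not in agent.beliefs  # agent does not know it yet
agent.perceive(env)
assert agent.beliefs["illuminance_lux"] == 150.0
```

The separation matters because an agent acts on its beliefs, not directly on the facts, which is how BRAHMS models bounded, human-like rationality.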
The BRAHMS environment defines agents and objects, which we map to human occupants
and building devices. It also provides a rich geographical model, which is used
to model the various locations in the building simulation and the travel between them.
The multi-agent model therefore has three types of concepts:

• Areas, which represent the various places where occupants and objects are located. They have
facts corresponding to the environment variables (temperature, light), which come
from the building simulation.
• Agents, which correspond to the human occupants. They have beliefs about their goals
and the current environment variables (temperature, humidity, light level, etc.). They
execute work frames that contain activities to achieve their goals.
• Objects, which model the building actuators such as lights, thermostats, HVAC, windows,
etc. They have states that will be sent to the building simulation.
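As a hypothetical rendering in plain Python (not BRAHMS syntax), the three concepts might look like this:

```python
from dataclasses import dataclass, field

@dataclass
class Area:
    """A place in the building; its facts mirror the building simulation."""
    name: str
    facts: dict = field(default_factory=dict)   # e.g. {"illuminance_lux": 120.0}

@dataclass
class Agent:
    """A human occupant with beliefs about goals and the environment."""
    name: str
    location: str
    beliefs: dict = field(default_factory=dict)

@dataclass
class Device:
    """A building actuator whose state is sent back to the simulation."""
    name: str
    state: str = "off"

office = Area("office_1", facts={"illuminance_lux": 120.0})
alice = Agent("alice", location="office_1")
lamp = Device("ceiling_light")
```

All names and values here are illustrative; the real model also carries the geographical layout and travel between areas described above.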

The interaction between the three types of entities is as follows:

• At each time step of the energy simulation software, environment variables are received
and used to set the facts of the areas.
• Occupants can detect these facts and react by triggering a work frame. In these work
frames, they express their needs for a change in a building device.
• Devices perceive and react to the needs of the occupants by changing their state.
• The states of the devices and some simulation parameters, such as the number of
occupants in an area, are collected and sent to the building simulation software.
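One step of this interaction cycle can be sketched as follows; the function name, the threshold value and the variable names are illustrative assumptions, not taken from the paper's implementation:

```python
ILLUMINANCE_THRESHOLD = 300.0  # lux; an assumed comfort threshold

def simulation_step(area_facts, agent_beliefs, device_states, occupants_present):
    """One co-simulation step: environment facts in, device states and occupancy out."""
    # 1. Environment variables from the energy simulation become area facts,
    #    which the occupant perceives as beliefs.
    agent_beliefs.update(area_facts)
    # 2. The occupant's work frame expresses a need based on its beliefs.
    needs = []
    if occupants_present and agent_beliefs["illuminance_lux"] < ILLUMINANCE_THRESHOLD:
        needs.append("light_on")
    # 3. The device perceives the need and changes its state.
    device_states["light"] = "on" if "light_on" in needs else "off"
    # 4. Device states and occupancy are returned to the building simulation.
    return {"light": device_states["light"], "occupancy": int(occupants_present)}

out = simulation_step({"illuminance_lux": 150.0}, {}, {}, occupants_present=True)
assert out == {"light": "on", "occupancy": 1}
```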

EnergyPlus is a building performance simulation program that combines the best
capabilities and features of its predecessors. EnergyPlus, which comprises completely new
code written in Fortran 90, is a broadly used building simulation tool that allows analysing
energy flows throughout the building and the thermal load [21] on a sub-hourly basis [22]. It is
important to note that EnergyPlus provides no visual interface allowing users to
visualise and edit the building; when such functions are needed, third-party software
tools must be used.
The co-simulation environment couples the BRAHMS multi-agent-based occupant
behaviour model with EnergyPlus. A Java application has been developed
for this purpose; it communicates with the BRAHMS virtual machine using the
Java application programming interface (JAPI) of the BRAHMS platform. Through the JAPI,
the co-simulation environment is able to start the agents and alter the facts and belief
base. On the other side, it communicates with EnergyPlus through the Building Control
Virtual Testbed (BCVTB) interface, exchanging packets formatted according to the Ptolemy
II standard over a TCP socket.
The information interchange between BRAHMS and EnergyPlus is represented in
Fig. 2. EnergyPlus simulates at the zone level and transfers output environmental features,
such as illuminance, to BRAHMS as input to the occupant behaviour model. From
BRAHMS, the occupant behaviour model’s output (which includes the occupancy schedule
and building equipment states) is sent back to EnergyPlus and used for the next time
steps. The whole information exchange is transmitted through the Building Control
Virtual Testbed (BCVTB). This process is repeated until the simulation end time is
reached.
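The overall exchange loop can be sketched as below. The real coupling exchanges BCVTB-formatted packets over a TCP socket; here the building simulator is mocked by a simple function, and all names, values and the illuminance model are illustrative assumptions:

```python
def mock_energyplus(step):
    """Stand-in for EnergyPlus: daylight illuminance rising through the morning."""
    return {"illuminance_lux": 100.0 * step}

def occupant_model(illuminance, threshold=300.0):
    """Stand-in for the BRAHMS occupant model: a dark room needs the light on."""
    return "on" if illuminance < threshold else "off"

trace = []
for step in range(6):                               # six 10-minute time steps
    env = mock_energyplus(step)                     # EnergyPlus -> (BCVTB) -> BRAHMS
    light = occupant_model(env["illuminance_lux"])  # occupant decision
    trace.append(light)                             # BRAHMS -> (BCVTB) -> EnergyPlus
assert trace == ["on", "on", "on", "off", "off", "off"]
```

The light state returned at each step would feed the next EnergyPlus step, exactly as in Fig. 2, with BCVTB marshalling the values in both directions.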

Fig. 2. Data flow of BRAHMS and EnergyPlus co-simulation framework



4 Application
To test the applicability and significance of the developed platform, a simple behaviour
scenario and a short case study were conducted.

4.1 Occupant Behaviour Scenario


The process scenario is presented in Fig. 3. When an occupant enters the room, the agent
perceives the indoor light of the room, and this triggers physical perceptions of
visual comfort (e.g., the agent feels it is dark or light). The occupant
perceives the light intensity of the room and compares it with a reference illuminance
threshold. If the illuminance extracted from the simulation tool is less than
the threshold, the agent interacts with the light device to turn it on. The output of the light
device is then sent back to the simulation for the next time step.

Fig. 3. MAS to energy performance simulation

4.2 Case Study


To test the platform, a short case study was conducted. An educational building
consisting of four adjacent multi-user offices, located in Épinal, France, was selected for the
coupled simulation.

• Each office has one external wall with a window facing north-east, and a rectangular
shape with a floor area of 206.3 m².
• The offices are occupied by 1 to 3 persons in this study, depending on the purpose of
the office. A 3D view is shown in Fig. 4.
• The simulation coupling is conducted at the room level. The simulation period is a
whole year (from January to December 2019).
• The weather data used for this simulation were extracted from an online source.
• The time step for the simulation is set to 10 min.

4.3 Simulation and Results


The model has been applied to a simple case study, with one occupant controlling the
lighting in an office room.

Fig. 4. 3D view of the building

The temperature was controlled by a thermostat driving an HVAC (heating, ventilation
and air conditioning) system, maintaining the temperature in the range
of 22–23 °C.
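Such a thermostat behaves like an on/off controller with a deadband; a minimal sketch (the 22–23 °C setpoints come from the text, everything else is an assumption):

```python
def thermostat(temperature_c, heating_on, low=22.0, high=23.0):
    """On/off control with a deadband: heat below `low`, stop above `high`,
    otherwise keep the previous state to avoid rapid on/off switching."""
    if temperature_c < low:
        return True
    if temperature_c > high:
        return False
    return heating_on

assert thermostat(21.0, heating_on=False) is True   # too cold -> start heating
assert thermostat(23.5, heating_on=True) is False   # too warm -> stop heating
assert thermostat(22.5, heating_on=True) is True    # inside band -> unchanged
```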
The activity executed when the agent is in the office room is a compound activity:
it contains several repeating work frames competing for execution. The first simulates
the work task that the occupant performs (with some delay), and the second
consists in turning the lighting on or off. The two work frames are triggered by the value
of the room light level and also by the belief they set, so as to avoid repeated execution.
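This guard mechanism, where a work frame sets a belief that blocks its own re-execution, can be sketched as follows (a hypothetical Python analogue, not BRAHMS syntax; the threshold value is an assumption):

```python
def step(beliefs, illuminance, threshold=300.0):
    """Two competing 'work frames': turn the light on / turn it off.
    Each is guarded both by the light level and by the belief it sets,
    so it cannot fire again until conditions change."""
    fired = None
    if illuminance < threshold and not beliefs.get("needs_light"):
        beliefs["needs_light"] = True    # belief guard: this frame fires once
        fired = "turn_light_on"
    elif illuminance >= threshold and beliefs.get("needs_light"):
        beliefs["needs_light"] = False   # reset the guard for the other frame
        fired = "turn_light_off"
    return fired

b = {}
assert step(b, 100.0) == "turn_light_on"
assert step(b, 100.0) is None            # guard belief blocks re-execution
assert step(b, 500.0) == "turn_light_off"
```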
So, when the lighting level of the room is below the considered threshold, the agent adds
a note to its belief base indicating that it needs light, and this triggers an action on
the lighting device to turn it on. The result of this action in the BRAHMS simulation
environment is shown in Fig. 5. In the timeline view of agents, the location of the agent
is shown at the top, while its work frame is shown at the bottom of the picture (in blue),
with the activities in green (and orange for compound activities).

Fig. 5. Simulation result of an occupant behavior example lighting control

The occupant executes a main work frame that corresponds to his daily routine related
to the lighting: arrival and departure. All these periods are implemented using activities.
Using stochastic values for the durations of these activities creates randomness
in the arrival, lunch and departure times. A clock object is added to the multi-agent system
to send notifications when the period of the day (morning, afternoon, evening) changes.
The simulation was run over a year with a time step of 10 min. The whole-year
simulation result is difficult to represent in a single picture, hence a sample is
extracted to show the relations between the various features, as represented in Fig. 6. The figure
shows how the occupant reacts to changes in indoor lighting, and how this influences the

energy consumption in the office building, the lighting device status being the determining
factor.
The occupancy schedule is shown as the white area delimited by a blue broken line,
and the occupant behaviour schedule is represented by the green area delimited by a small
green broken line, which varies depending on the illuminance value indicated by
the purple line. For example, when the agent leaves the office for lunch or believes
that the illuminance is beyond the threshold (9:00 to 11:00 and 15:00 to 17:00), it
switches off the light; otherwise the opposite action happens. The lighting and
occupancy values are represented by Boolean numbers (0 for off, 1 for on).

Fig. 6. Lighting relationship with illuminance and occupancy

The energy consumption obtained with co-simulation, which considers dynamic occu-
pant behaviour, is much lower than that of a simulation considering occupancy factors in
a deterministic or predefined schedule, as presented in Fig. 7. Energy savings of 30 to
40% are expected from integrating dynamic occupant behaviour.

Fig. 7. Occupant behaviour impact on electricity consumption in lighting and equipment

5 Conclusion and Future Work


The main contribution of this study is to test the applicability and relevance of our
approach in modelling the occupant’s behaviour and the co-simulation environment.
Multi-agent Simulation of Occupant Behaviour Impact 363

The key concepts are to employ three types of entities: areas, occupants and devices. The
latter two implement their behaviours using work frames that compete for execution.
The capacity to recursively embed work frames in activities seems very promising for
describing occupant behaviours simply.
The case study shows that this modelling approach enables us to create a simple
case. The results confirm those described in the literature: the occupants' behaviour
can have a large impact on energy consumption, even in a simple case.
This case study has also shown some limitations of the BRAHMS multi-agent envi-
ronment. The principal one is the inability to run several work frames in parallel. This
is completely realistic for cognition-intensive activities (e.g. reading), but when dealing
with modelling of building occupants, it is very important to process perceptions con-
tinuously, in parallel with the work frame execution. Another limitation is also the poor
expressivity of the BRAHMS language (e.g. there is currently no support for more than
two operands or for nested arithmetic expressions). The maintenance problems of the
BRAHMS software also represent a concern for users.
We are therefore planning to develop a new simulation package specifically adapted
to building occupants, that would keep the advantages of BRAHMS while overcoming
its limitations.
Future work consists in performing more complex simulations to test the scalability
of the modelling approach. An open issue is also to create a robust reference model
for inter-occupants interactions (e.g. how they take into account the action of other
neighbour occupants).

Acknowledgments. The authors acknowledge the financial support of the CPER FORBOIS 2–
2016-2020 project. The authors also acknowledge Campus France and the Ethiopian
Ministry of Science and Higher Education for their financial support.

References
1. De Silva, M.N.K., Sandanayake, Y.G.: Building energy consumption factors: a literature
review and future research agenda. Digital Library, University of Moratuwa, Sri Lanka, pp. 90–
99 (2012). https://dl.lib.mrt.ac.lk/handle/123/12050
2. Jia, M., Srinivasan, R.S., Ries, R., Bharathy, G.: Exploring the validity of occupant behavior
model for improving office building energy simulation. In: 2018 Winter Simulation Con-
ference (WSC), Gothenburg, Sweden, 2018, pp. 3953–3964. https://doi.org/10.1109/WSC.
2018.8632278
3. Paone, A., Bacher, J.P.: The impact of building occupant behavior on energy efficiency and
methods to influence it: a review of the state of the art, Energies, 11, 4 (2018)
4. Pérez-Lombard, L., Ortiz, J., Pout, C.: A review on buildings energy consumption information.
Energy Build. 40(3), 394–398 (2008)
5. Wang, C., Yan, D., Ren, X.: Modeling individual’s light switching behavior to understand
lighting energy use of office building. Energy Procedia 88, 781–787 (2016)
6. Delzendeh, E., Wu, S., Lee, A., Zhou, Y.: The impact of occupants’ behaviours on building
energy analysis: A research review. Renew. Sustain. Energy Rev. 80, 1061–1071 (2017)
7. Schaumann, D., Putievsky, N., Sopher, H., Yahav, J., Kalay, Y.E.: Simulating multi-agent
narratives for pre-occupancy evaluation of architectural designs. Autom. Constr. 106, 102896
(2018)

8. Gilani, S., O’Brien, W.: Best Practices Guidebook on Advanced Occupant Modelling.
Carleton University, Ottawa, Canada (2018)
9. Turner, W., Hong, T.: A technical framework to describe occupant behavior for building
energy simulations, Lawrence Berkeley National Laboratory (2013)
10. Sait, H.H.: Auditing and analysis of energy consumption of an educational building in hot
and humid area. Energy Convers. Manag. 66, 143–152 (2013)
11. Piotrowska, E., Borchert, A.: Energy consumption of buildings depends on the daylight. In:
E3S Web Conference, vol. 14 (2017)
12. Rijal, H.B., Humphreys, M.A., Nicol, J.F.: Development of a window opening algorithm
based on adaptive thermal comfort to predict occupant behavior in Japanese dwellings. Japan
Archit. Rev. 1(3), 310–321 (2018)
13. Chapman, J., Siebers, P.O., Robinson, D.: On the multi-agent stochastic simulation of
occupants in buildings. J. Build. Perform. Simul. 11(5), 604–621 (2018)
14. Chapman, J., Siebers, P., Robinson, D.: Coupling multi-agent stochastic simulation of occu-
pants with building simulation, Envir. Phys. Des. (ePAD), The University of Nottingham, no.
2004 (2011)
15. Li, R., Wei, F., Zhao, Y., Zeiler, W.: Implementing occupant behaviour in the simulation of
building energy performance and energy flexibility: development of co-simulation framework
and case study. In: Proceedings 15th IBPSA Conference, October 2017
16. Jia, M., Srinivasan, R.S., Ries, R., Bharathy, G.: A framework of occupant behavior modeling
and data sensing for improving building energy simulation. Simul. Ser. 50(7), 110–117 (2018)
17. Chen, Y., Liang, X., Hong, T.: Simulation and visualization of energy-related occupant
behavior in office buildings. Build. Simul. 10, 785–798 (2017). https://doi.org/10.1007/
s12273-017-0355-2
18. Raad, A., Reinbold, V., Delinchant, B., Wurtz, F.: FMU software component orchestration
strategies for co-simulation of building energy systems. In: 3rd International Conference
Technology Advance Electronic Electron Computer Engineering TAEECE 2015, pp. 7–11
(2015)
19. Kashif, A., Ploix, S., Dugdale, J., Le, X.H.B.: Simulating the dynamics of occupant behaviour
for power management in residential buildings. Energy Build. 56, 85–93 (2013)
20. Lez-Briones, A.G., De La Prieta, F., Mohamad, M.S., Omatu, S., Corchado, J.M.: Multi-agent
systems applications in energy optimization problems: A state-of-the-art review. Energies
11(8), 1–28 (2018)
21. Sousa, J.: Energy Simulation Software for Buildings: Review and Comparison, Technical
Report, University of Porto. https://ceur-ws.org/Vol-923/paper08.pdf
22. Crawley, D.B., et al.: EnergyPlus: Creating a new-generation building energy simulation
program. Energy Build. 33(4), 319–331 (2001)
Intelligent Products and Smart
Processes
Intelligent Products through SOHOMA Prism

William Derigent1(B) , Duncan McFarlane2 , and Hind Bril El Haouzi1


1 CRAN CNRS UMR 7039, Université de Lorraine, 54506 Vandœuvre-lès-Nancy, France
{william.derigent,hindbrilel.haouzi}@univ-lorraine.fr
2 Institute for Manufacturing,
Cambridge University Engineering Department, Cambridge CB3 0FS, UK
dcm@eng.cam.ac.uk

Abstract. In the framework of the SOHOMA 2020 special session “SOHOMA
10th-year anniversary”, this paper reviews the evolution of one important
concept studied in the SOHOMA community, namely the Intelligent
Product concept. This paper is not a review of Intelligent Products - there are several
of these already - but rather examines the history of the development of this
concept through the 1st to 9th editions of SOHOMA, while also proposing future
developments of this concept.

Keywords: Intelligent products · Product driven control · Lifecycle · History

1 Introduction
In the framework of the IMS (Intelligent Manufacturing Systems) community, and as
demonstrated by the Auto ID Centre developments in 2000–2003, the related concepts of
Internet of Things and Intelligent Product sought to connect physical objects with digital
information and even "intelligence" associated with the object. Indeed, substantial infor-
mation distribution improves data accessibility and availability compared to centralized
architectures. Product information may be allocated both within fixed databases and/or
within the product itself, thus leading to products with informational and/or decisional
abilities, referred to as “Intelligent Products”. This concept has been widely discussed over
more than two decades, beginning in 1997, when several authors separately presented
the notion of product intelligence to describe an alternative, product-oriented way in
which supply chains and automated manufacturing might work [1–4]. The models
proposed described manufacturing and supply chain operations in which parts, products
or orders (collections of products) would monitor and potentially influence their own
progress through the industrial supply chain. The supply chain model based around
product intelligence provided a conceptual focus for these developments. A very simple
search on Google Scholar1 of articles related to this concept reveals that more than 300
papers have been published on the subject since 2000. This number does not include
articles related to “intelligent product” design (more related to the mechanical engineer-
ing field or the smart “PSS” field). Many different definitions of “Intelligent Products”
1 www.scholar.google.fr / search: all in title: “intelligent products” OR “intelligent product”.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021


T. Borangiu et al. (Eds.): SOHOMA 2020, SCI 952, pp. 367–384, 2021.
https://doi.org/10.1007/978-3-030-69373-2_26

have been proposed. A comparison of the different types is provided in [5]. Reviews also
exist [6]. The objective of this paper is not to produce yet another review on Intelligent
Products. In the framework of SOHOMA 2020, it rather aims at underlining: 1) how the
SOHOMA community helped to develop and spread this concept, via scientific proposals
and industrial applications, 2) how the SOHOMA community is shaping the future of
the intelligent product concept.

2 Methodology Applied in This Paper


SOHOMA contributions related to the domain of the “Intelligent Product” have been
analysed from SOHOMA 11 to SOHOMA 19 via a careful exploration of the proceedings.
In our view, the concept of the Intelligent Product is mainly related to two
sub-fields. The first is asset management by means of Intelligent Products, which
has been extensively studied, leading to advanced asset management systems, most of
the time dedicated to product tracking in logistics. The second important use of the intel-
ligent product concept is the product-driven control of manufacturing systems, where
the product is an actor of its own future in the manufacturing process. Both are, from
our point of view, important and complementary aspects of the same concept, and are
moreover needed to ensure the resilience of production systems [7].
The first step of the methodology is thus to extract relevant papers from the pro-
ceedings. Each paper related to one of the previous concepts (or appearing in a session
related to the Intelligent Product) was taken into account, and its keywords saved. Citation
numbers were extracted from Google Scholar. As our objective is to process this extraction
via adapted tools, keywords have been ‘normalized’: indeed, many keywords
describe identical or very similar concepts (e.g. product-driven control, product-driven
automation and product-driven systems). The raw extraction is cleansed and completed
before the next step.
The second step aims to define a conceptual map called a co-occurrence net-
work. Co-occurrence networks are generally used to provide a graphic visualization
of potential relationships between people, organizations, concepts, biological organisms
or other entities represented in written materials. The generation and visualization of
co-occurrence networks has become practical with the advent of electronically stored
text amenable to text mining2. In this paper, a co-occurrence network based on the
keywords of the extracted papers is produced and analysed.
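The core of such a network can be sketched as pair counting over per-paper keyword lists. The function name and the sample keyword lists below are illustrative assumptions for the sketch, not the actual SOHOMA extraction.

```python
# Minimal sketch of building keyword co-occurrence edges from normalized
# per-paper keyword lists. Sample data is illustrative only.
from collections import Counter
from itertools import combinations

def cooccurrence_edges(papers):
    """papers: list of keyword lists; returns Counter mapping keyword pairs
    to the number of papers in which both keywords appear together."""
    edges = Counter()
    for keywords in papers:
        # every unordered pair of distinct keywords in one paper adds one edge weight
        for a, b in combinations(sorted(set(keywords)), 2):
            edges[(a, b)] += 1
    return edges

papers = [
    ["intelligent products", "product-driven control", "multi-agent systems"],
    ["intelligent products", "physical internet"],
    ["intelligent products", "product-driven control"],
]
edges = cooccurrence_edges(papers)
# "intelligent products" and "product-driven control" co-occur in two papers,
# so that edge is twice as heavy as the "physical internet" one.
```

Tools such as VOSviewer then lay out these weighted edges so that frequently co-occurring keywords end up close together, which is what makes cluster identification possible.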
The next section first introduces the general co-occurrence network and details the
different identified clusters in different subsections. The last part of this section uses the
form of this network to foresee a possible future trend in the Intelligent Product field.

3 History of the Intelligent Product Concept in SOHOMA


Intelligent products (IP) are present in SOHOMA from the beginning, as shown in
Table 1 (presented in the annexes). This table lists per year all the papers extracted from
SOHOMA proceedings, with a total of 49 papers. However, the number of publications
2 https://en.wikipedia.org/wiki/Co-occurrence_network.

varies in time, as depicted in Fig. 1, going from 2 (SOHOMA 2017) to 8 (SOHOMA
2014). The proportion of these papers relative to the total number of SOHOMA papers
goes from 24% in 2011 to 28% in 2014 and then goes down to 11% in 2019. Each
paper in this field has been cited around 6 times, going up to 29 times for [8]. In the different
proceedings, three parts are specifically dedicated to the IP (SOHOMA 2012 – Part II
‘Intelligent Products’; SOHOMA 2013 – Part IV ‘Intelligent Products’; SOHOMA 2015
– Part I ‘Application of Intelligent Products’) and two parts contain works related to the
IP (SOHOMA 2014 – Part IV ‘Physical Internet Containers’; SOHOMA 2015 – Part II
‘Recent Advances in Control for Physical Internet and Interconnected Logistics’).

[Bar chart with series “Nb papers on IP”, “Nb SOHOMA papers” and “% Papers on IP”, per year from 2011 to 2019]

Fig. 1. Overview of the number of SOHOMA Papers on Intelligent Products

The co-occurrence network based on keywords is built via automatic processing exe-
cuted with VOSviewer, a software tool dedicated to bibliometric networks. Other
tools exist, like R and its bibliometrics package, Pajek, Sci2, Cytoscape, etc. VOSviewer
has been selected due to its user-friendly interface and its ability to easily process biblio-
graphic databases3. This network is presented in Fig. 3, where it is clear that the clusters
are strongly interrelated yet still have independent threads. It is composed of nodes
representing the different paper keywords, and links representing relations between these
nodes. The shorter the links, the more often the keywords are used in the same papers. It is
then possible to define clusters of keywords, based on their co-occurrence distance. In
this paper, these clusters are interpreted as domains or categories in the IP research field.
The network thus shows a total of 6 clusters:

– Cluster 1: Product-Driven Systems (PDS)4
– Cluster 2: Product Lifecycle Information Management (PLIM)
– Cluster 3: Physical Internet (PI)
– Cluster 4: Multi-Agent Systems (MAS)
– Cluster 5: Internet of Things (IoT)
3 Note that the complete list of references in *.bibtex/*.ris format as well as the VOSviewer files
have been provided as supplementary material with the article submission.
4 PDS and ODS (Order-Driven Systems) are often used interchangeably.

– Cluster 6: Digital Twin (DT)

The annual distribution of IP papers in each cluster is shown in Fig. 2. It can be seen
that, from the beginning, papers have been published on PDS and PLIM themes in the
SOHOMA conferences. The PI theme, within the framework of IP, was addressed
later, in 2014. The most recent cluster is the DT one, where papers have
been proposed from 2018.

[Bar chart: annual number of IP papers in each cluster (Cluster 1 PDS, Cluster 2 PLIM, Cluster 3 PI, Cluster 6 DT), 2011 to 2019]

Fig. 2. Annual distribution of IP papers in each cluster

A first analysis concerns the nature of the different clusters. Indeed, clusters 1 to
3 relate to research works originating from the IP field, while clusters 4 and 5
relate more to the theory, methods and tools used to realize the IP concept. Cluster 6
is far from the others, meaning it is not strongly interlinked with them. It can thus
be interpreted as a relatively new cluster, and a new research field as well.
In the rest of this section, these clusters are detailed, focusing on the timeline of the
related research works. Only the clusters related to core SOHOMA research fields are
tackled. Each reference cited hereafter comes from SOHOMA conference proceedings.

3.1 Cluster 1: Product-Driven Systems

Product-Driven Systems (PDS) are defined in [9] as a way to optimize the whole product
lifecycle by dealing with products whose informational content is permanently bound
to their virtual or material content, and which are thus able to influence decisions made about
them, actively participating in the different control processes in which they are involved
throughout their life cycle. The authors introduce some examples highlighting the way PDS
can improve the global product lifecycle performance in the design phase, production
phase or use phase. Designing a PDS is a challenge involving three fundamental aspects:
functions, architecture, and interactions.
Intelligent products are the core of a PDS. The functional features given to the
intelligent products in the PDS are then essential when designing the PDS. Research
works in this area are also intertwined with Cluster 2 and will therefore be described

Fig. 3. Intelligent Product related keywords co-occurrence network obtained from SOHOMA
proceedings

later. The architecture of a PDS is another facet of the PDS design, and many research
works address this issue. In particular, [10] presents the PROSIS framework, an evo-
lution of PROSA, where the Staff Holon is replaced by a Simulation. In this isoarchic
architecture, an Ambient Control Entity (ACE), located near the Resource Holon, allows
each I_Holon (informational part of a Product Holon) to call ambient services related
to decision making or communication. Guided by the idea that all distributed intelli-
gent approaches are mimicking nature and human behaviour, several researchers also
explored bio-inspiration to build PDS. For example, [11] defines a control architecture
based on the Viable System Model, called PDS-VSM, which exhibits interesting recursive
properties, as natural structures like living organisms do.
Bio-inspiration has also been widely used in the framework of the third and last
aspect of a PDS, i.e. interactions. Several bio-inspired approaches have been proposed
in the past, such as Ant Colony Optimization and the Firefly Algorithm. [12] proposes a mech-
anism inspired by stigmergy, using the notion of volatile knowledge. Mobile products
can share with each other information (knowledge) about their environment, whose con-
fidence decreases through time. This mechanism allows knowledge about perturbations
to propagate, and the system to return to a normal situation, in a distributed way without a coordinator.
Negotiation approaches are investigated in [13]. The authors propose a negotiation heuristic
based on the notion of critical ratio ((Due_date – current_date) / total shop time remain-
ing). Products negotiate their schedules with other products by exchanging this
value. The collaboration mechanism between Intelligent Products can also be formalized
using Multi-criteria Decision Making (MCDM) techniques. In [10, 14], the collaboration
mechanism is based on AHP/ANP (Analytic Hierarchy Process / Analytic Network
Process).
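The critical-ratio exchange can be illustrated with a short sketch. The function and variable names are ours, not from the cited paper, and the numeric values are purely illustrative.

```python
# Hedged sketch of the critical-ratio value products exchange during negotiation:
# CR = (due_date - current_date) / total shop time remaining.

def critical_ratio(due_date, current_date, remaining_shop_time):
    """CR < 1: the product is behind schedule (urgent);
    CR > 1: the product has slack and may yield its slot."""
    return (due_date - current_date) / remaining_shop_time

# Two products competing for the same machine slot exchange their ratios;
# the one with the smaller ratio is more urgent and wins the negotiation.
cr_a = critical_ratio(due_date=20, current_date=10, remaining_shop_time=12)  # < 1, urgent
cr_b = critical_ratio(due_date=40, current_date=10, remaining_shop_time=15)  # > 1, has slack
winner = "A" if cr_a < cr_b else "B"
```

Exchanging a single scalar per product keeps the negotiation lightweight, which suits embedded product-level controllers.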
PDS have pros and cons. They are highly agile and reactive, and they allow for
a potentially greater involvement of the end customer. However, they are not widely
accepted in industry because of the lack of performance proofs, mainly due to their
myopic behaviour, which leads them to make decisions that are locally efficient but inconsistent
with the global objective. This nevertheless depends highly on the use case and working
conditions. Indeed, performances of a distributed architecture can be as good as with
a centralized driving solution, as stated in [15]. To help PDS to be less myopic, [16]
illustrates how the centralization of data via a discrete-event observer can help to achieve
better decisions. This notion of “discrete-event observer” is an online simulation model
running in parallel with the observed manufacturing system. One can note that this notion
is not so far from the notion of Digital Twin that arose years later.
Another strategy to counteract myopia is to mix predictive scheduling and reactive
control. A state of the art was produced in this respect [17] and several works of the
SOHOMA community address this issue. [18] proposes a PDS employing a scheduling
rule-based evolutionary simulation-optimization strategy to dynamically select the most
appropriate local decision policies to be used by the products when disturbances appear.
The originality is to choose decision policies (i.e. dispatch rules) instead of fixed sched-
ules, allowing more flexibility at the product level. Another problem is to know when to
switch from the predictive scheduling to the reactive one, and when to come back. Since
the decision is not binary, a fuzzy model of the switching mechanism is used in [19,
20]. A last method proposed by the SOHOMA community in the framework of PDS is

to bring some robustness into the scheduling strategies, thanks to operational research
works dealing with scheduling under uncertainties [21].
Systems evolve, and so do PDS, meaning their architecture, configuration and states can
change through time. To react as correctly as possible, a fine understanding of the system
evolution is needed. Giving these learning abilities to a PDS is a central challenge in a
world where manufacturing systems will constantly change and be reconfigured thanks
to Industry 4.0 technologies. It is also the opportunity to transform PDS into Predictive
Manufacturing Systems as defined in [22]. [23] states that the synchronization of physical
and information flows in a PDS implies that large data volumes may be exploited to
create the necessary knowledge and information for product decision-making. The study
illustrates how product information can be processed via neural networks to obtain
shop-floor knowledge, i.e. a function computing the lead time of a product from the
beginning of its manufacturing to the input queue of a bottleneck. This function can then
be integrated into the product memory for further uses. [24] considers that Intelligent
Products should also be able to reuse their past experiences to enhance their decisional
performances. Past experiences are then stored, and Reinforcement Learning (here a
Q-learning algorithm) is used to select the best action a product may take in each
situation.
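The idea in [24] can be illustrated with a minimal tabular Q-learning sketch. The states, actions, rewards and hyperparameter values below are illustrative assumptions, not taken from the cited work.

```python
# Minimal tabular Q-learning sketch: a product stores past experiences in a
# Q-table and learns which action fits each situation. All names/values assumed.
import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1          # learning rate, discount, exploration
ACTIONS = ["route_to_machine_1", "route_to_machine_2"]
Q = defaultdict(float)                          # Q[(state, action)] -> expected return

def choose_action(state):
    """Epsilon-greedy selection: mostly exploit the best known action."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    """Standard Q-learning update from one stored experience."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
```

Each experience tuple (state, action, reward, next state) kept in the product's memory can be replayed through `update`, so the product gradually prefers actions that worked well in similar past situations.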
Some industries are willing to use these techniques and some SOHOMA articles
describe applications of PDS via some case studies in the manufacturing industry [25,
26].
As a conclusion, PDS are advanced manufacturing systems that can lead to better
agility and reactivity, with a better integration of the end customer. The SOHOMA community
explored this area and brought many contributions, from proofs of concept to valuable implemen-
tations in industry and generic framework architectures (PDS architectures, interactions
between products, coupling of centralized and decentralized decision-making, machine
learning and so on).

3.2 Cluster 2: Product Lifecyle Information Management

The other facet of the Intelligent Product is related to the data. It is a cluster as important
as the first one, with many contributions made in this field by the SOHOMA community.
A first important research work is the paper of [27] which retraces the history of the
Intelligent Products in the Supply Chain from 2002 to 2012. This paper cites the original
definition of the Intelligent Product introduced in [1, 28]5 :
An Intelligent Product is a product (or part or order) that has part or all of the
following five characteristics:

1. Possesses a unique identity


2. Is capable of communicating effectively with its environment
3. Can retain or store data about itself
4. Deploys a language to display its features, production requirements etc.
5. Is capable of participating in or making decisions relevant to its own destiny

5 These two citations are not extracted from SOHOMA proceedings.



Two levels of product intelligence are commonly defined: IP level 1 groups features
1 to 3, while IP level 2 covers all five features.
The management of product information all along the product’s lifecycle is then
referred to as Product Lifecycle Information Management (PLIM) and is ensured by Product
Lifecycle Information Management Systems, as detailed in [29]. In the 2000s, the classic
implementation of the intelligent product reflected the developments being made in the
Cambridge AutoID centre. Indeed, a unique ID is stored on a low-cost RFID tag attached
to the product. This ID can be resolved to a network pointer to a linked database and a
decision-making software agent. The information is then available all along the supply
chain or even all along the product lifecycle. Such systems can provide new services in
different phases of the lifecycle, obviously in the manufacturing phase or logistic phase,
but is not limited to. For example, as described in [30], it could be used to provide new
repair services for domestic appliances, by providing appliance lifecycle data and part
designs respectively to diagnostic services and 3D printing services.
PLIM systems are basically distributed data management architectures. In this regard,
several works and members of the SOHOMA community contributed greatly to this
field. Different architectures, messaging protocols and formats have been proposed, as
described in [31]. The EPCIS architecture is one of the well-known distributed data
management architectures, standardized by GS1 and specially adapted to product tracking
in the supply chain [32]. These manufacturing concepts have been also applied in other
domains, as described in [33], where the EPCIS standard has been applied to workforce
management in hospitals, leading to the ‘aTurnos’ cloud-based solution6 . DIALOG is
another architecture proposed by SOHOMA members, based on a multi-agent system
distributed in every actor of a given supply chain. In this architecture, a specific messaging
protocol initially called PMI (Product Messaging Interface) and further named QLM
(Quantum Lifecycle Management) is used. As EPCIS, QLM is now a standard and is
detailed in [34]. This paper also underlines why such messaging standard are needed in
Business-to-Business infrastructures and demonstrates via use cases how flexible QLM
is, compared to the other existing messaging protocols.
The intelligence, and as a result the information, of the product can also reside on the prod-
uct itself. According to [35], product intelligence is not a primary function of a product,
but comes as secondary functions (i.e. communicating, triggering, memorizing) that can
be embedded in the product (also referred to as the “target system”) and also made available online.
Because the activeness is linked with the target system, the secondary functions follow
the product all along its lifecycle from manufacturing to recycling. These functions can
be added or removed, and moved into the target system or online, depending on the phase
requirements [36]. As a result, several applications of activeness concern different
phases of the lifecycle. For example, the work of [37] applies the activeness concept dur-
ing the use phase, in order to give products augmented monitoring and analysing
functions. It has also been used in the logistic phase and applied to smart containers, as
will be described later.
To store the intelligence directly on the product, many different devices can be used,
not only RFID but also micro-computers or wireless sensor nodes. Because these devices
have more computing power and memory than classic RFID tags, they can execute all
6 https://www.aturnos.com/.

or part of the product secondary functions. In [38], the evolution from communicating
products (products equipped with RFID tags) to autonomous products is described.
The authors provide a case study of a flexible manufacturing system where products are
evolving from IP level 1 to IP level 2. Indeed, transport carriers, originally equipped with
RFID tags, receive a miniaturized electronic device composed of a CPU, an RFID reader,
direction actuators, sensors, and an HMI. Works on communicating materials or the
Physical Internet, detailed below, also use Wireless Sensor Nodes (WSNs) to realize the
intelligent product concept.
Communicating materials are materials equipped with micro-electronic components,
either RFID tags [39] or self-powered WSNs embedded into the material
[40]. The interests of such materials are diverse: (a) because of their data storing capacity,
they can convey all information related to design, manufacturing and logistics, useful
during the BOL (Beginning Of Life – design, manufacturing and construction) and the
EOL (End Of Life – dismantlement and recycling) of a product; (b) given their ability to
sense their environment and process-related information, they can also be used during
the MOL (Middle Of Life – exploitation and maintenance) as intelligent sensors, mainly
to perform health monitoring. The material could be either wood, textile or concrete. The
first works dealing with communicating materials are addressing the data dissemination
/ data replication issues in this type of materials. Lately, the work of [41] aims to explore
the monitoring capability of communicating materials, by developing concrete beams
equipped with energy aware WSNs. Indeed, this concrete beam can monitor its status in
nearly real-time.
This last work shows that IPs (communicating materials and so on) can exhibit mon-
itoring functions, and can be aggregated to build a global architecture monitoring the
performance of the whole system. This is another aspect of the Intelligent Product that has
attracted attention in the SOHOMA community. In [42], a multi-agent system is embed-
ded into wireless nodes to manage a wireless data acquisition platform applied to indus-
trial systems. In this architecture, each sink node manages a cluster of wireless sensors.
Each sink node contains three interacting agents respectively responsible for config-
uring/reconfiguring the cluster, aggregating/filtering data and communicating with the
previous agents/other sink nodes. This wireless architecture is illustrated in an oil and
gas refinery. [43] deals with the development of data management systems for fleets
of trains, able to gather, memorize, manipulate and communicate data coming from
equipment. The authors demonstrate the value of holonic semi-heterarchical architectures
where each holon is a product composing the whole system. From this conclusion, they
develop the Surfer Data Management Architecture and apply it to train transportation
[44]. In the same vein, distributed monitoring is explored in [45]. The target system is
a manufacturing shop floor, where each component of the system (resource or product) is
linked to a monitoring agent. These agents then send monitoring data via a monitoring
data stream, which is read, aggregated, and stored by a monitoring controller agent.
The monitoring controller agent can also send data via the monitoring data stream, and
its data may be aggregated by another monitoring controller agent. This approach is
equivalent to the previous architectures and is highly scalable.
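The hierarchical monitoring pattern described above can be sketched as follows. This is a minimal illustration of the idea, not the actual implementation of [45]; all class and attribute names (MonitoringAgent, MonitoringControllerAgent, the averaging step) are hypothetical.

```python
from dataclasses import dataclass
from queue import Queue
from typing import Optional

@dataclass
class Reading:
    source: str     # resource or product the value comes from
    value: float

class MonitoringAgent:
    """Linked to one shop-floor component; pushes raw readings to a stream."""
    def __init__(self, source: str, stream: Queue):
        self.source, self.stream = source, stream

    def report(self, value: float) -> None:
        self.stream.put(Reading(self.source, value))

class MonitoringControllerAgent:
    """Reads a monitoring stream, aggregates/stores, and may republish upward."""
    def __init__(self, stream: Queue, upstream: Optional[Queue] = None):
        self.stream, self.upstream, self.store = stream, upstream, []

    def step(self) -> None:
        batch = []
        while not self.stream.empty():
            batch.append(self.stream.get())
        if batch:
            avg = sum(r.value for r in batch) / len(batch)  # simple aggregation
            self.store.append(avg)                          # local storage
            if self.upstream is not None:                   # hierarchical scaling
                self.upstream.put(Reading("cluster", avg))

# One controller per cluster; controllers themselves feed a higher-level stream,
# which is what makes the scheme scalable.
floor_stream, plant_stream = Queue(), Queue()
MonitoringAgent("drill-1", floor_stream).report(41.0)
MonitoringAgent("drill-2", floor_stream).report(43.0)
ctrl = MonitoringControllerAgent(floor_stream, upstream=plant_stream)
ctrl.step()
print(ctrl.store[0])          # 42.0, the aggregated cluster value
```

Because a controller's output stream has the same shape as an agent's, controllers can be stacked to any depth, which mirrors the scalability claim made above.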
376 W. Derigent et al.

In the SOHOMA community, these works on PLIM have been applied in manufacturing,
the supply chain [46], the agriculture and agri-food domain [47], and the
pharmaceutical [48] and building sectors [49, 50]. Finally, information systems dealing
with intelligent products have been proposed in SOHOMA with a specific importance
given to the distribution of information on the product/on the network. EPCIS, DIALOG,
the activeness concept or the communicating materials are among the main contributions
presented in SOHOMA conferences.

3.3 Cluster 3: Physical Internet


The Physical Internet was proposed in [51] and formally defined as an open global
logistics system leveraging interconnected supply networks through a standard set of
modular containers, collaborative protocols and interfaces for increased efficiency and
sustainability. The Physical Internet has been present in SOHOMA from the beginning.
The first works on this theme can be found in 2011 [52], where the new paradigm was
introduced and a bio-inspired method to build the Open Hub Network was defined.
A first interesting finding is that the concepts of the Physical Internet and the Intelligent
Product were not merged at the beginning, at least in the works presented at SOHOMA.
The first time both concepts were merged in SOHOMA works is in [8], with the main
idea of realizing the notion of a PI-Container (the smart container used in the Physical
Internet paradigm) by applying the activeness concept to a normal container. The article
describes the concept of PI-Container activeness and details the different pieces of infor-
mation and interaction possibilities an active PI-Container should include. A later paper
[53] goes deeper into this concept and proposes a guide to represent and analyze collec-
tives of PI-Containers and their inner interactions. Indeed, a simple PI-Container can
be assembled with other PI-Containers to form a Composite one. Going further into the
study of PI-containers, [54] addresses the problem of composition/decomposition of PI-
Containers. In this approach, the authors consider the PI-Container as a normal container
equipped with a wireless sensor node. A Collective of PI-Containers is then equivalent
to a wireless sensor network and can identify its real composition from information col-
lected on the network. This information combined with an optimization process leads
to a 3D map of the containers composing the collective, called a VoC (Virtualization of
Container).
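The composition-identification idea can be illustrated with a deliberately simplified sketch. In [54] the collective combines network information with an optimization process to build a 3D map (VoC); the sketch below only recovers the grouping of containers into collectives, by treating strong radio links between sensor nodes as edges and computing connected components. The function name and link representation are assumptions for illustration.

```python
from collections import defaultdict

def container_groups(links):
    """links: iterable of (node_a, node_b) pairs with strong radio contact.
    Returns the inferred collectives, one sorted list per group."""
    graph = defaultdict(set)
    nodes = set()
    for a, b in links:
        graph[a].add(b)
        graph[b].add(a)
        nodes |= {a, b}
    groups, seen = [], set()
    for start in sorted(nodes):
        if start in seen:
            continue
        stack, comp = [start], set()        # depth-first traversal
        while stack:
            n = stack.pop()
            if n in comp:
                continue
            comp.add(n)
            stack.extend(graph[n] - comp)
        seen |= comp
        groups.append(sorted(comp))
    return groups

# Containers c1-c3 stacked together, c4-c5 forming a second composite:
links = [("c1", "c2"), ("c2", "c3"), ("c4", "c5")]
print(container_groups(links))   # [['c1', 'c2', 'c3'], ['c4', 'c5']]
```

A real deployment would additionally use link-quality measurements (e.g. signal strength) in the optimization step to position each container within its group.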
A second interesting finding is that concepts from the PDS cluster have also been
applied to the Physical Internet. For example, the PROSIS architecture detailed previ-
ously is first applied in an intralogistics context in [55]. In this work, the authors introduce
the concept of Wireless Holons Network, constituted by mobile holons (shuttles, mov-
ing products) and fixed holons (workstations). Mobile Holons are those equipped with
a WSN mote. As in the classic PROSIS approach, ACEs (Ambient Control Entities)
placed on the workstations can provide ambient services to the mobile holons. Follow-
ing this first proposal, [55, 56] address more directly the Physical Internet and propose
an adaptation of the PROSIS framework to this domain.
As a conclusion, SOHOMA contributions to the Physical Internet relate to the introduction
of the Intelligent Product into the Physical Internet: proposals described in the other
clusters, such as the activeness concept or PDS architectures like PROSIS, have been
applied promisingly to this new field. The introduction of the Intelligent Product is
helping to give the Physical Internet an orientation towards supporting customization
as well as greater efficiency in the supply chain.
Intelligent Products through SOHOMA Prism 377

3.4 Cluster 6: Towards Digital Twins at the Core – One of the Future Trends?

Among all the clusters of interest, cluster 6 is the smallest in terms of number of papers.
This cluster groups keywords related to the virtual representation of the product and
especially the keyword “Digital Twin”, which has emerged recently in the manufacturing
community (thus explaining the size of the cluster). Indeed, the Digital Twin is a new
paradigm in simulation and modelling, defined by [57]7 as “an integrated multi-physics,
multi-scale, probabilistic simulation of a complex product [using] the best available
physical models, sensor updates, etc., to mirror the life of its corresponding twin”.
Moreover, as can be seen in Fig. 3, this cluster is the farthest from the others. This means
that, for the moment, it has the fewest connections with the other clusters in the IP community.
It can be interpreted as a new field of research that is of interest to the IP community. However,
the Digital Twin has a wider spectrum than the IP community, and many other SOHOMA
works and sessions have already addressed this field.
In [58], the author discusses the history of PROSA and presents its evolution toward
ARTI, the Activity Resource Type Instance architecture. ARTI makes a strong separation
between Intelligent Agents and Intelligent Beings, emphasizing the fact that Intelligent
Beings describe what is rather than what is decided. In the framework of Industry 4.0,
the reality of these intelligent beings could be represented by Digital Twins, which then
become the unique “contact persons” to access the world-of-interest. The notion of
Digital Twin thus becomes crucial for HMS (Holonic Manufacturing Systems) in general
and for PDS. One of the first explorations of the connection between ARTI and the
Digital Twin reported in SOHOMA is the work of [59] in the context of semi-continuous
production processes.
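The ARTI separation just described can be sketched in a few lines. This is a hypothetical illustration of the principle, not the ARTI reference implementation: the "intelligent being" (here, a digital-twin-like mirror) only describes what is, while the "intelligent agent" decides, and it reaches the world-of-interest exclusively through the being. All names and the toy decision rule are assumptions.

```python
class ResourceBeing:
    """Intelligent being: mirrors the state of a physical resource (what *is*)."""
    def __init__(self, name: str):
        self.name = name
        self.state = {"temperature": 20.0, "busy": False}

    def observe(self) -> dict:
        # In a real Digital Twin this would be refreshed by sensor updates.
        return dict(self.state)

class ResourceAgent:
    """Intelligent agent: decides what to do; never touches reality directly."""
    def __init__(self, being: ResourceBeing):
        self.being = being          # the unique "contact person"

    def decide(self) -> str:
        obs = self.being.observe()
        if obs["temperature"] > 80.0:
            return "cool-down"
        return "continue" if obs["busy"] else "idle"

agent = ResourceAgent(ResourceBeing("drill-1"))
print(agent.decide())   # 'idle'
```

Keeping the two classes separate means the decision logic can be replaced or tested against the twin without ever coupling it to the physical asset.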
In [60], the authors propose a dynamic adaptation process for M-BOMs (manufacturing
bills of materials) based on Building Information Modelling (BIM) and CPS assets in
the building sector. The use of BIM assets and real-time follow-up based on the CPS
paradigm could be a source of valuable data to support planning and monitoring activities
throughout the building life cycle and pave the way for introducing the digital twin
as the core system for these activities.
The Digital Twin is also beginning to play a role in smart asset management, as underlined
by [60]. The authors propose a framework for the future development of smart asset
management during the operations and maintenance phase, integrating the concept of
Digital Twin. They argue that the future framework should be divided into three layers
(smart asset layer, smart asset integration layer, smart DT-enabled asset management
layer). Compared to “old” PLIM systems, there appears to be a need to store and manage
the lifecycle not only of data and information but also of simulation models such as DTs,
which should be easily connected to real-life data.

7 This citation does not come from the SOHOMA proceedings.



This research theme is still new in SOHOMA, but merging DT and IP is already stressed
in SOHOMA papers as an interesting future field. SOHOMA has contributed to this
merger by proposing adapted control and information management architectures.

4 Conclusions
During the past 20 years, a lot of different works have been carried out in the world of
Intelligent Products and, in particular, over the last 10 years systems based on the Intelligent
Product concept have undoubtedly found a home in the SOHOMA series of workshops. This
paper has retraced the evolution of the connected topics that have been gravitating around
the IP concept for decades through the lens of SOHOMA, and this accurately reflects the fact
that Intelligent Products are a broad concept rather than a specific industrial solution. This
study also lists the IP-related papers produced in the SOHOMA conferences (around 50).
It demonstrates that, for ten years, this notion has been a rich concept for SOHOMA, both
in production control and in data management. The SOHOMA community has participated
in this scientific adventure, as evidenced by the significant share of papers dedicated to
this theme, even if it has been smaller during the last years.
SOHOMA members have contributed to Intelligent Product research in a number of
different areas (referred to as clusters in this paper). Three main clusters have been
identified as the most representative contributions through a bibliographic analysis:
Product-Driven Systems, Product Lifecycle Information Systems and the Physical Internet.
For each cluster, a synthesis of the work done by the community has been provided. One
last and smallest cluster is perhaps related to one of the future trends of IP, namely the
rapid take-up of digital twins in recent times. This concept can be seen as an encompassing
one, as it extends the perimeter of the intelligent product to intelligent “anything” and
pushes for the emergence of new methods (e.g., data science) and new requirements for
real time, observation, and data mining. In addition to these clusters, we have observed
a number of common methods and tools being used in the development of intelligent
product-based approaches: multi-agent systems, traceability approaches, and the
development and deployment of embedded devices.
In the past, the advent of technologies such as RFID tags and WSN helped to con-
cretize the IP concept. In the future, the envisaged development of tools and methods
associated with Industry 4.0, the development of infrastructure for IoT, human-object
integration in industrial operations and the development of increasingly autonomous
capabilities in industrial systems will certainly provide a fantastic playground for
this concept, which will still evolve thanks to the never-ending work of passionate
researchers.

Appendix

See Table 1

Table 1. List of references extracted from SOHOMA Proceedings, ranked by cluster and year

Cluster  Title  Year
1  A JADE Environment for Product Driven Automation of Holonic Manufacturing  2011
1  Myopia of Service Oriented Manufacturing Systems: Benefits of Data Centralization with a Discrete-Event Observer  2011
1  Service Oriented Architecture for Holonic Isoarchic and Multicriteria Control  2011
1  Viable System Model Approach for Holonic Product Driven Manufacturing Systems  2011
1  An Approach to Data Mining for Product-driven Systems  2012
1  Product-Driven Control: Concept, Literature Review and Future Trends  2012
1  An Evolutionary Simulation-Optimization Approach to Product-Driven Manufacturing Control  2013
1  A Model for Manufacturing Scheduling Optimization Through Learning Intelligent Products  2014
1  Coupling Predictive Scheduling and Reactive Control in Manufacturing: State of the Art and Future Challenges  2014
1  Fuzzy Decision-Making Method for Product Holons Encountered Emergency Breakdown in Product-Driven System: An Industrial Case  2014
1  Volatile Knowledge to Improve the Self-adaptation of Autonomous Shuttles in Flexible Job Shop Manufacturing System  2014
1  Application of Measurement-Based AHP to Product-Driven System Control  2016
1  Product Driven Systems Facing Unexpected Perturbations: How Operational Research Models and Approaches Can Be Useful?  2016
1  A Case Study of Intelligent Manufacturing Control Based on Multi-agents System to Deal with Batching and Sequencing on Rework Context  2017
1  A Negotiation Scenario Using an Agent-Based Modelling Approach to Deal with Dynamic Scheduling  2017
1  Empowering a Cyber-Physical System for a Modular Conveyor System with Self-organization  2017
1  Using Analytic Hierarchical Process for Scheduling Problems Based on Smart Lots and Their Quality Prediction Capability  2018
1  An Agent-Based Industrial Cyber-Physical System Deployed in an Automobile Multi-stage Production System  2019
2  Intelligent Products in the Supply Chain - 10 Years on  2011
2  Key Factors for Information Dissemination on Communicating Products and Fixed Databases  2011
2  The Augmentation Concept: How to Make a Product “Active” during Its Life Cycle  2011
2  Assessment of EPCIS Standard for Interoperable Tracking in the Supply Chain  2012
2  Evolution of a Flexible Manufacturing System: From Communicating to Autonomous Product  2012
2  Farm Management Information System as Ontological Level in a Digital Business Ecosystem  2013
2  Integrating Agents and Services for Control and Monitoring: Managing Emergencies in Smart Buildings  2013
2  Proposition of an Analysis Framework to Describe the “Activeness” of a Product during Its Life Cycle (Parts 1 and 2)  2013
2  QLM Messaging Standards: Introduction and Comparison with Existing Messaging Protocols  2013
2  Resource, Service and Product: Real-Time Monitoring Solution for Service Oriented Holonic Manufacturing Systems  2013
2  Adaptive Storage Location Assignment for Warehouses Using Intelligent Products  2014
2  Manufacturing Operations, Internet of Things, and Big Data: Towards Predictive Manufacturing Systems  2014
2  Adaptive storage location assignment for warehouses using intelligent products  2015
2  Centralized HMES with Environment Adaptation for Production of Radiopharmaceuticals  2015
2  End-of-Life Information Sharing for a Circular Economy: Existing Literature and Research Opportunities  2015
2  Improving the Delivery of a Building  2015
2  IoT Visibility Software Architecture to Provide Smart Workforce Allocation  2015
2  Repair Services for Domestic Appliances  2015
2  Sink Node Embedded, Multi-agent Systems Based Cluster Management in Industrial Wireless Sensor Networks  2015
2  Active Monitoring of a Product: A Way to Solve the “Lack of Information” Issue in the Use Phase  2016
2  Communicating Aircraft Structure for Solving Black-Box Loss on Ocean Crash  2017
2  Data Management Architectures for the Improvement of the Availability and Maintainability of a Fleet of Complex Transportation Systems: A State-of-the-Art Review  2017
2  Foundation of the Surfer Data Management Architecture and Its Application to Train Transportation  2017
2  Situation Awareness in Product Lifecycle Information Systems  2017
2  A Holonic Manufacturing Approach Applied to Communicate Concrete: Concept and First Development  2019
3  On the Activeness of Physical Internet Containers  2014
3  Wireless Holons Network for Intralogistics Service  2014
3  On the Usage of Wireless Sensor Networks to Facilitate Composition/Decomposition of Physical Internet Containers  2015
3  The Augmentation Concept: How to Make a Product “Active” during Its Life Cycle  2015
3  Cyber-Physical Logistics System for Physical Internet  2017
3  Integration of Distributed Manufacturing Nodes in Smart Factory  2018
6  ARTI Reference Architecture – PROSA Revisited  2018
6  Embedded Digital Twin for ARTI-Type Control of Semi-continuous Production Processes  2019
6  From BIM Towards Digital Twin: Strategy and Future Development for Smart Asset Management  2019

References
1. Wong, C.Y., Mcfarlane, D., Zaharudin, A.A., Agarwal, V.: The intelligent product driven sup-
ply chain. In: Proceedings IEEE International Conference on Systems, Man and Cybernetics,
pp. 4–6 (2002)
2. McFarlane, D., Sheffi, Y.: The impact of automatic identification on supply chain operations.
Int. J. Logist. Manag. 14(1), 1–17 (2003)
3. Kärkkäinen, M., Ala-Risku, T., Främling, K.: The product centric approach: a solution to
supply network information management problems? Comput. Ind. 52(2), 147–159 (2003)
4. Morel, G., Grabot, B.: Special issue on intelligent manufacturing. Eng. Appl. Artif. Intell.
16(4), 271–393 (2003)
5. McFarlane, D., Giannikas, V., Wong, A.C.Y., Harrison, M.: Product intelligence in industrial
control: Theory and practice. Annu. Rev. Control 37(1), 69–88 (2013)
6. Meyer, G.G., Främling, K., Holmström, J.: Intelligent products: a survey. Comput. Ind. 60(3),
137–148 (2009)
7. Srinivasan, R., McFarlane, D., Thorne, A.: Identifying the requirements for resilient pro-
duction control systems. In: Studies in Computational Intelligence, vol. 640, pp. 125–134.
Springer (2016)

8. Sallez, Y., Montreuil, B., Ballot, E.: On the activeness of physical internet containers. Stud.
Comput. Intell. 594, 259–269 (2015)
9. Trentesaux, D., Thomas, A.: Product-driven control: a state of the art and future trends. IFAC
Proc. 45(6), 716–721 (2012)
10. Dubromelle, Y., Ounnar, F., Pujo, P.: Service oriented architecture for holonic isoarchic and
multicriteria control. Stud. Comput. Intell. 402, 155–168 (2012)
11. Herrera, C., Berraf, S.B., Thomas, A.: Viable system model approach for holonic product
driven manufacturing systems. Stud. Comput. Intell. 402, 169–181 (2012)
12. Adam, E., Trentesaux, D., Mandiau, R.: Volatile knowledge to improve the self-adaptation
of autonomous shuttles in flexible job shop manufacturing system. Stud. Comput. Intell. 594,
219–231 (2015)
13. Mezgebe, T.T., El Haouzi, H.B., Demesure, G., Pannequin, R., Thomas, A.: A negotiation
scenario using an agent-based modelling approach to deal with dynamic scheduling. Stud.
Comput. Intell. 762, 381–391 (2018)
14. Zimmermann, E., El-Haouzi, H.B., Thomas, P., Pannequin, R., Noyel, M.: Using analytic
hierarchical process for scheduling problems based on smart lots and their quality prediction
capability. Stud. Comput. Intell. 803, 337–348 (2019)
15. Raileanu, S., Parlea, M., Borangiu, T., Stocklosa, O.: A JADE environment for product driven
automation of holonic manufacturing. Stud. Comput. Intell. 402, 265–277 (2012)
16. Cardin, O., Castagna, P.: Myopia of service oriented manufacturing systems: benefits of data
centralization with a discrete-event observer. In: Studies in Computational Intelligence (2012)
17. Cardin, O., Trentesaux, D., Thomas, A., Castagna, P., Berger, T., Bril, H.: Coupling predictive
scheduling and reactive control in manufacturing: state of the art and future challenges. Stud.
Comput. Intell. 594, 29–37 (2015)
18. Gaham, M., Bouzouia, B., Achour, N.: An evolutionary simulation-optimization approach to
product-driven manufacturing control. In: Studies in Computational Intelligence, vol. 544,
pp. 283–294. Springer (2014)
19. Li, M., El Haouzi, H.B., Thomas, A., Guidat, A.: Fuzzy decision-making method for product
holons encountered emergency breakdown in product-driven system: an industrial case. Stud.
Comput. Intell. 594, 243–256 (2015)
20. Derigent, W., Voisin, A., Thomas, A., Kubler, S., Robert, J.: Application of measurement-
based AHP to product-driven system control. Stud. Comput. Intell. 694, 249–258 (2017)
21. Aubry, A., Bril, H., Thomas, A., Jacomino, M.: Product driven systems facing unexpected
perturbations: how operational research models and approaches can be useful? In: Studies in
Computational Intelligence (2017)
22. Babiceanu, R.F., Seker, R.: Manufacturing operations, internet of things, and big data: towards
predictive manufacturing systems. Stud. Comput. Intell. (2014)
23. Thomas, P., Thomas, A.: An approach to data mining for product-driven systems. In: Studies
in Computational Intelligence, vol. 472, pp. 181–194. Springer (2013)
24. Bouazza, W., Sallez, Y., Aissani, N., Beldjilali, B.: A model for manufacturing scheduling
optimization through learning intelligent products. Stud. Comput. Intell. 594, 233–241 (2015)
25. Zimmermann, E., El Haouzi, H.B., Thomas, P., Pannequin, R., Noyel, M., Thomas, A.: A
case study of intelligent manufacturing control based on multi-agents system to deal with
batching and sequencing on rework context. In: Studies in Computational Intelligence (2018)
26. Queiroz, J., Leitão, P., Barbosa, J., Oliveira, E., Garcia, G.: An agent-based industrial cyber-
physical system deployed in an automobile multi-stage production system. Stud. Comput.
Intell. 853, 379–391 (2020)
27. McFarlane, D., Giannikas, V., Wong, A.C.Y., Harrison, M.: Intelligent products in the supply
chain - 10 years on. In: Service Orientation in Holonic and Multi Agent Manufacturing and
Robotics, pp. 103–117. Springer (2013)

28. McFarlane, D., Sarma, S., Chirn, J.L., Wong, C.Y., Ashton, K.: Auto ID systems and intelligent
manufacturing control. Eng. Appl. Artif. Intell. 16(4), 365–376 (2003)
29. Derigent, W., Thomas, A.: Situation awareness in product lifecycle information systems. Stud.
Comput. Intell. 762, 127–136 (2018)
30. Cuthbert, R., Giannikas, V., McFarlane, D., Srinivasan, R.: Repair services for domestic
appliances. Stud. Comput. Intell. 640, 31–39 (2016)
31. Derigent, W., Thomas, A.: End-of-life information sharing for a circular economy: existing
literature and research opportunities. Stud. Comput. Intell. 640, 41–50 (2016)
32. Främling, K., Parmar, S., Hinkka, V., Tätilä, J., Rodgers, D.: Assessment of EPCIS standard
for interoperable tracking in the supply chain. In: Studies in Computational Intelligence, vol.
472, pp. 119–134 (2013)
33. Ansola, P.G., García, A., de Las Morenas, J.: IoT visibility software architecture to provide
smart workforce allocation. In: Studies in Computational Intelligence, vol. 640, pp. 223–231
(2016)
34. Kubler, S., Madhikermi, M., Främling, K.: QLM messaging standards: introduction and com-
parison with existing messaging protocols. In: Service Orientation in Holonic and Multi-Agent
Manufacturing and Robotics, vol. 544, pp. 237–256. Springer (2014)
35. Sallez, Y.: The augmentation concept: How to make a product “active” during its life cycle.
Stud. Comput. Intell. 402, 35–48 (2012)
36. Sallez, Y.: Proposition of an analysis framework to describe the “activeness” of a product
during its life cycle part ii: method and applications. In: Studies in Computational Intelligence,
vol. 544, pp. 271–282. Springer (2014)
37. Basselot, V., Berger, T., Sallez, Y.: Active monitoring of a product: a way to solve the “lack
of information” issue in the use phase. Stud. Comput. Intell. 694, 337–346 (2017)
38. Quintanilla, F.G., Cardin, O., Castagna, P.: Evolution of a flexible manufacturing system:
from communicating to autonomous product. In: Studies in Computational Intelligence, vol.
472, pp. 167–180 (2013)
39. Kubler, S., Derigent, W., Thomas, A., Rondeau, É.: Key factors for information dissemination
on communicating products and fixed databases. Service Orientation in Holonic and Multi-
Agent Manufacturing Control, Paris 402, 89–102 (2012)
40. Mekki, K., Derigent, W., Rondeau, E., Thomas, A.: Communicating aircraft structure for
solving black-box loss on ocean crash. In: Studies in Computational Intelligence (2018)
41. Wan, H., David, M., Derigent, W.: A holonic manufacturing approach applied to communicate
concrete: concept and first development. In: Studies in Computational Intelligence. Springer
(2020)
42. Taboun, M.S., Brennan, R.W.: Sink node embedded, multi-agent systems based cluster
management in industrial wireless sensor networks. Stud. Comput. Intell. 640, 329–338 (2016)
43. Trentesaux, D., Branger, G.: Data management architectures for the improvement of the
availability and maintainability of a fleet of complex transportation systems: a state-of-the-art
review. Stud. Comput. Intell. 762, 93–110 (2018)
44. Trentesaux, D., Branger, G.: Foundation of the surfer data management architecture and its
application to train transportation. Stud. Comput. Intell. 762, 111–125 (2018)
45. Morariu, O., Morariu, C., Borangiu, T.: Resource, service and product: real-time monitoring
solution for service oriented holonic manufacturing systems. Stud. Comput. Intell. 544, 47–62
(2014)
46. Tsamis, N., Giannikas, V., McFarlane, D., Lu, W., Strachan, J.: Adaptive storage location
assignment for warehouses using intelligent products. Stud. Comput. Intell. 594, 271–279
(2015)

47. Cojocaru, L.E., Burlacu, G., Popescu, D., Stanescu, A.M.: Farm management information
system as ontological level in a digital business ecosystem. In: Studies in Computational
Intelligence, vol. 544, pp. 295–309. Springer (2014)
48. Răileanu, S., Borangiu, T., Silişteanu, A.: Centralized HMES with environment adaptation
for production of radiopharmaceuticals. Stud. Comput. Intell. 640, 3–20 (2016)
49. Pǎtraşcu, M., Drǎgoicea, M.: Integrating agents and services for control and monitoring:
managing emergencies in smart buildings. Stud. Comput. Intell. 544, 209–224 (2014)
50. Thomson, V., Zhang, X.: Improving the delivery of a building. Stud. Comput. Intell. 640,
21–29 (2016)
51. Montreuil, B.: Toward a physical internet: meeting the global logistics sustainability grand
challenge. Logist. Res. 3(2), 71–87 (2011)
52. Ballot, E., Gobet, O., Montreuil, B.: Physical internet enabled open hub network design for
distributed networked operations. Stud. Comput. Intell. 402, 279–292 (2012)
53. Rahimi, A., Sallez, Y., Berger, T.: Framework for smart containers in the physical internet. In:
Studies in Computational Intelligence, vol. 640, pp. 71–79. Springer (2016)
54. Krommenacker, N., Charpentier, P., Berger, T., Sallez, Y.: On the usage of wireless sensor
networks to facilitate composition/decomposition of physical internet containers. In: Studies
in Computational Intelligence, vol. 640, pp. 81–90. Springer (2016)
55. Pujo, P., Ounnar, F., Remous, T.: Wireless holons network for intralogistics service. Stud.
Comput. Intell. 594, 115–124 (2015)
56. Pujo, P., Ounnar, F.: Cyber-physical logistics system for physical internet. In: Studies in
Computational Intelligence, vol. 762, pp. 303–316 (2018)
57. Glaessgen, E., Stargel, D.: The digital twin paradigm for future NASA and US Air Force vehi-
cles. In: 53rd AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics and Materials
Conference, p. 1818 (2012)
58. Valckenaers, P.: ARTI reference architecture - PROSA revisited. Stud. Comput. Intell. 803,
1–9 (2019)
59. Borangiu, T., Oltean, E., Răileanu, S., Anton, F., Anton, S., Iacob, I.: Embedded digital twin
for ARTI-type control of semi-continuous production processes. Stud. Comput. Intell. 853,
113–133 (2020)
60. Lu, Q., Xie, X., Heaton, J., Parlikad, A.K., Schooling, J.: From BIM towards digital twin:
strategy and future development for smart asset management. Stud. Comput. Intell. 853,
392–404 (2020)
Multi-protocol Communication Tool
for Virtualized Cyber Manufacturing Systems

Pascal André(B), Olivier Cardin, and Fawzi Azzi

LS2N UMR CNRS 6004, University of Nantes and IUT de Nantes, 2, rue de la Houssinière,
44322 Nantes Cedex, France
{Pascal.Andre,Olivier.Cardin,Fawzi.Azzi}@ls2n.fr

Abstract. In service oriented manufacturing systems, entities (such as resources,
people, products, orders) are considered as objects (actors, holons) that exchange
messages to call the services they provide. Some services can be in the cloud
while others are distributed on a wide range of manufacturing resources. This
unifying paradigm hides the complexity of implementations, due to the various
nature of the devices and their providers and also to legacy applications. In order
to handle heterogeneous communications, we propose a practical solution for
multi-protocol communication that fits small manufacturing systems in order to
improve their evolution. We present here a running implementation that is generic
enough to be implemented in various service manufacturing systems. The solution
takes place within the evolution of the Sofal application.

Keywords: Holonic manufacturing systems · Service · Communication ·
Virtualization · Heterogeneous protocols

1 Introduction

Using service orientation for manufacturing systems enables abstraction of the heteroge-
neous nature of the devices, their providers, and legacy applications. In service oriented
manufacturing systems, a key concept is abstraction. Manufacturing entities such as
resources, people, products and orders are considered as objects (e.g. actors or holons)
that exchange messages to call the services provided by entities. From low-level physical
actions to high-level product orders, every process can be seen as a service, making it a
convenient, scalable paradigm for designing manufacturing systems. Moreover, this
scalable paradigm makes it possible to see human processes as well as business processes
as services and to integrate them with the manufacturing process [8, 15, 16].
However, this unifying paradigm hides the complexity of implementations: some
services are located in the cloud while others are distributed on a range of cyber-physical
systems (the manufacturing resources), due to the various nature of the devices and their
providers and also to legacy applications. The bottleneck does not lie in the scalability
of service composition and orchestration [17] but in the communication

F. Azzi—Sincere thanks to Nicolas Vannier, Maxence Coutand, Tom Le Berre and Khaled Amirat
for their active work on the implementation.
c The Author(s), under exclusive license to Springer Nature Switzerland AG 2021
T. Borangiu et al. (Eds.): SOHOMA 2020, SCI 952, pp. 385–397, 2021.
https://doi.org/10.1007/978-3-030-69373-2_27

means that support the service interactions. At the model level, e.g. in SysML [23],
objects call services by sending messages (or signals). At low levels, these can be simple
(remote) method calls in a program, or service invocations through network layers (e.g.
the 7 layers of the OSI model) according to interoperable (or not) communication proto-
cols, including recent IoT devices. Manufacturing software systems, and especially
the digital twins, are based on distributed scalable services that exchange through com-
munication middleware, which can be Remote Procedure Calls (RPC), Object Request
Brokers (ORB), Message Oriented Middlewares (MOM), Enterprise Application Inte-
gration (EAI) frameworks or an Enterprise Service Bus (ESB) solution. Consequently,
the system's interoperability remains a fundamental and still challenging quality of soft-
ware and hardware systems. In addition, the current trend of virtualization and encapsu-
lation of manufacturing resources and controls into cloud networked services highlights
more and more the need for careful management of the overall communication net-
work from the cloud to the equipment.
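Of the middleware families just listed, a Message Oriented Middleware (MOM) is the easiest to illustrate: publishers and subscribers are decoupled through named topics, so heterogeneous services can exchange without knowing each other's location or protocol. The sketch below is a toy broker for illustration only; the class and topic names are hypothetical and not taken from any specific product.

```python
from collections import defaultdict

class MessageBroker:
    """Minimal in-process MOM: routes messages from publishers to subscribers."""
    def __init__(self):
        self._subscribers = defaultdict(list)   # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self._subscribers[topic].append(callback)

    def publish(self, topic, message):
        # Sender and receivers never reference each other directly.
        for callback in self._subscribers[topic]:
            callback(message)

broker = MessageBroker()
received = []
broker.subscribe("orders/new", received.append)
broker.publish("orders/new", {"order_id": 7, "product": "gearbox"})
print(received)   # [{'order_id': 7, 'product': 'gearbox'}]
```

Real MOMs add queuing, persistence and network transport on top of this routing core, but the decoupling principle is the same.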
In order to handle heterogeneous communications, we proposed a practical solu-
tion for multi-protocol communication for small manufacturing systems [2]. The main-
tenance of such systems becomes tricky when the communication statements are
scattered widely throughout the application code. The communication software mainte-
nance of manufacturing systems should support device and machine evolution and
face the technical debt of legacy applications, since technological change never ends.
Software evolution happens during the whole system life cycle and may intro-
duce heterogeneity that will impact the communication middleware. We investigated the
separation of the communication concerns from the other aspects of the application in order
to improve the system's evolution and make it adaptable and reconfigurable to different
contexts (resources, workshop...). We compared various approaches and exhibited a
solution. In this paper, we present a running implementation of the multi-protocol com-
munication tool (MPCT) that is generic enough to be implemented in various service
oriented manufacturing systems.
The paper is organised as follows. We recall the problem statement and we present
the architecture of our solution to handle heterogeneous communication abstraction
in Sect. 2. In Sect. 3 we describe the tool design and discuss implementation issues.
Section 4 illustrates the application of the tool on a part of the Sofal case study. The
applicability of the approach is discussed in Sect. 5. In conclusion, we draw perspectives
for a larger integration and development of the tool in an actual manufacturing context.

2 Background

In [2], we exhibited the need for a Multi-Protocol Communication Tool (MPCT) that
would handle the communication in the interfaces of the holonic manufacturing systems
(HMS) [10, 22], either towards external applications, or even towards holons embedded
in physical resources if the HMS is distributed on several physical assets. In a Service
model, a functionality is implemented by services provided by entities [3]. In our case,
the entities are holons and the general model is formalised as a Service-oriented HMS
(SoHMS) [8, 19]. The entities provide services in their interface and require services
from other entities. Provided services are not necessarily atomic calls and may have
Multi-protocol Communication Tool for Virtualized Cyber Manufacturing Systems 387

a complex behaviour in which other services might be needed (called) and messages
can be exchanged. The service call and return (call back) are also implemented by
synchronous or asynchronous messages.
In these distributed architectures, a main issue is the communication between heterogeneous entities. Indeed, even if this is usually considered a “technological” concern, communication is an essential stake for distributed software. Among the existing architectures, a mixed solution offers a balanced compromise for HMS-based control systems [2]. Considering the interfaces, MPCT is meant to handle (1 ↔ N) communications, i.e. one generic interface on one side and N protocols on the other side. Therefore, the generic and standardized interface has to be defined for the former side. An important task is to handle the fundamental differences between the protocols. For example, an event-based protocol and a push protocol lead to fundamentally different implementations of service messages. We need to aggregate the different behaviours in order to fit the standard interface.
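The 1 ↔ N pattern can be sketched as one generic interface behind which per-protocol adapters are registered. The following Python sketch is purely illustrative (the actual MPCT is written in Java, and all class and method names here are ours, not the tool's):

```python
from abc import ABC, abstractmethod

class ProtocolAdapter(ABC):
    """One adapter per concrete protocol (TCP, MQTT, ...)."""
    @abstractmethod
    def send(self, message: str) -> None: ...
    @abstractmethod
    def receive(self) -> str: ...

class GenericInterface:
    """Single generic interface the HMS talks to; N adapters behind it."""
    def __init__(self):
        self._adapters = {}
    def register(self, name: str, adapter: ProtocolAdapter) -> None:
        self._adapters[name] = adapter
    def send(self, protocol: str, message: str) -> None:
        # The HMS never sees the protocol-specific logic.
        self._adapters[protocol].send(message)

class LoopbackAdapter(ProtocolAdapter):
    """Toy adapter that just stores the last message sent."""
    def __init__(self):
        self.last = None
    def send(self, message):
        self.last = message
    def receive(self):
        return self.last

iface = GenericInterface()
tcp = LoopbackAdapter()
iface.register("tcp", tcp)
iface.send("tcp", "moveProduct R1")
print(tcp.receive())  # -> moveProduct R1
```

Adding support for a new protocol then amounts to registering one more adapter; the generic side of the interface is untouched.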

Fig. 1. Mixed HMS communication architecture

We proposed a hybrid and non-symmetric communication system because the components do not all play the same role in the manufacturing system. In addition to the
central manufacturing control system HMS we distinguish the persistent system DB,
the user interface GUI, the information system linked to strategic management and the
resources. There is no unique broadcasting information system but several co-existing
communication systems. Figure 1 shows a mixed architectural pattern which is a com-
promise between simplicity, adaptability and heterogeneity. The left part corresponds
to the digital twin while the right part belongs to the management system. In the right
part, the communications are peer-to-peer relations since each interaction belongs to
an orthogonal part of the HMS. For example one can use a Data Access Object (DAO)
layer to access databases, a TCP socket to access the application user interface (GUI)
and an Enterprise Service Bus (ESB) to exchange with management applications.
In the left part, we use a hub-and-spoke architecture [1] to handle the variety of
communication protocols of the resources: the Multi-Protocol Communication Tool
388 P. André et al.

(MPCT) component plays the role of mediator (or federation) between the digital twins
which are themselves distributed in both the HMS part and the resource part. The MPCT
offers a generic, modular and configurable standalone application to handle any type of
heterogeneous communication all along the architecture. The objective is to simplify those technical issues as much as possible (decoupling) in order to let the developers focus on the most valuable tasks of the HMS, such as the decision-making algorithms and negotiation patterns.

3 Multi-Protocol Communication Tool Design


The MPCT is designed as a factory of communication managers. The four key elements
of its architecture (see Fig. 1) are detailed in this section.
1 Generic and specific protocols
Due to software or hardware limitations, real manufacturing workshops imply multiple protocols, including TCP, MQTT, HTTP, Modbus TCP, CoAP, OPC UA, XMPP and AMQP [2]. Decoupling the communication part from the HMS enables unifying the communication between HMS and resources (cf. 1 in Fig. 1). The communications between HMS and MPCT use a single generic protocol. A resource message language (RML) defines the interface between the HMS and the resources. As an example, the messages are defined as in Fig. 2.

Fig. 2. Message in HMS
2 Protocol Handler
The protocol handler acts as a translator factory between protocols (Fig. 3). For each
resource-specific protocol, a protocol handler is configured and added to the protocol
factory. For each message, the coordinator process loads the source or target protocol and gets the adequate protocol handler from the factory, which translates the communication into a form suitable for the correspondent.
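A minimal sketch of such a translator factory, with hypothetical handler classes (the real Java classes of the tool differ):

```python
class ProtocolFactory:
    """Maps a protocol name, read from the message metadata, to a handler."""
    def __init__(self):
        self._handlers = {}
    def add(self, protocol, handler_cls):
        # Each resource-specific protocol registers its handler class.
        self._handlers[protocol] = handler_cls
    def handler_for(self, protocol):
        try:
            return self._handlers[protocol]()   # fresh handler instance
        except KeyError:
            raise ValueError(f"no handler configured for {protocol!r}")

# Hypothetical handler classes standing in for real protocol bindings.
class MqttHandler:
    name = "mqtt"

class SocketHandler:
    name = "tcp"

factory = ProtocolFactory()
factory.add("mqtt", MqttHandler)
factory.add("tcp", SocketHandler)

h = factory.handler_for("mqtt")
print(h.name)  # -> mqtt
```

An unknown protocol name fails fast with an explicit error, which is easier to diagnose than a silent drop of the message.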

Fig. 3. Protocol handler architecture



Fig. 4. Example of object collaboration to handle a communication between MPCT and HMS

As an example in Fig. 4, the MPCT interacts with the HMS using TCP-sockets.
The coordinator loads a SocketServer through the ComProtocolLoader, installs a communication handler from the ProtocolFactory and runs it to exchange messages with the HMS.
Recall that HMS uses the generic protocol.
In Fig. 5, when receiving an incoming message from the HMS, the coordinator
extracts the meta-information of the message. If this is the first message to the tar-
get resource, the coordinator queries the configuration manager to get the resource
data, loads the adequate protocol from the ComProtocolLoader, installs a communication
handler from the ProtocolFactory and runs it to exchange messages with the resource,
according to rules of the resource protocol.
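The first-message behaviour described above can be sketched as follows; the configuration and handler objects here are invented stand-ins, not the tool's actual classes:

```python
class Coordinator:
    """Caches one handler per resource, creating it on the first message."""
    def __init__(self, config, factory):
        self._config = config      # resource name -> protocol name
        self._factory = factory    # protocol name -> handler class
        self._handlers = {}        # resource name -> live handler
    def deliver(self, resource, message):
        if resource not in self._handlers:
            # First message to this resource: query the configuration,
            # then build the adequate handler once and keep it.
            protocol = self._config[resource]
            self._handlers[resource] = self._factory[protocol]()
        return self._handlers[resource].send(message)

class FakeMqtt:
    """Stub handler that tags the message with its protocol."""
    def send(self, message):
        return f"mqtt:{message}"

coord = Coordinator({"R2": "mqtt"}, {"mqtt": FakeMqtt})
print(coord.deliver("R2", "moveProduct"))  # -> mqtt:moveProduct
```

Subsequent messages to the same resource reuse the cached handler, so the configuration lookup happens only once per resource.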
3 Message transformation
The protocol handler converts the generic messaging protocol into the resource-specific protocol (in/out) according to each resource's communication requirements. The message transformation service re-formats the message structure from one protocol to another according to user-defined transformations. Transformation libraries must be provided here for standard protocols (Fig. 6).
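A user-defined transformation of this kind can be illustrated as a pair of mapping functions; the generic (RML-like) field names used here are assumptions, not the actual message schema:

```python
import json

def rml_to_mqtt(msg: dict) -> tuple[str, bytes]:
    """Map a generic message onto an MQTT-style (topic, payload) pair."""
    topic = msg["service"]                          # e.g. "moveProduct"
    payload = json.dumps({"target": msg["resource"], "args": msg["args"]})
    return topic, payload.encode()

def mqtt_to_rml(topic: str, payload: bytes) -> dict:
    """Inverse transformation, back to the generic structure."""
    body = json.loads(payload)
    return {"service": topic, "resource": body["target"], "args": body["args"]}

msg = {"service": "moveProduct", "resource": "R2", "args": [3]}
topic, payload = rml_to_mqtt(msg)
assert mqtt_to_rml(topic, payload) == msg   # the round trip is lossless
print(topic)  # -> moveProduct
```

The important property for a transformation library is that each pair of mappings round-trips without loss, so a message can cross the MPCT in either direction.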

Fig. 5. Communication MPCT-Resources

Fig. 6. Message transformation service process



4 Evolvable configurations
The configuration part of the MPCT resources aims to store persistent “communication” data for the different resources. This information is mandatory for the protocol and message translation mentioned above. It includes common data such as the resource name, the protocol, the host and port addresses, and also optional (specific) data like an MQTT topic or a channel. For example, Fig. 7 shows a configuration mapping between the HMS and resource number 2. The HMS connection is based on sockets, as detailed in the collaboration example of Fig. 4. The resource communicates using the MQTT protocol on topic moveProduct. The configuration is currently made persistent in JSON files, which are more portable than databases.

Fig. 7. Examples of resource configuration (JSON format)
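A configuration entry of the kind shown in Fig. 7 might look as follows; the field names are illustrative and do not reproduce the exact schema of the tool:

```json
{
  "resources": [
    { "name": "R1", "protocol": "tcp",  "host": "127.0.0.1", "port": 5001 },
    { "name": "R2", "protocol": "mqtt", "host": "127.0.0.1", "port": 1883,
      "topic": "moveProduct" }
  ]
}
```

Common fields (name, protocol, host, port) are present for every resource, while protocol-specific fields such as the MQTT topic appear only where relevant.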
New protocols may be required. The abstract class ComProtocolLoader includes the common structure shared by the communication protocols already implemented. It groups together all the attributes such as the addresses, names and ports. It also combines the transformation method that calls the Message Transformation service, and runs the module in which the developer has to implement the communication logic of the new protocol, as described in Fig. 8. The ProtocolFactory plays the role of a protocol generator for which an implementation has to fulfil its specification. We get a more effective management of the different protocols, but the counterpart is that we need to write source code inside the MPCT. New facilities are under study to generate this code instead of writing it by hand.

Fig. 8. Adding a new protocol
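Adding a new protocol by specialising an abstract loader can be sketched as follows (a Python stand-in for the Java ComProtocolLoader; method names are ours, not the tool's):

```python
from abc import ABC, abstractmethod

class ComProtocolLoader(ABC):
    """Common structure shared by all protocol loaders (sketch)."""
    def __init__(self, host, port):
        self.host, self.port = host, port
    @abstractmethod
    def open(self):
        """Open the connection; protocol-specific."""
    @abstractmethod
    def exchange(self, message):
        """Send one message; protocol-specific."""

class CoapLoader(ComProtocolLoader):
    """Developer-supplied logic for a hypothetical new protocol."""
    def open(self):
        return f"coap://{self.host}:{self.port}"
    def exchange(self, message):
        # A real loader would serialise and transmit; we just tag it.
        return f"coap<{message}>"

loader = CoapLoader("10.0.0.5", 5683)
print(loader.open())  # -> coap://10.0.0.5:5683
```

Only the two abstract methods have to be filled in; the shared attributes and the call into the transformation service live in the base class, which is what keeps the extension cost low.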
We implemented the MPCT in Java by means of design patterns [9], e.g. Factory, Adapter, Composite State, Proxy, Mediator or Facade, in order to get standard evolvable applications1 . Optimisations can improve performance: for example, when several resources use the same MQTT instance, we group them in a single communication adaptor. We also plan to use MPCT as a middleware for resource-to-resource communications, bypassing the HMS.

1 https://gitlab.univ-nantes.fr/E168727Z/capstone2019.

4 Application on the Sofal Case Study


We illustrate the situation with the SOFAL production line implemented by Gamboa
Quintanilla et al. [7]. The legacy application is composed of three different parts: two
applications (HMI, SOHMS) and the manufacturing workshop (or an emulation). The
resource part of the SOHMS handles the link with the Virtual Logic controller (state
and orders). All the communications were initially implemented at low level by TCP/IP
connections using sockets.
The communication refactoring was introduced in [2]. In addition to the existing TCP sockets, we added a new protocol, MQTT (Message Queuing Telemetry Transport), which was necessary for new resources. Referring to the general architecture of MPCT in Fig. 1, the left part remains as-is because both the persistence (JDBC) and control aspects (TCPSocket) are point-to-point communications by asynchronous messages with mailbox queues. The right part enables adding new resources using the MQTT protocol to the existing ones under TCPSocket, and we developed the communication connectors to include these new resources.
The current proof of concept (POC) is built like a test harness on MPCT using the services and messages of Sofal. Figure 10 illustrates the main classes, with a link to Fig. 9 as a watermark to identify the architectural features. The App class encapsulates the MPCT module. It is bound by: (i) a TCP socket connection to the TestHMSClient class, which represents a mock component of the SoHMS application, (ii) a TCP socket connection to the ResourceUseSocket class, which represents a mock component of the R1 resource, (iii) an MQTT connection to the ResourceUseMQTT class, which represents a mock component of the R1 resource.

Fig. 9. New Sofal message communication architecture
A simple execution scenario is as follows:
1. Configure the messages and protocols of the communication connections.
2. Launch the App application to install the communication support and a MPCTConsole
to observe the communications.
3. TestHMSClient (HMS) provides an order to resource R1, the message is displayed on
the resource console.

Fig. 10. MPCT harness using Sofal messages

4. The resource R1 publishes a message, which is read by TestHMSClient; the message is displayed in the HMS console.
5. TestHMSClient provides an order to resource R2, the message is displayed in the
resource console.
6. The operations are repeated to ensure non-blocking states.

We obtain the screen of Fig. 11. During the communications, each exchange between a resource and the HMS implies two network communication sessions and the conversion of the messages according to the communication protocol configuration. Recall that the messages can be plain text or XML files that must conform to the message structure definitions.
The MPCT application itself is designed as an assembly of four modules. Table 1 provides some software metrics for the four modules of the MPCT project: resourceConfig (rC), messageTransformation (mT), protocolHandler (pH) and mpct-GUI.
The MPCT is planned to be integrated in the SOHMS framework during the next
release of the Sofal product line which includes new kinds of resources.

Fig. 11. MPCT harness execution result

Table 1. MPCT software metrics

Metric                         rC     mT      pH    GUI   Note
Number of Classes               7     13      28      3
Number of Methods              28     46     188     10
Lines of Code                 177    462    1799    127
Number of Attributes           11     28     117     10
Cyclomatic Complexity       4,000  5,077   8,214  3,333   Different paths through the source code
Weighted Methods per Class  4,000  5,538  14,536  3,333   Sum of the method complexities
Tight Class Cohesion        0,293  0,383   0,175  0       Measures the “connection density”

5 Discussion
The topic of Intelligent Manufacturing attracts large scientific interest, aiming at developing innovative control software for the new generation of manufacturing systems. Various trends can be identified, among them intelligent products [14] and Holonic Control Architectures. A common feature of all these approaches is the decentralization of the decision-making process in order to cope as efficiently as possible with the disruptions happening in the system [21]. As a matter of fact, the communication between the entities is a major topic during the development and implementation phases, as the efficiency of the communication has a direct impact on the efficiency of the overall architecture.
Two main aspects have to be dealt with: negotiation and connectivity. Negotiation is at the core of many studies evaluating the best interaction protocols to enhance the global performance of control architectures [5]. Connectivity represents the hardware and software possibilities to connect various elements together, including legacy systems. Connectivity issues are currently gaining importance in the development of innovative control architectures, especially with the development of service-oriented [11], cloud-based [13] or Digital Twin-based architectures [6].

The hybridization between historical industrial communication protocols, IoT protocols [18] and classical computer science protocols in the new infrastructures engenders a new complexity that is difficult to handle in a generic way: the mechanisms, the time constants and the quality of service are major levers that make compatibility, and thus translation, rather difficult. Therefore, this aspect is not very present in the literature, as the various studies generally evaluate the performance of their negotiation solution in a simulated environment, where actual connectivity issues do not show up [4]. Many studies hence rely on the development of a multi-agent system, for example using JADE [20] or ERLANG [12]. Although very efficient for validation and evaluation, these development platforms consider each agent as an autonomous object for which the connection with other agents is already set up. This communication is generally based on a message-passing mechanism, which sometimes happens to be difficult to implement on some technologies.
Therefore, the difficulty is postponed to the actual implementation, with all the
problems that can occur. The impact of MPCT on the development methodology is
important, as it places the connectivity issues back in the design phase, instead of the
implementation one. With this methodology, the evaluation phase can remain in inte-
grated multi-agent development frameworks for simplicity purposes. The difference is
mostly during the deployment of the solution on actual distributed devices. Since the communication protocols are integrated in MPCT, the connectivity between the devices is formalized by the MPCT configuration and is therefore more reliable than a hard-coded solution for each connection. Being able to formalize the connectivity also opens new opportunities for dynamic reconfiguration of the control architectures, so that they can rapidly address new systems.

6 Conclusion

Interoperability is a tricky issue for heterogeneous communications in service-oriented manufacturing systems due to the variety of service and device providers. We proposed a practical solution for multi-protocol communication that makes software maintenance manageable [2]. This solution has been implemented here as a tool. We provided the detailed architecture of that MPCT and implemented a test harness to validate the approach on the Sofal case study using TCP sockets and MQTT.
The next step will be to fully integrate MPCT in the SOHMS framework of the Sofal
product line, not only for resources but also for database, GUI and Cloud services. One
of the main problems the industry is facing for implementing virtualization and Digital
Twin is represented by the existing (and often old) machines and their lack of modern
communication interfaces. A tool enabling a simple adaptation of the protocols between
these assets can be of great value for the implementation of the concept. However, interoperability is not fully dealt with here, at least at the syntactic level. A future evolution could integrate the ontology and syntax issues into the tool.

References
1. An, Y., Zhang, Y., Zeng, B.: The reliable hub-and-spoke design problem: models and algo-
rithms. Transp. Res. Part B Methodolog. 77, 103–122 (2015)
2. André, P., Azzi, F., Cardin, O.: Heterogeneous communication middleware for digital twin
based cyber manufacturing systems. In: Borangiu, T., Trentesaux, D., Leitão, P., Boggino,
A.G., Botti, V.J. (eds.) Proceedings of SOHOMA, Studies in Computational Intelligence,
vol. 853, pp. 146–157. Springer (2019)
3. André, P., Cardin, O.: Trusted services for cyber manufacturing systems. In: Borangiu,
T., Trentesaux D., Thomas, A., Cardin, O. (eds.) SOHOMA, pp. 359–370. Springer IP (2018)
4. Antzoulatos, N., Castro, E., Scrimieri, D., Ratchev, S.: A multi-agent architecture for plug
and produce on an industrial assembly platform. Prod. Eng. Res. Devel. 8(6), 773–781 (2014)
5. Borangiu, T., Raileanu, S., Trentesaux, D., Berger, T., Iacob, I.: Distributed manufacturing
control with extended CNP interaction of intelligent products. J. Intell. Manuf. 25(5), 1065–
1075 (2014)
6. Bottani, E., Cammardella, A., Murino, T., Vespoli, S.: From the cyber-physical system to the digital twin: the process development for behaviour modelling of a cyber guided vehicle in m2m logic. In: XXIII Summer School “Francesco Turco” – Industrial Systems Engineering, pp. 96–102 (2017)
7. Gamboa Quintanilla, F., Cardin, O., L’Anton, A., Castagna, P.: Virtual Commissioning-
Based Development and Implementation of a Service-Oriented Holonic Control for Retrofit
Manufacturing Systems. In: Borangiu, T., Trentesaux, D., Thomas, A., McFarlane, D. (eds.)
SOHOMA, no. 640 in Studies in Computational Intelligence, pp. 233–242. Springer IP
(2016)
8. Quintanilla, F.G., Cardin, O., L’anton, A., Castagna, P.: A modeling framework for manufac-
turing services in service-oriented holonic manufacturing systems. Eng. Appl. Artif. Intell.
55, 26–36 (2016)
9. Gamma, E., Helm, R., Johnson, R., Vlissides, J.: Design Patterns: Elements of Reusable
Object-Oriented Software. Addison-Wesley Longman Publishing Co. Inc, USA (1995)
10. Giret, A., Botti, V.: Engineering holonic manufacturing systems. Comput. Ind. 60(6), 428–440 (2009). Collaborative Engineering: from Concurrent Engineering to Enterprise Collaboration
11. Jiang, P., Ding, K., Leng, J.: Towards a cyber-physical-social-connected and service-oriented
manufacturing paradigm: social manufacturing. Manuf. Lett. 7(Supplement C), 15–21
(2016)
12. Kruger, K., Basson, A.: Erlang-based control implementation for a holonic manufacturing
cell. Int. J. Comput. Integr. Manuf. 30(6), 641–652 (2017)
13. Liu, X.F., Shahriar, M.R., Al Sunny, S.M.N., Leu, M.C., Hu, L.: Cyber-physical manufactur-
ing cloud: Architecture, virtualization, communication and testbed. J. Manuf. Syst. 43(Part
2), 352–364 (2017)
14. McFarlane, D., Sarma, S., Chirn, J.L., Wong, C.Y., Ashton, K.: Auto id systems and intelli-
gent manufacturing control. Eng. Appl. Artif. Intell. 16(4), 365–376 (2003)
15. Moraes, E.C., Lepikson, H.A., Colombo, A.W.: Developing Interfaces Based on Services
to the Cloud Manufacturing: Plug and Produce, pp. 821–831. Lecture Notes in Electrical
Engineering. Springer Berlin Heidelberg (2015)
16. Morariu, C., Morariu, O., Borangiu, T.: Customer order management in service oriented
holonic manufacturing. Comput. Ind. 64(8), 1061–1072 (2013)
17. Papazoglou, M.P.: Service-oriented computing: concepts, characteristics and directions. In:
WISE, pp. 3–12. IEEE Computer Society (2003)

18. Raileanu, S., Borangiu, T., Morariu, O., Iacob, I.: Edge computing in industrial IoT framework for cloud-based manufacturing control. In: 2018 22nd International Conference on System Theory, Control and Computing (ICSTCC), pp. 261–266 (2018)
19. Rodrı́guez, G., Soria, Á., Campo, M.: Artificial intelligence in service-oriented software
design. Eng. Appl. Artif. Intell. 53, 86–104 (2016)
20. Sandita, A.V., Popirlan, C.I.: Developing a multi-agent system in jade for information man-
agement in educational competence domains. Proc. Econ. Finan. 23, 478–486 (2015)
21. Schuhmacher, J., Hummel, V.: Decentralized control of logistic processes in cyber-
physical production systems at the example of ESB logistics learning factory. Proc. CIRP
54(Supplement C), 19–24 (2016)
22. Van Brussel, H.: Holonic Manufacturing Systems, pp. 654–659. Springer, Berlin (2014)
23. Weilkiens, T.: Systems Engineering with SysML/UML: Modeling, Analysis. Design. The
MK/OMG Press, Elsevier Science (2008)
Is Communicating Material an Intelligent
Product Instantiation? Application
to the McBIM Project

H. Wan, M. David, and W. Derigent(B)

Research Centre for Automatic Control, CRAN CNRS UMR 7039, Université de Lorraine,
Campus Sciences, BP 70239, 54506 Vandoeuvre-lès-Nancy, France
{h.wan,m.david,w.derigent}@univ-lorraine.fr

Abstract. Information and communication technologies like Wireless Sensor Networks (WSN) allow creating new objects made of materials that include micro-elements capable of sensing, storing, processing and communicating data. Communicating Materials (CM) is a new paradigm inspired by the Intelligent Product concept. This paper presents a recursive Multi-Agent Framework proposed for handling compositions of Communicating Materials, which are an application of the Janus Effect. The proposed framework, still in development, is illustrated on the McBIM project, which consists in designing a communicating concrete and digital services that can be used by Building Information Modelling (BIM) applications throughout the material's lifecycle. MAS (Multi-Agent System) principles are demonstrated on this construction-industry project to control WSN energy and to illustrate the CM aggregation ability.

Keywords: Communicating materials · Recursive multi-agent system · Wireless sensor network · Digital Twin

1 Introduction and Context

In the framework of the Intelligent Manufacturing Systems (IMS) community, the use
of the Internet of Things gave rise to the concept of intelligent product. Indeed, sub-
stantial information distribution improves data accessibility and availability compared
to centralised architectures. Product information may be allocated both within fixed
databases and/or within the product itself, thus leading to products with informational
and/or decisional abilities, referred as “Intelligent Products”. Many different definitions
of “Intelligent Products” have been proposed. A comparison of these different types is
provided in [1].
In 2010, a new paradigm was proposed in [2], introducing communicating materi-
als, i.e. materials able to communicate with their environment, process and exchange
information, and store data in their own structure. Besides, they also have the capability
to sense their environment and measure their own internal physical states.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021


T. Borangiu et al. (Eds.): SOHOMA 2020, SCI 952, pp. 398–408, 2021.
https://doi.org/10.1007/978-3-030-69373-2_28

The concept has been applied in different works, from different perspectives. Diverse
early prototypes were designed (or simulated) for the needs of the manufacturing and
the construction industries, by spreading micro-electronics devices into a material. The
material can be either wood, textile or concrete. The interests for such materials are
diverse: (a) because of their data storing capacity, they can convey all information related
to design, manufacturing and logistics, useful during the BOL (Beginning Of Life –
design, manufacturing and construction) and the EOL (End Of Life – dismantlement and
recycling) of a building; (b) given their ability to sense their environment and process
related information, they can also be used during the MOL (Middle Of Life – exploitation
and maintenance) as intelligent building sensors, mainly to perform structural health
monitoring. In our works, the inserted devices were either RFID tags [3] or self-powered
wireless sensor networks (WSNs) embedded into the material [4]. Both works deal with
the data dissemination/data replication in this type of materials, which is an issue related
to the second capability. However, these works did not address the issue related to the
first capability, i.e. the composition/decomposition of communicating materials.
The rest of the paper is organized as follows: Sect. 2 details the problem of composi-
tion/decomposition in communicating materials. Indeed, from a holonic perspective, it
is shown that composition/decomposition of communicating materials is similar to the
composition/decomposition of a group of product holons. Section 3 introduces the differ-
ent works existing in the holonic and multi-agent literature around recursive holarchies
and multi-agent systems, sometimes applied to WSN. Section 4 details the approach
used to model the communicating material. Section 5 demonstrates the approach by
instantiating it on the ANR McBIM (Materials Communicating with the BIM) project.

2 The Communicating Material as an Infinite Holarchy of Product Holons

2.1 Definition of a Communicating Material

Although studied for a long time, no clear formal definition of the communicating material has been provided until now; a first definition is proposed hereafter. This definition
helps to understand that a communicating material can be theoretically considered as an
infinite group of Product Holons.
In our definition, the Communicating Material is an Intelligent Product with two
additional capabilities:

• The capability of being intrinsically and wholly communicating: even if the product
undergoes a physical transformation, the resulting pieces shall still be communicating
materials. Two operators of physical transformation are considered: composition and
decomposition. Composition is an operator that gathers 2 to N communicating mate-
rials into one. Decomposition is the inverse operator, that divides one communicating
material into 2 to N different pieces, still communicating materials.

• The capability of managing its own data: the material should be able to manage its own
data according to the events occurring in its environment. For instance, the material
could decide itself to propagate/replicate specific data onto different material parts
because a physical transformation is scheduled, thus avoiding data loss.

As said before, a communicating material is a special type of intelligent product. Therefore, based on the definitions of [5, 6], the communicating material is a product that has some or all of the seven following characteristics:

1. Possesses a unique identity
2. Is capable of communicating effectively with its environment
3. Can retain or store data about itself
4. Deploys a language to display its features, production requirements, etc.
5. Is capable of participating in or making decisions relevant to its own destiny
6. Continuously monitors its status and environment
7. Can remain a communicating material under composition or decomposition.

Characteristics (1) to (5) are inherited from [5], characteristic (6) is from [6]. This
one is important since it underlines the capacity of a product to monitor its own status or
properties. Characteristic (7) is the only one completely dedicated to the communicating
material concept and is mandatory to build a communicating material. Let $P_1$, $P_2$ and $P_3$ be three products (intelligent or not); the composition operator $C$ is then defined as follows (Eq. 1):

$$(P_1, P_2) \xrightarrow{C} P_3 \tag{1}$$

The decomposition operator D is defined as the inverse of C.
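The two operators and the type invariance of characteristic 7 can be sketched in a few lines; this is an illustrative Python model of the definition, not part of the McBIM implementation:

```python
class CommunicatingMaterial:
    def __init__(self, name, parts=()):
        self.name = name
        self.parts = list(parts)   # sub-materials; empty for elementary ones

def compose(*materials):
    """C: (P1, ..., Pn) -> P; the result is again a CommunicatingMaterial."""
    return CommunicatingMaterial("+".join(m.name for m in materials), materials)

def decompose(material):
    """D: inverse of C; yields the sub-materials."""
    return list(material.parts)

p1, p2 = CommunicatingMaterial("P1"), CommunicatingMaterial("P2")
p3 = compose(p1, p2)
assert isinstance(p3, CommunicatingMaterial)   # type invariant (charact. 7)
assert decompose(p3) == [p1, p2]               # D(C(P1, P2)) = (P1, P2)
print(p3.name)  # -> P1+P2
```

The assertions capture the two defining properties: composition closes over the type, and decomposition recovers the original pieces.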

2.2 Communicating Material and Recursive Holarchies


Characteristic 7 is thus invariant under the operators C and D. This underlines a central aspect of communicating materials, i.e. their type is invariant under the composition and decomposition operators. This is a fundamental property of holons in general, as
introduced by [7].
Indeed, our hypothesis is then to consider a communicating material as a Product
Holon, and at the same time as a recursive dynamical holarchy of Product Holons (which
is a straightforward interpretation of the Janus Effect applied on the Product Holon) [8].
This representation of the communicating material is one solution satisfying characteristic 7 defined above. We thus define a communicating material as a holarchy of sub-communicating materials $CM_i^j$, where $i$ is the level of the holarchy the communicating material is attached to, and $j$ the index of the communicating material within this level. Each sub-communicating material can then be decomposed into other communicating materials down to a certain level of decomposition called elementary. The depth of the hierarchy is limited in practice, and the elementary level is the finest granularity of the holarchy. Different representations of a communicating material are illustrated in Fig. 1.

(a) Set representation (b) Graph representation

Fig. 1. Representation of a communicating material
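The recursive holarchy of CM_i^j materials can be modelled as a simple composite structure; the sketch below is illustrative, with class and attribute names of our own choosing:

```python
class CM:
    """Node of the holarchy: a communicating material at level i, index j."""
    def __init__(self, level, index, parts=()):
        self.level, self.index, self.parts = level, index, list(parts)
    def elementary(self):
        """Leaves of the holarchy = the finest granularity."""
        if not self.parts:
            return [self]
        return [leaf for p in self.parts for leaf in p.elementary()]

# CM_1^1 is composed of CM_2^1 and CM_2^2; CM_2^1 itself holds CM_3^1.
cm31 = CM(3, 1)
cm21 = CM(2, 1, [cm31])
cm22 = CM(2, 2)
cm11 = CM(1, 1, [cm21, cm22])
print([(c.level, c.index) for c in cm11.elementary()])  # -> [(3, 1), (2, 2)]
```

Composition adds a level above existing nodes and decomposition removes one, while the elementary materials are simply the leaves of the resulting tree.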

2.3 Problem Statement


This section highlights a correspondence between a communicating material and a hol-
archy of product holons. The main problem is now to precisely describe the inner orga-
nization of this holarchy, as well as the mechanisms needed to dynamically construct
this holarchy when composition and decomposition transformations occur. To do so,
the next section will introduce some works dedicated to recursive multi-agent systems
and recursive holarchies, sometimes applied to Wireless Sensor Nodes (as WSN are an
interesting solution to construct communicating materials). The concepts and notions detailed in these works will help to construct the proposal answering the problem statement.

3 Review of Recursive Holarchies or MAS


In the Holonic and in MAS communities, several attempts were made to address recursion
in different ways. The ANEMONA methodology, developed in [9], introduces the notion
of abstract agent which could be either a classic agent or a MAS (Multi-Agent Systems)
containing abstract agents. In ANEMONA, recursion is used during the design of an
IMS to define iteration after iteration the holarchy and its corresponding levels and sub-
levels. It helps to structure a given holarchy of the future HMS. However, it does not aim
to produce dynamic recursive holarchies, i.e. holarchies that can evolve dynamically via
composition/decomposition mechanisms through time.
In the maintenance domain, (Le Mortellec et al., 2013) introduce a recursive hol-
archy dedicated to product/system monitoring. In this architecture, a generic model of
Product Holon is defined and reused recursively in all the levels of the architecture. Each
holon owns a diagnostic function that processes the data coming from the levels above (devices or other holons) and sends a diagnostic report to the upper levels. This work stresses the need to have diagnostic functions between levels that can be seen as filters, aggregation functions or semantic transformations between holons of two different levels. Here also, dynamicity in the holarchy is neither really required nor described.
[10] defines a recursive holarchy applied to Active Physical-Internet (PI)-containers,
which are intelligent products. Each PI-container can exploit different sources of infor-
mation to support its activeness (embedded static or dynamic data, measurements via
402 H. Wan et al.

sensors, etc.). Composition and encapsulation mechanisms are defined. Encapsulation
refers to the action of encapsulating one container into another of a different type, whereas
the composition mechanism builds a “composite” container from two containers of the
same type. However, the composition mechanism is not formally defined.
The comparison of WSN with MAS is natural, as the two approaches consist of inter-
active entities situated in an environment they can sense and act upon locally. Agents
provide engineers with a higher abstraction level, so that WSN become a useful appli-
cation domain for MAS. With COSA (Coalition Oriented Sensing Algorithm), [11]
proposed to use a dynamic MAS to structure the possible coalitions with two kinds
of agents: Leader and Follower agents. The coalition process goes through all exist-
ing stages and finishes with an agreement between the two agents. The agent initiating
the dialogue assumes the leader role while its neighbour becomes its follower. COSA
endows a network with self-organization capacity. This ability can be used to adapt the
energy consumption of a WSN to changes in the environment and, at the same time,
to fulfil sampling objectives in terms of the quality of the information reported to the
sink. Very few works propose WSN MAS with dynamic capabilities [11], and even fewer
proposals make it possible to manage the different abstraction levels necessary for the
applications using communicating materials.
In [12], a generic recursive multi-agent system (MAS-R) is formalized. The main MAS-R
principles and agents are described in Fig. 2.

Fig. 2. Recursive agent compositions from [12]

Agents a⁰ᵢ are “Elementary” agents, which have a physical applicative part and a
recursive agent part (holon). Agents a¹ᵢ, a²ᵢ and so on represent agents of higher levels
and are individually called “Partial” agents (no physical part).
A “Complete Elementary” agent is composed of its elementary agents and all their
directly related “Partial” agents (from the abstraction levels). A “Composed” agent
gathers a few “Partial” agents from the same abstraction level. Elementary agents are
dynamically added to the holarchy. At the same time, depending on the states of the
agents, composed agents are also automatically created or destroyed respectively via
composition and reduction mechanisms. Composition occurs when one agent reaches a
certain state (detected via observation functions). In that state, it launches a negotiation
protocol with the agents belonging to the same level in order to aggregate the agents
into one composed agent. To generate the new composed agent and its relations with the
other agents, the authors introduce the notion of transformation functions. Indeed, the
VOWELS paradigm [13] states that a MAS can theoretically be decomposed as {Agents,
Environment, Interaction, Organisation}. As a result, four types of transformation func-
tions are introduced, one for each component of the MAS, respectively PA, PE, PI and
PO. PA is an operator grouping agents from one level and associating them to an agent of
the upper level. PE is an operator grouping elements of the environment from one level
into an element of the environment from the upper level. PI is an operator transforming
interactions from agents of a level to interactions to the upper level. Perceptions/actions
and messages of agents of a level N are transformed thanks to PI to perceptions/actions
and messages of agents of level N + 1. PO is an operator that transforms relations
between agents. A relation between agents of level N grouped in different agents of
level N + 1 is transformed by PO into a relation between agents of level N + 1.
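As a rough illustration, the grouping and lifting roles of these operators can be sketched as follows (a minimal sketch: the class and function names are ours, not taken from [12] or [13], and PE, which groups environment elements analogously to PA, is omitted):

```python
# Minimal sketch of the MAS-R transformation operators; all names illustrative.

class Agent:
    """An agent at a given abstraction level of the holarchy."""
    def __init__(self, name, level):
        self.name = name
        self.level = level
        self.members = []  # level-N agents grouped under this level-N+1 agent

def P_A(agents, name):
    """PA: group agents of level N under a new agent of level N + 1."""
    parent = Agent(name, agents[0].level + 1)
    parent.members = list(agents)
    return parent

def P_I(readings):
    """PI: transform level-N interactions into one level-N+1 interaction.
    Here, sensor readings are simply aggregated into their mean."""
    return sum(readings) / len(readings)

def P_O(parent_of, relation):
    """PO: lift a relation between level-N agents to a relation between
    the level-N+1 agents that contain them."""
    a, b = relation
    return (parent_of[a], parent_of[b])
```

For example, two elementary agents grouped by PA yield a level-1 composed agent, and a relation between elementary agents belonging to two different composed agents is lifted by PO to a relation between those composed agents.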
MAS is a framework for designing global control of distributed systems, but it is most
of the time employed to structure only a small part of it (MAS for defining IP behaviour
for control in manufacturing or in logistics, MAS for controlling a specific feature of a
WSN, …). Currently, resource management services in existing WSN MAS solutions
are tightly coupled with applications, and generic resource management services still
need to be developed. CM implies controlling the device network all along the lifecycle
and considering CM capacities to define new services. As demonstrated in Sect. 2, many
new services will depend on the product abstraction level. That is why a recursive and
dynamic holarchy (or agent-based model) is needed to exploit CM capacities. In this
regard, the MAS-R model seems generic enough to be used as a basis to structure
the Holonic Architecture needed for the Communicating Material.

4 Structuration of a Holonic Architecture for the Communicating Material

Section 2 states that a communicating material can be represented by a holarchy of
Product Holons, but details neither the structure of the Holonic Architecture nor the
rules needed to undergo composition/decomposition operations. This is the objective
of the current section.
A communicating material is structured as a holarchy composed of two types of
holons: the elementary material holon and the composed material holon. The elemen-
tary material holon is the one connected with the real material. Indeed, we consider
a communicating material as composed of indivisible elementary material elements,
each equipped with a single electronic device that can have multiple functions (net-
work communication, identification, sensing, …). This electronic device represents the
physical part of an elementary material holon, called the Real Node hereafter (in ref-
erence to the wireless nodes used in some of our communicating material prototypes).
This real node is in relation with a virtual node, called the “Node Agent”, representing
the informational part of the elementary material holon (Fig. 3).

Fig. 3. Structure of an elementary material Holon
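The pairing shown in Fig. 3 can be sketched as follows (an illustrative sketch only; the class and attribute names are ours, and the fixed sensor reading is a placeholder for a real measurement):

```python
from dataclasses import dataclass

# Sketch of an elementary material holon: a physical Real Node (the embedded
# device) paired with its informational Node Agent. Names are illustrative.

@dataclass
class RealNode:
    node_id: str
    functions: tuple = ("communication", "identification", "sensing")

    def sense(self):
        # Placeholder for an actual measurement (e.g. temperature in °C).
        return 21.5

@dataclass
class NodeAgent:
    node: RealNode
    level: int = 0  # elementary holons sit at the lowest abstraction level

    def report(self):
        """Informational part: expose the physical part's data upwards."""
        return {"id": self.node.node_id, "value": self.node.sense()}
```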

An aggregation of material holons is called a composed Material Holon. In this
holarchy, each holon that is not elementary is a composed one. A representation of the
internal structure of a composed material holon is depicted in Fig. 4. A composed material
holon can also be an aggregation of other composed material holons (link denoted by ➀
in the figure). Each material holon belongs to a certain level in the holarchy: the higher
the level in the holarchy, the more abstract the material holon. The informational part of
the composed material holon is a composed agent. It is linked either to other composed
agents or to node agents of elementary material holons. Every agent can communicate
with agents of its own hierarchy level by exchanging classical messages (denoted by ➁ in
the figure).
Links between levels of the hierarchy are abstraction links (denoted by ➂ in the
figure). These are not classical messages but transformation functions, taking informa-
tion from the lower level and transforming it into higher-level information. To illustrate
this proposal, the recursive architecture has been applied to a specific context, the McBIM
project. This application is described in the next section.

Fig. 4. Communicating Material recursive holarchy

5 Construction Industry Application


5.1 McBIM Project Description
The McBIM Project (Material communicating with the BIM - Building Information
Modelling) [14, 15] aims to design a “communicating concrete”. This project is funded
by the French National Research Agency and is coordinated by the CRAN with two other
French laboratories and one company. CRAN works on the network and information
management, LAAS designs the sensing and communicating nodes, LIB studies data
interoperability and all these works are implemented by 360 SmartConnect/FINAO SAS.
The communicating concrete (see Fig. 5) consists of a concrete structure in which
many sensing and communicating nodes are spread. The sensing nodes periodically
monitor the physical parameters (like temperature, humidity …) of the concrete. Com-
municating nodes aggregate the received data and transmit them to remote servers using
BIM standards. Besides, manufacturing data (like physical properties or information
related to manufacturing actors) may also be considered. The behaviour of communicating
concretes may differ along their lifecycle. During the manufacturing phase, the
WSN nodes are inserted and initialized. The communicating concretes periodically (for
example every hour) monitor their physical status and store the physical property infor-
mation and manufacturing actor information. During the construction phase, communicat-
ing concretes are assembled. As communicating concretes arrive, self-organization
is needed to dynamically define a 3D network and achieve energy savings. The concrete
must frequently report its status to ensure construction safety and to update the net-
work organization. When the construction is completed, the 3D static WSN regularly
(every half-day) monitors structural health data (such as cracks, temperature, corrosion,
etc.) to support the maintenance of the building. The communicating concrete element
must last several decades, from the manufacturing phase to the end of the exploitation
phase.
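The phase-dependent behaviour described above can be captured in a small monitoring policy table (a sketch only: the one-hour and half-day periods come from the text, while the construction-phase period and the exact parameter lists are our assumptions):

```python
from datetime import timedelta

# Sketch of a phase-dependent monitoring policy for the communicating
# concrete. Periods marked "assumption" are illustrative placeholders.

MONITORING_POLICY = {
    "manufacturing": {
        "period": timedelta(hours=1),      # "every hour" (from the text)
        "parameters": ["temperature", "humidity"],
    },
    "construction": {
        "period": timedelta(minutes=10),   # assumption: "frequent" reporting
        "parameters": ["temperature", "humidity", "position"],
    },
    "exploitation": {
        "period": timedelta(hours=12),     # "every half-day" (from the text)
        "parameters": ["cracks", "temperature", "corrosion"],
    },
}

def sampling_period(phase):
    """Return the monitoring period the WSN should use in a given phase."""
    return MONITORING_POLICY[phase]["period"]
```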

Fig. 5. Description of the ANR McBIM Project

5.2 Communicating Concrete Control Illustrations

This part shows how we can use the MAS proposal with the physical McBIM ele-
ments. As described in Sect. 5.1, concrete pieces will pass through different phases
(manufacturing, logistics, construction, exploitation); the frequencies at which commu-
nicating concretes produce data must evolve over time. Because McBIM elements can
interoperate with each other, and in order to control the lifetime of the services provided
by communicating concretes, the WSN also has to be reorganized. We focus on energy
control in this section.

Fig. 6. Energy control and visualization of a concrete McBIM piece

We designed a 3D energy-estimation software tool on the NetLogo platform. NetLogo is
a multi-agent programmable modelling environment developed by Northwestern Uni-
versity [16]. We developed this tool to integrate and store sensor values to be
exploited by BIM applications. We model in real time the energetic states of the nodes on
a mock-up representation (Digital Twin feature). Figure 6 gives a view of the produced
software for the different lifecycle phases of a McBIM concrete and the corresponding
hierarchical and recursive MAS. In this application, elementary agents correspond to bits
of concrete in which nodes are inserted; they are called “nodes”. Concrete parts
like beams or walls are made of a composition of nodes and are called Concrete agents.
Following our framework, these agents are composed agents of level 1. When concrete
agents communicate, they can, if conditions are met, be grouped into higher-level agents
called Structure Agents.
In order to anticipate energy problems and to avoid stopping the services of a con-
crete McBIM element, we model the consumed energy at different abstraction levels
(Node and Concrete in this example). The blue colour on a node represents an energy level
between 90% and 100% (all nodes are blue during the initialization and manufacturing
phases). In the construction or exploitation phases, we can follow the remaining energy of
the nodes, represented by a colour gradient. If too many nodes die (black colour) or if BIM
applications need to change the measurement frequency, the Structure agent can define
another configuration for the WSN of the monitored McBIM element.
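The colour coding and the reconfiguration trigger can be sketched as follows (the 90–100% blue band and the black colour for dead nodes come from the text; the 20% dead-node threshold triggering reconfiguration is our assumption):

```python
# Sketch of the energy-monitoring rule of the NetLogo demonstrator: map a
# node's remaining energy to a display colour and let the Structure agent
# decide whether the WSN must be reconfigured.

def node_colour(energy_pct):
    """Colour used to display a node in the mock-up representation."""
    if energy_pct <= 0:
        return "black"      # dead node
    if energy_pct >= 90:
        return "blue"       # 90-100%: fully charged
    return "gradient"       # intermediate levels shown as a colour gradient

def needs_reconfiguration(energies, max_dead_ratio=0.2):
    """Assumption: the Structure agent reconfigures the WSN once more than
    20% of the nodes of the monitored McBIM element are dead."""
    dead = sum(1 for e in energies if e <= 0)
    return dead / len(energies) > max_dead_ratio
```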
The agent architecture has been modelled thanks to the framework describing the
communicating material. Figure 7 illustrates the aggregation ability brought by our
recursive MAS. In a first step, concrete pieces C (green and red), each composed
of Elementary agents (E), have their own life. When Elementary Agents meet, they are
automatically gathered into a Concrete Agent thanks to the PA operator.

Fig. 7. Illustration of concrete pieces aggregation

From that point, the data generated by the sensors are sent to the Concrete Agent thanks to
the PI operator. Indeed, the temperatures monitored by each sensor are aggregated into an
average temperature at the Concrete Agent level. In a second step, the green and red sensor
nodes (S) detect another concrete. In the MAS, this meeting leads to interactions between
the Elementary Agents (E) associated with the different “Composed Agents level 1” (C1 and
C2).
Aggregation rules are shared by every agent (E, C, MC, …) whatever its abstrac-
tion level, and define whether the green and red pieces have to be engaged in a relationship
(cases 2 and 3) or not (case 1: elementary agent interaction only). If they are related, the (C)
agents exchange information to decide on the kind of relationship. The green and red
communicating concretes can be associated (case 2: composed agents association only).
This case can represent a temporary relationship (for example during the storage or transport
phase). In that case, the relation between agents of lower levels is transformed
into a higher-level relation between concrete agents thanks to the PO operator. In a more inten-
sive relationship, the two McBIM pieces can aggregate their networks, their models and
their properties (case 3). This last case creates a new “Composed Agent level 2” (MC)
thanks to the PO operator followed by the PA operator.
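The three cases can be sketched as a single decision step (a sketch only; the predicate deciding the kind of relationship is hypothetical, since the shared aggregation rules are left to be instantiated in future work):

```python
# Sketch of the three meeting cases of Sect. 5.2. The `kind_of_relation`
# predicate stands in for the shared aggregation rules of the agents.

def handle_meeting(c1, c2, kind_of_relation):
    kind = kind_of_relation(c1, c2)
    if kind == "none":                        # case 1: no relationship
        return None
    if kind == "association":                 # case 2: temporary link (PO)
        return ("related", c1, c2)
    if kind == "aggregation":                 # case 3: PO followed by PA
        return ("composed_level_2", (c1, c2))
    raise ValueError(f"unknown relation kind: {kind!r}")
```

In case 3 the returned composite would become the new level-2 "MC" agent aggregating the networks, models and properties of the two pieces.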

6 Conclusion and Perspectives


This work formalizes the “Communicating Material” paradigm as an upgrade of the
“Intelligent Product” concept. A framework to design the product lifecycle monitoring
of this new kind of object is studied. Inspired by holonic and Multi-Agent System
approaches, a recursive MAS able to structure Communicating Materials is proposed.
Aggregation and dynamic abilities are illustrated by an application to the McBIM project,
whose main objective is the design of communicating concretes interacting during their
overall lifecycle with BIM applications. The work of (Hoang, 2012) defines four types of
transformation functions, which can be considered as requirements when designing
any type of recursive and dynamic holarchy (not only for Communicating Materials). To
complete this work, the transformation functions and negotiation protocols still have to be
clearly defined and instantiated on the McBIM project.

Acknowledgements. The authors thank for the financial support from the French National
Research Agency (ANR) under the McBIM project, grant number ANR-17-CE10–0014.

References
1. McFarlane, D., Giannikas, V., Wong, A.C.Y., Harrison, M.: Product intelligence in industrial
control: theory and practice. Annu. Rev. Control 37(1), 69–88 (2013)
2. Kubler, S., Derigent, W., Thomas, A., Rondeau, É.: Problem definition methodology for
the “Communicating Material” paradigm. In: 10th IFAC Workshop on Intelligent
Manufacturing Systems, Lisbon, vol. 10, pp. 198–203 (2010)
3. Kubler, S., Derigent, W., Thomas, A., Rondeau, E.: Embedding data on “communicating
materials” from context-sensitive information analysis. J. Intell. Manuf. 25(5), 1053–1064
(2014)
4. Mekki, K., Derigent, W., Zouinkhi, A., Rondeau, E., Thomas, A., Abdelkrim, M.N.: Non-
localized and localized data storage in large-scale communicating materials: probabilistic and
hop-counter approaches. Comput. Stand. Interfaces, 44, 243–257 (2016)
5. Wong, C.Y., Mcfarlane, D., Zaharudin, A.A., Agarwal, V.: The intelligent product driven sup-
ply chain. In: Proceedings IEEE International Conference on Systems, Man and Cybernetics,
pp. 4-6 (2002)
6. Ventä, O.: Intelligent products and systems, Technol. Theme-Final Report. VTT, Espoo VTT
Publ. p. 304 (2007)
7. Koestler, A.: The ghost in the machine. Hutchinson (1967)
8. Koestler, A.: Janus: A summing up. Bull. At. Sci. 35(3), 4 (1979)
9. Botti, V., Giret, A.: ANEMONA: a Multi-agent Methodology for Holonic Manufacturing
Systems, vol. 53 (2008)
10. Sallez, Y., Montreuil, B., Ballot, E.: On the activeness of physical internet containers. Comput.
Ind. 81, 96–104 (2016)
11. Delgado, C.: Organisation-based co-ordination of wireless sensor networks. University of
Barcelone (2014)
12. Hoang, T.: Un modèle multi-agent récursif générique pour simplifier la supervision de
systèmes complexes artificiels décentralisés. Université de Grenoble, Grenoble, France (2012)
13. Demazeau, Y.: From interactions to collective behaviour in agent-based systems. In:
Proceedings of the 1st. European Conference on Cognitive Science, Saint-Malo (1995)
14. McBIM Consortium: WebPage of the McBIM Project (2018). https://mcbim.cran.univ-lorraine.fr
15. Derigent, W., et al.: Materials communicating with the BIM: aims and first results of the
McBIM project. In: Structural Health Monitoring 2019: Enabling Intelligent Life-Cycle
Health Management for Industry Internet of Things (IIoT), Proceedings of the 12th
International Workshop on Structural Health Monitoring (2019)
16. Wilensky, U.: NetLogo. Center for Connected Learning and Computer-Based Modeling,
Northwestern University, Evanston (1999). https://ccl.northwestern.edu/netlogo/
The Concept of Smart Hydraulic Press

Denis Jankovič(B), Marko Šimic, and Niko Herakovič

Faculty of Mechanical Engineering, University of Ljubljana, Ljubljana, Slovenia


{denis.jankovic,marko.simic,niko.herakovic}@fs.uni-lj.si

Abstract. With the development and application of cyber-physical systems (CPS)
in smart manufacturing, a broad spectrum of possible improvements has emerged,
allowing a big step forward to be made in hydraulic systems. This paper presents
an approach for implementing artificial intelligence (AI) in the concept of a smart
hydraulic press with regard to I4.0 technology. Conceptual solutions for greater
system flexibility and improved blank formability focus on designing a suitable
concept for cyber-physical systems in combination with the digital twin. The main
challenge is to develop a suitable AI-based algorithm in the manufacturing exe-
cution system (MES) so that the system is able to improve the forming process
and avoid disturbances in real time. The concept of visualization and data anal-
ysis based on real-time monitoring of the parameters of a smart hydraulic press is
presented. With continuous quality control of the products, a more sophisticated
system can be achieved. The main advantages to take into account in terms of
hydraulics and Manufacturing as a Service (MaaS) are the new trends in energy
efficiency of systems and rapid automatic tool exchange.

Keywords: Multi-Agent Systems (MAS) · Digital twin · Smart modelling ·
On-line simulation · On-line optimization · Machine learning

1 Introduction
Hydraulics and the fundamentals of fluid mechanics have been known since the seven-
teenth century. With the development of the economy came the demand for faster, more
efficient, flexible and accurate systems. In the last decade, a major step forward has
been made in the modernization of hydraulic components and systems. The utilization
of advanced components in process systems is low due to poor communication within
the system.
The Industry 4.0 (I4.0) framework was proposed in 2010 and has been in the
validation phase for 10 years [1]. With the main idea of networked hydraulics, CPS and
the intelligent network of distributed smart components and systems, which result in
highly efficient automation, became even more important in terms of self-configuration
and self-awareness of the systems themselves. This will increase the flexibility of hydraulic
systems in the direction of agile manufacturing, ready to be implemented
in the smart factory [2].
Frequently used elements in a smart factory are smart products, processes and mate-
rials, connected in the smart network of cyber-physical systems (Smart-NetCPS). The

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021


T. Borangiu et al. (Eds.): SOHOMA 2020, SCI 952, pp. 409–420, 2021.
https://doi.org/10.1007/978-3-030-69373-2_29

design methodology of a smart system, supported by process data processing and
storage, cloud technologies, the Internet of Things (IoT), communication technology,
machine learning, simulation analysis, real data analytics, AI, and smart components
such as actuators and sensors, is shown in Fig. 1 [3–5].

Fig. 1. Industry 4.0 as starting point to design smart hydraulic press

In an autonomous system, humans and smart sensors/actuators are interconnected


via the IoT [6]. With a data acquisition and analysis strategy over time, the system itself
can predict the behaviour of its components and subsystems as well as the entire system
structure. The analysis and evaluation of the data in the cloud (Big Data) highlight the
intelligence of a system that is able to predict and react to failures and problems based
on past events [7]. The data analysis is carried out in a shared cloud that is connected
to the network in real time. Cloud computing tasks process large amounts of data and
provide decision-making services.
In the event of unexpected situations, the autonomous system announces the solution
without stopping the machine. When the process is executed, a simulation model of a
process mirrored in a digital system is performed in parallel, correcting the device and
predicting future activities using AI techniques. Depending on the complexity of the
considered device or system, simulations can be performed in different programming
environments such as Abaqus, Matlab, Simulink or DSHplus.
The hydraulic process and behaviour of the product and hydraulic components can
be predicted in advance [8, 9]. There are different approaches for digital twin analysis:
in some cases, the analytical methodology is sufficient, but in most cases modellers
and real-time simulations have to be used. With the modellers ANSYS, SolidWorks,
NX, Creo or Abaqus it is possible to use finite element methods (FEM) to predict the
deformation of solid components, for example of the press frame [10, 11].

In modern systems, the main task is to ensure proper communication between


devices, control and computing elements in the information environment. Self-learning
algorithms developed on the basis of known past experiences and events distinguish
between the classic and the smart system as described in many research works [11].
With AI the system is enhanced with added value. The selection of elements for any
autonomous system is crucial, as it involves certain measurement methods and parame-
ters, the latter needing to be of adequate accuracy and reliability. However, the parame-
ters must be carefully selected; otherwise the efficiency of an autonomous system with
machine learning ability is reduced [12].
In this paper the concept of smart hydraulic press is presented. CPS consists of
several subsystems including local digital agents that make intelligent decisions based
on machine learning algorithms and improve processes. The connected, multi-step app-
roach handling different intelligence perspectives is presented. Modelling, simulation
(what-if scenarios) and improvement of local subsystems takes place in digital twins. In
order to define the actual state of the process, different models have to be considered.
Process stability, product formability, deformation of the frame/die tool, change of oil
viscosity, effects of friction on cylinder, valves, pump are a few possible subsystems
that can be included in the simulation analysis to predict system behaviour [13]. With
integrated smart sensors, actuators and RFID technology, tracking capabilities and usage
determination in smart forming tools, proper adaptive control can be achieved [2].

2 Smart Hydraulic Press


The stability of hydraulic systems depends on the control method and on the char-
acteristics of the installed components and actuators. Servo hydraulics offers a highly
dynamic response and accurate system corrections. The proposed scheme of the physical
model system is shown in Fig. 2; it includes a hydraulic power unit, servo valve, hydraulic
actuator, sensors and control unit.
A hydraulic cylinder is used as an actuator to convert hydraulic power into mechanical
power. The generated force is measured directly with a force sensor. The pressure sen-
sors are installed on both cylinder chambers and hydraulic power unit. The tool/cylinder
displacement is measured by a position sensor, while the frame deformation can addi-
tionally be measured by strain gages placed on the press frame. Since a servo valve is
installed in the system, a high-pressure filter with 3 µm filtration is installed to ensure
the cleanliness of the oil.
Data acquisition in the system is realized in the graphical user interface NI LabView.
The programmability of the process work cycles is customized according to the product
and forming tool installed. The control of the pressing cycles is accomplished with a servo
linear drive (servo valve Moog D765 and Hanchen hydraulic test cylinder series 320 with
integrated inductive position transducer) and the closed-loop position/pressure/force
control algorithm.
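The closed-loop control mentioned above can be illustrated with a minimal discrete PID sketch (the gains, sampling time and the pure-integrator plant model below are illustrative only, not tuned for the actual servo valve and cylinder):

```python
# Minimal discrete PID controller: an illustration of the kind of closed-loop
# position/pressure/force control referred to in the text.

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Toy position loop: the cylinder is modelled as a pure integrator (dx/dt = u).
pid = PID(kp=2.0, ki=0.5, kd=0.01, dt=0.01)
position = 0.0
for _ in range(2000):                       # 20 s of simulated time
    valve_signal = pid.update(1.0, position)
    position += valve_signal * 0.01         # plant update
```

In the press, the controller output would drive the servo valve; here the loop simply converges to the 1.0 position setpoint.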
The value of the speed cycle for the forming sequence depends on the material
behaviour predicted by simulation and on process monitoring from previous forming
attempts. Based on customized control algorithms, the control agent makes the decision
to auto-correct the control signal for the servo valve and thereby adjust the forming param-
eters. Sudden changes in the measured forming force compared to the predicted force

Fig. 2. Scheme of servo hydraulic press divided into several functional sub-systems

lead to fracture and wrinkling of the product or, in the worst case, to tool damage. The purpose
of an intelligent hydraulic system is to control and self-adjust the blanking force in order
to prevent fracture and wrinkling, which can be determined by monitoring the forming
force or by machine vision.
To ensure constant servo valve inlet pressure conditions, the mission of the control
agent is to regulate the speed of the electric motor that drives the hydraulic pump. The
algorithm monitors the operation of the hydraulic unit and eliminates the influence of
cavitation, while ensuring improved energy efficiency.
The selection of the right forming tool depends on the shape of the product. Based on
the collected data from the product family, the algorithm is able to automatically select
the right tool for the forming operation. The digital twin in the background evaluates
possible scenarios in parallel, allowing the control agent to adjust the control parameters
based on complex computation.

2.1 Smart Hydraulic Press as Multi-agent System


The concept of a smart hydraulic press, integrated as a CPS system into the framework of
a smart factory is shown in Fig. 3. Here, the CPS is considered an execution system that
is connected to the manufacturing execution system (MES), which connects, monitors
and controls complex manufacturing systems and data flows. These systems collect and
exchange information in real time to identify, locate, track, monitor and improve the
production process [14].

Fig. 3. Integration of a hydraulic press process into the smart factory production process [15]

While the deep drawing process is being performed in real time, the digital twin continuously
improves the process in parallel and collects information via digital agents that
make decisions based on smart algorithms. The digital twin stores simulated data on a
server in a local cloud and provides traceability of information. Typically, hydraulic pro-
cesses are controlled by PLC controllers based on an analytical method that describes
the system. A local agent compares the reference with the calculated parameters and
performs compensation in the system using AI-based algorithms.
With this approach, abnormal deviations and anomalies in the system’s behaviour can
be detected in real time. The deviations can be analysed by digital agents which provide
the new control strategy, i.e. perform the adaptive control. System data is collected in a
local cloud. However, agile communication and the connection of the local agents into
a MAS are of great importance [15].

2.2 Digital Twin Platform

The definition of a digital twin is, from our point of view, an integrated prediction of a
scaled physical model with a simulation that defines its functioning with a probability
factor. An example of the digital twin concept of a hydraulic system for a deep drawing
process is shown in Fig. 4. During the execution of the process, a simulation method of a
process mirrored into a digital system is performed in parallel, which corrects the device
and predicts future activity using AI techniques. Synergies between real and simulation
data increase productivity, stability and interaction between components connected to
the virtual world [16].

Fig. 4. Digital twin concept of the hydraulic system

When the simulation is executed with a logical processing technique, the animation
rendering is collected. Meanwhile, when data from sensors and operation history are
collected and transferred via an interface, the virtual digital model integrates several
subjects, defines physical values and provides a prediction of the process cycle with a
degree of certainty. In this way, system deviations are detected, predictive maintenance
is performed in real time and the system’s behaviour is analysed. A suitable interface
between the physical and digital worlds must be established. All digital sensors must be
implemented in the system in combination with logical operations. If necessary, virtual
sensors are used to monitor additional parameters that are not otherwise accessible in
the real environment.
The sensors record the characteristics of the real system, so the acquisition of ana-
logue values must be satisfactory in order for the information network algorithms to be
able to predict the solutions for the hydraulic system under consideration. The simulated
data collected in a digital model is used by an artificial agent which, based on the control
algorithm, makes a final correction and calibration of the input-controlled signal. The
autonomous system is capable of self-detection, self-adaptation, self-organization and
self-decision. The responsibility of the artificial agent is to decide and control param-
eters in the hydraulic CPS based on AI and reconfiguration rules in order to realize
self-improvement. The digital twin is a powerful tool that allows AI to be implemented in
any hydraulic system and provides a better forming process, from diagnosis to prognosis
[11].
The virtual model accurately reflects the state of the physical model in the real world,
making it easier to predict and correct control parameters for better system performance,
in addition to prediction and error detection. The operation of the production process
depends on the boundary conditions of the virtual model in the digital environment and
the ability to process and analyse data in real time [17].
In this way, it is possible to understand the complexity of systems and the unpredictable
problems of devices that cannot be solved with the traditional analytical approach. For
specific products and unpredictable systems such as complex hydraulic servo systems,
the system must first be described and analysed in detail in a virtual environment. This
reduces costs and development time and increases both product quality and the
performance of the production system [18].
The Concept of Smart Hydraulic Press 415

Fig. 5. The concept of expert hydraulic system

Figure 5 shows the digital model proposed for a hydraulic press. It consists of several
important subsystems:

• The input database where all initial parameters are collected as well as new data
provided by the expert system.
• The digital fault-diagnosis system, i.e. the digital twin, where the data from the database
and the process parameters are collected in the cloud platform and used as input for all
simulation processes. Here the improvement loop, executed by advanced algorithms
(digital agents), performs parameter auto-correction and decision-making in real time.
• The visualization system, which is responsible for monitoring the important data defined
by the digital agents. Based on the gathered information describing the system behaviour,
real-time monitoring and control signals are sent back to the system controller to
execute an improved cycle.

The expert system concept can be integrated into the NI LabVIEW environment.
416 D. Jankovič et al.

3 Potential Use of the Presented Methods


3.1 Process Monitoring
The most important data in process monitoring are process parameters gathered from
the sensors’ output. These parameters must be carefully selected, since in the case of
incorrect parameters the efficiency of the autonomous system using machine learning
techniques is reduced. Big data analysis is an important aspect of handling data collected
from machines, products and processes. The processing of seemingly unrelated and
unstructured data and the extraction of useful information for the automatic operation
of the improved system is of crucial importance.
Important information about the process is collected in unpredicted situations, when
the system fails or a malfunction is detected [19]. Usually the data is collected in the
cloud a certain time after the event occurs. In the next step, IoT-related information
tools process large amounts of data and offer self-awareness and self-learning
capabilities to establish an appropriate CPS [20].
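The event-triggered collection described above can be approximated by a simple rolling-statistics check; the sketch below flags samples that deviate strongly from a recent window so they can be uploaded to the cloud. The window size, threshold and function names are illustrative assumptions.

```python
# Hypothetical sketch: flag anomalous sensor samples for event-triggered cloud
# upload using a rolling mean/std window. Thresholds and names are assumptions.
import statistics

def detect_anomalies(samples, window=5, k=3.0):
    """Return indices of samples deviating more than k sigma from the preceding window."""
    flagged = []
    for i in range(window, len(samples)):
        ref = samples[i - window:i]
        mu = statistics.fmean(ref)
        sigma = statistics.pstdev(ref)
        if sigma > 0 and abs(samples[i] - mu) > k * sigma:
            flagged.append(i)
    return flagged
```

A production monitoring system would tune the window and threshold per parameter; this only shows the shape of the check.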

3.2 Quality Control


The quality of the forming process depends on the deformation of the raw material
in one or several stages until the final shape is achieved. Transformation effects that
define the quality of the product are: process parameters, material properties and tool
die shape/wear. The occurrence of wrinkling, cracking or unsuitable surface roughness
can often not be predicted by monitoring the process parameters of the forming process.
Lastly, after the forming process a quality control of the product is necessary to
confirm a proper forming cycle. The inclusion of an artificial vision system with intelli-
gent cameras should be considered in every forming process. Machine vision software
implemented on a regular PC using open source libraries should be able to detect imper-
fections of the final product. A sensor-based monitoring system is used to control die
tool condition and product quality by evaluating the image and sound emission.
In order to process the collected information as fast as possible, binary image processing
and filtering are recommended, because fewer pixels need to be checked. This approach
allows a fully controlled co-development of the process parameter digital twin and
the artificial vision digital twin. The wireless connection of the artificial vision system
can provide information about the product visualised from a certain distance. The
system is durable and easy to maintain; based on known imperfections, the artificial
vision system will be able to recognise defects and communicate within hydraulic
cyber-physical systems [21].
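A minimal sketch of the binary image-processing idea above: threshold a grayscale patch to a 0/1 mask so that only binary values are inspected, then report the fraction of defect pixels. Pure Python for illustration; a production system would use an optimized vision library, and the threshold here is an assumption.

```python
# Illustrative sketch: binarize a grayscale image so defect checks touch only
# 0/1 values, then compute the defect pixel ratio. Threshold is hypothetical.

def binarize(image, threshold=128):
    """Map a 2-D grayscale image (lists of ints 0-255) to a 0/1 defect mask."""
    return [[1 if px < threshold else 0 for px in row] for row in image]

def defect_ratio(mask):
    """Fraction of pixels flagged as defects in a binary mask."""
    total = sum(len(row) for row in mask)
    return sum(sum(row) for row in mask) / total

image = [[200, 200, 40],
         [200, 35, 200],
         [200, 200, 200]]
mask = binarize(image)
print(defect_ratio(mask))  # 2 of 9 pixels fall below the threshold
```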

3.3 Energy Conservation Perspective


Hydraulic systems enable large forming forces during the metal forming process due
to their high power-to-mass ratio. Unfortunately, hydraulic systems are also known for
their poor energy efficiency. In general, hydraulic press operations include fast falling,
pressing with slow falling, pressure maintaining and fast returning. The maximum energy
requirement occurs during the pressing operation, where the sheet metal is formed into
the product.

The vast majority of hydraulic systems still have a constant discharge pump set
to the maximum required load, resulting in unused hydraulic energy being lost in the
fluid circulation process. Energy conservation can be achieved by developing a control
method for adjusting the rotational speed of the servo motor based on the requirements
of the press operation. By improving the load on the hydraulic energy source, the energy
consumption of the system is lower and the forming process is more stable, resulting in
better formability and product quality. However, servo motors have a lower efficiency at
lower loads. A higher load on the drive motor increases energy efficiency.
The design of a suitable simulation model integrated in the digital twin of the
hydraulic power unit and the development of an algorithm to control and improve the
hydraulic energy consumption may lead to corrective measures which set up the ideal
servo motor speed and reduce the level of vibrations in the system [22].
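The phase-dependent speed control described above can be sketched as a simple lookup from press-cycle phase to servo speed instead of running the pump at maximum throughout. The phase names and rpm values below are illustrative assumptions, not measured operating points.

```python
# Hedged sketch of the energy-saving idea: match servo-motor speed to the
# demand of each press phase. All speeds (rpm) are illustrative assumptions.

PHASE_SPEED = {
    "fast_falling": 1800,
    "pressing": 1200,        # highest force demand, moderate flow
    "pressure_holding": 300, # almost no flow needed
    "fast_return": 1500,
}

def servo_speed(phase, default=600):
    """Return the target servo rotational speed for a press cycle phase."""
    return PHASE_SPEED.get(phase, default)

cycle = ["fast_falling", "pressing", "pressure_holding", "fast_return"]
print([servo_speed(p) for p in cycle])  # [1800, 1200, 300, 1500]
```

A real controller would derive these set-points from the load model in the digital twin rather than a static table.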

3.4 Tool Exchange Perspective

With advanced knowledge and additional system capabilities, manufacturing as a
service (MaaS) is a concept that is becoming increasingly popular. Virtualization and
shared use of networked manufacturing infrastructure allow for remote services. Many
companies already offer remote support, while the failure prediction of certain
components is done via IoT in order to monitor system wear. Smart sensors with integrated
self-calibration, self-control and self-analysis are already available on the market [23].
The implementation of the MaaS concept for smart control and predictive, user-
driven maintenance offers additional flexibility and formability for hydraulic press
CPS connected in wireless networks. The forming process is usually intended for large
series; however, with the use of servo hydraulic presses, automated tool changing devices
and self-calibration, it is possible to ensure a rapid initial set-up of the machine with new
forming tools, as shown in Fig. 6.

Fig. 6. Automatic tool change and RFID recognition of the specific tool

By knowing the production plan's dynamic capacity in reconfigurable
manufacturing and using automatic guided vehicles (AGV), the forming tool can be delivered
just-in-time and changed automatically. Each forming tool has its own RFID chip to
automatically identify which tool is currently installed in the press. As a result of the
automatic recognition of the new input material and the new forming tool, an artificial
agent determines the forming parameters and improves the forming cycle. Rapid die tool
change functions make it possible to set up different forming processes and thus achieve
different product designs. The dimensional properties of the parts can be varied with the
same tool and adjusted by tuning the forming parameters.
The wireless management of the cyber-physical system and RFID track-
ing/recognition can be controlled remotely. Installing robots as devices capable of auto-
matically changing die tools can be a very simple and precise solution for positioning
and locking the die tool in the desired placement. Companies using Enterprise Resource
Planning (ERP) software are able to control the production of the desired products based
on demand, i.e. customer orders. Proper storage of historical event data as well as real-
time monitoring of cloud parameters should be considered. The ability to deliver work
instructions and automatic command implementation should be realized with intelligent
algorithms.
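The RFID-driven tool recognition flow above can be sketched as a lookup from tag ID to forming parameters that the artificial agent then applies; all tag IDs, tool names and parameter values below are hypothetical.

```python
# Illustrative sketch (all IDs and parameters hypothetical): on an RFID read,
# look up the installed forming tool so the agent can select its parameters.

TOOL_PARAMS = {
    "RFID-0041": {"tool": "deep-draw die A", "force_kN": 850, "speed_mm_s": 12},
    "RFID-0042": {"tool": "bending die B", "force_kN": 400, "speed_mm_s": 25},
}

def configure_press(rfid_tag):
    """Return forming parameters for the recognised tool, or raise if unknown."""
    try:
        return TOOL_PARAMS[rfid_tag]
    except KeyError:
        raise ValueError(f"Unknown tool tag {rfid_tag!r}: manual set-up required")
```

In practice the table would live in the ERP/MES layer and be updated as new dies are registered; the lookup itself is the whole recognition step.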

4 Conclusions and Future Work

This paper proposes a solution that can be gradually integrated into any smart hydraulic
system to improve its performance. With a modular upgrade, it is necessary to develop
a multi-agent system in the MES layer. By increasing the number of monitoring and
control subsystems, the complexity of the hydraulic CPS increases; the problem of data
acquisition is solved with a programmable system interface, while diverse information
formats require different communication protocols. As described, the mirroring of the
real-time environment to the digital model and the machine learning technique are of
crucial importance. With proper data acquisition and analysis, good decision-making
capabilities can be achieved by artificial agents.
Real-time, immediate failure detection or estimation of die tool wear results in a quick
artificial agent response and prevents further damage. By controlling product quality with
visual and acoustic methods, better results can be obtained; indeed, the research shows
that in many cases cracks and other unwanted defects cannot be detected by monitoring
process parameters alone. The proposed tool change method represents a big step forward
in the development of automatic smart hydraulic systems. A faster company response
time can be achieved introducing industrial robots with actuators of high precision. The
energy saving method is more efficient in a system where many parallel hydraulic presses
use the same energy source. With a lower servo rotation speed, there is less vibration,
making the process more stable.
Future research will focus on the realization of the presented concepts for improving
the performance of the hydraulic CPS. A real prototype based on the proposed design
will be built, and experimental analysis will be performed in order to validate the given
solutions. The main challenge will be the development of AI algorithms that must be
implemented with the help of digital agents to perform the predictive analysis and
influence the control strategy.

The development of the digital twin and its implementation in the real environment will
be tested, and corrections to improve the smart hydraulic system will be made where
necessary.

Acknowledgment. The work was carried out in the framework of the GOSTOP program
(OP20.00361), which is partially financed by the Republic of Slovenia – Ministry of Educa-
tion, Science and Sport, and the European Union – European Regional Development Fund. The
authors also acknowledge the financial support from the Slovenian Research Agency (research
core funding No. (P2–0248)).

References
1. Bauernhansel, T., et al.: WGP-Standpunkt Industrie 4.0. Available via DIALOG (2016).
https://wgp.de/wp-content/uploads/WGP-Standpunkt_Industrie_4-0.pdf. Accessed 14 Apr
2020
2. Tao, F., et al.: Digital twins and cyber-physical systems toward smart manufacturing and
industry 4.0: correlation and comparison. Engineering 5, 653–661 (2019)
3. Barasuol, V., et al.: Highly-integrated hydraulic smart actuators and smart manifolds for
high-bandwidth force control. Front. Robot. AI 5, 137–150 (2018)
4. Schuler (2020). https://www.schulergroup.com. Accessed 14 Apr 2020
5. Bosch Rexroth (2020). https://www.boschrexroth.com/en/xc. Accessed 14 Apr 2020
6. Nord, J.H., et al.: The internet of things: review and theoretical framework. Expert Syst. Appl.
133, 97–108 (2019)
7. Parrott, A., Warshaw, L.: Industry 4.0 and the digital twin, p. 17. Deloitte University Press
(2017)
8. Ferreira, J.A., et al.: Close loop control of a hydraulic press for springback analysis. J. Mater.
Process. Technol. 177, 377–381 (2006)
9. Zhang, Z., et al.: Research on springback control in stretch bending based on iterative
compensation method. Math. Probl. Eng. 2019, Article ID 2025717 (2019)
10. Dion, B.: Creating a Digital Twin for a Pump. Available via DIALOG (2017). https://
www.ansys.com/-/media/ansys/corporate/resourcelibrary/article/creating-a-digital-twin-for-
a-pump-aa-v11-i1.pdf. Accessed 14 Apr 2020
11. Xie, J., et al.: Virtual monitoring method for hydraulic supports based on digital twin theory.
Mining Technol. 128, 77–87 (2019)
12. Mittal, S., et al.: Smart manufacturing: characteristics, technologies and enabling factors.
Proc. Inst. Mech. Eng. Part B: J. Eng. Manuf. 233(5), 1342–1361 (2019)
13. Linjama, M., et al.: High-performance digital hydraulic tracking control of a mobile boom
mockup. In: 10th International Fluid Power Conference, Dresden, Germany, March 2016,
Digital Hydraulics, Paper A-1, pp. 37–48 (2016)
14. Wang, S., et al.: Implementing smart factory of industrie 4.0: an outlook. Int. J. Distrib. Sensor
Netw. 12(1) (2016). ID: 3159805
15. Resman, M., et al.: A new architecture model for smart manufacturing: a performance analysis
and comparison with the RAMI 4.0 reference model. Adv. Prod. Eng. Manag. 14, 153–165
(2019)
16. Tao, F., et al.: Digital twin-driven product design framework. Int. J. Prod. Res. 57, 3935–3953
(2019)
17. Glaessgen, E.H., Stargel, D.S.: The digital twin paradigm for future NASA and U.S. Air
Force vehicles. In: 53rd Structures, Structural Dynamics, and Materials Conference, Hawaii,
April 2012. Special Session on the Digital Twin. NASA Technical Reports Server (2012)

18. Lee, J., et al.: Recent advances and trends in predictive manufacturing systems in big data
environment. Manuf. Lett. 1(1), 38–41 (2013)
19. Rojko, A.: Industry 4.0 concept: background and overview. Int. J. Interact. Mobile Technol.
(iJIM) 11, 77–90 (2017)
20. Alcácer, V., Cruz-Machado, V.: Scanning the Industry 4.0: a literature review on technologies
for manufacturing systems. Eng. Sci. Technol. Int. J. 22, 899–919 (2019)
21. Fillatreau, P., et al.: Sheet metal forming global control system based on artificial vision
system and force-acoustic sensors. Robot Comput. Integr. Manuf. 24, 780–787 (2008)
22. Li, L., et al.: An energy-saving method to solve the mismatch between installed and demanded
power in hydraulic press. J. Cleaner Prod. 139, 636–645 (2016)
23. Meermann, L., et al.: Sensors as drivers of Industry 4.0. Available via DIALOG (2019). https://
assets.ey.com/content/dam/ey-sites/ey-com/de_de/topics/industrial-products/ey-study-sen
sors-as-drivers-of-industry-4-0.pdf?download. Accessed 12 Feb 2020
Distributed Dynamic Measures
of Criticality for Telecommunication
Networks

Yaniv Proselkov(B), Manuel Herrera, Ajith Kumar Parlikad, and Alexandra Brintrup

Department of Engineering, Institute for Manufacturing,
University of Cambridge, Cambridge, UK
{yp289,amh226,aknp2,ab702}@cam.ac.uk

Abstract. Telecommunication networks are designed to route data
along fixed pathways, and so have minimal reactivity to emergent loads.
To service today's increased data requirements, network management
must be revolutionised so as to proactively respond to anomalies quickly
and efficiently. To equip the network with resilience, a distributed design
calls for node agency, so that nodes can predict the emergence of critical
data loads leading to disruptions. This is to inform prognostics mod-
els and proactive maintenance planning. Proactive maintenance needs
KPIs, most importantly probability and impact of failure, estimated by
criticality, which is the negative impact on connectedness in a network
resulting from removing some element. In this paper, we studied criti-
cality in the sense of increased incidence of data congestion caused by
a node being unable to process new data packets. We introduce three
novel, distributed measures of criticality which can be used to predict
the behaviour of dynamic processes occurring on a network. Their per-
formance is compared and tested on a simulated diffusive data transfer
network. The results show potential for the distributed dynamic criti-
cality measures to predict the accumulation of data packet loads within
a communications network. These measures are predicted to be useful
in proactive maintenance and routing for telecommunications, as well as
informing businesses of partner criticality in supply networks.

Keywords: Telecommunication network · Centrality · Nodal criticality ·
Distributed measures

1 Introduction

Telecommunications infrastructures are physical networks that support internet
services by facilitating data transfers between agents. For example, to watch a
film on a streaming service, a user sends a request to the servers that is routed
through the infrastructure network. The streaming service sends the film data
back to the user. Infrastructures are often represented by graphs with agents
c The Author(s), under exclusive license to Springer Nature Switzerland AG 2021
T. Borangiu et al. (Eds.): SOHOMA 2020, SCI 952, pp. 421–432, 2021.
https://doi.org/10.1007/978-3-030-69373-2_30
as nodes, and connections as edges. Network topology affects routing speed -
shorter distances give quicker transfers - and its resilience to disruption. Optimal
network design has received much attention [2], especially for resilience,
particularly in complex communications networks [4]. The impact of a node's
failure on the smooth operation of a network, its criticality, affects resilience.
Criticality estimates the size of the impact of a node's failure on network
connectivity, so it can inform prioritisation in network prognostics for proactive
maintenance; it must therefore be found to inform operations policy. One way
to measure criticality is centrality, which is the importance a node exerts on a
network. Many criticality measures are extensions of centrality measures. Classic
centrality measures include betweenness, eigenvector, and degree centrality. The
first two are centralised, needing each node to take information from all nodes.
Degree centrality requires each node to know only the number of its connected
nodes, called neighbours. This is a distributed nodal measure, and such measures
are the focus of this paper.
In network systems with objects travelling through them, such as telecom-
munications infrastructure, overcongestion of objects, such as data packets, can
cause node failure. This can be expressed as a three-stage process, from generation
to diffusion, and then dissipation [12]. Accurate and quick network con-
trol is important in networks that are working near capacity, as is expected for
backbone networks of the near future [9]. Irregular stress also increases network
criticality, such as the abnormally heavy data traffic in consumer networks due
to home-working during the COVID-19 outbreak. Cascade failures also occur in
regularly functional systems due to random errors [11]. For similar future prob-
lems, there is a clear and present need to develop efficient distributed methods
to maximise resilience.
Distributed measures have less data transmission load for communication
networks than centralised measures. This is because in centralised methods each
node needs information from all nodes, but in distributed methods nodes need
local information. Distribution reduces information packet travel time and num-
ber of sources, so nodes can make decisions more quickly with more freedom, such
as packet routes. This preserves resilience through proactive decision-making.
Assuming criticality is static can lead to problems, since a critical node
in an underused region may affect traffic less than a non-critical node in an
overused region. Thus it is dynamic, waxing and waning with the number of
data packets passing through it. If criticality grows faster than repairs, we must
stop congestion to minimise failure spread, dynamically protecting more critical
nodes. We hence need dynamic measures of criticality. A centralised, not dis-
tributed, dynamic node criticality measure exists. So does a distributed estimate
of the effects of congestion cascades [12], but it needs full network information to
resolve. There is little research into dynamic measures of distributed nodal
criticality. This motivates our work. We take three distributed structural measures
of nodal criticality and augment them with dynamic node weights, representing
node stress.
In Sect. 2, we describe the three measures of distributed criticality and the
validation procedure. In Sect. 3, we analyse which measure most accurately
estimates criticality. In Sect. 4, we discuss our findings, suggest how to apply
them, and outline further research.

2 Methods
We describe three criticality measures for nodes in a network. Each is computed
from local network structural information, and we augment them to dynamically
compute time dependant node states. We then validate them using an augmented
susceptible-infected-susceptible (SIS) model [8] with incremental infection, rep-
resenting congestion spreading in a multiagent system with fixed storage.
We chose these measures since each has the flexibility to incorporate dynamic
node states and belongs to a different measure class. To compute criticality, the
first, local centrality, counts the degrees of local nodes; the second, Wehmuth
centrality, uses local structural measures; and the third, local bridging centrality,
considers possible paths through a local region. We compare how accurately
these three measures estimate criticality, and explain the simulation model.

2.1 Weighted Local Centrality


Local centrality [3] finds how embedded a node is. It is computationally
inexpensive: for a graph G = (V, E), with |V| = n nodes, |E| = m edges, and mean
degree k, it has O(nk²) complexity, less than, say, betweenness centrality, which
has O(kn²) complexity. It also computes centrality in a distributed way, which
is useful for networks with cognitive agents. We first set up notation to explain
the local centrality. For a node, u ∈ V , we denote the set of nodes i edges away
from it as Γi (u) ⊂ V , where when i = 1, Γi (u) is the neighbourhood of u. The
set of nodes at most i edges away from u is denoted as


i
Hi (u) = Γj (u).
j=1

To compute the local centrality of u, denoted CL : V → Z+, first compute its
neighbourhood, Γ1(u). Second, compute the neighbourhoods of each node in Γ1(u).
Then sum up, for each node w in each Γ1(v), v ∈ Γ1(u), the number of nodes in
the set of all nodes at most two edges away from w, such that

CL(u) = Σ_{v∈Γ1(u)} Σ_{w∈Γ1(v)} |H2(w)|.   (1)

We may now incorporate incremental weights. In telecoms infrastructure,
where nodes represent devices such as routers or switches, if data packets arrive
at a rate greater than the node can emit them, they may be queued up, to be
emitted later. To model this, suppose that all nodes have a queue length, or weight.
Then, rather than summing over numbers of nodes, each node is counted the same
number of times as its queue length; these weights are listed along a row vector
c ∈ Rⁿ, where the position of a value corresponds to a node. Similarly, instead of
a set, one can use a binary row vector H_i ∈ {0, 1}ⁿ to represent the set of nodes
at most i edges away from a node. Since both are row vectors, Eq. (1) can be
rewritten as

CL(u) = Σ_{v∈Γ1(u)} Σ_{w∈Γ1(v)} c H2(w)ᵗ.

This has the advantage of giving extra weight to local regions with more
queued data packets, which serves as a useful estimate of criticality over time. To
implement this, renormalise the spread of CL outputs to the range [0, 1], where
being closer to one suggests greater criticality. This preserves both ranking and
scaling.
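Under the definitions above, the weighted local centrality can be sketched with networkx as follows; the inner term sums queue lengths over all nodes within two hops of w, the weighted analogue of |H2(w)|. This is an illustrative implementation, not the authors' code, and the queue values in any usage are assumed.

```python
# Illustrative sketch of the weighted local centrality: for each v in N(u) and
# each w in N(v), add the total queue length of all nodes within two hops of w.
import networkx as nx

def weighted_local_centrality(G, u, queues):
    """Weighted CL(u) from Eq. (1) with |H2(w)| replaced by queue-length sums."""
    total = 0
    for v in G.neighbors(u):
        for w in G.neighbors(v):
            # H2(w): nodes at distance 1 or 2 from w (exclude w itself)
            h2 = nx.single_source_shortest_path_length(G, w, cutoff=2)
            total += sum(queues[x] for x in h2 if x != w)
    return total
```

The outputs can then be renormalised to [0, 1] across nodes, as described above.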

2.2 Weighted Wehmuth Centrality


The number of neighbours is a straightforward but possibly naive way to estimate
criticality. This is degree centrality. The degree of a node u can be denoted as du.
To display the degrees within a vector d, it helps to enumerate the nodes, and for
i ∈ {1, ..., n}, denote the degree of the ith node as di, such that d = (d1, ..., dn).
Mapped to the leading diagonal of a matrix D ∈ Mn(Z+), known as the degree
matrix, we get

D = (di,j), where di,j = di if i = j, and 0 otherwise.
Node degree counts incident edges. In simple graphs, nodes connect to other
nodes at most once. We denote the number of edges between nodes ui and uj as
ai,j, displayed in a matrix A ∈ Mn({0, 1}), the adjacency matrix, where

A = (ai,j), with ai,j = 1 if (i, j) ∈ E, and 0 otherwise.
The degree and adjacency matrices may be used to obtain the Laplacian matrix,
L ∈ Mn(Z), where L = D − A, to find structural network measures comparable
between different networks. This fully captures network topology. Normalising
gives

L_N = D^(−1/2) L D^(−1/2),

whose eigenvalues all lie between 0 and 2, such that 0 = λ1 ≤ . . . ≤ λn ≤ 2.
We can perform spectral analysis on L_N, studying its eigenvector decomposition,
by reducing it into component eigenvectors, each of which has a distinct
eigenvalue. Sorting the eigenvalues along the leading diagonal of a square matrix
gives

Λ = (λi,j), where λi,j = λi if i = j, and 0 otherwise.
The number of connected components in a network is the number of zero
eigenvalues in the spectrum, |{λi : λi = 0}|. The smallest nonzero eigenvalue,
called the algebraic connectivity, shows how well connected a component is
within itself. In connected networks this is the second smallest eigenvalue, λ2.
For node u, the Wehmuth centrality [13], denoted CW(u), finds λ2 of the
induced subgraph of nodes at most h edges away from u, denoted λ2(u). Divide
this by log2(du) to stop non-critical hubs from being ranked too high. If the node
is a leaf it is noncritical. We restrict analysis to h = 2 to find node embeddedness
for its immediate and secondary area. Wehmuth centrality is thus a distributed
measure, each node needing only local structural information to determine
criticality, and is defined as

CW(u) = λ2(u) / log2(du) if du > 1, and CW(u) = ∞ if du = 1.
Let us incorporate time-dependent queue lengths, c, from a dynamic process
on the network, such as data packet movement within a telecoms network.
Wehmuth centrality uses network structure and node degrees, which are
independent of queue lengths. To account for them, redefine the simple graph as
a directed multigraph, so that there may be multiple directed edges between
nodes, with edge counts in Z+ that need not be symmetric. Replace each edge
with two opposite directed edges. For each node ui, multiply its number of
out-edges by ci. Laplacian properties hold for multigraphs, so apply the original
Wehmuth centrality on this new graph, dividing by the log of the out-degree of a
node. We define the directed multigraph degree matrix as Dd = cD and the
directed multigraph adjacency matrix as

Ad = (adi,j) ∈ Mn(Z+), where adi,j = ai,j · cj.

Last, apply the Wehmuth centrality procedure to Dd and Ad, obtaining the
weighted Wehmuth centrality.
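The (unweighted) Wehmuth centrality procedure above can be sketched as follows: take the h = 2 ego subgraph of u, compute λ2 of its normalized Laplacian, and divide by log2(du). This is an illustrative networkx/numpy implementation, not the authors' code; the weighted variant would first rescale the graph by the queue vector c as described.

```python
# Illustrative sketch of Wehmuth centrality: algebraic connectivity of the
# 2-hop ego subgraph's normalized Laplacian, divided by log2 of u's degree.
import math
import networkx as nx
import numpy as np

def wehmuth_centrality(G, u, h=2):
    du = G.degree(u)
    if du <= 1:
        return math.inf  # leaves are treated as non-critical
    sub = nx.ego_graph(G, u, radius=h)            # induced subgraph within h hops
    L = nx.normalized_laplacian_matrix(sub).toarray()
    eigvals = np.sort(np.linalg.eigvalsh(L))      # 0 = lambda_1 <= ... <= lambda_n <= 2
    return eigvals[1] / math.log2(du)             # lambda_2 over log2(degree)
```

Each node can run this on its own neighbourhood, which is what makes the measure distributed.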

2.3 Weighted Local Bridging Centrality


Telecommunications infrastructure routes data packets from source to destination
nodes, as in supply chains, circuits, or complex waterways. These all fail when
paths between nodes are unusable. This motivates a criticality measure that tracks
network pathway disruptions. To describe such a measure, we first define a
network path.
A network path, P ⊂ V , is a sequence of distinct nodes where consecutive
members share edges, such that for all ni , ni+1 ∈ P , (ni , ni+1 ) ∈ E. The shortest
path between two nodes is, when unweighted, a path with the fewest elements
that starts with one and ends with the other. If multiple distinct paths share
the same length and have the minimum number of elements in them, they are
all shortest paths. We denote the number of shortest paths from v to w as
ρv,w : V → Z+ , and the number of shortest paths from v to w that happen to
pass through u as ρv,w (u) : V → Z+ .
Sociocentric betweenness [5], denoted Bs : V → R, tracks pathway disruption
potential, calculating the fraction of shortest paths between all node pairs that
pass through the subject node, defined as

Bs(u) = Σ_{v,w∈V, u≠v≠w} ρv,w(u) / ρv,w.

This measure can be modified into egocentric betweenness, which measures
the betweenness of a region surrounding a node, and then compares the valuations
between nodes in the network. It correlates strongly with sociocentric
betweenness [7], and is computable in a distributed manner. For each node, say
u ∈ V, it measures the betweenness of the induced subgraph of Γi(u), such that

Be(u) = Σ_{v,w∈Γi(u)} ρv,w(u) / ρv,w.   (2)

It compares this value between nodes, where higher values suggest greater
criticality. Both are centrality measures and require augmentation to compute
criticality. We use the bridging coefficient [10], which describes the embedding
of a node within a connected component using local information. It is defined as
the reciprocal of the node's degree, divided by the sum of the reciprocals of its
neighbours' degrees. For a node u, the formula of the bridging coefficient,
β : V → (0, 1], is

β(u) = (1/du) / Σ_{i∈Γ(u)} (1/di) if du > 0, and β(u) = 1 if du = 0.
By multiplying the sociocentric betweenness centrality and the bridging
coefficient, we obtain the sociocentric bridging centrality, CB : V → R, such that

CB(u) = Bs(u) · β(u).   (3)

This can be changed into the local bridging centrality by replacing the
sociocentric betweenness in Eq. (3) with the egocentric betweenness, rewriting it
as

CB(u) = Be(u) · β(u).   (4)
To create a dynamic measure, use dynamic queue lengths in nodes so that
data packet flow affects network criticality, augmenting the local bridging cen-
trality with node weights associated with criticality. This uses both egocentric
betweenness and the bridging coefficient. The bridging coefficient estimates the
likelihood that a node is on a bridge between clusters. This is purely structural, so is
unchanged, but the egocentric betweenness may be naturally extended.
Queues are made of data packets which follow paths, and egocentric betweenness
is a path measure, so we may weight each path by the sum of its nodes’ queue
lengths, which for a given node u ∈ V we denote as cu. For the set of shortest
paths between nodes v and w, denoted Pv,w, achieve this by redefining ρv,w and
ρv,w(u) as

ρv,w = Σ_{P∈Pv,w, u∉P} Σ_{x∈P} cx ;   ρv,w(u) = Σ_{P∈Pv,w, u∈P} Σ_{x∈P} cx.

By inserting the new ρv,w and ρv,w (u) into Eq. (2), and this into Eq. (4), we
obtain the weighted localised bridging centrality.
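Equation (4) can be sketched with networkx as below: the egocentric betweenness of u within its 1-hop ego network, multiplied by the bridging coefficient. This is an illustrative, unweighted implementation, not the authors' code; the weighted variant would replace the shortest-path counts with the queue-length sums above.

```python
# Illustrative sketch of the local bridging centrality of Eq. (4).
import networkx as nx

def bridging_coefficient(G, u):
    """1/d(u) divided by the sum of the reciprocals of neighbour degrees."""
    du = G.degree(u)
    if du == 0:
        return 1.0
    return (1 / du) / sum(1 / G.degree(i) for i in G.neighbors(u))

def local_bridging_centrality(G, u):
    """Egocentric betweenness of u in its 1-hop ego network, times beta(u)."""
    ego = nx.ego_graph(G, u, radius=1)
    be = nx.betweenness_centrality(ego, normalized=False)[u]
    return be * bridging_coefficient(G, u)
```

Because the ego network and neighbour degrees are local information, each node can evaluate this without global knowledge, matching the distributed design goal.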

2.4 Validation Method


The simulation model used to test these measures is based on the SIS model
of disease spread. In it, nodes take one of two states, susceptible, and infected,
respectively S, I ⊂ V . An infected node u ∈ I, according to a rate β Poisson
process, may randomly synchronise with a randomly chosen neighbouring sus-
ceptible node, say, v ∈ Γi (u) ∩ S, and infect it such that v ∈ I. The infected
node u ∈ I may also according to an independent Poisson process of rate γ,
recover and become susceptible, such that u ∈ S. This represents the dynamics
of a disease that does not confer immunity.
To estimate queued data packet accumulations, we give each node a counter
denoting queue length, Q : V → Z+. If a given node's queue reaches the
infection threshold, I, it becomes infectious; that is, for a given node u, if
Q(u) ≥ I then u ∈ I, else if Q(u) < I then u ∈ S. In the augmented SIS, infection
and recovery steps become counter additions and reductions. An infected node
u may, at rate β, increase the counter of a neighbouring susceptible node v
according to the SIS infection dynamics. This represents a data packet moving
from v to u, where u has a full queue, cannot process it, and returns the packet
to v. Nodes may also independently recover at rate γ, reducing their counter
by one according to a Poisson process, representing the resolution of one packet;
no other node's queue grows.
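The augmented SIS dynamics can be sketched as a discrete-time step, with the Poisson rates β and γ approximated by per-step probabilities β·dt and γ·dt. The data layout and function name are illustrative assumptions, not the paper's code.

```python
import random

def sis_queue_step(adj, queue, beta, gamma, threshold, dt=0.1):
    """One discrete-time step of the augmented SIS queue model (a sketch).

    A node is infectious when its queue reaches `threshold`. With
    probability beta*dt it bounces a packet back to a random susceptible
    neighbour (whose queue grows); with probability gamma*dt it resolves
    one of its own packets. `adj` maps each node to a neighbour list.
    """
    new_queue = dict(queue)
    for u in adj:
        if queue[u] < threshold:
            continue                       # susceptible nodes do nothing
        if random.random() < beta * dt:
            susceptible = [v for v in adj[u] if queue[v] < threshold]
            if susceptible:
                new_queue[random.choice(susceptible)] += 1
        if random.random() < gamma * dt and new_queue[u] > 0:
            new_queue[u] -= 1
    return new_queue
```

Setting one rate to zero and dt to one makes the step deterministic, which is convenient for checking the two transitions in isolation.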
We outline the validation method used to measure the accuracy of the
dynamic distributed measures of criticality. We use a Barabasi-Albert preferential
attachment network [1], generated by adding nodes to a base graph, with each new
node attaching m edges where possible, up to the nth node. The augmented SIS is
then run on this network.
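The test network can be generated directly with networkx; the seed and the random queue initialisation below are illustrative choices, not the paper's.

```python
import random
import networkx as nx

# Barabasi-Albert network matching the paper's test configuration:
# n = 15 nodes, m = 3 edges attached per new node.
G = nx.barabasi_albert_graph(n=15, m=3, seed=42)

# Random initial queue lengths, up to the infection threshold I = 3.
queues = {u: random.randint(0, 3) for u in G.nodes}
```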
To find each node’s network impact, we run n simulations. This was coded
in Python 3.8, using the networkx, pandas, numpy, random, matplotlib, math,
scipy, and sklearn packages. Each simulation Si ∈ S, where S is the set of
all simulations, first sets the queue length of node ui ∈ V past its threshold.
This ensures that any failures within the network that occur during
simulation Si can be attributed to attacking node ui . For the runtime of
simulation Si , denoted Ti ∈ T , where T is the set of all runtimes, we have
Ti = {0, ..., Ti }, where the endpoint Ti is either a fixed cutoff point or the
time at which there are no more queued data packets.
428 Y. Proselkov et al.

Each measure is node and time dependent, so varying the node and time at
which it is measured changes the value computed by the measure. This time
dependency is because the network model is dynamic. To compare the different
measures, we must aggregate across one of the dimensions. We are interested in
the performance of each measure for all nodes as time progresses, so for each
dynamic measure per simulation, we sum over all nodes at each timestep and
return an aggregated measure.
Suppose the real network criticality can be captured by the sum of the node
queues over a short future time horizon. This is because each measure in this
paper computes the impact of failure that a node poses to the system at a given
time. This is represented in this model by the node infecting neighbouring nodes,
increasing their queue lengths. Since this increases the likelihood of failure of
neighbouring nodes, which would occur after some random time, the impact of a
node's failure is delayed, and is repaired after another delay, assuming repairs happen.
This results in a peak of total queued data packets which we claim to occur
approximately at the half life of an infectious node, assuming independent node
lifespans. Using a mean field approximation, the future time window is defined
as

$$t_f = \left(\frac{1}{\gamma}\right)^{I/2}.$$
Real criticality is aggregated by summing over all nodes at a timestep and
at most t_f steps into the future. No computed measure is defined outside of
T_i^(f), so we only analyse the shorter timescale T_i^(f) ∈ T^(f), where T^(f)
is the set of reduced runtimes, such that T_i^(f) = {0, ..., T_i − t_f}. With
queue length, Q, this gives the set of all dynamic node attributes,
C = {Q, C_L, C_W, C_B}. Let us denote

$$B^{(c)}_{t,i,u} : T^{(f)}_i, S, V, C \to \mathbb{R}$$

as the instance of a dynamic node attribute c ∈ C for node u ∈ V at time
t ∈ T_i^(f) for simulation S_i. We define the aggregated measure as

$$A^{(c)}_{t,i} =
\begin{cases}
\displaystyle\sum_{u \in V} B^{(c)}_{t,i,u}, & c \in \{C_L, C_W, C_B\} \\[2ex]
\displaystyle\sum_{u \in V} \sum_{\tau=t}^{t+t_f} B^{(c)}_{\tau,i,u}, & c = Q.
\end{cases}$$
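The aggregated measure A_{t,i}^{(c)} can be sketched as follows, assuming a nested time-then-node layout for the attribute values B (an illustrative layout, not the authors' code):

```python
def aggregate(B, t, nodes, c, t_f):
    """Aggregated measure A_{t,i}^{(c)} for one simulation (a sketch).

    `B[c][t][u]` holds attribute c of node u at time t. Centrality
    attributes are summed over all nodes at time t only; the queue
    attribute 'Q' is additionally summed t_f steps into the future.
    """
    if c == "Q":
        return sum(B["Q"][tau][u]
                   for tau in range(t, t + t_f + 1) for u in nodes)
    return sum(B[c][t][u] for u in nodes)
```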

For comparative analysis, we normalise A^(c)_{t,i} with respect to t for each
attribute and simulation using min-max feature scaling, generating
A^{*(c)}_{t,i} : {T_i, S_i, V, C} → [0, 1]. We then compute the mean squared
error (MSE) across time for A^{*(C_L)}_{t,i}, A^{*(C_W)}_{t,i}, and
A^{*(C_B)}_{t,i}, versus the normalised aggregated queue length, A^{*(Q)}_{t,i},
and denote it as

$$M^c_i = \mathrm{MSE}\left(A^{*(c)}_i\right), \quad c \in \{C_L, C_W, C_B\}.$$
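The min-max scaling and MSE comparison can be sketched with numpy (an illustrative helper, not the paper's code):

```python
import numpy as np

def normalised_mse(measure, reference):
    """Min-max scale two aggregated time series to [0, 1], then return
    their mean squared error, as in the M_i^c comparison (a sketch)."""
    def scale(x):
        x = np.asarray(x, dtype=float)
        span = x.max() - x.min()
        return (x - x.min()) / span if span > 0 else np.zeros_like(x)
    a, b = scale(measure), scale(reference)
    return float(np.mean((a - b) ** 2))
```

Two series with the same shape scale to identical curves and give an error of zero; perfectly opposed series give an error of one.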

We find the mean for all simulations for each measure, getting the accuracy
to which each measure estimates the criticality of each node within the network,
giving
$$\bar{M}^c = \frac{\sum_{i=0}^{n} M^c_i}{n}, \quad c \in \{C_L, C_W, C_B\}.$$
We analyse this procedure for a network model with n = 15 nodes, edge
attachment number m = 3, infection rate β = 0.9, recovery rate γ = 0.5, and
infection threshold I = 3. We simulate a simple network at and beyond capacity.
Mean simulation runtime was 4. Network G is visualised in Fig. 1. Node size
corresponds to the queue length Q of each node, which has been assigned randomly.

Fig. 1. A Barabasi-Albert network G. Node count n = 15, edge attachment number
m = 3, nodes randomly weighted with queue lengths. The green node u_1 seeds
simulation S_1, and the blue nodes u_2, u_3 seed S_2, S_3. Arrows and dashed rings
mark possible congestion candidates per timestep. All nodes have a corresponding
simulation.

3 Results and Discussion

In Table 1 we explore the output of simulation S1 , initialised from the green node
in Fig. 1. It shows that weighting measures with queue lengths better tracks the
progression of data packets within the network. The aggregated queue length is
forward looking, suggesting that the dynamic criticality measures detect risky
nodes.
This is seen for the dynamic measures in Fig. 2, which is normalised for compar-
ative inspection. This is, however, information for only a single simulation
instance, and it is not obvious which measure more closely estimates future
progression. Combining the MSEs of each measure from queue length across all
simulation instances yields the average MSE, denoted M̄^c.
Results show that dynamic local centrality performs best, with M̄^{C_L} = 0.102,
followed by dynamic bridging centrality, with M̄^{C_B} = 0.162; dynamic
Wehmuth centrality is the worst, with M̄^{C_W} = 0.197. This may be because the
model estimates dynamics via epidemic spread: the momentary rate at which
a node obtains data packets is strongly determined by the number of packets
held by the node's neighbours, and dynamic local centrality directly counts
this.
Table 1. Aggregated attributes for a simulation S_1, 5 timesteps, containing A^(c)_{t,1} for
all measures.

                     Time 0    Time 1    Time 2    Time 3    Time 4    Time 5
Queue length             26        25        24        17        11         6
Static  Local           380       380       380       380       380       380
        Wehmuth      13.509    13.509    13.509    13.509    13.509    13.509
        Bridging      0.068     0.068     0.068     0.068     0.068     0.068
Dynamic Local           253       200       266       215       158        81
        Wehmuth      29.011     4.166    34.048    28.409    23.517         0
        Bridging    131.882    58.548   130.119   113.635   108.101    32.147

Fig. 2. A plot showing the relationship between normalised aggregated measures and
queue length from simulation S_1, containing A^(c)_{t,1} for c ∈ C.

All values are bounded in [0, 1], so, in the context of this test, each mea-
sure computes criticality with between 80% and 90% accuracy, which is quite
acceptable.
Our criticality measures give a distributed, computationally efficient and fast
method of finding an impact indicator to inform maintenance prediction models.
This allows for real-time control of any network system. Adding raw data such
as condition information can give probability of failure and other prognostic
KPIs. Combining these gives a risk ranking of nodes to help prioritise proactive
maintenance. This is a three-step framework: first collecting distributed data,
then generating prognostic KPIs, and finally informing an optimal maintenance
plan, as shown in Fig. 3. This will minimise packet drops, latency and congestion,
and maximise network operative capacity.
This can be integrated into: telecommunications systems for proactive main-
tenance; autonomous vehicle networks for proactive routing to minimise traffic
jams; supply networks, where actors only have primary or secondary connec-
tion information; and any system of dynamically communicating agents. With
such measures, agents will be able to quickly and reliably establish their short
term criticality, allowing for swift, inexpensive action to ensure ongoing network
function.

Fig. 3. Three step framework for assessing nodal risk ranking.

4 Future Research
In this paper, we have developed and compared the accuracy of three distributed
dynamic measures of nodal criticality within a network. Dynamic and distributed
approaches had not previously been combined in such a manner. We tested each
measure within an augmented SIS and found that for our test they predict
criticality with high accuracy. Dynamic local centrality did best, though it is
not yet clear why. To our knowledge, no measures that approach the problem
of predicting node criticality dynamically and in a distributed manner like this
have been previously developed, and it is exciting that they have such high
proactive accuracy, suggesting it is worth researching more dynamically obtained
measures for prediction. They are necessary to deal with increasing data traffic
demands, especially if more COVID-19-like events occur in the future, where
greater network requirements are suddenly imposed on an already at-capacity
system.
We will need deeper statistical analysis to learn the true accuracy of the mea-
sure family, including test repetition, comparing static and classic network mea-
sures, and multi-dimensional analysis. We would also like to learn how network
structure and model configuration affect the result of the distributed dynamic
measures.
In future we will test the dynamic measures in different network models, such
as telecommunications data packet routing models, or supply network heuristic
movement models. We also plan to test the measures on a spectrum of network
topologies, as well as real-life network topologies, such as the BT network
studied in [6], to gain insights into their dynamics. We will also
study the impact of information reach, or how many hops away from itself a
given node takes information from, framed as the relationship between accuracy
and computational, time, and communication complexity. This will contribute to
the general theory of value of information in distributed network analysis, and
has applications in any system with limited awareness actors, such as supply
chains.

Acknowledgements. This research was supported by the EPSRC and BT Prosper-


ity Partnership project: Next Generation Converged Digital Infrastructure, grant num-
ber EP/R004935/1, and the UK Engineering and Physical Sciences Research Council
(EPSRC) Doctoral Training Partnership Award for the University of Cambridge, grant
number EP/R513180/1.

References
1. Albert, R., Barabasi, A.: Statistical mechanics of complex networks. Rev. Mod.
Phys. 74, 47–97 (2002)
2. Boccaletti, S., Latora, V., Moreno, Y., Chavez, M., Hwang, D.: Complex networks:
Structure and dynamics. Phys. Rep. 424, 175–308 (2006)
3. Chen, D., Lü, L., Shang, M., Zhang, Y., Zhou, T.: Identifying influential nodes in
complex networks. Physica A. 391, 1777–1787 (2012)
4. Cohen, R., Erez, K., Ben-Avraham, D., Havlin, S.: Resilience of the Internet to
random breakdowns. Phys. Rev. Lett. 85, 4626–4628 (2000)
5. Freeman, L.C.: A set of measures of centrality based on betweenness. Sociometry
40, 35 (1977)
6. Herrera, M., Perez-Hernandez, M., Kumar Jain, A., Kumar Parlikad, A.: Criti-
cal link analysis of a national Internet backbone via dynamic perturbation. In:
Advanced Maintenance Engineering, Services and Technologies. IFAC, Virtual
(2020)
7. Marsden, P.V.: Egocentric and sociocentric measures of network centrality. Soc.
Netw. 24, 407–422 (2002)
8. Kermack, W.O., McKendrick, A.G., Thomas, W.G.: A contribution to the math-
ematical theory of epidemics. Proc. R. Soc. Lond. 115, 700–721 (1927)
9. Moura, J., Hutchison, D.: Cyber-physical systems resilience: state of the art,
research issues and future trends. In: arXiv preprint (2019)
10. Nanda, S., Kotz, D.: Localized bridging centrality for distributed network analysis.
In: 2008 Proceedings of 17th International Conference on Computer Communica-
tions and Networks, IEEE, St. Thomas, US Virgin Islands (2008)
11. Peterson, I.: Fatal Defect: Chasing Killer Computer Bugs. Times Books, New York
(1996)
12. Wang, J., Liu, Y.H., Jiao, Y., Hu, H.Y.: Cascading dynamics in congested complex
networks. Eur. Phys. J. B. 67, 95–100 (2009)
13. Wehmuth, K., Ziviani, A.: Distributed location of the critical nodes to network
robustness based on spectral analysis. In: 2011 7th Latin American Network Oper-
ations and Management Symposium, IEEE, Quito, Ecuador (2011)
Physical Internet and Logistics
A Multi-agent Model for the Multi-plant
Multi-product Physical Internet Supply Chain
Network

Maroua Nouiri1(B) , Abdelghani Bekrar2 , Adriana Giret3 , Olivier Cardin1 ,


and Damien Trentesaux2
1 LS2N, UMR CNRS 6004, Université de Nantes, IUT de Nantes, 44 470 Carquefou, France
maroua.nouiri@univ-nantes.fr
2 LAMIH, UMR CNRS 8201, Université Polytechnique Hauts-de-France, Le Mont Houy,
59313 Valenciennes, France
3 Valencian Res. Institute for Artificial Intelligence, Universitat Politècnica de València,

Valencia, Spain

Abstract. Supply chains are complex systems and stochastic in nature. Nowa-
days, logistics organizations are expected to be efficient, effective, and responsive
while respecting other objectives such as sustainability and resilience. In this
work, a multi-agent model is proposed for a multi-plant, multi-product supply
chain network that supports an open network with n nodes (plants, retailers, etc.).
Three replenishment policies are proposed with different criteria of selection.
A multi-agent simulation tool was used to implement the proposed multi-agent
model. Different scenarios and configurations, varying from static to dynamic, are
defined and tested. The first objective of this work is to compare the performance
of physical internet supply chain and classical supply chain networks using hold-
ing and transportation costs as key performance indicators. The second goal is to
assess the performance of different replenishment policies for multi-plant, multi-
product physical internet supply chain network. Experiment results validate the
efficiency of the model to assess the performance of supply chain and to optimize
the replenishment decisions.

Keywords: Physical internet · Simulation · Multi-agent architecture ·


Replenishment’s source selection · Inventory control · Perturbations

1 Introduction

Today, supply chains are characterized by high complexity due to the competitiveness
between companies, the structures, processes, etc. The management of these systems is
critical and includes all the processes that transform raw materials into final products and
their distribution to the customers [3, 4]. Many activities affect the total costs of supply
chains, such as the forecasting and replenishment rules used [13]. In fact, the procurement
and the distribution are two vital actions in supply chains. These operational decisions
have a direct impact on inventory and transportation costs. On the other side, supply

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021


T. Borangiu et al. (Eds.): SOHOMA 2020, SCI 952, pp. 435–448, 2021.
https://doi.org/10.1007/978-3-030-69373-2_31
436 M. Nouiri et al.

chain disruptions and risks affect the performance of supply chains. Nowadays, reducing
logistics costs and facing perturbations is a priority of many companies. To achieve such
a goal, a new paradigm named “Physical internet” was proposed in 2011 as a solution
to improve the global logistics performance in terms of economic, environmental and
social efficiency [8]. The physical internet network represents an open global logistics
supply chain based on physical, digital and operational interconnectivity. Contrary to the
traditional supply chain network that is based on a multi-echelon hierarchical structure
composed of plants, distribution centres, and retailers [6], the physical internet network
is composed of new components like PI-hubs, PI-containers, PI-movers, etc. [4, 5].
The network of PI-hubs is open and interconnected with other supply chain networks.
The replenishment orders from any sales point can be served by any PI-hub around the
network and the PI-hubs can be supplied by any other hub or the plant [14]. A solution of
inventory control problem in Physical Internet Supply Chain Network (PISCN) concerns
the assignment of the customer’s demands to the hubs, and the hubs to the plants or
other hubs [15]. The inventory control in Classical Supply Chain Network (CSCN) is
addressed by many researchers in literature. A mathematical formulation was proposed
by [2] for the inventory control in a multi-plant single-product supply chain to satisfy the
dynamic demand of the customers. A non-linear mathematical formulation for location
inventory to determine the quantities of products to send from plants to warehouses
and then to retailers was also presented [1]. Recently, the inventory control problem in
PISCN context has attracted attention, such as the problem of integrated production-
inventory-distribution decision in PI [6], with a mixed integer linear problem (MILP)
model to find multi-period decision-making.
To study a real system made up of interconnected elements where each of these
systems has its own dynamics, researchers may use mathematical models, simulation
tools and distributed approaches to study and evaluate the performance of such complex
systems. Multi-agent models are considered as successful and suitable solutions to solve
complex dynamic problems. In fact, the multi-agent architecture is promising due to the
interactions, the collaboration and the cooperation between agents and theirs reactivity
to deal with uncertain events [7].
A simulation study was proposed by [14] to assess the performance of PISCN com-
posed of one plant, three hubs and two retailers by keeping the same network and data
while changing the network interconnectivity. Authors in [9] proposed two multi-agent
simulation models to compare the performance of PISCN and CSCN with external
perturbations using the same supply chain network configuration. However, only one
network configuration is evaluated. The tested network is composed by one plant, three
hubs, and two retailers [9].
This paper is an extension of the work presented in [9]. It considers a multi-plant
multi-product physical internet supply chain network. The proposed multi-agent model
is designed to support an open network of supply chains. A multi-agent simulation tool
is used to implement the proposed multi-agent model. Different static and dynamics
scenarios with perturbations are tested. The holding and transportation costs are used
as key performance indicators (KPI) to compare the performance of physical internet
and classical supply chain with a multi-plant and multi-product configuration. Different
replenishment policies are also tested and compared with PISCN.
A Multi-agent Model for the Multi-plant Multi-product Physical Internet 437

The remaining of this paper is organized as follows. The problem studied is described
in Sect. 2. The proposed multi-agent model is detailed in Sect. 3. The implementation of
the proposed model is described in Sect. 4. Simulation scenarios are presented in Sect. 5.
The experimentally obtained results and their analysis are given in Sect. 6. Conclusions and
some future works are given in Sect. 7.

2 Problem Description

In this work, a multi-plant multi-product PISCN is considered. The multi-plant
multi-product PISCN is composed of multiple plants (I plants), multiple PI-hubs
(J PI-Hubs), multiple trucks (V trucks) and multiple retailers (K Retailers)
requesting different types of products. The inventory control problem of such a
system is NP-hard, based on the fact that the single-plant inventory problem is
NP-hard [2]. Figure 1 illustrates such a
network, where Fig. 1a depicts a classical supply chain network and Fig. 1b illustrates
an example of multi-plant multi-product PISCN with two plants, three PI-hubs and 2
retailers. The warehouse and the distribution centres of the classical supply chain
network are replaced by PI-hubs in the Physical Internet. The product flow directions
are predetermined in classical inventory models. Each point of sale is pre-assigned to its
own distribution centre, i.e. Retailer 1 and Retailer 2 can only be supplied by Distribution
Centre 1 (DC1) and Distribution Centre 2 (DC2) respectively (see Fig. 1a). However, in
PISCN the replenishment orders from any of the points of sales can be served by any
PI-hub around the network and the PI-hubs can be supplied by any other PI-hubs or
any plant that produces the specific product. Thus, the replenishment’s source selection
needs to be determined.

Fig. 1. The supply chain networks

An optimal replenishment strategy is derived by simultaneously determining for


each period the replenishments that result in the lowest total cost. In order to provide
an intelligent approach to test and study the effect of such policies on the total cost, a
multi-agent model is proposed. The assumptions used in this work are as follows:

• Daily and periodic retailers’ demands strategies are considered.


• Each plant produces one type of product.
438 M. Nouiri et al.

• Each hub has its own fleet of trucks.


• Each retailer can request different types of products in different quantities.
• Retailers are assumed to request the same quantity of goods.
• Splitting retailer orders is not allowed.
• One type of internal perturbation is addressed: only one PI-hub (or a distribution
  centre in the case of the classical supply chain) can be unavailable for a certain
  defined number of days.

3 Multi-agent Model

As mentioned before, this work is an extension of the study in Nouiri et al. [9]. Contrary
to [9], where the single-plant inventory problem in PISCN was studied and only one
case study was tested, here the multi-plant multi-product PISCN is considered.
The multi-agent model is adopted as a promising suitable solution to provide reactive
decisions and to deal with supply chain’s perturbations. New replenishments policies
are also proposed to solve the inventory control problem. In the next subsections, the
description of the model’s design and agents’ actions are detailed. The replenishment
policies are presented.

3.1 Description of Agent Behaviours

A multi-agent model groups actions in autonomous agents. An agent is an individual
entity characterized by some attributes, which interacts with others through the
definition of appropriate rules in a given environment. Several multi-agent design
methodologies can be found in the literature [12]. To design a multi-agent model,
agents, interactions and behaviours should be defined.
The proposed multi-agent simulator is composed of i Plant Agents (PA), one for
each plant, j Hub Agents (HA), V Truck Agents (TA) and k Retailer Agents (RA). In
this model, each agent is developed for special purposes. The dynamic part of the model
is composed of actions (behaviours) and interactions between agents. The interactions
constitute the communication capabilities between agents. The hub agent communicates
with all agents in the model (TA, RA, PA and other connected hub agents HA). However,
the retailer agent communicates only with hub agents. The PA has interactions with truck
agents and hub agents. The behaviours of agents are different and depend on the type of
supply chain. As described in Fig. 2, in the case of CSCN the demand of the retailer is
assigned to the corresponding distribution centre agent (DCA) which is similar to HA in
PISCN. The DCA launches the execution of TA to deliver the goods. The DCA updates
the inventory stock and if the ROP level is not respected, the PA sends goods to the
corresponding DC. The total holding cost is also updated. The TA updates the travelled
distance and the total transportation cost. These steps are repeated until the total
number of days is reached. In the second case, however, a specific replenishment policy is
executed to find the best replenishment source to satisfy the retailer. Figure 3 describes
the behaviours of agents if the type of supply network is physical internet. The selected
HA triggers the activation of the TA to deliver retailer demand and update its stock. If the
new inventory level is less than ROP, the replenishment policy is executed again to find
A Multi-agent Model for the Multi-plant Multi-product Physical Internet 439

best source (Plant or other PI-hub) to fulfil the hub demand. TA moves to the requested
node to transport goods and updates the travelled distance and the transportation costs.
As described before, perturbations can occur in the PI-hub and distribution centre:
some retailers' demands can be delayed or missed. Perturbations can be external,
related to the unavailability of routes between PI-nodes, which affects the
transportation of goods (e.g. accidents) and delays trucks; or internal, like
insufficient inventory stock in PI-hubs or insufficient production quantity at
plants [10].

Fig. 2. The behaviours of agents in case of CSCN

3.2 Replenishment Policies


As the physical internet concept is based on full connectivity between PI-hubs, a
replenishment rule needs to be chosen. The selection of the best replenishment
policy is mainly based on criteria such as distance and stock level, among others.
In our multi-agent model, three replenishment policies are defined:

• Random Method: for each retailer demand or hub demand, a random hub is selected
as a node for replenishment.
• Closest Method: the closest hub to the retailer is always selected as the
  replenishment node to fulfil retailers' demands. We assume that each retailer has
  the same demand on each day. The criterion of selection is the distance between
  nodes, without consideration of the inventory stock level.
• Hybrid method: this method combines the two previous methods. It uses the distance
and the level of stock to select the replenishment source. For 50% of simulation time,
the closest hub is selected; for the other half, the hub that has the biggest inventory
level is selected to balance inventory stock in all nodes.
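The three policies can be sketched in Python; the paper's model itself is implemented in NetLogo, so the data layout and function name below are illustrative assumptions only.

```python
import math
import random

def select_source(hubs, retailer_pos, day, horizon, policy):
    """Pick a replenishment hub under the three policies (a sketch).

    `hubs` maps a hub id to {'pos': (x, y), 'stock': int}.
    """
    def distance(h):
        (x1, y1), (x2, y2) = hubs[h]["pos"], retailer_pos
        return math.hypot(x1 - x2, y1 - y2)

    if policy == "random":
        return random.choice(list(hubs))
    if policy == "closest":
        return min(hubs, key=distance)
    if policy == "hybrid":
        # first half of the horizon: closest hub; second half: the hub
        # with the largest stock, to balance inventory across nodes
        if day < horizon / 2:
            return min(hubs, key=distance)
        return max(hubs, key=lambda h: hubs[h]["stock"])
    raise ValueError(f"unknown policy: {policy}")
```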

Note that all these policies can be used only in case of PISCN configuration because
of the full connectivity between nodes that offers multiple choices. In case of CSCN, the

Fig. 3. The behaviours of agents in case of PISCN

link between retailer and distribution centre is unique and fixed. In case of
perturbations, the tardiness of delivery should be calculated.

4 Implementation of the Multi-agent Model


A multi-agent simulation tool was used to implement the proposed multi-agent model.
The development of such a tool provides visualization and analysis of the
effect of different configurations on the performance of the supply chain. The multi-
agent environment NetLogo was used to implement the proposed model. NetLogo is
particularly well suited for analysing the connection between the behaviours of basic
entities in complex systems [11]; it was chosen for its openness and its agent-oriented
programming approach. Figure 4 presents the different levels in the implementation of
the model (breed, agents and attributes).
In fact, in Netlogo, to declare a set of agents, a breed should be created. It groups
the agent set associated with the breed. All nodes of supply chain network are created
as a type of breed. Trucks are also declared as breed (see the upper level in Fig. 4). In
each breed, more than one agent can be declared. Each agent represents a single
member of the breed. In the simulation model, I PA, J HA, V TA and K RA
agents are created (see the second level of Fig. 4). Each agent has its own attributes
described in the lower level in Fig. 4. For example, each PI-Hub has its own inventory
stock, a Reorder point (ROP), a vector that contains the identifier of trucks agents and
three matrices that contain the distances between other nodes (plants, retailers, other
hubs) which are defined below:

• m_distance_hub_R [Nb_Hubs][Nb_Retailers]: matrix that contains the distances


between hubs and retailers
• m_distance_Plant_H [Nb_Plants][Nb_Hubs]: matrix that contains the distances
between plants and hubs.
• m_distance_Hub_Hub [Nb_Hubs][Nb_Hubs]: matrix that contains the distances
between hubs.

Fig. 4. Description of the implemented agents in the multi agent simulator

4.1 Inputs and Outputs of the Simulator

Figure 5 presents the inputs and outputs of the proposed simulator model. Different
parameters must be configured. The user can easily modify various parameters in the
model using tools such as sliders and switches, and can choose between testing static
or dynamic scenarios. An important parameter that should be defined in each simu-
lation model is the horizon to be simulated. The unit time used is days. The T vari-
able represents the total number of days. The type of retailer’s demand can be daily
or periodic. Nbr_Hubs, Nbr_Retailers, Nbr_trucks, Nbr_Plants are variables specifying
respectively the number of PI-hubs, retailers, trucks and plants in the network.
Position_nodes, distance_nodes and demand_quantity are files containing the position of
all nodes in the map, the distance between nodes and the demand quantity of retailers.
The Supply_Chain_Network_Type variable specifies a PISCN or a CSCN. Price_km and
Price_Stock are variables used to calculate the holding and transportation costs.
The current outputs of the proposed model are:

• HC: the total holding cost


• TC: the total transportation cost
• Daily stock level of each hub
• Replenishment demands’ assignment to the corresponding nodes of network.

Fig. 5. Main input and output data of the multi-agent simulator

There are two main steps in the simulation. The first consists in setting up the supply
chain network with a specific configuration (number of nodes, links, input data, etc.);
the corresponding agents are created and their attributes are initialized. The second
step is to start the simulation over the defined horizon.

4.2 Key Performance Indicators


In order to compare the simulation results of different configurations of supply
chain networks, key performance indicators are defined. The KPIs used in this work are
the holding and transportation costs; they are calculated using Eqs. (1) and (3).

HC = Σ_{t=0}^{T} (daily_holding_cost_hub)    (1)

where T is the horizon of simulation (total number of days), and the daily holding cost
of each hub is calculated as follows:

daily_holding_cost_hub = stock_hub ∗ Price_m3    (2)

TC = Σ_{t=0}^{T} (daily_transportation_cost_truck)    (3)

where T is the horizon and the daily transportation cost of each truck is calculated as:

daily_transportation_cost_truck = travelled_distance ∗ demand_quantity ∗ Price_km    (4)



The total holding cost (HC) is calculated by each HA from the daily holding cost;
the value of HC is updated. The total transportation cost (TC) is calculated by the TA;
after each delivery the travelled distance and the costs are updated.
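Equations (1)-(4) can be sketched as follows; the data layout is an illustrative assumption, not the simulator's own structures.

```python
def total_costs(daily_hub_stocks, deliveries, price_m3, price_km):
    """Total holding and transportation costs per Eqs. (1)-(4) (a sketch).

    `daily_hub_stocks` is a list (one entry per day) of dicts hub -> stock;
    `deliveries` is a list of (travelled_distance, demand_quantity) tuples.
    """
    # Eq. (1)-(2): sum hub stock * unit holding price over all days and hubs.
    hc = sum(stock * price_m3
             for day in daily_hub_stocks for stock in day.values())
    # Eq. (3)-(4): sum distance * quantity * unit transport price per delivery.
    tc = sum(distance * quantity * price_km
             for distance, quantity in deliveries)
    return hc, tc
```

With the Thai data set below, price_m3 would be 180 THB per m3 and price_km 1.85 THB/km/ton.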

5 Simulation Scenarios
The data set and configurations of some simulation scenarios are described below.

5.1 Data Set: White Sugar Supply Chain

The multi-agent model was tested on a real network with real data received from [6].
The network is composed of two plants (2P), three hubs (3H) and two retailers (2R).
Data on the monthly white sugar consumption rate was gathered from January 2015
to September 2019. The daily demand was generated from the real monthly demand of
white sugar consumption during November-December 2017 (Office of Agricultural
Economics of Thailand, http://www.oae.go.th). The unit holding cost equals 180 THB
per m3 (Integrated Logistics Services of Thailand, http://www.ils.co.th/th/pricing/,
2019); the unit transportation cost is 1.85 THB/km/ton (Bureau of Standards and
Evaluation, http://hwstd.com/Uploads/Downloads/9/07.pdf, 2016). Configuration and
data (distances, demand, stocks) are identical for both supply chains.

6 Definition of Configuration and Scenarios


• Scenario 1: Comparison Between PISCN and CSCN
The first scenario has been designed to compare the performance of PISCN and CSCN.
The horizon of simulation is 30 days. The initial inventory stock levels of all PI-hubs
and distribution centres are the same and equal 850 THB. The demand strategy is daily
and all retailers place the same daily demand; in CSCN each retailer is connected to
only one distribution centre.

• Scenario 2: Varying Replenishment Policies


A second scenario has been designed to assess the impact of the replenishment policy on the KPIs. The proposed replenishment policies are thus tested with two configurations: the first consists of a fixed initial inventory stock for all PI-hubs, and the second of randomly generated initial stock values between 400 and 850. The horizon of simulation is 30 days. This scenario only concerns the PISCN.

• Scenario 3: Dynamic Scenarios with Perturbations


The last scenario concerns the dynamic case with external perturbations. A random PI-hub in the PISCN or a random distribution centre in the CSCN is selected to be unavailable for a certain number of days. Three levels of perturbation were defined: the low level, when the node is unavailable for only 1 day; the medium level, between 3 and 5 days; and the high level, between 5 and 8 days of unavailability.
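The three perturbation levels above can be sketched as a simple sampling routine. This is only an assumption about how such scenarios might be generated; the node names and the seed are hypothetical.

```python
# Illustrative sampling of the perturbation scenarios described above.
import random

def sample_perturbation(level, nodes, rng):
    """Pick a random node and an unavailability duration (in days) per level."""
    durations = {"low": (1, 1), "medium": (3, 5), "high": (5, 8)}
    lo, hi = durations[level]
    return rng.choice(nodes), rng.randint(lo, hi)

rng = random.Random(42)  # hypothetical seed for reproducible runs
node, days = sample_perturbation("medium", ["hub1", "hub2", "hub3"], rng)
print(node, days)        # one hub unavailable for 3 to 5 days
```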
444 M. Nouiri et al.

7 Experiments and Results


• Results of Scenario 1
The real data for 30 days with the network configuration (2P, 3H, 2R) and different replenishment policies were tested; Fig. 6 shows the simulator with the real network configuration.

Fig. 6. Multi-agent simulator with real case data configuration

Table 1 presents the results of the simulation tests of the first scenario with the first configuration, in which all PI-hub inventory stocks are initialized with a fixed value of 850. As can be seen from Table 1, the holding cost values of the PISCN are lower than those of the classical SCN with both replenishment methods; the deviation of HC equals 0.57%.

Table 1. Results of simulation of first scenario

                 PISCN               CSCN (compared with PISCN)
                 HC        TC        HC       TC
Closest method   9368265   184020    +0.6%    +23.7%
Random method    9386265   247996    +0.4%    −8.2%

However, the transportation cost gap between the CSCN and the PISCN when using the closest method as the replenishment policy equals 23.68%. The PI reduces the transportation cost when the closest method is used to select the replenishment source. However, a compromise between both KPIs must be achieved.
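The relative gaps reported in the tables appear to be computed as the percentage deviation of one KPI value from a baseline. A minimal sketch of this assumed formula, using the two PISCN holding costs of Table 1 only as sample inputs:

```python
# Percentage deviation of a KPI value from a baseline (assumed formula).
def gap_percent(value, baseline):
    return (value - baseline) / baseline * 100.0

# Example: HC of the random method vs. the closest method (Table 1, PISCN column).
print(round(gap_percent(9386265, 9368265), 2))  # small positive gap (~0.19%)
```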

• Results of Scenario 2
Each configuration is tested with five replications and the average value of the objective function is presented in bold; the results of the simulation tests with a fixed value of initial inventory stock are summarized in Table 2. As can be seen, the different replenishment policies yield different KPI values, which proves the important impact of the replenishment method on holding and transportation costs. Using the closest method for replenishment decreases the HC and TC values compared to the random replenishment method. The gap in HC between the two methods is low, equal to 0.46%; however, the gap in TC equals 34.55%. The closest method favours the nearest node for replenishment in order to reduce the travelled distance, CO2 emissions and transportation cost, which are the main objectives of the PI paradigm.

Table 2. The results of second scenario with fixed value of initial inventory stock

           Closest method        Random method (compared with closest method)
           HC        TC          HC       TC
Test 1     9368265   184020      +0.6%    +33.8%
Test 2     9368265   184020      0        +34.1%
Test 3     9368265   184020      +0.2%    +34.8%
Test 4     9368265   184020      +0.6%    +33.6%
Test 5     9368265   184020      +1%      +36.5%
Average    9368265   184020      +0.5%    +34.6%
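The two replenishment policies compared above can be sketched as follows; the distances and node names are hypothetical, not taken from the paper's network.

```python
# Minimal sketch of the closest and random replenishment policies.
import random

def closest_method(distances):
    """Favour the nearest available source, reducing travelled distance."""
    return min(distances, key=distances.get)

def random_method(distances, rng):
    """Pick any available replenishment source uniformly at random."""
    return rng.choice(list(distances))

# Hypothetical distances (km) from a PI-hub to its candidate sources:
dist_to_sources = {"plant1": 230.0, "hub2": 95.0, "hub3": 140.0}
print(closest_method(dist_to_sources))                   # nearest source: hub2
print(random_method(dist_to_sources, random.Random(0)))  # any of the three
```

The difference between the two policies is what drives the roughly 34% TC gap observed in Table 2: the random method regularly selects distant sources.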

Table 3 summarizes the results of the simulation with randomly generated initial inventory levels of the PI-hubs. As can be seen, the deviations become larger in both cases: the holding cost deviation equals 7.83% and the transportation cost deviation equals 34.69%. Figure 7 presents the daily variation of the inventory stock of the hubs.

• Results of Scenario 3
The results of the simulation tests under perturbations are summarized in Table 4. A random PI-hub or distribution centre is selected to be unavailable for one day in the case of the low level of perturbation and for five days in the case of the medium level. As Table 4 shows, the holding cost of the CSCN is higher than that of the PISCN: the gap in HC equals 0.66% under low-level perturbation and 1.16% under medium-level perturbation. The TC decreased because five delivery missions were missed due to the perturbations. In the case of the PISCN, however, the perturbations do not strongly affect the performance in terms of HC, thanks to the replenishment flexibility.

Table 3. The results of second scenario with random values of initial inventory stock

           Closest method        Random method (compared with closest method)
           HC        TC          HC       TC
Test 1     6562065   184020      +10.5%   +35.8%
Test 2     5933865   184020      +5.6%    +34.9%
Test 3     5496465   184020      +28.4%   +33.7%
Test 4     5914065   184020      −1.2%    +34%
Test 5     5608065   184020      −3.5%    +35%
Average    5902905   184020      +7.8%    +34.7%

Fig. 7. Results of simulation with random method

Table 4. The results of simulation under perturbations

                        PISCN                CSCN (difference with PISCN)
Perturbations           HC        TC         HC       TC
Average low level       9368265   184020     +0.6%    +32%
Average medium level    9386265   229325     +1.1%    −2%

8 Conclusion

This work addresses a multi-plant multi-product physical internet supply chain network. A multi-agent model with three replenishment policies is proposed for the multi-plant multi-product PISCN. The proposed model is designed to support different supply chain configurations with large numbers of suppliers, hubs and retailers. A multi-agent simulation tool was used to implement the proposed model, and the performances of the PISCN and CSCN were compared through multi-agent simulation. After testing the simulator in different static and dynamic scenarios, the results show that the PISCN is more efficient than the classical network, especially for the transportation cost, which was significantly improved in the physical internet supply chain. Perturbations acting on PI-hubs or distribution centres have a different impact on supply chain performance. The simulation results show the importance of the replenishment policy for transportation and holding costs.
In future work, additional constraints will be taken into consideration and other approaches will be developed to optimize replenishment decisions. Future studies will be conducted to test internal and external perturbations.
An interesting direction of this work is to couple this model with internal PI-hub
models to provide a closer view of internal decisions that affect the global decision and
performance of supply chain networks.

References
1. Dai, Z., Aqlan, F., Zheng, X., Gao, K.: A location-inventory supply chain network model
using two heuristic algorithms for perishable products with fuzzy constraints. Comput. Ind.
Eng. 119, 338–352 (2018)
2. Darvish, M., Larrain, H., Coelho, L.C.: A dynamic multi-plant lot-sizing and distribution
problem. Int. J. Prod. Res. 54(22), 6707–6717 (2016)
3. Montreuil, B., Ballot, E., Fontane, F.: An Open Logistics Interconnection model for the Phys-
ical Internet. In: 14th IFAC Symposium on Information Control Problems in Manufacturing,
Bucharest, Romania, vol. 45, no. 6, pp. 327–332 (2012)
4. Montreuil, B., Meller, R.D., Ballot, E.: Towards a physical Internet: the impact on logis-
tics facilities and material handling systems design and innovation. In: Proceedings of the
International Material Handling Research Colloquium (IMHRC), pp. 1–23 (2010)
5. Montreuil, B., Meller, R.D., Ballot, E.: Physical Internet Foundations. In: Borangiu, T., Thomas, A., Trentesaux, D. (eds.) Service Orientation in Holonic and Multi Agent Manufacturing and Robotics. Studies in Computational Intelligence, vol. 472, pp. 151–166. Springer, Cham (2013)
6. Ji, S.F., Peng, X.S., Luo, R.J.: An integrated model for the production-inventory-distribution
problem in the Physical Internet. Int. J. Prod. Res. 57(4), 1000–1017 (2019)
7. Kantasa-Ard, A., Bekrar, A., Sallez, Y.: Artificial intelligence for forecasting in supply chain
management: a case study of White Sugar consumption rate in Thailand. IFAC-PapersOnLine
52(13), 725–730 (2019)
8. Kantasa-Ard, A., Nouiri, M., Bekrar, A., El Cadi, A.A., Sallez, Y.: Dynamic Clustering of PI-
Hubs Based on Forecasting Demand in Physical Internet Context. In: Borangiu, T., Trentesaux,
D., Leitão, P., Giret Boggino, A., Botti, V. (eds.) Service Oriented, Holonic and Multi-agent
Manufacturing Systems for Industry of the Future. Studies in Computational Intelligence,
vol. 853, pp. 27–39. Springer, Cham (2019)
9. Nouiri, M., Bekrar, A., Trentesaux, D.: Inventory control under possible delivery perturbations
in physical internet supply chain network. In: 5th International Physical Internet Conference,
pp. 219–231 (2018)
10. Nouiri, M., Bekrar, A., Trentesaux, D.: An energy-efficient scheduling and rescheduling
method for production and logistics systems. Int. J. Prod. Res. 58, 1–21 (2019). https://doi.
org/10.1080/00207543.2019.1660826

11. Sallez, Y., Berger, T., Bonte, T., Trentesaux, D.: Proposition of a hybrid control architecture for the routing in a Physical Internet cross-docking hub. IFAC-PapersOnLine 48(3), 1978–1983 (2015)
12. Trentesaux, D., Giret, A.: Go-green manufacturing holons: a step towards sustainable
manufacturing operations control. Manuf. Lett. 5, 29–33 (2015)
13. Van der Heide, L.M., Coelho, L.C., Vis, I.F., Van Anholt, R.G.: Replenishment and denomi-
nation mix of automated teller machines with dynamic forecast demands. Comput. Oper. Res.
114 (2020). https://doi.org/10.1016/j.cor.2019.104828
14. Yang, Y., Pan, S., Ballot, E.: A model to take advantage of Physical Internet for vendor inventory management. IFAC-PapersOnLine 48(3), 1990–1995 (2015)
15. Yang, Y., Pan, S., Ballot, E.: Mitigating supply chain disruptions through interconnected
logistics services in the physical internet. Int. J. Prod. Res. 55(14), 3970–3983 (2017)
Survey on a Set of Features for New Urban
Warehouse Management Inspired by Industry
4.0 and the Physical Internet

Aurélie Edouard1,3(B) , Yves Sallez2 , Virginie Fortineau1 , Samir Lamouri1 ,


and Alexandre Berger3
1 Arts et Métiers, Paris, France
{aurelie.edouard,virginie.fortineau,samir.lamouri}@ensam.eu
2 UPHF, Valenciennes, France
yves.sallez@uphf.fr
3 Le Groupe La Poste, Paris, France

alexandre.berger@laposte.fr

Abstract. City logistics is one of the most significant branches of supply chain
management. It deals with the logistics and transportation activities in urban areas.
This research area has recently experienced exponential growth in publications. In
this article, we introduce a new urban warehouse. This customer-oriented logistics
facility aims to respond to last-mile delivery challenges. To do so, the use of
Industry 4.0 methods and technologies, as well as elements of the Physical Internet
paradigm, are explored. The possibility of validating the elements mentioned with
a La Poste project, which aims to create an experimental laboratory, is proposed.

Keywords: City logistics · Last-mile delivery · Urban warehouse management ·


Customer-oriented logistics · Industry 4.0 · Physical Internet

1 Introduction
Logistics is presented as a significant branch of supply chain management (SCM) in the
literature. This subject relates to “the process of planning, implementing, and controlling
an efficient and effective flow and storage of goods, services and related information from
point of origin to point of consumption for the purpose of conforming to a customer’s
requirements” [1].
The logistics associated with the consolidation, transportation, and distribution of
goods in cities is called city logistics. This notion has been described by Taniguchi
as “The process of optimizing both logistics and transport activities done by private
companies in urban areas while considering the traffic environment, traffic congestion
and energy consumption within the framework of a market economy”.
Of the logistics activities, this article takes interest in the management of storage
spaces (warehouses), as they are essential components of logistics. A warehouse is an intermediate storage point to smooth the relationship between time and demand, and can perform distribution and value-added services [2]. Until recently, these parcel storage spaces were located in the outer suburbs [3].

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021
T. Borangiu et al. (Eds.): SOHOMA 2020, SCI 952, pp. 449–459, 2021.
https://doi.org/10.1007/978-3-030-69373-2_32
Climate change issues, the trend of growing online shopping sales and demand for
instant delivery have put pressure on adopting more time-saving logistics practices and
locating order fulfilment facilities strategically in locations with direct access to con-
sumer markets [4]. Therefore, cities are facing the reintroduction of logistics spaces
and facilities in inner urban areas. As the challenges of city logistics change continually, so do the opportunities to improve it. Facing the current challenges of sustainable development, profitability, traceability, customer satisfaction and last-mile delivery, the aim of this article is to introduce a new warehouse type, the urban warehouse, and to demonstrate how the concepts of Industry 4.0 and the Physical Internet can be used to solve the issues of managing these new urban warehouses.
The remainder of the paper is organized as follows: Sect. 2 presents a review of the
challenges of urban logistics in terms of the impact on warehouses and the issues to
solve. Section 3 explains how the concepts of Industry 4.0 and Physical Internet will
be used to resolve the issues of the new urban warehouses that have been presented.
Section 4 introduces an application case to demonstrate the applicability of the quoted
elements. Finally, Sect. 5 leads to a discussion and conclusion.

2 Challenges in City Logistics

2.1 City Logistics


The final objective of city logistics efforts is to elevate a city's prosperity while alleviating emerging negative consequences such as pollution, traffic congestion, safety risks and destructive environmental effects.
As [3] reported: “the municipality of Paris has tended to neglect one important
dimension of the issues of urban goods’ movement in the Paris region: the “flight” to the
suburbs of many logistics facilities.” This phenomenon is called “logistics sprawl”, the
movement of logistics facilities (warehouses, cross-dock facilities, intermodal terminals,
etc.) outside of a city's boundaries towards suburban areas. Inner urban areas offer a very limited supply of available and affordable commercial and industrial land on which to establish and operate logistics facilities. As a result, most logistics facilities are currently
located in logistics clusters on the periphery of metropolitan areas, close to road networks,
major airports and seaports.
This has increased the distances that are travelled by truck and van and therefore
has also increased traffic congestion and environmental impacts. Taking into consideration the current issues of climate change and the new customer demands generated in particular by e-commerce, the strategy of some private actors is to reintroduce these spaces within the inner urban areas of a metropolis.
The return of these spaces to a city centre, and the challenges related to last-mile
deliveries have prompted researchers and supply chain stakeholders to reflect on other
initiatives that could be taken in order to respond to their issues. What are, in particular, the challenges affecting these new urban warehouses? And what issues do these warehouses face within this organisation?

2.2 Last-Mile Deliveries: A Big City Logistics Challenge

The development of e-commerce has led to a rapid increase in demand for new urban
distribution services, such as “fast delivery,” “same-day delivery” (sometimes even going
down to 1-h and 2-h delivery options), “direct-to-consumer delivery,” including the
last-mile delivery challenges. This has imposed a heavy burden on urban traffic (such as traffic congestion, inconvenience and inefficiency), the environment (such as visual disturbance, greenhouse gas emissions and waste of resources), welfare (such as noise, accidents and public health) and governance (such as land scarcity and uncontrolled spread) [5], topics covered in numerous publications. This trend also impacts the management of logistics facilities.
The last mile of the supply chain is considered one of the most, or even the most,
expensive, inefficient and polluting part of the supply chain [6]. Last mile logistics take
place from an order penetration point, such as the urban warehouse presented above,
to the final consignee’s preferred destination point for the reception of goods [7]. In
this context, the main objective of logistics and supply chain management is the same:
providing good service (the right product at the right time and at the right place) at a low
cost.
So, how does this new kind of warehouse meet this objective? The next section
will discuss what this objective implies for the management of urban warehouses in the
context of last-mile delivery challenges.

2.3 Warehouse Issues

Warehouses perform the basic functions of receiving, storage, order picking, and
shipping [8]. Some are more complex, also performing distribution and value-added
activities. The main value-added activities are [9]:

• Total logistics management, inventory control and tracking;


• Packaging, labelling and bar coding;

• Procurement and vendor management;
• Returns processing;
• Customizing, adding parts and manuals;
• Quality control, testing of products;
• Installation and instruction, product training on the customer's premises.

Some specifications of the classical warehouses performing the mentioned functions have to be adapted to the urban environment. Table 1 shows a brief comparison between the specifications of classical warehouses, sourced from [2], and urban ones [10].

Table 1. The new urban warehouse specifications versus classical warehouse ones

Classical warehouse:
• Large space
• Located in the suburbs (average distance from the city centre: 16 km)
• Rapid movement of goods
• Cross-docking
• Typical material flow devices such as conveyors, fork lifts, AGVs
• Standardized installation for the entire logistics facility

New urban warehouse:
• Small areas (a few hundred m²)
• Located in the city centre
• Direct access to the consumer markets (last mile) - proximity stock
• Delivery in a few hours
• Few clients
• Challenge of optimising the environmental impact of buildings and transport
• Personalized and customized offers (modular spaces, trained employees, monitoring of new technologies)
In the context of city logistics, the new urban warehouses have to respond to multiple goals. Most of these have been identified in Yavas' article [8]; the articles of Juntao [11], Witkowski [12] and Shiau [13] have also contributed. Six key success factors have been proposed to categorize these goals; they are summed up in Fig. 1 and further explained below.

Fig. 1. Key goals for new urban warehouses

Optimization: The optimization of flows and processes in a warehouse has a major role:
it allows for gains in performance. In fact, working on the value chain helps identify and
reduce waste, unnecessary stock and handling activities and loss of space.

Traceability: This makes it possible to know a product’s origin and to follow its path
throughout the supply chain. It allows customers to follow the progress of their order
preparation and shipment, as well as to manage the physical flows of reverse logistics,
such as consigned objects.

Reliability: An organization and its systems need to be reliable in order to meet the deadlines promised to customers and to avoid registration errors caused by manual recording. The first issue has a significant impact on customer satisfaction, while the second can lead to inventory variances and loss of information in a computer system.

Responsiveness: The aim is to improve the organization via the data-processing system
in order to give the possibility of grouping orders for a single customer or to efficiently
manage selection errors.

Security: This is a critical factor for any organization. The employer must ensure the
safety of its employees. The layout and use of the facilities must comply with certain
rules. For example, companies must ensure that the ergonomics of workspaces prevent musculoskeletal disorders (MSDs). The company also needs to ensure the security of its data.

Flexibility: The ability of an organisation to respond rapidly to changes in demand both


in terms of volume and variety [14]. This factor includes the ability to customize products to meet specific customer demands, adjust capacity to meet changes in quantities, launch new or revised products, provide widespread access to products and respond to a target market's needs.

3 Towards Optimal Urban Warehouse Management


As described in the previous section, the current boom in e-commerce has brought about
high demand and pressure on logistics services, and has triggered last-mile logistics,
which is essential for e-commerce business because it directly interacts with end cus-
tomers. Many articles present Industry 4.0 methods and technologies as an opportunity
to meet customer needs and to also contribute to the development in logistics and supply
chain management. In addition, the Physical Internet paradigm is increasingly being
mentioned to develop “an open global logistics system founded on physical, digital and
operational interconnectivity through encapsulation, interfaces and protocols" [15]. In our opinion, coupling the concepts proposed by the PI with the tools provided by Industry 4.0 can help to solve the new urban warehouse issues.

3.1 Survey of the Literature on 4.0 Methods and Technologies that Can Be Applied to Urban Logistics Issues
In supply chain strategy, there are tensions between the competing priorities of cost, flexibility, speed and quality [16]. These can also include the zero-emission goal, a priority that has only recently been taken into consideration. Technologies supporting Industry 4.0 can improve one or more of these priorities individually, or in combination with one another. Many studies, such as German Manufacturing 4.0 [17], point out key technologies. A literature survey was conducted to identify the main categories of Industry 4.0 technologies that are suitable for urban logistics.

Big Data (BD): Logistics processes generate a large amount of data, collected in particular by IoT technologies. The collection and comprehensive evaluation of these data from many different sources help support real-time decision making. Big Data technologies enable this analysis, allowing the important to be separated from the unimportant, helping to draw conclusions and supporting more effective knowledge transfer to achieve business goals [12].

Cloud Computing (CC): This is a large-scale, low-cost and flexible processing unit for
computing and storage based on IP connections. Some of the most relevant features are:
the possibility of ubiquitous network access, the ability to increase or decrease capacity
at will and the independent location of resource pools [18].

Cyber-Physical systems (CPS): These systems contain two main functional compo-
nents: advanced connectivity which ensures real-time data collection from the physical
world and information cyberspace feedback, and intelligent data management, analysis,
and calculation functions to build cyberspace [19].

IoT: This technology makes the creation of information without human observation possible; it also allows field devices to communicate and interact with each other [20].
For example, according to [16], Radio Frequency Identification (RFID) tags and readers
promise a revolution in tracking and monitoring inventory. Thanks to the Internet of
Things, the transport process of goods, parcels and letters can be monitored [12]. Track-
ing and tracing have become faster, more accurate, more predictable and more secure.
In the event of delays, customers can be informed of complications in advance.

Simulation and Digital Twins (SDt): Real-time data is used to reflect the physical world
in a “digital twin” (virtual model), which can include machines, products, and people
[20]. It can be used to test, simulate and optimize the organization and, for example,
reduce the time from a picked order to departure [21].

Artificial Intelligence (AI): Artificial intelligence uses computers to simulate natural intelligence in order to interpret external data, learn from such data and use that learning for descriptive, predictive or normative analysis [22]. This technology can be used to eliminate tedious sorting processes and helps avoid errors.

Stand-Alone Devices/Robots (Sd/R): Recent advances in sensor technology and arti-


ficial intelligence are enabling a new generation of robotic technologies that can be
deployed alongside human workers [16]. These collaborative robots are called “cobots”.
Their deployment involves automating manual tasks that can lead to repetitive strain
injuries. In addition, exoskeletons can help workers mitigate the injuries caused by lifting
heavy boxes in the warehouse [22]. Mobile robots, in turn, can improve productivity by, for example, bringing products directly to employees for picking, packaging and shipping. In addition, drones can carry different sensors to record data (visual and audio) for monitoring and surveying operations. The automation of activities is motivated by improving quality (robots can repeatedly perform more precise tasks than humans) or safety (preventing musculoskeletal disorders, MSDs) [16].

Augmented Reality (AR): Augmented-reality-based systems enhance the physical


world with digitally generated visual or other sensory information. They support a vari-
ety of services, such as selecting parts in a warehouse, providing workers with real-time
information to improve decision making and work procedures [20].

Cybersecurity (Cyb): With large amounts of data now available through IoT, a company
will want to store such data in an accessible and secure manner. One possible solution
to this storage problem is blockchain. Blockchain is a distributed security ledger which
can be accessed and written from anywhere; its data is not stored in a central location,
and after a block is added to the chain it cannot be changed [16].
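The tamper-evidence property described above can be illustrated with a toy hash chain. This is a didactic sketch only, not a production blockchain or any specific platform's API; the parcel events are invented.

```python
# Each block stores the hash of its predecessor; altering a block breaks the link.
import hashlib
import json

def make_block(data, prev_hash):
    """Build a block whose hash commits to both its data and its predecessor."""
    payload = {"data": data, "prev": prev_hash}
    payload["hash"] = hashlib.sha256(
        json.dumps({"data": data, "prev": prev_hash}, sort_keys=True).encode()
    ).hexdigest()
    return payload

genesis = make_block({"parcel": "P-1", "event": "received"}, prev_hash="0")
second = make_block({"parcel": "P-1", "event": "shipped"}, genesis["hash"])
tampered = make_block({"parcel": "P-1", "event": "lost"}, prev_hash="0")

print(second["prev"] == genesis["hash"])   # True: the chain is intact
print(second["prev"] == tampered["hash"])  # False: tampering is detectable
```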

Additive Manufacturing (AM): Also known as three-dimensional (3D) printing, this is


a process that takes a digital 3D representation and produces the corresponding physical
object [16]. Locating AM machines close to assets will enable rapid on-demand printing
of a required part, with resulting inventory and transport reduction [20]. They can also
reduce waste such as packaging [23]. This technology can also be used to produce small
batches of customized products.

3.2 The Physical Internet Paradigm

This paradigm was first introduced by Benoit Montreuil; [24] states that "The Physical
Internet (PI) goal is to form an efficient, sustainable, resistant, adaptable and flexible
open global logistics web based on physical, digital and operational interconnectivity
through world standard norms, interfaces and protocols. The main PI building blocks are
new modular load units (named PI-containers), new supply chain interfaces: logistics
centres equipped with new handling and storage technologies and cooperative logis-
tics networks. PI includes forming of appropriate modular units (PI-containers), which
shall have "smart" characteristics and which shall enable full usage of load and storage
capacities. These load units shall optimally move across logistics networks due to their
capability to communicate with each other and with resources for transfer located in
the logistics hubs (π-hubs). This digital interconnectivity shall enable an encapsula-
tion of goods in world standard “smart” green modular containers with possibilities to
communicate between each other using all advantages of IoT.” Many studies discuss
this concept and promise interesting solutions for the near future. Among them, some
developments in last-mile delivery solutions have been proposed, such as smart locker
bank network design studies using PI containers that demonstrate that modular designs
can perform just as well as fixed-configuration designs, while being more flexible [25,
26].
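As a rough illustration of the PI-container idea, the "smart" modular load unit can be sketched as a data structure carrying its own identity, modular dimensions and routing history. All field names below are assumptions made for illustration, not part of any PI specification.

```python
# Toy model of a PI-container as a self-describing, trackable load unit.
from dataclasses import dataclass, field

@dataclass
class PIContainer:
    container_id: str
    width_m: float          # modular outer dimensions (illustrative)
    height_m: float
    depth_m: float
    destination: str
    route_log: list = field(default_factory=list)  # visited PI-hubs

    def record_arrival(self, hub: str) -> None:
        """Append a routing event, mimicking the container's digital trace."""
        self.route_log.append(hub)

box = PIContainer("PI-001", 0.6, 0.6, 0.6, destination="retailer1")
box.record_arrival("hub2")
print(box.route_log)   # the container carries its own routing history
```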

3.3 New Urban Warehouse Management


For [7], the top three technology trends for last-mile logistics are Big Data, the Internet of Things (IoT) and Artificial Intelligence. Does this apply to the urban warehouses presented in this article? What methods and technologies from Industry 4.0 identified in Sect. 3.1 address the issues identified in Sect. 2.2? Can the Physical Internet also be of use? Figure 2 combines the goals presented in Fig. 1 with Industry 4.0 technology groups and a PI solution.

Fig. 2. Actions combined with Industry 4.0 technology groups

4 An Application Case
In Vietnam, traditional postal service providers such as VN Post, EMS, and Viettel Post have joined the fast delivery service business [7]. Similarly, Le Groupe La Poste has decided to use its existing nationwide network of post offices and postmen (more than 80,000 throughout the country) to deliver goods throughout France. This network allows "instant" delivery in cities if it is coupled with efficient logistics facilities. La Poste therefore considered taking advantage of another of its assets, the availability of well-placed square meters in city centres, to create new urban warehouses (storage areas and fast delivery points).
The company refurbished a 600 m² space located in a high-activity area in Paris' city centre. This ongoing project aimed to create an experimental laboratory in order to maintain the La Poste Group's lead in the market. The company wants to be able to offer the best services related to stock management, order picking, delivery and return management. This place would allow experiments on the adaptation of Industry 4.0 methods and technologies to the problems of urban logistics, and more precisely to the challenge of last-mile delivery, in order to develop new optimized urban warehouses. It would also examine the benefits that the Physical Internet paradigm could bring. The project aims to be carried out in collaboration with start-ups and partner schools to ensure that technologies of the future are kept up to date.
The first offers already being proposed are:

• Offset storage of automotive parts for B2B;
• Storage and delivery of products (B2C) with returnable container exchanges.

Others, like the personalization of parcels, are planned.


In view of what has been presented in the previous sections of the article, proposals
for solutions can be made. First of all, for automotive parts, which may be very heavy,
the use of advanced robots such as exoskeletons could be a solution. In addition, reverse
logistics can concern reusable packaging, as well as damaged goods or products designed
to be repackaged; the implementation of IoT solutions and of a cloud-based WMS has
to be studied to manage this flow.
Then, an offer can be personalized with the use of 3D printers, for example by
creating personalized goods. This tool could also be used to reduce stock levels.
Moreover, combining IoT and Big Data could make it possible to optimize stocks
in these restricted spaces within cities. As customization offers result in short contracts,
space planning needs to be flexible to respond rapidly to changes in demand, both in terms
of volume and variety [27]. For example, in 2017 the company had the opportunity to offer
a personalised service to the Adidas brand for the distribution of customisable premium
football shoes, but the contract was for a fixed term. The use of simulation and digital
twin tools could make it possible to explore possible scenarios before using mobile
robots to execute the chosen options in a fluid and optimized way.

Fig. 3. La Poste trolleys

The last solution that can be mentioned is the use of the concept of PI-containers
from the Physical Internet in order to create a package system adapted to the trolleys of
La Poste, the CE 30 (Fig. 3). This system is equipped with product security and with
tracking and routing functions, ensuring both traceability and route optimization.
Moreover, it is well suited to environmental impact concerns, because it translates
into reusable boxes.
All the elements cited remain suggestions. Faced with the work's complexity,
compromises will have to be made in order to best respond to the problems of these
logistics facilities. The project will be divided into subsets in order to keep the work
focused. It must first start with the formalization of expectations and the prioritization
of the issues, followed by a proposal of tools and methods, feasibility studies and then tests.

5 Discussion and Conclusion


In this article, it was shown that, in order to meet last-mile delivery challenges, it could
be of interest to develop new urban warehouses. To cope with customers' expectations
and a city's logistics issues, the use of Industry 4.0 methods and technologies, as well
as elements of the Physical Internet paradigm, appear to be promising avenues to pursue.
It would also be of interest to reflect upon whether the solutions cited are the only
ones that could possibly be used; the aim is not to neglect any other possibilities that
could be effective.
Because this study is only a first step in exploring solutions for new urban warehouse
management, the next step will focus on validating a state of the art that maps these
facilities' issues to the Industry 4.0 technology groups that can solve them. Further
research will then focus on proposing options to assist urban warehouse managers in
optimizing their facilities.

References
1. Mentzer, J.T., Keebler, J.S., Nix, N.W., Smith, C.D., Zacharia, Z.G.: Defining supply chain
management. J. Bus. Logist. 22(2), 1–25 (2001)
2. Higgins, C., Ferguson, M., Kanaroglou, P.: Varieties of logistics centers. Transp. Res. Rec.
2288, 9–18 (2012)
3. Dablanc, L., Rakotonarivo, D.: The impacts of logistics sprawl: how does the location of
parcel transport terminals affect the energy efficiency of goods’ movements in Paris and what
can we do about it? Proc. Soc. Behav. Sci. 2(3), 6087–6096 (2010)
4. Kang, S.: Relative logistics sprawl: measuring changes in the relative distribution from ware-
houses to logistics businesses and the general population. J. Transp. Geogr. 83, 102636
(2020)
5. Hu, W., Dong, J., Hwang, B.G., Ren, R., Chen, Z.: A scientometrics review on city logistics
literature: Research trends, advanced theory and practice. Sustainability 11(10), 1–27 (2019)
6. Gevaers, R., Van de Voorde, E., Vanelslander, T.: Characteristics of innovations in last mile
logistics - using best practices, case studies and making the link with green and sustainable
logistics. Assoc. Eur. Transp. Contrib. 1–8 (2009)
7. Phuong, D.T.: Last-mile logistics in Vietnam in industrial revolution 4.0: opportunities and
challenges. In: Proceedings of INSYMA 2020, vol. 115, pp. 172–176 (2020)
8. Yavas, V., Ozkan-Ozen, Y.D.: Logistics centers in the new industrial era: a proposed
framework for logistics center 4.0. Transp. Res. Part E Logist. Transp. Rev. 135, 101864
(2019)
9. Grundey, D., Rimienė, K.: Logistics centre concept through evolution and definition. Eng.
Econ. 54(4), 87–95 (2007)
10. Van den Berg, J.P., Zijm, W.H.M.: Models for warehouse management: classification and
examples. Int. J. Prod. Econ. 59(1), 519–528 (1999)
11. Juntao, L.: Research on Internet of Things technology application status in the warehouse
operation. Int. J. Sci. Technol. Soc. 4(4), 63 (2016)
12. Witkowski, K.: Internet of Things, big data, industry 4.0 - innovative solutions in logistics
and supply chains management. Proc. Eng. 182, 763–769 (2017)
13. Shiau, J.Y., Lee, M.C.: A warehouse management system with sequential picking for multi-
container deliveries. Comput. Ind. Eng. 58(3), 382–392 (2010)
14. Duclos, L.K., Vokurka, R.J., Lummus, R.R.: A conceptual model of supply chain flexibility.
Ind. Manag. Data Syst. 103(5–6), 446–456 (2003)
15. Pan, S., Ballot, E., Huang, G.Q., Montreuil, B.: Physical Internet and interconnected logistics
services: research and applications. Int. J. Prod. Res. 55(9), 2603–2609 (2017)
16. Olsen, T.L., Tomlin, B.: Industry 4.0: opportunities and challenges for operations manage-
ment. Manuf. Serv. Oper. Manag. 22(1), 113–122 (2020)
17. Cerfa, N.: Transformation numérique de l'industrie: l'enjeu franco-allemand (2018)
18. Dopico, M., Gomez, A., De la Fuente, D., García, N., Rosillo, R., Puche, J.: A vision of
industry 4.0 from an artificial intelligence point of view. In: Proceedings 2016 International
Conference Artificial Intelligence ICAI 2016 - WORLDCOMP 2016, pp. 407–413 (2016)
19. Lee, J., Bagheri, B., Kao, H.A.: A cyber-physical systems architecture for industry 4.0-based
manufacturing systems. Manuf. Lett. 3, 18–23 (2015)
20. Rüßmann, M.: Industry 4.0: future of productivity and growth in manufacturing, Bost.
Consult., April 2015
21. Tjahjono, B., Esplugues, C., Ares, E., Pelaez, G.: What does Industry 4.0 mean to Supply
Chain? Proc. Manuf. 13, 1175–1182 (2017)
22. Tang, C.S., Veelenturf, L.P.: The strategic role of logistics in the industry 4.0 era. Transp. Res.
Part E Logist. Transp. Rev. 129, 11 (2019)
23. Taniguchi, E., Thompson, R.G., Yamada, T.: New opportunities and challenges for city
logistics. Transp. Res. Proc. 12, 5–13 (2016)
24. Maslarić, M., Nikoličić, S., Mirčetić, D.: Logistics response to the industry 4.0: the physical
internet. Open Eng. 6(1), 511–517 (2016)
25. Faugère, L., Montreuil, B.: Hyperconnected pickup & delivery locker networks. In:
Proceedings of the 4th International Physical Internet Conference, vol. 6, p. 14, July 2017
26. Faugère, L., Montreuil, B.: Smart locker bank design optimization for urban omnichannel
logistics: assessing monolithic vs. modular configurations. Comput. Ind. Eng. 139, 14 (2018)
27. Christopher, M.: The agile supply chain. Ind. Mark. Manag. 29(1), 37–44 (2000)
Multi-objective Cross-Docking
in Physical Internet Hubs Under Arrival
Time Uncertainty

Tarik Chargui¹,³(B), Fatma Essghaier², Abdelghani Bekrar³, Hamid Allaoui²,
Damien Trentesaux³, and Gilles Goncalves²

¹ RSAID, National School of Applied Sciences, ENSATe, University of Abdelmalek
Essaâdi, Tetouan, Morocco
tarik.chargui@gmail.com
² LGI2A, Laboratoire de Génie Informatique et d'Automatique de l'Artois,
Université Artois, UR 3926, 62400 Béthune, France
{fatma.essghaier,hamid.allaoui,gilles.goncalves}@univ-artois.fr
³ LAMIH, UMR CNRS 8201, Université Polytechnique Hauts-de-France,
Le Mont Houy, 59313 Valenciennes, France
{abdelghani.bekrar,damien.trentesaux}@uphf.fr

Abstract. Cross-docking terminals or hubs are used by many logistics
distribution companies and play a critical role in the performance of the
global supply chain. Recent studies on Physical Internet cross-docking
hubs (PI-hubs) have shown promising results for optimization problems
in different structures such as Road-Road, Rail-Road and Road-Rail PI-
hubs. This paper addresses the optimization of trucks' scheduling and
PI-containers' grouping in Rail-Road cross-docking PI-hubs. The study
is performed considering uncertainty on trucks' arrival time. The prob-
lem is formulated as a multi-objective chance constrained mixed integer
programming model. The objective is to minimize the number of used
wagons, the PI-containers' distance and the total tardiness of inbound
trucks according to a Lexicographic Programming order.

Keywords: Cross-docking · Rail-road physical internet hub · Chance
constrained programming · Truck scheduling · Uncertainty ·
Multi-objective optimization

1 Introduction

In the global supply chain, cross-docking hubs have a major role in maintaining
a continuous flow of products and raw materials from suppliers to retailers with
minimal temporary storage in between [14,26]. Inside the cross-docks, products
are transferred from inbound vehicles to outbound ones, which can be trucks,
trains or ships. High performance of these platforms requires good synchronization
between inbound and outbound vehicles in order to minimize the waiting time at
the docks and the temporary storage level, which significantly reduces the total
distribution cost [10].

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021
T. Borangiu et al. (Eds.): SOHOMA 2020, SCI 952, pp. 460–472, 2021.
https://doi.org/10.1007/978-3-030-69373-2_33
Recently, the concept of Physical Internet has been introduced to change
the way products are transported, handled, stored and shipped through the
supply chain using standardized PI-containers [19]. The goal of the PI is to
improve the global sustainability of the supply chain from the environmental,
social and economical aspects. In the context of PI, the cross-docking hubs are
fully automated using standardized PI-containers instead of regular pallets [9,16]
and automated sorting zones instead of forklifts [1,20]. There are different types
of cross-docking PI-hubs depending on the nature of the inbound and outbound
vehicles: for example, Road-Rail PI-hubs are used to transfer PI-containers
from trucks to trains, Rail-Road PI-hubs transfer PI-containers in the reverse
direction, and Road-Road PI-hubs transfer PI-containers between trucks.
This paper deals simultaneously with truck scheduling and PI-container
grouping in a Road-Rail PI-hub. Solving such issues requires complete knowledge
of all the problem parameters, which are generally fixed by experts or defined
based on historical data. However, in real-world situations, these parameters
are always subject to change due to unforeseen events. In the literature, only
a few works handle such circumstances; recent studies [15,24] have particularly
considered uncertainty in truck arrival times. Our work follows this direction
and proposes a new chance constrained programming approach to solve the
problem.
The remainder of this paper is organized as follows: Sect. 2 presents the lit-
erature review on Road-Rail PI-hub related problems and on uncertainty
in the cross-dock scheduling literature. Section 3 gives the problem descrip-
tion and its mathematical formulation. Section 4 shows the obtained computa-
tional results. Finally, Sect. 5 concludes this work and outlines future research
directions.

2 Literature Review

The cross-docking optimization literature is very large, and many reviews have
been proposed to classify cross-docking studies according to their decision level:
strategic, tactical and operational [2,14,27]. In the context of the Physical
Internet, Road-Rail hubs represent cross-docks that manage the transfer
of PI-containers automatically from trucks to train wagons through different
steps. First, the products have to be unloaded from the inbound trucks and
temporarily stored in a maneuvering area. Then, the PI-containers are trans-
ferred using PI-conveyors and loaded onto train wagons. The reader can refer to
[1,20] for a complete description of the functional design of the Road-Rail and
Road-Road cross-docking PI-hubs.

The optimization problems on Road-Rail and Rail-Road cross-docking PI-hubs
have recently been addressed through different approaches. Mixed integer
programming and meta-heuristics are often developed to solve the truck schedul-
ing and PI-container grouping problems in the Road-Rail PI-hub [4], the Rail-Road
PI-hub [3] or the two-way Road-Rail PI-hub [5]. Other papers propose different
approaches to solve the problem, such as the multi-agent simulation used in [21]
for the Rail-Road PI-hub to study the impact of PI-container grouping on
truck loading.
Recently, only a few works have addressed cross-docking problems under uncer-
tainty. This uncertainty may concern processing times, trucks' assignment to
the dock doors, transshipment times, truck delays, etc. [29]. For instance,
authors in [13] focus on the scheduling of the inbound trucks at receiving docks by
applying a cost-stable strategy that takes into consideration uncertainty about
trucks arrival times. The authors have formulated the problem as a bi-objective
bi-level optimization model. A genetic algorithm based heuristic is then devel-
oped to solve the problem.
In [12], authors have considered trucks with uncertain arrival time. They have
proposed a mathematical formulation and four different approaches to solve the
problem. In another study [15], multiple robust mathematical models for truck
scheduling with time windows have been defined considering different sources of
uncertainty: transfer time of products, unloading time and trucks’ arrival time.
In [28], authors have developed a Best Fit Grouping Heuristic (BFGH) and a
simulated annealing based meta-heuristic for the assignment of the trucks to
the shipping docks. Then, a multi-agent simulation model has been proposed to
handle the perturbation in the availability of the shipping docks in real-time.
Among different approaches to handle uncertainty, Chance Constrained Pro-
gramming (CCP) has proved its efficiency in various optimization problems such
as project scheduling [30], the ship fleet planning problem [18] and job shop schedul-
ing [25]. This approach is a powerful tool for optimization problems under
uncertainty, as it makes it possible to quantify the feasibility of a given
constraint up to a certain level. It was first introduced by Charnes and Cooper
[6] in a probabilistic context and then adapted to other uncertainty theories,
such as possibility theory, which is the context of our work.
To the best of our knowledge, CCP has not been applied in the cross-docking
context, and especially not to solve Physical Internet issues. Therefore, in this work,
a new chance constrained mathematical model is proposed for truck scheduling
and container grouping in a Road-Rail PI-hub considering uncertainty in the
truck arrival times.

3 Problem Description

3.1 Road-Rail PI-hub

The Road-Rail PI-hub is used to transfer PI-containers from the inbound trucks
to the outgoing trains. As shown in Fig. 1, the Road-Rail PI-hub contains a set
of wagons composing the outbound train on the Rail-Rail section, a set of PI-
conveyors in the Road-Rail section and temporary storage areas. In this platform,
the PI-containers received from inbound trucks are automatically carried to
their destination in the train wagons. A complete functional design of the Road-
Rail PI-hub with the different key performance indicators can be found in [1].

Fig. 1. The Road-Rail Physical Internet cross-docking hub

Different optimization problems related simultaneously to truck scheduling
and PI-container grouping can be found in the Physical Internet literature
[22]. Solving methods such as mathematical modelling and meta-heuristics
can provide high-quality solutions in a deterministic environment. However, in
a real-life logistics context, various perturbations, such as traffic congestion or
truck engine failures, can occur, implying delays in the truck arrival times. In this
case, the deterministic solution could be inefficient, and the use of a reactive or
proactive solving method, such as stochastic programming, chance constrained
programming or multi-agent simulation, becomes necessary to provide robust solu-
tions. This paper proposes a multi-objective chance constrained programming
model for truck scheduling and container grouping considering uncertainty in
the truck arrival times.

3.2 Mathematical Formulation

The problem is formulated as a multi-objective chance constrained model based
on the mixed integer linear program proposed by [4]. All notations for parameters,
variables and constraints are kept the same, except for the inbound truck arrival
time parameter and the related constraints, which have been adapted to handle
uncertainty. In fact, the uncertain arrival time of each truck is defined using a time
interval with minimal and maximal values instead of a single value, and chance
constrained programming has been used to solve the problem. Besides, for the
objective function, a multi-objective Lexicographic Programming approach is
used instead of weighting coefficients. The advantage of the Lexicographic
Programming approach is that the priority of each objective is defined by its
order. This is very useful especially when the objectives have different measurement
units, which is the case in the proposed model. The priority of each objective can
be defined by the decision maker.
The objective is to schedule the inbound trucks and to group the PI-containers in
the outgoing wagons while minimizing the distance travelled by these PI-containers
to reach the wagons, the number of used wagons, and the total tardiness of the
inbound trucks. However, inbound trucks can unload PI-containers having dif-
ferent destinations, which increases the complexity of assigning the trucks to the
dock doors.
The parameters and variables used in this paper are as follows:

Model parameters:

i : indices of the PI-containers (i = 1 \ldots N)
k : indices of the docks (k = 1 \ldots K)
d : indices of the destinations (d = 1 \ldots D)
w : indices of the wagons (w = 1 \ldots W)
h : indices of the trucks (h = 1 \ldots H)
Q : capacity of the wagons
\tilde{E}_h : uncertain arrival time of truck h
F_h : expected departure time of truck h
J_h : processing time of truck h
Y : vertical distance from the wagons' docks to the trucks' docks
V : changeover time of trucks
P_k : position of the center of dock k
R_w : position of the center of wagon w
L_i : length of PI-container i
M : a big positive number, M \ge \max(W,\ 5 \times \text{Wagon Length},\ \text{Planning Horizon})

A_{hi} = 1 if the container i is in the truck h, and 0 otherwise
S_{di} = 1 if d is the destination of the container i, and 0 otherwise

Decision variables
Binary variables:

x_{iw} = 1 if the container i is assigned to the wagon w, and 0 otherwise
y_{hk} = 1 if the truck h is assigned to the dock k, and 0 otherwise
u_w = 1 if the wagon w is used, and 0 otherwise
e_{wd} = 1 if d is the destination of the wagon w, and 0 otherwise
g_{h_1 h_2} = 1 if trucks h_1 and h_2 are assigned to the same dock and h_1 is a predecessor of h_2, and 0 otherwise

Continuous variables:

r_h : start time of unloading truck h
s_h : end time of unloading truck h
f_h : tardiness of truck h
d_{iw} : distance travelled by the PI-container i to the wagon w

Objective function
The objective of the mathematical model is to minimize three objective func-
tions in a given Lexicographic Programming order. This order is defined by the
decision maker:
• F^W: the number of used wagons
• F^D: the distance travelled by the PI-containers
• F^T: the tardiness of the inbound trucks

\text{Minimize: } \left(F^W, F^D, F^T\right) = \left(\sum_{w=1}^{W} u_w,\ \sum_{i=1}^{N}\sum_{w=1}^{W} d_{iw},\ \sum_{h=1}^{H} f_h\right) \quad (1)

Constraints

\sum_{w=1}^{W} x_{iw} = 1 \quad (\forall i = 1 \ldots N)   (2)

\sum_{i=1}^{N} L_i\, x_{iw} \le Q \quad (\forall w = 1 \ldots W)   (3)

x_{iw} + x_{jw} \le \sum_{d=1}^{D} S_{di}\, S_{dj} + 1 \quad (\forall i, j = 1 \ldots N,\ \forall w = 1 \ldots W : i \ne j)   (4)

x_{iw} \le u_w \quad (\forall i = 1 \ldots N,\ \forall w = 1 \ldots W)   (5)

e_{wd} \le S_{di} + 1 - x_{iw} \quad (\forall i = 1 \ldots N,\ \forall w = 1 \ldots W,\ \forall d = 1 \ldots D)   (6)

u_w = \sum_{d=1}^{D} e_{wd} \quad (\forall w = 1 \ldots W)   (7)

|w_1 - w_2| + 1 \le \sum_{w=1}^{W} e_{wd} + M\,(2 - (e_{w_1 d} + e_{w_2 d})) \quad (\forall d = 1 \ldots D,\ \forall w_1, w_2 = 1 \ldots W : w_1 \ne w_2)   (8)

|w_1 - w_2| + 1 \le \sum_{w=1}^{W} u_w + M\,(2 - (u_{w_1} + u_{w_2})) \quad (\forall w_1, w_2 = 1 \ldots W : w_1 \ne w_2)   (9)

u_1 = 1   (10)

d_{iw} \ge |P_k - R_w| + Y - M\,(2 - (A_{hi}\, y_{hk} + x_{iw})) \quad (\forall i = 1 \ldots N,\ \forall w = 1 \ldots W,\ \forall k = 1 \ldots K,\ \forall h = 1 \ldots H)   (11)

\sum_{k=1}^{K} y_{hk} = 1 \quad (\forall h = 1 \ldots H)   (12)

y_{h_1 k} + y_{h_2 k} - 1 \le g_{h_1 h_2} + g_{h_2 h_1} \quad (\forall k = 1 \ldots K,\ \forall h_1, h_2 = 1 \ldots H : h_1 \ne h_2)   (13)

g_{h_1 h_2} + g_{h_2 h_1} \le 1 \quad (\forall h_1, h_2 = 1 \ldots H)   (14)

r_h \ge \tilde{E}_h \quad (\forall h = 1 \ldots H)   (15)

r_{h_2} \ge s_{h_1} + V - M\,(1 - g_{h_1 h_2}) \quad (\forall h_1, h_2 = 1 \ldots H)   (16)

s_h \ge r_h + J_h \quad (\forall h = 1 \ldots H)   (17)

f_h \ge s_h - F_h \quad (\forall h = 1 \ldots H)   (18)

x_{iw}, y_{hk}, u_w, e_{wd}, g_{h_1 h_2} \in \{0, 1\} \quad (\forall i = 1 \ldots N,\ \forall w = 1 \ldots W,\ \forall k = 1 \ldots K,\ \forall h, h_1, h_2 = 1 \ldots H,\ \forall d = 1 \ldots D)   (19)

r_h \ge 0,\ s_h \ge 0,\ f_h \ge 0,\ d_{iw} \ge 0 \quad (\forall i = 1 \ldots N,\ \forall w = 1 \ldots W,\ \forall h = 1 \ldots H)   (20)

Constraint (2) ensures that each PI-container is loaded into exactly one wagon.
Constraint (3) ensures that the capacity of each wagon is not exceeded. Constraint
(4) guarantees that each wagon contains only PI-containers with the same destina-
tion. Constraint (5) forces a wagon to be marked as used if a PI-container is
assigned to it. Constraints (6) and (7) define the destination of each wagon.
Constraint (8) ensures that wagons with the same destination are consecutive.
By constraint (9), all the used wagons must be consecutive; therefore, there must
be no empty wagon between two used ones. Constraint (10) ensures that the
wagons are used from the beginning of the train. Constraint (11) calculates the
PI-containers' distance. By constraint (12), each truck is assigned to exactly one
dock. Constraints (13) and (14) handle the assignment and the sequencing of the
trucks at the docks. Constraint (16) guarantees that, if more than one truck is
assigned to the same dock, the unloading of a truck always begins after the
unloading of the previous one plus the changeover time. Constraints (15), (17)
and (18) handle the scheduling of the trucks. Constraints (19) and (20) ensure
that the variables are binary and positive, respectively.
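The wagon-assignment part of the model, constraints (2), (3) and (4), can be illustrated without a MILP solver. The sketch below is a brute-force check on a tiny hypothetical instance (the data and names are ours, not the paper's CPLEX implementation), and it deliberately omits the consecutive-wagon constraints (8)-(10):

```python
from itertools import product

# Hypothetical toy instance: 4 PI-containers as (length, destination),
# W = 3 identical wagons of capacity Q = 3.
containers = [(2, "A"), (1, "A"), (2, "B"), (1, "B")]
Q, W = 3, 3

def feasible(assign):
    """Constraint (3): wagon capacity; constraint (4): one destination per wagon."""
    for w in range(W):
        in_w = [containers[i] for i, a in enumerate(assign) if a == w]
        if sum(length for length, _ in in_w) > Q:
            return False
        if len({dest for _, dest in in_w}) > 1:
            return False
    return True

# Constraint (2) holds by construction: each container gets exactly one wagon.
best = min(
    (a for a in product(range(W), repeat=len(containers)) if feasible(a)),
    key=lambda a: len(set(a)),  # objective F^W: number of used wagons
)
print(len(set(best)))  # 2: one wagon per destination
```

Enumerating the W^N assignments is only viable at toy sizes; realistic instances require the MIP formulation above.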

3.3 Possibilistic Chance Constrained Programming (PCCP)


In the literature, the majority of truck scheduling optimization problems assume
a deterministic formulation. Yet, with the development of uncertainty theories,
there is an increasing interest in more realistic modelling that considers the
unforeseen changes that may occur in real-world situations. In this work, a
triangular fuzzy representation [7,11] is adopted for the uncertain truck arrival
times, and possibilistic chance constrained programming is used to solve the
problem [17,23].
In fact, possibility theory is characterized by the use of two dual set functions:
the possibility (Pos) and the necessity (Nec) measures [8]. In this work,
uncertainty in the truck arrival times is considered. Rather than a single value,
the parameter \tilde{E}_h denotes a fuzzy arrival time for truck h. It is defined as
a triplet (\underline{E}_h, \hat{E}_h, \overline{E}_h), where \hat{E}_h is the average arrival time, and \underline{E}_h and \overline{E}_h
represent, respectively, the minimal and maximal values of \tilde{E}_h. In our model,
the fuzzy arrival time directly impacts constraint (15). The satisfaction of
this constraint depends on the value of these measures and can be expressed as
follows:

\text{Pos}(r_h \ge \tilde{E}_h) \ge \alpha
\text{Nec}(r_h \ge \tilde{E}_h) \ge \beta

These inequalities imply that constraint (15) is satisfied if the possibility and
necessity measures are at least equal to the thresholds α and β, respectively.
These values range over the unit interval [0, 1] and are generally defined by the
decision maker depending on his objectives. Besides, α and β define a certain
degree of flexibility for the constraint satisfaction, where a low degree α implies
a weak constraint and, conversely, a high degree β states a strong constraint. The
different possible combinations of these values are defined as follows:

1. α = 0 and β = 0:

\text{Pos}(r_h \ge \tilde{E}_h) \ge 0
\text{Nec}(r_h \ge \tilde{E}_h) \ge 0

These inequalities are always satisfied, since possibility and necessity degrees
are between 0 and 1. The obtained solution is similar to the deterministic
case.

2. 0 < α < 1 and β = 0:

\text{Pos}(r_h \ge \tilde{E}_h) \ge \alpha \Rightarrow r_h \ge \alpha \hat{E}_h + (1 - \alpha) \underline{E}_h
\text{Nec}(r_h \ge \tilde{E}_h) \ge 0

The satisfaction of the constraint is ensured by the possibility measure; the
necessity measure is always satisfied.

3. α = 1 and β = 0:

\text{Pos}(r_h \ge \tilde{E}_h) = 1 \Rightarrow r_h \ge \hat{E}_h
\text{Nec}(r_h \ge \tilde{E}_h) \ge 0

The constraint is always verified for the necessity measure; only the possibil-
ity measure has to be checked.

4. α = 1 and 0 < β < 1:

\text{Pos}(r_h \ge \tilde{E}_h) = 1 \Rightarrow r_h \ge \hat{E}_h
\text{Nec}(r_h \ge \tilde{E}_h) \ge \beta \Rightarrow r_h \ge \beta \overline{E}_h + (1 - \beta) \hat{E}_h

α = 1 means that the satisfaction of the constraint is totally possible. The
necessity degree defines to what extent its satisfaction is compulsory.

5. α = 1 and β = 1:

\text{Pos}(r_h \ge \tilde{E}_h) = 1 \Rightarrow r_h \ge \hat{E}_h
\text{Nec}(r_h \ge \tilde{E}_h) = 1 \Rightarrow r_h \ge \overline{E}_h

This is the most difficult case: the satisfaction of the constraint depends on
the worst scenario \overline{E}_h.

According to the chosen values of α and β, the arrival time of each truck
varies in the interval [\underline{E}_h, \overline{E}_h]. This variation impacts the quality of the
obtained solution.
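Each (α, β) combination above collapses the fuzzy constraint (15) into a crisp lower bound on the unloading start time r_h. The following sketch is a plain-Python illustration (the function name and the treatment of α = β = 0 as a non-binding bound are our assumptions, not the paper's code):

```python
def crisp_arrival_bound(e_lo, e_mid, e_hi, alpha, beta):
    """Lower bound on r_h implied by Pos(r_h >= E~_h) >= alpha and
    Nec(r_h >= E~_h) >= beta, with E~_h triangular on (e_lo, e_mid, e_hi)."""
    bound = e_lo  # alpha = beta = 0: the fuzzy constraint imposes nothing extra
    if alpha > 0:
        # Possibility side (cases 2-3): r_h >= alpha*e_mid + (1-alpha)*e_lo
        bound = max(bound, alpha * e_mid + (1 - alpha) * e_lo)
    if beta > 0:
        # Necessity side (cases 4-5, meaningful together with alpha = 1):
        # r_h >= beta*e_hi + (1-beta)*e_mid
        bound = max(bound, beta * e_hi + (1 - beta) * e_mid)
    return bound

# Average arrival times 10, 17, 24 with a +/- 10 min spread, as in Sect. 4.
for e_hat in (10, 17, 24):
    print(crisp_arrival_bound(e_hat - 10, e_hat, e_hat + 10, 1.0, 1.0))
```

The worst-case setting α = β = 1 pushes each bound to the maximal arrival time e_hi, matching case 5 above.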

4 Experimental Results
This section tests the feasibility of the PCCP model. The experiments are
performed using IBM CPLEX (version 12.9) on a PC with a 2.4 GHz Intel Core
i3 CPU and 4 GB of RAM. The model is tested on a small illustrative example
of a Road-Rail PI-hub composed of three inbound docks, an outbound train
with ten wagons, and three inbound trucks. Each truck carries PI-containers
of various sizes and three different destinations. The changeover time is set to
5 min. The arrival time, expected departure time and processing time of the
three trucks (E_h, F_h, J_h) are set to (10, 34, 24), (17, 29, 12) and (24, 60, 36).
Then, a variation of +/− 10 min on the arrival time \tilde{E}_h of each truck has been
considered. The objective is to find a schedule of the inbound trucks and a
grouping of the PI-containers in the wagons while minimizing the number of
used wagons (F^W), the PI-containers' travelled distance (F^D), and the total
tardiness of the inbound trucks (F^T).
The problem has been solved using a multi-objective Lexicographic Pro-
gramming approach, where the order of the objectives defines their priority. The
mechanism of this approach consists in minimizing the first objective function
without considering the other ones. Then, once an objective is fixed, the next one
is minimized without changing the obtained value of the previous one: each fixed
objective becomes a constraint for the remaining ones. The process continues
until all the objectives are minimized. In the proposed experiments, the three
objectives are minimized in four different combinations of lexicographic order.
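This fix-and-constrain mechanism can be mimicked on any enumerable set of candidate solutions. The sketch below is our illustration with made-up candidate triples (the paper itself solves each stage as a MILP with CPLEX):

```python
def lexicographic_min(solutions, objectives):
    """Minimize the objectives in priority order: each minimized
    objective value becomes a constraint for the later stages."""
    pool = list(solutions)
    for f in objectives:
        best = min(f(s) for s in pool)
        pool = [s for s in pool if f(s) == best]  # fix this objective's value
    return pool

# Candidate solutions as (wagons, distance, tardiness) triples.
candidates = [(4, 260, 10), (4, 340, 0), (8, 160, 10), (10, 160, 0)]
wagons, dist, tard = (lambda s: s[0]), (lambda s: s[1]), (lambda s: s[2])

print(lexicographic_min(candidates, [wagons, dist, tard]))  # [(4, 260, 10)]
print(lexicographic_min(candidates, [dist, tard, wagons]))  # [(10, 160, 0)]
```

Changing the priority order changes which solution survives, which is the effect visible across the four combinations reported in Table 1.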
Table 1 presents the obtained results. The first column shows the four com-
binations of lexicographic orders of the objectives. For each combination, the
model is tested on different uncertainty scenarios. In the first one (α = β = 0),
trucks arrive at their expected time, without uncertainty. Then, different levels
of uncertainty have been considered by varying the values of α and β between
0 and 1, which makes the truck arrival times vary.

Table 1. Results on different priorities of the objective functions (F1, F2 and F3)

| Objectives priority | Values of α and β | Wagons | Distance (m) | Tardiness (min) | Computational time (s) |
|---------------------|-------------------|--------|--------------|-----------------|------------------------|
| F^W, F^D, F^T       | α = 0, β = 0      | 4      | 260          | 10              | 1.16                   |
|                     | 0 < α < 1, β = 0  | 4      | 260          | 4.3             | 1.36                   |
|                     | α = 1, β = 0      | 4      | 260          | 10              | 1.01                   |
|                     | α = 1, 0 < β < 1  | 4      | 260          | 20              | 0.98                   |
|                     | α = 1, β = 1      | 4      | 260          | 30              | 1.63                   |
| F^D, F^W, F^T       | α = 0, β = 0      | 8      | 160          | 10              | 5.33                   |
|                     | 0 < α < 1, β = 0  | 8      | 160          | 4.6             | 5.75                   |
|                     | α = 1, β = 0      | 8      | 160          | 10              | 5.75                   |
|                     | α = 1, 0 < β < 1  | 8      | 160          | 20              | 5.63                   |
|                     | α = 1, β = 1      | 8      | 160          | 22              | 5.71                   |
| F^D, F^T, F^W       | α = 0, β = 0      | 10     | 160          | 0               | 4.34                   |
|                     | 0 < α < 1, β = 0  | 10     | 160          | 0               | 5.29                   |
|                     | α = 1, β = 0      | 10     | 160          | 0               | 4.90                   |
|                     | α = 1, 0 < β < 1  | 10     | 160          | 3.3             | 5.02                   |
|                     | α = 1, β = 1      | 10     | 160          | 10              | 4.76                   |
| F^W, F^T, F^D       | α = 0, β = 0      | 4      | 340          | 0               | 1.04                   |
|                     | 0 < α < 1, β = 0  | 4      | 340          | 0               | 0.75                   |
|                     | α = 1, β = 0      | 4      | 340          | 0               | 0.70                   |
|                     | α = 1, 0 < β < 1  | 4      | 340          | 4.6             | 0.61                   |
|                     | α = 1, β = 1      | 4      | 340          | 10              | 0.54                   |

As can be seen in Table 1, the lexicographic order of the objectives has
a clear impact on the quality of the solutions. The number of used wagons is at
its optimal value in the first and fourth combinations, where it is the first
objective to be optimized. Similarly, when the distance is the prior objective (in the
second and third combinations), the minimal travelled distance is obtained.
For each combination, the two objectives number of wagons and PI-containers'
distance are not affected by the variability of the truck arrival times; they are
impacted only by the lexicographic order. Considering the minimization of the
trucks' tardiness, the delays are considerably high when the tardiness objective
is the least prioritized one, as in the first and second combinations. Nevertheless,
in the third and fourth combinations, once the priority of the tardiness is higher
than that of the number of wagons or of the distance, the delays are considerably
reduced, especially for the scenarios where the constraint is weakened (cases where
the values of α and β are low).
When the value of α is low, especially for the first two combinations of the
objective functions, the tardiness is minimized, which results in an improved
solution. In the case of α = 1 and β = 0, an average solution is obtained, equal
to the deterministic scenario. However, once β starts to increase, the tardiness
becomes higher, which provides a more stable solution that can remain feasible
in case of truck delays. Both thresholds α and β can be set by the decision
maker (the manager of the Road-Rail PI-hub) depending on the intensity of the
daily truck congestion. The lexicographic order of the objectives can also be set
depending on the priority and importance of each objective for the Road-Rail
PI-hub.

5 Conclusion

This paper proposes a multi-objective chance constrained programming model
for truck scheduling and PI-container grouping in a Road-Rail cross-docking
PI-hub. The model minimizes three objectives, the number of used wagons, the
PI-containers' distance and the trucks' tardiness, while considering uncertainty
on the inbound truck arrival times. Different combinations of lexicographic order
were considered. The results show that the number of wagons and the PI-containers'
distance are impacted only by the lexicographic order of the objectives and not
by the uncertainty on the truck arrival times. For the trucks' tardiness, the
variation of the uncertainty thresholds α and β has a significant impact on this
objective.
Future research will focus on extending the uncertainty to other factors, such
as the transfer time of PI-containers in the sorting zone. Another extension may
concern the optimization of the problem using robust meta-heuristics or multi-
agent simulation.

Acknowledgements. This research was supported by the ELSAT2020 project of
CPER, sponsored by the French Ministry of Sciences, the Hauts-de-France region and
the FEDER. This work was also supported by the ANR PI-NUTS project (grant ANR-
14-CE27-0015).

References
1. Ballot, E., Montreuil, B., Thivierge, C.: Functional design of physical internet
facilities: a road-rail hub. In: 12th IMHRC Proceedings, Gardanne, France (2012)
2. Boysen, N., Fliedner, M.: Cross dock scheduling: classification, literature review
and research agenda. Omega 38(6), 413–422 (2010)
3. Chargui, T., Bekrar, A., Reghioui, M., Trentesaux, D.: Multi-objective sustain-
able truck scheduling in a rail-road physical internet cross-docking hub considering
energy consumption. Sustainability 11(11), 3127 (2019)
4. Chargui, T., Bekrar, A., Reghioui, M., Trentesaux, D.: Proposal of a multi-agent
model for the sustainable truck scheduling and containers grouping problem in a
road-rail physical internet hub. Int. J. Prod. Res. (2019). https://doi.org/10.1080/
00207543.2019.1660825
5. Chargui, T., Bekrar, A., Reghioui, M., Trentesaux, D.: A simulation-optimization
approach for two-way scheduling/grouping in a road-rail physical internet hub.
IFAC-PapersOnLine 52(13), 1644–1649 (2019). 9th IFAC Conference on Manufac-
turing Modelling, Management and Control MIM 2019
6. Charnes, A., Cooper, W.W.: Chance-constrained programming. Manage. Sci. 6(1),
73–79 (1959)
7. Dubois, D., Prade, H.: Operations on fuzzy numbers. Int. J. Syst. Sci. 9(6), 613–626
(1978)
8. Dubois, D., Prade, H.: Possibility theory. In: Meyers, R.A. (ed.) Encyclopedia of
Complexity and Systems Science, pp. 6927–6939. Springer, New York (2009)
9. Gazzard, N., Montreuil, B.: A functional design for physical internet modular han-
dling containers. In: Proceedings of 2nd International Physical Internet Conference,
Paris, France, 06–08 July 2015
10. Gibson, B.J., Hanna, J.B., Defee, C.C., Chen, H.: The Definitive Guide to Inte-
grated Supply Chain Management: Optimize the Interaction Between Supply
Chain Processes, Tools, and Technologies. Pearson Education, London (2014)
11. Jain, R.: A procedure for multiple-aspect decision making using fuzzy sets. Int. J.
Syst. Sci. 8(1), 1–7 (1977)
12. Konur, D., Golias, M.M.: Analysis of different approaches to cross-dock truck
scheduling with truck arrival time uncertainty. Comput. Ind. Eng. 65(4), 663–672
(2013)
13. Konur, D., Golias, M.M.: Cost-stable truck scheduling at a cross-dock facility with
unknown truck arrivals: a meta-heuristic approach. Transp. Res. Part E Logist.
Transp. Rev. 49(1), 71–91 (2013)
14. Ladier, A.L., Alpan, G.: Cross-docking operations: current research versus industry
practice. Omega 62, 145–162 (2016)
15. Ladier, A.L., Alpan, G.: Robust cross-dock scheduling with time windows. Comput.
Ind. Eng. 99, 16–28 (2016)
16. Landschützer, C., Ehrentraut, F., Jodin, D.: Containers for the physical Internet:
requirements and engineering design related to FMCG logistics. Logist. Res. 8(1),
8 (2015)
17. Liu, B., Iwamura, K.: Chance constrained programming with fuzzy parame-
ters. Fuzzy Sets Syst. 94(2), 227–237 (1998). https://doi.org/10.1016/S0165-
0114(96)00236-9
18. Meng, Q., Wang, T.: A chance constrained programming model for short-term liner
ship fleet planning problems. Maritime Policy Manage. 37(4), 329–346 (2010)
472 T. Chargui et al.

19. Montreuil, B.: Toward a physical Internet: meeting the global logistics sustainabil-
ity grand challenge. Logist. Res. 3(2–3), 71–87 (2011)
20. Montreuil, B., Meller, R.D., Thivierge, C., Montreuil, Z.: Functional design of
physical internet facilities: a unimodal road-based crossdocking hub. In: Progress
in Material Handling Research. vol. 12, edited by B. Montreuil, A. Carrano, K.
Gue, R. de Koster, M. Ogle, and J. Smith, 379–431. MHI, Charlotte, NC, USA
(2013)
21. Pach, C., Sallez, Y., Berger, T., Bonte, T., Trentesaux, D., Montreuil, B.: Routing
management in physical Internet crossdocking hubs: study of grouping strategies
for truck loading. In: Grabot, B., Vallespir, B., Gomes, S., Bouras, A., Kiritsis, D.
(eds.) Advances in Production Management Systems. Innovative and Knowledge-
Based Production Management in a Global-Local World, IFIP Advances in Infor-
mation and Communication Technology, vol. 438, pp. 483–490. Springer, Berlin
Heidelberg (2014)
22. Pan, S., Trentesaux, D., Ballot, E., Huang, G.Q.: Horizontal collaborative trans-
port: survey of solutions and practical implementation issues. Int. J. Prod. Res.
57(15–16), 5340–5361 (2019)
23. Pishvaee, M., Razmi, J., Torabi, S.: Robust possibilistic programming for socially
responsible supply chain network design: a new approach. Fuzzy Sets Syst. 206,
1–20 (2012)
24. Rajabi, M., Shirazi, M.A.: Truck scheduling in a cross-dock system with multiple
doors and uncertainty in availability of trucks. J. Appl. Environ. Biol. Sci 6(7S),
101–109 (2016)
25. Shen, J., Zhu, Y.: Chance-constrained model for uncertain job shop scheduling
problem. Soft. Comput. 20(6), 2383–2391 (2016)
26. Theophilus, O., Dulebenets, M.A., Pasha, J., Abioye, O.F., Kavoosi, M.: Truck
scheduling at cross-docking terminals: a follow-up state-of-the-art review. Sustain-
ability 11(19), 5245 (2019)
27. Van Belle, J., Valckenaers, P., Cattrysse, D.: Cross-docking: state of the art. Omega
40(6), 827–846 (2012)
28. Walha, F., Bekrar, A., Chaabane, S., Loukil, T.M.: A rail-road PI-hub allocation
problem: active and reactive approaches. Comput. Ind. 81, 138–151 (2016)
29. Walha, F., Chaabane, S., Bekrar, A., Loukil, T.: The cross docking under uncer-
tainty: state of the art. In: 2014 International Conference on Advanced Logistics
and Transport (ICALT), pp. 330–335 (2014)
30. Wang, X., Ning, Y.: Uncertain chance-constrained programming model for project
scheduling problem. J. Oper. Res. Soc. 69(3), 384–391 (2018)
A Hybrid Simulation Model to Analyse
and Assess Industrial Sustainability Business
Models: The Use of Industrial Symbiosis

Melissa Demartini(B) , Filippo Bertani, Gianluca Passano, and Flavio Tonelli

Department of Mechanical Engineering, Energetics, Management and Transportation (DIME),


University of Genoa, Via all’Opera Pia 15, 16145 Genoa, Italy
melissa.demartini@dime.unige.it, {filippo.bertani,
gianluca.passano,flavio.tonelli}@unige.it

Abstract. System thinking is the approach required to understand and analyse
complex systems. When systems are complex, their essential properties emerge
from the interactions and relationships between parts that cannot be isolated. In
such a context, the sustainability challenge requires a mutual synergy between
markets and governments to lead the world toward long-term prosperity. To
approach sustainability with a chance of success, it must be seen as a
multidisciplinary challenge involving different systems interconnected through
mutual feedbacks. The scope of this paper is therefore to adopt a hybrid approach
combining Agent Based modelling and System Dynamics that: i) analyses the redesign
of companies based on industrial symbiosis strategies, and ii) evaluates the impact
of specific policies on the transition to industrial symbiosis. Results show that
industrial symbiosis lies at the core of a positive feedback loop which can support
the transition towards a zero-waste economy: the larger the number of firms adopting
symbiosis, the greater the reduction in waste disposal costs, the higher the profits,
and the greater the number of firms engaging in the transition. Policy making can
either amplify or reduce this feedback effect by means of public investments and/or
tax incentives. The model allows different assumptions about the industrial system
to be tested while providing a top-down perspective for designing effective policies
towards a zero-waste economy.

Keywords: Industrial sustainability · Industrial symbiosis · System dynamics · Agent based

1 Introduction
The planet's resources are finite; in 2019 it was estimated that 1.75 Earths would
be needed to regenerate in one year the natural resources consumed. If the
consumption of raw materials rises, so does waste generation. Resource and waste
management are key to meeting the future needs of society in a sustainable manner
(Demartini et al. 2016). Waste prevention activities or policies, such as restricting
planned obsolescence in electronic products, and measures like minimizing product
weight or design for disassembly, will contribute to tackling these issues
(Demartini et al. 2019). A reduction in the consumption

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021


T. Borangiu et al. (Eds.): SOHOMA 2020, SCI 952, pp. 473–482, 2021.
https://doi.org/10.1007/978-3-030-69373-2_34
474 M. Demartini et al.

of natural resources and the amount of waste generated would also be achieved if a
shift to circular economic and production systems, mimicking the self-sustaining
closed-loop systems found in nature (such as the water cycle), were put into
practice. A circular economy aims at transforming waste back into resources by
reversing the dominant linear pattern of extracting, processing, consuming or using,
and then disposing of raw materials, with the ultimate goal of preserving natural
resources while maintaining economic growth and minimizing environmental impacts
(Cobo et al. 2018). To this purpose, 'system thinking' is the approach required to
understand and analyze complex systems.
A complex system can be defined as a 'set of mutually dependent elements which
interact with one another towards a purpose'. In such a context, the sustainability
challenge requires a mutual synergy between markets and governments to lead the
world toward long-term prosperity. To approach sustainability with a chance of
success, it must be seen as a multidisciplinary challenge involving different
systems interconnected through mutual feedbacks (Williams et al. 2017). Industrial
symbiosis (IS) is considered a relevant strategy which can improve all the
dimensions of sustainability. It is based on resource sharing, i.e. waste is used
as raw material for other processes (Chertow 2000). Companies in an industrial
symbiosis context can cooperate to share energy, labour, knowledge, logistics and
expertise (Demartini et al. 2018). How to develop organisations with a sense of
purpose and how to build a sustainable competitive advantage are key challenges.
Policy makers can play a fundamental role in amplifying or reducing the effect of
the circular economy by means of public investments and/or tax incentives, removing
legislative, technological or financial barriers through effective policy measures,
and leading to steady economic growth with business opportunities across the whole
economy. Through subsidies and supportive taxation, policy makers can reduce the
risks of establishing sustainable business models, for example by defining recycling
policies, global standards and goals. It is also important to underline another
critical element that should be carefully analysed: the development of policy that
considers technological advancement in recycling and waste processing, and the
interaction between negative (i.e. pollution, emissions) and positive (i.e.
technological innovation) externalities. However, the complexity of laws and the
diversity of regulations throughout the world can hamper the circular economy. This
research proposes a hybrid approach (Agent Based (AB) and System Dynamics (SD))
that supports the redesign of companies through industrial symbiosis strategies
and evaluates the impact of specific policies on this transition.
The paper is organized as follows. Section 2 describes the main characteristics
of the hybrid framework. Section 3 presents the model implementation, while Sect. 4
reports results and discussions. Finally, Sect. 5 discusses final considerations
and future developments.

2 Hybrid Framework: Modelling Industrial Symbiosis

In order to evaluate the effectiveness of IS in terms of environmental impact and
economic benefits, a case study developed in (Demartini et al. 2018) is analysed
by considering additional factors. The case study involves three different
industrial sectors:
A Hybrid Simulation Model to Analyse and Assess 475

the steel industry, the pulp industry and the cement industry. The cement sector
can buy wastes from the steel and pulp plants; these wastes are used by the various
cement plants in substitution of virgin inputs in their production processes. We
assume that the cement plants produce concrete, composed of inert material and
clinker: steel plants produce an artificial inert that cement plants can use in
substitution of the natural inert, and pulp plants produce an eco-clinker that
cement plants can use in substitution of the natural clinker. The hybrid model is
composed of seven different agent classes: the pulp industry, which produces
artificial clinker as waste; the steel industry, which produces artificial inert
as waste; the cement industry, which can buy wastes from both steel plants and
pulp plants in order to produce a "green concrete"; public landfills providing a
waste disposal service; virgin inert and clinker suppliers acting as IS
competitors; and the Government acting as a regulator.
The different populations live within a main agent that acts as the "environment":
inside this agent, which can be thought of as a country or a region, the populations
are distributed randomly and can interact with each other, developing various
dynamics including IS. The interactions among agents, which are characterized by
finite rationality and bounded computational capabilities, take place through a
decentralized artificial market in which virgin and waste materials are sold and
bought. Another important feature of the model is its partial stock-flow
consistency, from both a monetary and a material (physical) perspective, see Lavoie
and Godley (2014); Godley and Lavoie (2016); Godin and Caverzasi (2014); this
consistency is verified only for the variables of interest for the IS. This
characteristic makes it possible to evaluate the economic impact and benefits of
the IS: each agent (or firm) belonging to the industrial sectors considered is
endowed with a balance sheet detailing its assets and liabilities, so that any
economic benefits are accounted for. The model structure has been implemented
according to a hybrid simulation paradigm which combines Agent-based and System
dynamics modelling; the two paradigms are integrated following a
"process-environment format" in which the System dynamics is embedded in the
Agent-based model and describes the internal structure of the various agents, see
Abdelghany and Eltawil (2017) and Demartini et al. (2018).
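A minimal sketch of this "process-environment format", assuming a toy agent whose manufacturing and economic sub-structures are two stocks integrated with a simple Euler rule at every Agent-based tick; all names, rates and parameters here are invented for illustration, not the paper's calibration.

```python
# Sketch of an agent embedding System-dynamics substructures: the AB layer
# calls step() each tick, and the SD layer integrates two stocks (inventory
# and cash) with an Euler rule. All names and parameters are illustrative.

class PlantAgent:
    def __init__(self, capacity, dt=1.0):
        self.dt = dt
        self.capacity = capacity   # weekly production capacity
        self.inventory = 0.0       # manufacturing substructure: output stock
        self.cash = 0.0            # economic substructure: monetary stock

    def step(self, demand, unit_price, unit_cost):
        production = min(self.capacity, demand)             # inflow
        shipped = min(self.inventory + production, demand)  # outflow
        # Euler integration of the stock-flow equations
        self.inventory += (production - shipped) * self.dt
        # manufacturing data feed the economic substructure (costs/revenues)
        self.cash += (shipped * unit_price - production * unit_cost) * self.dt

agent = PlantAgent(capacity=10.0)
for _ in range(5):
    agent.step(demand=8.0, unit_price=3.0, unit_cost=2.0)
```

The separation mirrors the text: the manufacturing structure computes flows, then passes the data to the economic structure to account costs and revenues.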
The agent-based approach makes it possible to represent the heterogeneity
characterizing a complex IS network, composed of firms each different from the
others; it captures the effect of the interactions between the various industrial
sectors and, moreover, takes into account the external feedbacks originated by the
market dynamics at the macro level. System dynamics captures the internal feedbacks
characterizing both management and manufacturing dynamics, see Sterman (2000);
Forrester (1961). Each agent (or firm) is composed of two distinct System dynamics
sub-structures: the manufacturing process structure and the economic one. Although
these substructures are distinct, the manufacturing process structure passes data
to the economic one in order to account for costs and revenues and to set the
output price through a mark-up mechanism. The various firms differ in geographic
location and size: they are randomly distributed inside a fictitious square region,
with positions drawn from two uniform probability distributions, one for each
reference axis ("X" and "Y"), while firm size is represented by production
capacities that follow a Pareto distribution.
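The firm placement and sizing just described can be sketched as follows; the region side, Pareto shape and scale are assumed values, not the paper's calibration.

```python
import random

# Sketch: firms scattered uniformly over a square region, with production
# capacities drawn from a Pareto distribution. Parameters are illustrative.

def make_firms(n, side=100.0, alpha=1.5, scale=10.0, seed=42):
    rng = random.Random(seed)
    return [
        {
            "id": i,
            "x": rng.uniform(0.0, side),                   # uniform on X axis
            "y": rng.uniform(0.0, side),                   # uniform on Y axis
            "capacity": scale * rng.paretovariate(alpha),  # heavy-tailed size
        }
        for i in range(n)
    ]

firms = make_firms(30)
```

A Pareto draw yields a few large plants and many small ones, which is the heterogeneity the agent-based layer is meant to capture.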

Fig. 1. Flowchart representing the IS decision-making process of a generic cement plant i;
rectangles indicate actions, while rhombi represent decisions

Figure 1 shows the IS decision-making process of cement plants: the rectangles
indicate actions, while the rhombi represent decisions. There are two different
virgin material prices (one for each material), and it is assumed that they are
adopted by all the virgin material suppliers. These prices can change during the
simulation following the market dynamics; essentially, they are influenced by the
interaction between demand and supply. The virgin material price dynamics aims to
represent the effect of a widespread IS on prices: the higher the quantity of
waste recycled, the lower the aggregate demand for virgin materials and,
accordingly, the lower the prices. Obviously, the price of a given good cannot be
influenced solely by an IS; the occurrence of other events on the market is also
modelled. The dependence of prices on the interaction between supply and demand is
a standard approach, and in our model it is consistent with Say's law of markets:
when the market has an excess supply, prices decrease, tending to bring the system
back to the previous equilibrium by increasing demand. Furthermore, from the virgin
material suppliers' perspective, this mechanism can represent their attempt to
compete against the IS itself.
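One possible reading of this price mechanism is a multiplicative update driven by the relative market imbalance; the adjustment speed k below is an assumption, not a parameter reported in the paper.

```python
# Sketch of the virgin-material price dynamics: excess supply lowers the
# price, excess demand raises it. The adjustment speed k is assumed.

def update_price(price, demand, supply, k=0.1):
    if supply <= 0.0:
        return price
    excess = (demand - supply) / supply   # relative imbalance on the market
    return max(0.0, price * (1.0 + k * excess))

p = 100.0
# Widespread IS reduces aggregate virgin-material demand -> the price falls.
for _ in range(3):
    p = update_price(p, demand=80.0, supply=100.0)
```

With demand persistently 20% below supply, the price decays by 2% per step under this rule, reproducing the "excess supply pushes prices down" behaviour described above.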

3 Industrial Symbiosis Dynamics


The symbiotic exchanges involve two different parties, namely buyers and sellers,
and for this reason the IS uses two different fitness conditions whose task is to
verify its economic feasibility. For sellers, the fitness condition is represented
by the following relation:

cp − pw ≤ l + ctrl (1)

where cp is the pre-processing cost which a firm must bear in order to adapt the
waste features to the needs of the receiving industrial plant, pw is the waste
material selling price, l is the landfill tax, and ctrl is the transportation cost
to the landfill. This fitness function verifies the existence of an economic
convenience for the industrial plants which sell waste: it compares the potential
revenue of a hypothetical IS with the traditional waste management costs. In the
absence of symbiotic exchanges, steel plants and pulp plants search for the nearest
landfills in order to dispose of their waste.
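Relation (1) can be checked directly per seller; the numbers below are illustrative, not taken from the model's data.

```python
# Fitness condition (1) for a waste seller: a symbiotic exchange is viable
# when the net cost of selling the waste (pre-processing cost cp minus the
# selling price pw) does not exceed landfilling (tax l plus transport ctrl).

def seller_fit(cp, pw, l, ctrl):
    return cp - pw <= l + ctrl

# Illustrative numbers: pre-processing 8, waste price 5, landfill tax 2,
# transport to the landfill 2 -> net selling cost 3 <= landfill cost 4.
viable = seller_fit(cp=8.0, pw=5.0, l=2.0, ctrl=2.0)
```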
The waste price pw is crucial in order to establish an IS. Because of the
competition with virgin material suppliers to sell their products, sellers link
their pw to the virgin material price pv according to the following relation:

pw = h · pv (2)

where h is varied by the seller in order to find a symbiotic partner. The seller
reduces h until it establishes an IS or reaches the minimum price that preserves
the economic feasibility of the IS. Under this modelling assumption, sellers always
put their products on the market even if prices are not aligned with the average
value. The landfill tax l represents a demand-side policy which aims to stimulate
the waste demand inside the economic system by decreasing the average minimum price
at which the IS remains feasible: the higher the value of l, the lower the minimum
price.
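A sketch of this search, assuming a stepwise reduction of h: rearranging relation (1) gives the minimum feasible waste price pw_min = cp − l − ctrl, and the buyer-acceptance rule below is a placeholder, not the paper's full decision process.

```python
# Sketch of the seller's price search: pw = h * pv (relation 2); h is reduced
# stepwise until a buyer accepts or pw falls below the minimum price implied
# by relation (1), pw_min = cp - l - ctrl. The buyer rule is a placeholder.

def find_waste_price(pv, cp, l, ctrl, buyer_accepts, h0=1.0, dh=0.1):
    pw_min = max(0.0, cp - l - ctrl)   # lowest pw that keeps (1) satisfied
    h = h0
    while True:
        pw = h * pv
        if pw < pw_min:
            return None                # no feasible symbiosis at this pv
        if buyer_accepts(pw):
            return pw                  # IS established at this waste price
        h -= dh

# Placeholder buyer: accepts any waste price below 75% of the virgin price.
pw = find_waste_price(pv=100.0, cp=60.0, l=10.0, ctrl=10.0,
                      buyer_accepts=lambda price: price < 75.0)
```

Raising the landfill tax l lowers pw_min, widening the range of prices at which the seller can keep searching, which is exactly the demand-side effect described above.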

3.1 The Government Agent

Over the years, various governmental actions have been implemented, and the
fundamental role of Government is highlighted by the results of several national
programs. One of the best-known governmental initiatives related to IS is the
NISP (National Industrial Symbiosis Programme), funded in the UK by public
institutions; the efficacy of this initiative proves that policy makers can clearly
stimulate the establishment of IS networks, see Mirata (2004). Government action
can work on several fronts, in particular at the fiscal, informational and
organizational levels. Therefore, not only can top-down IS be promoted and designed
by politicians, but a favourable environment for bottom-up, self-organized IS can
also be created using specific economic and fiscal policies, for example tax
incentives, subsidies and landfill taxes. Thus, policies can encourage the creation
of an environment in which IS constitutes a real economic benefit for the various
industrial plants and an environmental and social advantage for the entire
community. Furthermore, the use and effectiveness of these policies have been
widely studied, also from an agent-based modelling perspective, see Albino et al.

(2016); Fraccascia et al. (2017). In order to study the role of policy makers in
promoting the birth of self-organized IS, we develop an enriched version of the
model which includes the Government agent. The aim of this special agent is to
foster the creation of symbiosis between companies in a financially sustainable
way by mixing two different market-based policies, i.e. a landfill tax and an
economic subsidy. Since the agent populations are heterogeneous, IS between the
three populations is promoted through different levels of taxes and subsidies.
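A sketch of such a self-financing policy mix, assuming a simple rule in which landfill-tax revenue builds a reserve that pays subsidies for new symbiotic links, and the tax is raised while symbiosis stays below a target share; the rule and all parameters are invented for illustration.

```python
# Sketch of the Government agent: landfill-tax revenue feeds a reserve that
# finances subsidies for newly established IS; the tax is raised while the
# symbiosis share stays below a target. Rule and parameters are illustrative.

class Government:
    def __init__(self, landfill_tax=1.0, subsidy=0.5, target_share=0.5):
        self.landfill_tax = landfill_tax
        self.subsidy = subsidy          # paid per newly established IS
        self.target_share = target_share
        self.reserve = 0.0

    def collect(self, landfilled_tonnes):
        self.reserve += self.landfill_tax * landfilled_tonnes

    def pay_subsidies(self, new_symbioses):
        cost = min(self.reserve, self.subsidy * new_symbioses)
        self.reserve -= cost            # never spend more than the reserve
        return cost

    def adjust(self, symbiosis_share):
        if symbiosis_share < self.target_share:
            self.landfill_tax *= 1.1    # push more waste onto the market

gov = Government()
gov.collect(landfilled_tonnes=100.0)        # reserve grows to 100
paid = gov.pay_subsidies(new_symbioses=40)  # min(100, 0.5 * 40) = 20
gov.adjust(symbiosis_share=0.2)             # below target -> tax raised
```

Because subsidies are capped by the reserve, the mix stays financially sustainable by construction: no external expenditure is ever needed.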

4 Sensitivity Analysis of the Baseline Model

In this section the results of the sensitivity analysis are reported. The
investigation concerns two of the most important quantities influencing the
economic feasibility of the IS, namely the landfill tax l and the economic subsidy.
First, the sensitivity analysis of the landfill tax is presented, in the absence
of the economic subsidy.

Fig. 2. For each value of the landfill tax l, the figure shows the median, first and third
quartiles of the distributions over 20 seeds of the time averages of: the artificial clinker
exchanged every week (a), the average artificial clinker price (b), the virgin clinker used in
the production process every week (c), and the number of IS established between pulp and cement
plants (d). Time averages refer to a two-year time period.

In this respect, as shown in Fig. 2 (a), increasing the landfill tax l enhances
the symbiotic exchanges between industrial sectors; a higher landfill tax
stimulates potential sellers to put their waste on the market, trying to sell it
even at a lower price, as visible in Fig. 2 (b). The landfill tax controls,
together with the other variables in Eq. 1, the minimum price threshold at which
companies are willing to sell their waste instead of disposing of it in a public
landfill. It is interesting to note that in Fig. 2 (a) the data distributions
assume an S-shape as the landfill tax increases, reaching a maximum range of
artificial material exchanged, and this is also reflected by the number of IS
established, see Fig. 2 (d); this happens

even though the waste production of both steel plants and pulp plants can saturate
the cement industry's input demand.
As mentioned above, the sizes of the steel plant and pulp plant sectors are related
to the size of the cement plant population: in case of a full IS between industrial
sectors, the input demand of the cement industry is saturated by the waste produced
by the other sectors, and the various cement plants do not use virgin material in
their production processes. The landfill tax is not the only feasibility factor
which can condition the establishment of an IS, see Eq. 1. The model takes into
account various economic variables, such as the pre-processing costs cp, which
represent a further economic barrier to the feasibility of symbiosis; waste needs
to be transformed in order to be adopted as input by other industrial processes,
and the pre-processing activities are executed internally by the waste producers.
Furthermore, geographical factors can hinder symbiotic exchanges from both the
sellers' and buyers' perspective: steel and pulp plants could be located near a
public landfill while, at the same time, cement plants may be closer to virgin
material sellers. For this reason, notwithstanding the high sensitivity of the
model to the landfill tax, the virgin material consumption always remains greater
than zero, as shown in Fig. 2 (c).
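The experimental protocol behind Fig. 2 (20 seeds per tax value, time averages, median and quartiles) can be sketched as below; run_model is a toy stand-in for the full hybrid model, with an invented response of exchanges to the tax.

```python
import random
import statistics

# Sketch of the sensitivity-analysis protocol: for each landfill-tax value,
# run several seeded replications, time-average the weekly output, and report
# the first quartile, median and third quartile over the replications.
# run_model is a toy stand-in for the full hybrid AB-SD model.

def run_model(landfill_tax, seed, weeks=104):
    rng = random.Random(seed)
    # Invented response: exchanges grow with the tax, plus weekly noise.
    weekly = [10.0 * landfill_tax + rng.uniform(-1.0, 1.0) for _ in range(weeks)]
    return statistics.mean(weekly)     # time average over a two-year horizon

def sweep(tax_values, n_seeds=20):
    summary = {}
    for tax in tax_values:
        runs = [run_model(tax, seed) for seed in range(n_seeds)]
        q1, med, q3 = statistics.quantiles(runs, n=4)
        summary[tax] = (q1, med, q3)
    return summary

summary = sweep([0.5, 1.0, 2.0])
```

Reporting quartiles over seeds, rather than a single run, is what separates the effect of the tax from stochastic variation in the model.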

5 Computational Results: Policy Evaluation

In this section, results from the enriched version of the model are presented. The
Government intervenes dynamically in order to foster the birth of self-organized
IS between firms by controlling two different policies: the landfill tax and the
economic subsidy. Figure 3 shows the reserves owned by the Government during the
simulation (S_G^PP and S_G^SP).

Fig. 3. Three time series representing the reserves of money owned by the Government and used
to finance economic expenditures: the total money owned by the Government S_G, the reserves
related to the SP–CP symbiosis S_G^SP, and the reserves related to the PP–CP symbiosis S_G^PP.
The time series refer to a specific replication which is representative of the system's average
trend.

Figure 4 shows the raw materials used in the production processes each week during
the simulation. At the beginning of the simulation, the cement plants (CPs) mainly
purchase materials from virgin suppliers because of the economic convenience of
their products. It is worth noting that there is also a certain number of
self-organized IS inside the economy before a

Fig. 4. Time series of: virgin and artificial clinker used in the CPs' production processes (a),
and virgin and artificial inert used in the CPs' production processes (b). All time series refer
to a specific replication which is representative of the system's average trend.

direct Government intervention. As soon as the Government starts intervening, the
level of symbiosis inside the economy increases. Around the hundredth week, the
consumption of symbiotic material experiences a period of slight oscillation,
related to the competitive behaviour of the virgin material suppliers that try to
counter the IS practice. However, further interventions are able to stabilize the
system, encouraging symbiosis at the expense of the virgin material suppliers.
Even though these suppliers fight the IS during the simulation, the Government
policies also stimulate a learning-by-doing mechanism related to the
pre-processing costs which reinforces the symbiosis itself: the lower the
pre-processing costs, the lower the minimum price at which the symbiosis is
economically feasible for both the steel plants (SPs) and the pulp plants (PPs).
Halfway through the simulation, the quantity of symbiotic material sold surpasses
the virgin one, pointing out the effectiveness of the Government's policy mix. At
the end of the simulation, the situation is reversed compared to the beginning:
the quantity of symbiotic material sold is considerably higher than the virgin one
for both SPs and PPs. Furthermore, it is worth highlighting that this remarkable
result is achieved without external financial interventions, but only by mixing
these two policies.
Therefore, the model shows the possibility of promoting the creation of
self-organized IS inside the economy in an effective and sustainable way, avoiding
external financial expenditures. This point turns out to be crucial because it
demonstrates the possibility of promoting the transition towards a circular
economy effectively, without forcing firms but only by creating an economic
environment in which industrial innovation is convenient. Obviously, in order to
obtain these results, significant efforts are required to organize and implement
the Government strategy. In this respect, the Government should activate specific
taskforces devoted to detecting industrial sectors potentially interested in this
practice and to monitoring the symbiosis trend within the economic system.

6 Conclusions

This paper proposes the application of a hybrid approach to focus efforts towards the
(re)modelling of industrial symbiosis processes and effects on the transition towards

a zero-waste economy. The success of an industrial symbiosis transformation is
determined by government intervention. Policy makers have the responsibility to
safeguard the environment and, accordingly, the well-being of people, while
companies try to maximize their profits; most of the time, environmental
protection is in conflict with the interests of industrial plants. Notwithstanding
these considerations, the parallel work of these entities can produce great
results. As far as companies are concerned, learning economies, specific research
and the willingness to abandon the classic production paradigms in order to find
new sources of profit confer on IS networks a certain degree of resilience.
As regards the Government, a monitoring activity combined with instruments such
as economic subsidies, fiscal incentives (for those firms involved, or that would
like to be involved, in an IS) or landfill taxes can favour the reinforcement (as
well as the development) of a prosperous environment for bottom-up symbiosis and,
obviously, an increase in the resilience of the industrial network. Policy makers
can clearly stimulate the establishment of IS networks; government action can work
on several fronts, in particular at the fiscal, informational and organizational
levels. So, not only can top-down IS (the well-known Eco-Industrial Parks) be
promoted by politicians, but a favourable environment for bottom-up, self-organized
IS can also be created using specific economic and fiscal policies, for example
tax incentives, economic subsidies for industrial plants which engage in an IS, or
landfill taxes. Thus, policies can encourage the creation of an environment in
which IS constitutes a real economic benefit for the various industrial plants and
an environmental and social benefit for the entire community. The use and
effectiveness of these policies have been widely studied, also from an agent-based
modelling perspective, and their benefits are considered a fact. For this reason,
a model consistent with reality must react positively to these policies.

References
Demartini, M., Orlandi, I., Tonelli, F., Anguita, D.: Investigating sustainability as a performance
dimension of a novel manufacturing value modeling methodology (MVMM): from sustain-
ability business drivers to relevant metrics and performance indicators. XXI Summer School
“Francesco Turco” (2016)
Demartini, M., Evans, S., Tonelli, F.: Digitalization technologies for industrial sustainability.
Procedia Manuf. 33, 264–271 (2019)
Cobo, S., Dominguez-Ramos, A., Irabien, A.: From linear to circular integrated waste management
systems: a review of methodological approaches. Resour. Conserv. Recycl. 135, 279–295 (2018)
Williams, A., Kennedy, S., Philipp, F., Whiteman, G.: Systems thinking: a review of sustainability
management research. J. Clean. Prod. 148, 866–881 (2017)
Chertow, M.R.: Industrial symbiosis: literature and taxonomy. Ann. Rev. Energy Environ. 25,
313–337 (2000)
Demartini, M., Tonelli, F., Bertani, F.: Approaching industrial symbiosis through agent-based
modeling and system dynamics. In: Studies in Computational Intelligence, vol. 762, pp. 171–
185. Springer (2018)
Lavoie, M., Godley, W.: Kaleckian models of growth in a coherent stock-flow monetary framework.
In: The Stock-Flow Consistent Approach (2014)
Godley, W., Lavoie, M.: Monetary Economics: An Integrated Approach to Credit, Money, Income,
Production and Wealth. Palgrave Macmillan, UK (2016)

Godin, A., Caverzasi, E.: Post-Keynesian stock-flow-consistent modelling: a survey. Camb. J.
Econ. 39, 157–187 (2014)
Abdelghany, M., Eltawil, A.B.: Linking approaches for multi-methods simulation in healthcare
systems planning and management. Int. J. Ind. Syst. Eng. 26(2), 1–6 (2017)
Demartini, M., Bertani, F., Tonelli, F.: AB-SD hybrid modelling approach: a framework for eval-
uating industrial sustainability scenarios. In: International Workshop on Service Orientation in
Holonic and Multi-Agent Manufacturing, pp. 222–232. Springer, Cham (2018)
Sterman, J.: Business Dynamics: Systems Thinking and Modeling for a Complex World. MIT
Press, Cambridge (2000)
Forrester, J.W.: Industrial Dynamics. MIT Press, Cambridge (1961)
Mirata, M.: Experiences from early stages of a national industrial symbiosis programme in the
UK: determinants and coordination challenges. J. Clean. Prod. 12, 967–983 (2004)
Albino, V., Fraccascia, L., Giannoccaro, I.: Exploring the role of contracts to support the emergence
of self-organized industrial symbiosis networks: an agent-based simulation study. J. Clean. Prod.
112, 4353–4366 (2016)
Fraccascia, L., Giannoccaro, I., Albino, V.: Rethinking resilience in industrial symbiosis:
conceptualization and measurements. Ecol. Econ. 137, 148–162 (2017)
Optimal Production and Supply Chain
Planning
Realization of an Optimal Production Plan
in a Smart Factory with On-line Simulation

Hugo Zupan(B), Marko Šimic, and Niko Herakovič

Faculty of Mechanical Engineering, University of Ljubljana, Ljubljana, Slovenia
{hugo.zupan,marko.simic,niko.herakovic}@fs.uni-lj.si

Abstract. The successful improvement of the competitiveness of companies depends largely on the efficiency of assembly and handling systems and processes (AHSP). Their efficiency can be increased by various optimization methods, especially with regard to cost reduction, shortening of throughput and delivery times, increased utilization of plant capacity, etc. One of the most effective methods for optimizing such systems is optimization with on-line simulation. In this paper we present an innovative expert system and an innovative methodology for on-line simulation, in which conventional off-line simulation is extended with a digital twin and digital agents. This enables continuous control and ongoing optimization of the real production system and process. We have connected the digital AHSP with the real system via the cloud, thereby creating all the necessary framework conditions for on-line simulation and thus developing an expert system. The expert system is in constant connection with the real system and continuously monitors and optimizes it. The methodology for intelligent algorithms, digital agents and digital twins provides a framework for their practical application in a real production environment.

Keywords: On-line simulation · Smart factory · Digital twin · Digital agent · Intelligent algorithm · Expert system

1 Introduction

The optimization of assembly and handling systems and processes (AHSP) is important to reduce costs, shorten lead times, delivery times, etc. and thus ensure the competitiveness of companies. It has been clearly shown [1–3] that failures can lead to a reduction in overall equipment effectiveness (OEE), which can account for up to 50% of costs. For this reason, it is essential to optimize the AHSP. To optimize the AHSP, different approaches and methods have been used to effectively achieve the optimal or near-optimal solution [4]. One of the most effective methods to optimize such systems is optimization with on-line simulation using digital twins and digital agents [5–7].
For research and testing purposes of the expert system, and for its real integration into the process control system of the production line, we have set up a production line in
the laboratory. The production line consists of six assembly workstations, intermediate
buffers at the assembly workstations, two handling industrial robots, variable tools with

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021


T. Borangiu et al. (Eds.): SOHOMA 2020, SCI 952, pp. 485–495, 2021.
https://doi.org/10.1007/978-3-030-69373-2_35
six disposal stations and a material warehouse with a capacity of 45 storage locations for
incoming material and end products (Fig. 1). More than 3.67 × 10^70 different products
and variants can be assembled on the production line.

Fig. 1. Production line of an assembly and handling system and process

Our approach combines digital agents with the digital twin, which allows quick and smart decisions to be made automatically. It is well known [8, 9] that discrete event simulation or the digital twin is a very effective tool for "what-if" scenarios for any kind of production system. In our case, we have transformed a real production system with all its features and limitations into a virtual factory and upgraded the digital twin with digital agents. The task of the digital agents is to receive input from the digital twin and, on this basis, find a quick or best solution.
Each digital agent has its own work or task: to find a solution quickly and automatically as soon as it receives a request from the digital twin, and to return this solution to the digital twin. Digital agents communicate directly with the digital twin. Some digital agents also communicate with each other, further accelerating the operation of the digital twin itself. Digital agents work completely independently of each other, but each is also connected to a global digital agent, whose main task is to coordinate the proper operation of all other digital agents.
The developed system of digital twins and digital agents was integrated into the expert
system (Fig. 2) and combined with the real production system via the cloud. The expert system receives all input data about the production system and the orders (production plan) via the cloud. The expert system then calculates an optimal production plan and sends it through the cloud to the real system, where the implementation of this optimal plan begins. The expert system also checks the effects of disturbances that occur in the real system and, if necessary, corrects the production plan so that it eliminates the influence of the disturbances or the disturbance itself, and sends the data through the cloud to the real system - all in real time.

Fig. 2. On-line simulation using the digital twin as an additional expert system integrated into the process

The remainder of this article is structured as follows. In the next section, the research and development of the expert system is outlined, followed by the combination of the expert system with on-line simulation in Sect. 3. Concluding remarks can be found in Sect. 4.

2 Research and Development of an Expert System

The main element of the expert system is the digital twin (DT) of the process, since it is the backbone of a smart factory. The digital twin was built in Tecnomatix Plant Simulation. The research and development of the AHSP digital twin was done in two basic steps:

• logical design of the digital twin model, and
• computer model of the digital twin.

2.1 Logical Design of the Digital Model

The basic goal of this step is to obtain information about the chronological sequence of orders via the digital twin. For this reason, we have converted the entire AHSP of the production line and its segments into digital form (Fig. 3).
Fig. 3. Logical scheme of the AHSP line

When the digital twin of the AHSP is built, the model itself must be designed to allow inputs that are intended for actual production. The model allows the use of different input data and is fully parametric. At the same time, it captures all essential features of the actual AHSP.
Based on the assumptions of the digital AHSP model and the characteristics of the actual AHSP, we have designed a logical scheme of the production model. In off-line
mode, the twin operates using the input data defined in Fig. 3 as boundary conditions of
the model. All other required data, parameters (times of individual operations, material
flow logistics, allowed movements of robots, times of installation and maintenance of
robots, conveyor speeds, etc.), constraints, characteristics and resources are obtained by
the digital twin from the knowledge base of the production process. Based on these input
data, constraints, properties and resources, the digital twin calculates the outputs. These
outputs are available in the form of indexes, tables, graphs or indicators.

2.2 Digital Twin

We programmed and built the digital twin (Fig. 4) in Tecnomatix Plant Simulation (PS). In the DT, simple logical dependencies of the assembly and handling processes are modelled with standard PS software objects. More complex logical dependencies, especially the mathematical algorithms representing the digital agents (DA), are created as methods or subroutines in the SimTalk programming language.

2.3 Digital Agents

As mentioned above, each digital agent (DA) has its own job or task. A DA works with local data and therefore allows for rapid data analysis and computational operations, which in combination with the digital twin allows quick and smart decisions. The task of the digital agents is to receive input from the digital twin and to find a quick or best
solution based on inputs. In this way, the digital agents enrich the digital twins and on
the other hand speed up the operation of the digital twins. It is very important that the
digital twin is properly verified and validated along with all digital agents, because only
then can we trust the results that the expert system provides us with.

Fig. 4. Traditional “off-line” simulation process

In our case, we have built several digital agents that cover a variety of important
tasks within the process:

• D.A.I. - a digital agent that sets the correct initial state of the warehouse, buffers,
robots and conveyor in the digital twin;
• D.A.O. - a digital agent that checks which orders can be carried out on the basis of
the desired orders received from the production plan, using the available resources;
• D.A.M. - a digital agent that plans the order production on assembly stations;
• D.A.J. - a digital agent that generates all handling and assembly operations for robots;
• D.A.D. - a digital agent responsible for quality control and troubleshooting;
• G.D.A. - a global digital agent that oversees the operation of all digital agents and
ensures the correct sequence of communications, and
• D.A.A. - a digital agent with an expert system designed to optimize the production
schedule using an intelligent algorithm.

Digital agents communicate directly with the digital twin; some digital agents also communicate with each other, which further accelerates the operation of the digital twin itself. Digital agents work completely independently of each other, but there is also a connection to a global digital agent. The global digital agent's main task is to coordinate the proper sequence of operation of all other DAs (see Fig. 5).

Fig. 5. Digital agents and their communication with the digital twin and with each other

3 Combining the Digital Twin and Digital Agents with the Expert
System

After proper validation and verification of the digital twin and all agents, we merged the digital twin with all digital agents. The last agent we merged was the DA that includes an expert system and is responsible for production plan optimization (D.A.A.), which includes our own intelligent Flip and Insert (FI) algorithm [10]. D.A.A. provides or calculates the optimal or near-optimal production plan based on all input parameters. The expert system has been integrated into the digital twin using program code and tables. Before starting the production plan optimization, D.A.A. is provided with a production plan specified by D.A.N.
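The FI algorithm itself is detailed in [10]; as a rough, generic illustration of the two permutation moves its name suggests (our own sketch, not necessarily the authors' exact operators), a candidate production plan represented as an ordered list of order IDs can be perturbed like this:

```python
def flip(plan, i, j):
    """Reverse the sub-sequence of orders between positions i and j (inclusive)."""
    plan = list(plan)
    plan[i:j + 1] = reversed(plan[i:j + 1])
    return plan

def insert(plan, i, j):
    """Remove the order at position i and re-insert it at position j."""
    plan = list(plan)
    order = plan.pop(i)
    plan.insert(j, order)
    return plan

# A production plan as an ordered list of order IDs:
plan = ["O1", "O2", "O3", "O4", "O5"]
print(flip(plan, 1, 3))    # ['O1', 'O4', 'O3', 'O2', 'O5']
print(insert(plan, 0, 3))  # ['O2', 'O3', 'O4', 'O1', 'O5']
```

A local-search heuristic would apply such moves repeatedly and keep the variants that the digital twin rates better.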
Three cyclical steps are required for the expert system to function properly:

• In the first step, D.A.A. creates the n-th iteration of the production plan, which it sends to the digital twin;
• In the second step, the digital twin checks and calculates or gives a solution for this n-th iteration of the production plan; and
• In the third step, the digital twin sends this solution back to the expert system (D.A.A.), which checks this solution and decides whether it is the final solution (optimal or near-optimal) or whether a new iteration begins with the first step.
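These three steps form a classic simulation-optimization loop. A minimal sketch in Python (the function names are ours: `evaluate_plan` stands in for a digital-twin simulation run and `propose_plan` for the FI-based proposal step of D.A.A.; the toy stand-ins below are illustrative only):

```python
import random

def optimize_plan(initial_plan, evaluate_plan, propose_plan, max_iterations=100):
    """Iterate: D.A.A. proposes a plan (step 1), the digital twin evaluates
    it (step 2), and D.A.A. keeps the best plan found so far (step 3)."""
    best_plan = initial_plan
    best_time = evaluate_plan(initial_plan)
    for _ in range(max_iterations):
        candidate = propose_plan(best_plan)        # step 1: new iteration of the plan
        candidate_time = evaluate_plan(candidate)  # step 2: DT simulates the plan
        if candidate_time < best_time:             # step 3: keep it if it is better
            best_plan, best_time = candidate, candidate_time
    return best_plan, best_time

# Toy stand-ins: minimize the sum of order completion times on one resource
durations = {"O1": 5, "O2": 3, "O3": 8}

def evaluate(plan):
    t, total = 0, 0
    for order in plan:
        t += durations[order]
        total += t
    return total

def propose(plan):
    return random.sample(plan, len(plan))  # random re-ordering of the plan
```

In the paper's system the proposal step is the FI algorithm rather than random re-ordering, and the evaluation is a full discrete-event simulation run in the DT.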

With the expert system and D.A.A., various target parameters can be optimized. In
our case we define two target parameters:
• The first target parameter is the total throughput time of all orders (t_ALL), which should be kept as short as possible, and
• The second target parameter is the minimization of the total processing time of the individual jobs (*t_ALL).

Our experiment consists of six different scenarios. Each scenario contains different
orders. The results with the first target parameter are shown in Table 1 and the results
with the second target parameter in Table 2.

Table 1. Optimization results of individual scenarios with expert system and FI algorithm if the target parameter is the total flow time of all orders

Scenario | Initial time t_ALL | Optimized time t_ALL | Improvement [%] | Calculating time
1        | 1 h 44 s           | 59 min 53 s          | 1.40            | 58 s
2        | 1 h 3 s            | 59 min 3 s           | 1.67            | 58 s
3        | 1 h 4 min 48 s     | 1 h 4 min 28 s       | 0.51            | 40 s
4        | 1 h 4 min 53 s     | 1 h 4 min 33 s       | 0.51            | 40 s
5        | 1 h 8 min 56 s     | 1 h 5 min 20 s       | 5.22            | 1 min 3 s
6        | 1 h 9 min 12 s     | 1 h 6 min 14 s       | 4.29            | 1 min 43 s

Table 2. Optimization results of individual scenarios with expert system and FI algorithm if the target parameter is the sum total of all the flow times of individual orders.

Scenario | Initial time *t_ALL | Optimized time *t_ALL | Improvement [%] | Optimized time t_ALL | Calculating time
1        | 9 h 35 min 55 s     | 7 h 25 min 19 s       | 29.33           | 59 min 53 s          | 1 min 39 s
2        | 7 h 31 min 56 s     | 7 h 13 min 11 s       | 4.33            | 59 min 3 s           | 1 min 21 s
3        | 9 h 38 min 42 s     | 8 h 14 min 51 s       | 16.94           | 1 h 4 min 28 s       | 1 min 28 s
4        | 9 h 48 min 54 s     | 8 h 17 min 39 s       | 18.34           | 1 h 4 min 33 s       | 1 min 14 s
5        | 9 h 52 min 23 s     | 8 h 38 min 43 s       | 14.20           | 1 h 5 min 20 s       | 1 min 16 s
6        | 12 h 6 min 5 s      | 8 h 38 min 43 s       | 38.04           | 1 h 6 min 14 s       | 1 min 47 s

From Tables 1 and 2 we can see that with the expert system and the FI algorithm we
can successfully optimize the production plan according to various target parameters.
At first sight, the results in Table 1 do not seem good, but it should be considered that the reason for this is mainly the long operating times, so the order plan itself does not have such an influence on the end time of all orders. Table 2 shows that a different order scheduling has a great influence on the sum of all throughput times of the individual orders (up to 38% improvement). However, it is interesting to note that in both cases we get the same throughput times of all orders. This means that it makes more sense to take the sum of all order times (*t_ALL) as the target parameter, since this optimizes each order individually as well as the throughput time of all orders (t_ALL).
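The improvement percentages in Table 1 can be reproduced directly from the reported times; a small helper of our own for scenarios 1 and 5:

```python
def to_seconds(h=0, m=0, s=0):
    """Convert an 'h min s' table entry into seconds."""
    return 3600 * h + 60 * m + s

def improvement(initial_s, optimized_s):
    """Relative reduction of the throughput time, in percent of the initial time."""
    return 100.0 * (initial_s - optimized_s) / initial_s

# Scenario 1: 1 h 44 s -> 59 min 53 s
print(round(improvement(to_seconds(h=1, s=44), to_seconds(m=59, s=53)), 2))  # 1.4
# Scenario 5: 1 h 8 min 56 s -> 1 h 5 min 20 s
print(round(improvement(to_seconds(1, 8, 56), to_seconds(1, 5, 20)), 2))     # 5.22
```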

3.1 Transforming the Expert System into On-line Simulation

The connection between the real system and the digital world was established via the cloud using the SQLite library. In order for the system to function properly, it is
important that all commands and interfaces are executed in the correct order. This is
done by a global digital agent that acts as the conductor of the entire system, sending
commands to other digital agents that perform their tasks. In our system, the sequence
of steps is as follows (Fig. 6):

Fig. 6. The order in which steps are taken to successfully combine the real system and the digital
system (digital twin and digital agents).

1. G.D.A. sends a command to D.A.I. to set the current state of the real system in DT.
2. D.A.I. retrieves information about the current system via the cloud.
3. D.A.I. sets the initial state to DT, which reflects the real state of the real system.
4. G.D.A. sends D.A.O. the command to add orders to DT.
5. D.A.O. retrieves a production plan from the ERP system and checks which orders
can/cannot be made based on the boundary conditions of the system.
6. D.A.O. forwards orders that can be made in DT.
7. G.D.A. sends D.A.M. the command to sort the work on orders by individual assembly workstations.
8. D.A.M. selects the optimum workstation for each order issued by D.A.O. and sends
this information to DT.
9. G.D.A. sends D.A.A. the command to create an optimal production plan.
10. D.A.A. determines the optimal production plan through the iteration loop and DT.
11. G.D.A. sends the command to D.A.J. for sequencing and generating operations on
robots.
12. D.A.J. generates the operations that the robots must perform and their sequence
according to the orders. It then passes all this information to DT.
13. DT then calculates the scripts based on all the digital agents’ data obtained and
transmits all the data on that scenario back to the cloud.
14. The production plan is then started in the real system.
15. If a disturbance occurs in the real system, G.D.A. sends a command to D.A.D. to identify the disturbance and, if possible, correct it.
16. D.A.D. checks the nature of the fault and suggests appropriate remedial action. It
sends these measures back to DT.

DT then returns to step 13, and the steps are repeated until the production plan is successfully completed.
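In essence, the sequence above is the global agent dispatching commands to the local agents in a fixed order and feeding each result into the digital twin. A schematic sketch with stub classes (our own simplification; the real agents run inside Plant Simulation, not as Python objects):

```python
class StubAgent:
    """Minimal stand-in for a local digital agent (D.A.I., D.A.O., ...)."""
    def __init__(self, name):
        self.name = name

    def execute(self, command):
        return f"{self.name}: {command}"

class StubTwin:
    """Minimal stand-in for the digital twin: collects the agents' results."""
    def __init__(self):
        self.log = []

    def update(self, result):
        self.log.append(result)

    def compute_scenario(self):
        return self.log          # step 13: DT calculates the scenario

class GlobalDigitalAgent:
    """Coordinates the local agents in the order of the steps above."""
    def __init__(self, twin, agents):
        self.twin = twin
        self.agents = agents     # list of (command, agent) pairs, in order

    def run(self):
        for command, agent in self.agents:
            self.twin.update(agent.execute(command))
        return self.twin.compute_scenario()

gda = GlobalDigitalAgent(StubTwin(), [
    ("set initial state", StubAgent("D.A.I.")),
    ("add feasible orders", StubAgent("D.A.O.")),
    ("assign assembly workstations", StubAgent("D.A.M.")),
    ("optimize production plan", StubAgent("D.A.A.")),
    ("generate robot operations", StubAgent("D.A.J.")),
])
scenario = gda.run()
print(scenario[0])  # D.A.I.: set initial state
```

The fixed `(command, agent)` order mirrors the strict sequencing that G.D.A. enforces, which is what makes the agents' independent computations composable.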
In on-line simulation, it is important that the digital system (digital twin and digital agents) and the real system are in constant two-way communication. This is important because, if there is a deviation of the real system from the planned schedule, the digital system can immediately detect and verify this deviation (eliminate disturbances, repair the production plan, etc.) and perform an optimization to provide new data that is immediately sent back through the cloud to the real system (Fig. 7).

Fig. 7. The feedback loop between the digital system and the real system

We also conducted an additional experiment to test the feedback loop between the digital system and the real system, which proved that this loop also works: the production plan was changed due to the interference. When image processing detected a product defect in the real system, it sent this information to the digital system via the cloud. Then D.A.D. checked what kind of defect it was and sent solutions for correcting the error to DT. DT then sent this information via the cloud to the real system. The fault changes the production time and also increases the flow time.

4 Conclusion

In this article we presented how we have created an expert system that uses the digital twin (DT) and digital agents (DA) to continuously monitor and optimize the assembly and handling system and process (AHSP) of a production line.
The first step was to set up an off-line simulation model, which is the basis for the DT of the AHSP. When building the digital twin, we considered all the features, resources and constraints of the real production line. We built the DT in such a way that it behaves exactly like the real AHSP.
In the next step we built several types of local digital agents (DA) to perform a real-time simulation of the AHSP. Each DA has its own functionality and intelligence that helps the DT perform a specific task. The main task of a DA is to automatically find a solution to a request from the digital twin and send it back to the DT as soon as possible. The global digital agent is responsible for the proper and chronological functioning of all agents. After successful validation and verification of the DT and DAs, we combined the DT with the intelligent FI algorithm and built an additional digital agent to cover these tasks. Experiments have shown that with the digital system (DT + DA) we are able to optimize the real AHSP of the production line, which was proven in the laboratory.
In the last part of the work we combined the developed digital system with the real system via the cloud. In this way we created all the necessary framework conditions for the on-line simulation and thus developed an expert system that is connected to the real system and performs control and optimization in real time. The cloud-based expert system receives all inputs on the real line and the desired orders (production plan). Based on these inputs it calculates and offers an optimal production plan with DT and DA and then sends all this data back to the real system via the cloud. The real system then starts to implement the optimal production plan. The expert system also checks the effects of disturbances that occur in the real system and, if necessary, corrects the production plan so that it eliminates the influence of the disturbances or the error itself, and sends the data via the cloud to the real system - all in real time.
We have successfully used individual segments of the expert system (the Flip and Insert algorithm, digital twins and digital agents) in a real industrial environment. The modular and parametric structure of the expert system also enables further research and development.

Acknowledgment. The work was carried out in the framework of the GOSTOP programme (OP20.00361), which is partially financed by the Republic of Slovenia – Ministry of Education, Science and Sport, and the European Union – European Regional Development Fund. The authors also acknowledge the financial support from the Slovenian Research Agency (research core funding No. P2-0248).
References
1. Ylipää, T.: Correction, prevention and elimination of production disturbances. PROPER
project description, Department of Product and Production Development (PPD), Chalmers
University of Technology, Gothenburg (2002)
2. Andersson, C., Bellgran, M.: On the complexity of using performance measures: enhancing
sustained production improvement capability by combining OEE and productivity. J. Manuf.
Syst. 35, 144–154 (2015)
3. Bellgran, M., Aresu, E.: Handling disturbances in small volume production. Robot. Comput.-
Integr. Manuf. 19, 123–134 (2003)
4. Pinedo, M.L.: Scheduling: Theory, Algorithms, and Systems. Springer Science + Business
Media, LLC (2012)
5. Rao, Y., He, F., Shao, X., Zhang, C.: On-line simulation for shop floor control in manufacturing
execution system. In: Xiong, C., Liu, H., Huang, Y., Xiong, Y. (eds.) Intelligent Robotics and
Applications, pp. 141–150. Springer, Berlin (2008)
6. Kamat, V.R., Menassa, C.C., Lee, S.H.: On-line simulation of building energy processes:
need and research requirements. In: Proceedings of the 2013 Winter Simulation Conference:
Simulation: Making Decisions in a Complex World, pp. 3008–3017 (2013)
7. Ang, A.T.H., Sivakumar, A.I.: Online multi objective single machine dynamic scheduling
with sequence-dependent setups using simulation-based genetic algorithm with desirability
function. In: Winter Simulation Conference, pp. 1828–1834 (2007)
8. Zupan, H., Herakovič, N., Starbek, M.: Hybrid algorithm based on priority rules for simulation
of workshop production. Int. J. Simul. Model. 15, 29–41 (2016)
9. Zupan, H., Herakovič, N., Žerovnik, J., Berlec, T.: Layout optimization of a production cell.
Int. J. Simul. Model. 16, 603–616 (2016)
10. Zupan, H., Herakovič, N., Žerovnik, J.: A hybrid metaheuristic for job–shop scheduling
with machine and sequence-dependent setup times. In: 13th International Symposium on
Operational Research in Slovenia, Bled, Slovenia, pp. 129–134 (2015)
Dynamic Scheduling of Robotic Mildew
Treatment by UV-c in Horticulture

Merouane Mazar1(B), Belgacem Bettayeb2, Nathalie Klement3, M’hammed Sahnoun1, and Anne Louis1

1 LINEACT, CESI, 80 rue Edmund Halley, Rouen Madrillet Innovation, 76800 Saint-Étienne-du-Rouvray, France
{mmazar,msahnoun,alouis}@cesi.fr
2 LINEACT, CESI, 8 boulevard Louis XIV, 59046 Lille, France
bbettayeb@cesi.fr
3 Arts et Métiers Institute of Technology, LISPEN, HESAM Université, 59000 Lille, France
Nathalie.Klement@ensam.eu

Abstract. Thanks to new technologies, it is possible to perform automatic robotic treatment of plants against mildew in greenhouses. The optimization of the scheduling of this robotic treatment presents a real challenge due to the continuous evolution of the disease level. Conventional optimization methods cannot provide an accurate schedule capable of eliminating the disease from the greenhouse. This paper proposes a solution to a dynamic scheduling problem of evolutionary tasks in horticulture. We first developed a genetic algorithm (GA) for a static model. Then we improved it for the dynamic case, where a dynamic genetic algorithm (DGA) based on the prediction of the task amount is developed. To test the performance of the designed algorithms, especially for the dynamic case, we integrated our algorithms in a simulator.

Keywords: Dynamic scheduling · Simulation · Optimization · Multi-agent system · Mildew

1 Introduction

Robotics has evolved tremendously and is increasingly used in several fields such as industry, rehabilitation, and agriculture. In horticulture, researchers are developing several types of robots to cultivate or treat plants [15]. In the literature, several works on harvesting robots can be found, like the robot presented in [16], which uses artificial vision to move easily between cauliflower plants. In [14], another harvesting robot that collects watermelons is presented, which shows the ability of some robots to harvest heavy crops. Moreover, there are also robots equipped with sprayers that are used for plant treatments. For instance, [11] describes the design of a robot able to detect powdery mildew and spray the diseased plants in order to reduce the quantity of spray and avoid the treatment
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021
T. Borangiu et al. (Eds.): SOHOMA 2020, SCI 952, pp. 496–507, 2021.
https://doi.org/10.1007/978-3-030-69373-2_36
of safe plants. The work presented in this paper is part of a European project called UV-Robot. In our case, we have a robot that treats the mildew fungus in horticulture using C-type ultraviolet radiation (UV-c). The UV-Robot replaces the chemical spray treatment with UV-c treatment. In [17], the authors have shown that UV-c treatment improves the production of strawberries.
The UV-Robot must treat the rows of plants that are affected by mildew during their growth. The energy supply of the robot is based on a battery that can ensure continuous functioning for 3 h on average and needs 2.5 h to be fully charged. Mildew disease has an evolutionary and spreading behaviour that follows a stochastic process. To achieve an optimal treatment schedule, decision support tools are needed. The simulation-optimization approach allows solving the dynamic scheduling of UV-Robot systems. This approach has shown good performance in several works; it has the advantage of allowing the simulator to learn and adapt over time with the best behaviour [20]. For instance, [12] works on the problem of air transport of military aircraft of the United States using approximate dynamic programming. This approach is also used within the simulators of [8] and [21], which were developed to optimize vehicle routing for the collection of conditioned bio-waste.
In recent decades, the Multi-Agent Systems (MAS) paradigm has emerged as an effective approach for complex systems modelling and simulation. [18] describes the MAS environment as the context in which the agents will evolve. For [4], a MAS is a set of entities called agents, sharing a common environment which they are able to perceive and on which they can act. The simulator developed within this work represents the environment for testing our UV-Robot processing system. We will run optimization algorithms inside the simulator in order to plan the robot tasks.
Dynamic scheduling is a problem studied in several systems, such as robots, manufacturing machines or distribution chains. Several works have studied this problem, among which we can cite the article by [5], where a machine processes jobs that arrive continuously. A case with several resources (multiprocessors in a computer) is presented in [13], which executes a set of tasks that arrive dynamically. In dynamic scheduling, it is generally the time of arrival or departure (duration) of tasks which is dynamic. In our case, both the occurrence and spread of mildew disease on plants are dynamic.
In the case of dynamic duration, we explored the line of scheduling problems with deteriorating jobs. In this problem, the task durations follow a degradation process, as in [7], where the temperature of the ingots drops after they come out of the oven. In [6] and [1], the tasks are degraded in the same way starting from T0 according to a linear degradation equation. The processing of tasks is done by job batch in [6,7] and [1]. These batches can be processed in parallel, which is not possible in our case with a single robot.
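In the linear deterioration model referred to here, a job started at time s takes p_j(s) = a_j + b_j·s to process, so the sequence matters even on a single resource. A small sketch of evaluating such a sequence (the notation and the numbers are ours, for illustration only):

```python
def makespan_with_deterioration(jobs, t0=0.0):
    """Completion time of a single-machine sequence of deteriorating jobs.

    Each job is a pair (a, b): its processing time when started at time t
    is a + b * t (linear degradation from t0 onwards)."""
    t = t0
    for a, b in jobs:
        t += a + b * t   # the actual processing time grows with the start time
    return t

# Swapping two deteriorating jobs changes the completion time:
print(makespan_with_deterioration([(2, 0.25), (3, 0.5)]))   # 6.0
print(makespan_with_deterioration([(3, 0.5), (2, 0.25)]))   # 5.75
```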
This paper proposes solving a dynamic scheduling problem with evolving task durations due to the evolution of the mildew level. A simulation-optimization approach based on a genetic algorithm and a multi-agent simulation is used to implement the dynamic solution. The NetLogo [19] simulation software is used to implement the optimization and simulation algorithms. In this study, the agent-based architecture is used for its capacity to represent and simulate complex systems with centralized GA-based decision making (only one active agent). To the best of our knowledge, there is no work proposing the resolution of dynamic scheduling of a robot in horticulture.
The article is organized as follows. Section 2 presents the simulation model
with MAS. Section 3 details the disease behaviour model and its estimation.
Then, two optimization algorithms are proposed in Sect. 4. Section 5 describes
and discusses the results obtained. Finally, the conclusion and the perspectives
of this work are given in Sect. 6.

2 Simulation Model

This section presents the simulation model of the robotized mildew treatment
process which is based on MAS, and explains the role of agents and the interac-
tions between them. Before building the simulation model, we carefully studied
our system to define all the agents. In the UV-Robot treatment system, a robot
equipped with UV-lamps performs treatment missions of infected plant rows
in the greenhouse. The robot moves to the battery charging station after each
mission. The robot is also equipped with a smart e-nose to inspect the level of
plant disease. The smart e-nose absorbs chemicals substances around the plants,
then calculates the level of mildew for each plant section. The robot performs a
measurement of the entire greenhouse using e-nose and then begins treatment
knowing that the disease progresses during treatment. However, the robot can-
not launch a mission before being given the authorisation from the monitoring
agent. The monitoring system is composed of a central computer able to control
the robot missions, collect data about the greenhouse and represent the state
of the system on a dashboard for the grower. It allows the grower to manually
control the robot, plan its missions and update the mildew level.

Fig. 1. Agent-based conceptual model and simulator


The model of our multi-agent system and a screenshot of the developed simulator are shown in Fig. 1. This system contains seven agents interconnected by directed links that represent the interactions between them. In our model, we only use the capacity of MAS to reduce the complexity of systems in terms of modelling and simulation. The model has an active "Monitoring" agent which receives all the information from the other agents and then makes decisions regarding plant treatment and mission planning. The "Plants" and "Robots" agents cannot make any decision and are modelled as completely reactive agents. Figure 1-(a) presents a more detailed view showing the operating diagrams of some agents. Each agent has its own behaviour, and can interact or communicate with one or more agents.
The “Grower” agent defines missions for the monitoring, and can manually
control the robot. The “Monitoring” agent receives information from the green-
house and data from the robot, and runs the optimization algorithms to create
the missions of the robot. The “Robots” agent has four main roles: (i) calculates
the plant disease level to send it to the monitoring; (ii) controls the UV-c lamps
(turns them on or off); (iii) moves to the charging station when battery level is
low; and (iv) moves between crop rows to treat infected plants. The “Plants”
agent has a disease level that can increase and/or spread and that is reset to
zero each time the plant is treated. The plants grow in the greenhouse and are
treated by the robot. The agent “UV-c Lamp” is installed on the robot which
controls it. The “Charge station” agent is placed in the greenhouse and allows
the robot to recharge its battery. The “Greenhouse” agent, which carries all the
agents, is the environment that can influence the appearance of fungus (mildew).

3 The Mildew’s Behaviour Model

The evolution of mildew infection level influences directly the UV-c dose to apply,
i.e. the duration of treatment. To adjust the UV-c treatment doses, the robot
changes its speed according to the infection level of the plant. When the infection
level is high, the robot treats the plant with a low speed, so the plant receives
a sufficient dose of UV-c radiation. Moreover, the energy consumption of the
robot is proportional to the applied treatment dose. As UV-c lamps represent
the biggest part of the robot energy consumption, when the robot is moving
slowly with lamps turned-on, it consumes more energy even if the consumption
of the motor is low.
In order to properly calibrate our resolution algorithms in our system, the
behaviour of mildew needs to be simulated to bring it closer to reality. We used
data from [2], which represents the evolution over time of the level of mildew
in vineyards in 2007. Figure 2 shows the IGT2007 mildew behaviour curve, and
our estimation curve fˆ(t). These data can be approximated by a time series in discrete or continuous representation.
fˆ(t) = c / (1 + b·e^(−at))    (1)
500 M. Mazar et al.

The estimation function (1) is a three-parameter logistic function that represents the time series. Its parameters are the following:
• The numerator c is the limit of the function at infinity (the curve peaks under a horizontal asymptote).
• The function is symmetrical with respect to its inflection point, of abscissa ln(b)/a and ordinate c/2.
• c = 30 is the maximum level of disease.

Fig. 2. Real mildew behaviour IGT2007 [2] and estimated logistic function fˆ(t)

To compute a and b, we take a point in the graph from the IGT2007 curve
(40, 0.468) and construct the following two Eqs. (2) and (3):
c / (1 + b·e^(−40a)) = 0.468    (2)

ln(b) / a = 83.054    (3)
After solving both Eqs. (2) and (3), we obtain a ≈ 0.096 and b ≈ 2936. Thus
fˆ(t) is given by Eq. (4).
fˆ(t) = 30 / (1 + 2936·e^(−0.096t))    (4)
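The parameter derivation above can be reproduced in a few lines. The sketch below (function names are ours, not from the paper) solves Eqs. (2) and (3) analytically for a and b, given c = 30, the inflection abscissa ln(b)/a = 83.054, and the curve point (40, 0.468):

```python
import math

def fit_logistic(c=30.0, t0=40.0, f_t0=0.468, t_inflect=83.054):
    """Solve eqs. (2)-(3) for the logistic parameters a and b.

    From eq. (3): b = exp(a * t_inflect). Substituting into eq. (2):
    c / (1 + exp(a * (t_inflect - t0))) = f_t0, which is solved for a directly.
    """
    a = math.log(c / f_t0 - 1.0) / (t_inflect - t0)
    b = math.exp(a * t_inflect)
    return a, b

def f_hat(t, a, b, c=30.0):
    """Three-parameter logistic estimation function, eq. (1)."""
    return c / (1.0 + b * math.exp(-a * t))
```

The resulting values agree with the paper's rounded a ≈ 0.096 and b ≈ 2936 up to rounding of the published constants, and fˆ at the inflection abscissa equals c/2 = 15 by construction.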
To get a simulated disease behaviour similar to the one represented by func-
tion (4), we proceed by trying empirically several evolution probabilities that
steer the transition of plants infection rate from the current level to the superior
one. We carried out several tests using the simulator. The retained probability
function of disease level transition is given by Eq. (5).

P = 0.000005 ∗ [level mildew] ∗ [last treatment] (5)

[level mildew] is the plant’s current level of mildew and [last treatment] is
the number of days elapsed since the last treatment. For the propagation of
disease between neighbour plants, we check if there is an infected plant within a radius of 3 m and, if it is the case, the probability P is increased by 0.01.
There are 6 disease levels (0, 3, 6, 12, 20 and 30) defined with our partners in
the UV-Robot project. The speed of the disease evolution is not linear and can be assimilated to the function fˆ(t); the rate of increase is given by the derivative dfˆ(t)/dt. For example, the speed is low at the beginning (between day 0 and day 40) and at the end (after day 120), whereas it is much higher around the inflection point (day 83).
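The simulated disease dynamics can be condensed into a single update rule. Below is a minimal sketch (names and structure are ours, not from the simulator) combining Eq. (5), the 0.01 propagation bonus, and the six discrete levels agreed in the UV-Robot project:

```python
import random

# Discrete mildew levels defined with the UV-Robot project partners
LEVELS = [0, 3, 6, 12, 20, 30]

def step_plant(level_idx, days_since_treatment, infected_neighbour_within_3m,
               rng=random.random):
    """One simulation step for a plant: with probability P (eq. 5), possibly
    increased by the neighbourhood bonus, the plant moves to the next level."""
    p = 0.000005 * LEVELS[level_idx] * days_since_treatment
    if infected_neighbour_within_3m:
        p += 0.01  # an infected plant lies within a 3 m radius
    if level_idx < len(LEVELS) - 1 and rng() < p:
        level_idx += 1
    return level_idx
```

A healthy, isolated plant (level 0, no neighbour) has P = 0 and never degrades, while a plant at the maximum level simply stays there.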

4 Dynamic Scheduling
Bin-packing is a known operational research problem that can be used for the
modelling of robotic task planning. It consists in filling bins with items while
respecting the size of the bins. The goal of this problem is to minimize the number
of used bins. In our problem, bins correspond to missions, and items correspond
to treatment tasks performed during missions. The size of each mission is limited
by the capacity of the battery that provides the electric energy necessary for
robot movement and for UV-lamps operation when performing treatment tasks.
As the objective is to eradicate the disease from the greenhouse as soon as
possible, the goal is to minimize the number of missions. Table 1 summarizes the
analogy between our problem and the bin-packing problem. The mathematical
model of this problem is detailed in [9]; its authors have already solved the static case using a genetic algorithm and an exact method. The model will not
be detailed in this paper.
In this section, two genetic algorithms are proposed to solve the problem of
treatment missions planning for semi-static and dynamic cases. The dynamic
case takes into account the evolution behaviour of mildew in the greenhouse.

4.1 Genetic Algorithm in Semi-static Case


First, the genetic algorithm (GA) has been developed to resolve a semi-static
case, where the disease behaviour changes every 24 h. This period allows a long
time for the robot to execute its missions with a stable level of disease. We used
the operations of a classic GA (selection, crossing and mutation) [3]. The charg-
ing station is in the middle of the greenhouse. In order to optimize displacements,

Table 1. Analogy between our problem and the bin-packing problem

Bin-packing problem         | Our problem
Bins                        | Missions
Items                       | Infected plant rows treatment tasks
Size of an item             | Needed energy for the treatment task
Capacity of the bin         | Capacity of the battery
Minimize the number of bins | Minimize the number of missions
the robot does all the treatments by visiting the rows selected within a mission
in an ascendant order of their identification number (from the left to the right
of the greenhouse).
The coding scheme of an individual in the GA is represented by a matrix
as shown in Fig. 3, where lines represent the treatment missions and columns
are the rows of the greenhouse. If a row j is to be treated during a mission
i, the value of the element ij is equal to 1. The sum of power consumption of
each mission (vector multiplication of “mission” line vector and “consumption”
column vector) should not exceed the robot battery capacity, otherwise the indi-
vidual is discarded. The evaluation of each chromosome takes into account the
energy consumed for the displacements of the robot to reach the charging sta-
tion. The initial population is created with a greedy heuristic, which takes the
treatments of large size first and fills the missions while respecting
the capacity of the robot. Since the goal is to minimize the number of missions,
the chromosome with fewer missions is the best individual. Figure 3 presents the
crossing between two parents at a random point which gives two children as the
output.

Fig. 3. Crossover operator
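The chromosome encoding, its battery-feasibility check, and the greedy seeding heuristic can be sketched as follows (a plain-Python illustration with hypothetical names; the paper does not give the actual implementation):

```python
def is_feasible(chromosome, consumption, battery_capacity):
    """Battery constraint check for an individual.

    chromosome: list of missions, each a 0/1 list over greenhouse rows;
    consumption: energy needed to treat each row. Every mission's total
    consumption (the 'mission' x 'consumption' product) must fit the battery.
    """
    return all(
        sum(x * c for x, c in zip(mission, consumption)) <= battery_capacity
        for mission in chromosome
    )

def greedy_initial(consumption, battery_capacity):
    """Greedy seeding heuristic: place the largest treatment tasks first,
    filling missions while respecting the battery capacity (first-fit decreasing)."""
    missions = []  # each mission: {"rows": [...], "left": remaining energy}
    for j in sorted(range(len(consumption)), key=lambda j: -consumption[j]):
        for m in missions:
            if m["left"] >= consumption[j]:
                m["rows"].append(j)
                m["left"] -= consumption[j]
                break
        else:
            missions.append({"rows": [j], "left": battery_capacity - consumption[j]})
    return missions
```

An infeasible individual (a mission exceeding the battery) is simply discarded, as described above; the greedy solution then serves to seed the initial population.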

Figure 4 shows how the mutation operation works in the proposed GA. Each
child has a 60% of chance to be mutated. During the mutation, the algorithm
randomly selects 2 lines of the chromosome matrix. Then, each element equal to
1, in each line, has a 50% chance to switch with the corresponding element of
the other line (element with the same j). After each iteration of GA, 10% of the
best individuals are selected to be included in the new generation.

Fig. 4. Mutation operator
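The mutation step of Fig. 4 can be sketched as follows (our own illustrative code; the swap is a no-op when both selected missions hold the same value for a row, so only assigned tasks effectively move):

```python
import random

def mutate(chromosome, p_mutate=0.6, p_swap=0.5, rng=random):
    """Mutation of Fig. 4: with probability p_mutate, pick two missions
    (matrix lines) at random; each assigned task (element equal to 1) in either
    line then has a p_swap chance to switch with the corresponding element
    (same column j) of the other line."""
    if rng.random() >= p_mutate:
        return chromosome
    i1, i2 = rng.sample(range(len(chromosome)), 2)
    for j in range(len(chromosome[0])):
        if (chromosome[i1][j] or chromosome[i2][j]) and rng.random() < p_swap:
            chromosome[i1][j], chromosome[i2][j] = chromosome[i2][j], chromosome[i1][j]
    return chromosome
```

Passing an rng object makes the operator testable; after mutation, the battery-feasibility of the individual must of course be re-checked.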


4.2 Dynamic Genetic Algorithm

The proposed dynamic genetic algorithm (DGA) is an improvement of the GA for the dynamic case. Since the previous GA is limited when the disease behaviour changes repeatedly in the greenhouse, we improved it while
keeping certain operations. The added value of the DGA is the use of the function
fˆ(t) to predict the evolution of mildew. When the algorithm fills the missions with treatment tasks, it takes into account an additional energy consumption.
Figure 5 shows a diagram of the operation of the DGA. The predicted disease
level increases over time because there is a waiting period before the execution of
each mission. The waiting time of a mission is the execution time of the previous
mission plus the battery charging time. Figure 6 shows the difference between
DGA and GA regarding energy consumption. DGA always allocates a part of
the battery capacity to the expected additional consumption, due to the evolu-
tion of infection level, for each mission. On the contrary, GA does not take into
account the evolution of the disease. Figure 6 also shows the execution times for
missions with a treatment time of 3 h and a charging time of 2 h and 30 min.
We used the estimation function fˆ(t) for the prediction calculation Pi as shown
in Eq. (6). In fact, we need to estimate the amount of the additional increase of
infection level Pi at the execution moment of the ith mission. For that, we first
estimate the global level of disease using the function fˆ(t) and then we remove
the measured value recorded at the last measurement action (using e-nose or
human estimation).

Pi = fˆ(3 + 5.5(i − 1) + tm) · αi / (Nbrplants / 4) − M    (6)

α is an empirical parameter defined through simulation, M is the last measured mildew level for the concerned plant, and tm is the estimated time corresponding to the value of the last measured mildew level M.
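Eq. (6) can be written directly as a small helper. The sketch below is ours: the mildew estimator fˆ is passed in by the caller, and we assume the constants 3 and 5.5 are in hours (the 3 h treatment of the current mission plus 5.5 h per preceding mission, i.e. 3 h of treatment and 2.5 h of charging):

```python
def predicted_increase(i, t_m, last_measured, alpha_i, nbr_plants, f_hat):
    """Eq. (6): estimated additional infection level at execution of mission i.

    f_hat: the logistic estimator of eq. (4); t_m: estimated time matching the
    last measured level; last_measured: that measured level M; alpha_i: the
    empirical calibration parameter; nbr_plants: number of plants considered.
    """
    t_exec = 3 + 5.5 * (i - 1) + t_m  # waiting time before mission i executes
    return f_hat(t_exec) * alpha_i / (nbr_plants / 4) - last_measured
```

With this term, the DGA reserves part of the battery capacity for the extra consumption expected from disease growth, which the static GA ignores.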

Fig. 5. Representation of the dynamic fulfilment of treatment tasks in missions



Fig. 6. Comparison of the dynamic and static attribution of treatment

5 Experimentation
We consider a greenhouse containing 100 rows of strawberry plants; each row
has 100 m of length and contains 100 plants. A UV-C robot with an autonomy of
3 h and a charging time of 2.5 h is used to treat the mildew. We assume that in
the initial conditions, 50% of plants are infected with different levels of disease.
In the first step of the experiment, we launched several simulations with DGA to calibrate its parameter α, adapting the model developed for vineyards to the evolution of mildew on strawberry plants. Then, we launched 5 simulations of each
algorithm (DGA and GA) in a dynamic environment where the disease evolves
at each simulation step. The time horizon of each simulation is 20 h, which is
enough for the treatment of all infected plants in the greenhouse. In both the GA
and the DGA, the population size is 20 and the limit number of generations is 50.
These parameters were determined empirically through several trials. Figure 7
draws the evolution of the level of the robot battery as a function of time. It
compares three curves with different values of the parameter α (36, 50 and 60).
The decreasing segment in the curves is linked to the time of treatment, and
the increasing segment is relative to the time of battery charging. In order to
increase the lifespan of the battery, we limit the consumption to 80% of the battery's capacity, as recommended by experts [10]. To compare the curves in Fig. 7, we have added the line EL, which represents 20% of the battery charge. We therefore choose DGA with α = 60 (DGA60) for the next simulations because its curve respects the minimum level of energy.

Fig. 7. Evolution of energy consumption for different values of α


Figure 8a shows the evolution of level of disease over time. There are two
curves in the figure: the blue solid curve, which represents the level of mildew using DGA60, and the green dotted curve, which relates to the use of GA. We
can clearly observe that using GA allows the robot to finish the treatment of
mildew in 1000 min, whereas it takes 1100 min using DGA. Moreover, we notice
that using the GA is risky and does not allow using the robot in an autonomous
way. In fact, during the execution of the scenario using GA, the grower manually restarts the robot several times because it does not have enough energy in the battery to return to the charging station. Such extra energy consumption may occur when the actual level of mildew is higher than expected. We also notice
that the level of disease increases from time to time in both graphs during the
robot’s charging period.

Fig. 8. Comparison between DGA60 and GA

In Fig. 8b, there are also two curves, relative to DGA60 and GA, drawing the level of battery energy as a function of time. Both curves correspond to the average of several simulations for each algorithm. The GA scenario does not respect the battery capacity constraint: the robot uses more than 80% of its battery capacity. As shown in Fig. 8b, this level reaches 100% of battery capacity several times, which necessitates the intervention of the grower to bring
the robot to its charge station.
The average time for a simulation with GA is 16 min and 41 s, and that
of DGA60 is 21 min and 36 s. This increase in computation time is due to the
additional computation of the prediction of mildew level.
As a conclusion, we can say that even if the use of GA yields shorter treatment and computation times, it is still a risky scenario because it cannot ensure the total autonomy of the robot and needs the intervention of the grower several
times. Moreover, the use of DGA60 can be considered more efficient because
it respects the capacity constraint concerning the use of only 80% of the total
battery capacity, which allows a total autonomy of the robotic treatment. In
addition, the use of DGA60 gives realistic scenarios and allows possible real
implementation.

6 Conclusion
This paper studies the dynamic task scheduling problem, applied to UV-c treat-
ment of plants in horticulture. The difficulty was in scheduling tasks to treat dis-
eases having a dynamic evolutionary behaviour. The use of a simulator allowed
testing our algorithms in the dynamic case. We improved a Genetic Algorithm
(GA), previously proposed, to Dynamic Genetic Algorithm (DGA) allowing the
autonomous execution of treatment with respect to the battery capacity con-
straint in the dynamic environment. The results provided by DGA show better
accuracy of the treatment with more compliance to the technical battery con-
straints and give the possibility to launch real life horticultural tests.
In perspective, we plan to add a preventive treatment that allows controlling
the evolution and the propagation of the disease. We will introduce the case of multiple robots serving several greenhouses, which will increase the number of active agents and allow the use of distributed intelligence such as the Contract Net protocol or potential fields.

Acknowledgment. This research was possible thanks to €1.35 million financial sup-
port from the European Regional Development Fund provided by the Interreg North-
West Europe Program in context of UV-Robot project.

References
1. Cheng, T., Kang, L., Ng, C.: Due-date assignment and single machine scheduling
with deteriorating jobs. J. Oper. Res. Soc. 55(2), 198–203 (2004)
2. Claude, M.: Mildiou de la vigne - bilan de la campagne 2007. In: Actualités Phy-
tosanitaires, pp. 99–105. IFV (2007)
3. Davis, L.: Handbook of Genetic Algorithms. CumInCAD, NY (1991)
4. Hassas, S.: Systèmes complexes à base de multi-agents situés. Mémoire
d’Habilitation à Diriger les Recherches, University Claude Bernard Lyon (2003)
5. Li, J., Wang, P., Geng, C.: The disease assessment of cucumber downy mildew
based on image processing. In: 2017 International Conference on Computer Net-
work, Electronic and Automation (ICCNEA), pp. 480–485. IEEE (2017)
6. Li, J.Q., Song, M.X., Wang, L., Duan, P.Y., Han, Y.Y., Sang, H.Y., Pan, Q.K.:
Hybrid artificial bee colony algorithm for a parallel batching distributed flow-shop
problem with deteriorating jobs. IEEE Trans. Cybern. 50, 2425–2439 (2019)
7. Li, S., Ng, C., Cheng, T.E., Yuan, J.: Parallel-batch scheduling of deteriorating jobs
with release dates to minimize the makespan. Eur. J. Oper. Res. 210(3), 482–488
(2011)
8. Mazar, M., Constant-Meney, V., Sahnoun, M., Baudry, D., Louis, A.: Simulation et
optimisation de la tournée des véhicules pour la collecte de biodéchets conditionnés
(2017)
9. Mazar, M., Sahnoun, M., Bettayeb, B., Klement, N., Louis, A.: Simulation and
optimization of robotic tasks for UV treatment of diseases in horticulture. Oper.
Res. 1–27 (2020). https://doi.org/10.1007/s12351-019-00541-w
10. Mei, Y., Lu, Y.H., Hu, Y.C., Lee, C.G.: A case study of mobile robot’s energy
consumption and conservation techniques. In: Proceedings of the 12th International
Conference on Advanced Robotics, ICAR 2005, pp. 492–497. IEEE (2005)

11. Oberti, R., Marchi, M., Tirelli, P., Calcante, A., Iriti, M., Tona, E., Hočevar, M.,
Baur, J., Pfaff, J., Schütz, C., et al.: Selective spraying of grapevines for disease
control using a modular agricultural robot. Biosyst. Eng. 146, 203–215 (2016)
12. Powell, W.B.: Approximate dynamic programming: lessons from the field. In: Sim-
ulation Conference, 2008, WSC 2008, Winter, pp. 205–214. IEEE (2008)
13. Sahni, J., Vidyarthi, D.P.: A cost-effective deadline-constrained dynamic schedul-
ing algorithm for scientific workflows in a cloud environment. IEEE Trans. Cloud
Comput. 6(1), 2–18 (2015)
14. Sakai, S., Iida, M., Osuka, K., Umeda, M.: Design and control of a heavy material
handling manipulator for agricultural robots. Auton. Robots 25(3), 189–204 (2008)
15. Sistler, F.: Robotics and intelligent machines in agriculture. IEEE J. Robot.
Autom. 3(1), 3–6 (1987)
16. Southall, B., Hague, T., Marchant, J.A., Buxton, B.F.: An autonomous crop treat-
ment robot: part I a Kalman filter model for localization and crop/weed classifi-
cation. Int. J. Robot. Res. 21(1), 61–74 (2002)
17. Takeda, F., Janisiewicz, W., Smith, B., Nichols, B.: A new approach for strawberry
disease control. Eur. J. Hortic Sci. 84(1), 3–13 (2019)
18. Tranier, J.: Vers une vision intégrale des systèmes multi-agents. Ph.D. thesis, Uni-
versité Montpellier II, Montpellier, Thèse de doctorat (2007)
19. Wilensky, U., Evanston, I.: NetLogo: Center for Connected Learning and
Computer-based Modeling, pp. 49–52. Northwestern University, Evanston (1999)
20. Wu, T., Powell, W.B., Whisman, A.: The optimizing simulator: an intelligent anal-
ysis tool for the military airlift problem. Unpublished Report. Department of Oper-
ations Research and Financial Engineering, Princeton University, Princeton (2003)
21. Xu, Y., Sahnoun, M., Mazar, M., Abdelaziz, F.B., Louis, A.: Packaged bio-waste
management simulation model application: Normandy region, France. In: 2019 8th
ICMSAO, pp. 1–5. IEEE (2019)
Understanding Data-Related Concepts in Smart
Manufacturing and Supply Chain Through Text
Mining

Angie Nguyen1(B), Juan Pablo Usuga-Cadavid1, Samir Lamouri1, Bernard Grabot2, and Robert Pellerin3
1 LAMIH CNRS, Arts et Métiers ParisTech, Paris, France
{angie.nguyen,juan_pablo.usuga_cadavid,samir.lamouri}@ensam.eu
2 LGP, INP/ENIT, Tarbes, France
bernard.grabot@enit.fr
3 Mathematics and Industrial Engineering Department,
Polytechnique de Montreal, Montreal, Canada
robert.pellerin@polymtl.ca

Abstract. Data science enables harnessing data to improve manufacturing processes and supply chains. This has attracted attention from both research and
industrial communities. However, there seems to be a lack of consensus in sci-
entific literature regarding the definitions for some data-related concepts, which
may hinder their understanding by practitioners. Furthermore, these terms tend to
have definitions evolving through time. Thus, this study explores the use of six
data science concepts in research under the framework of Industry 4.0 and supply
chain management. To achieve this objective, a text mining approach is employed
to both contribute to disambiguation of these terms and identify future research
trends. Main findings suggest that even if concepts such as machine learning, data
mining and artificial intelligence are often used interchangeably, there are key
differences between them. Regarding future trends, topics such as blockchain,
internet of things and digital twins seem to be attracting recent research interest.

Keywords: Text mining · Smart manufacturing · Supply chain · Machine learning · Data mining · Data analytics

1 Introduction
The ever-increasing computing power, storage capacity, data availability, and faster inter-
net connection have led to industrial systems recently supported by new technologies
such as virtual reality, Internet of Things or cloud computing. [1] identified nine main
pillars for the realization of Industry 4.0, also known as smart manufacturing. One of
these major enablers is big data analytics (BDA), which has arisen from the explosion of
industrial data generation, reaching around 1000 Exabytes per year [2]. BDA has been
shown to provide benefits both in production planning and control [3] as well as in sup-
ply chain management [4]. Indeed, data-driven companies have experienced increased

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021


T. Borangiu et al. (Eds.): SOHOMA 2020, SCI 952, pp. 508–519, 2021.
https://doi.org/10.1007/978-3-030-69373-2_37
productivity and profit [4], allowing for paybacks of around 10–70 times their initial
investments in data warehousing facilities [5].
In the past decade, academia and industry have widely studied how to manage and
leverage these newly massive amounts of generated data to gain efficiency and prof-
itability. This has led to the extensive use of already existing data-related concepts as
well as to the emergence of new ones. In fact, data science is a research field that gathers
multiple disciplines such as statistics, artificial intelligence (AI), information theory and
computer science, to manage and analyze large datasets. Hence, several related concepts
like data mining (DM), machine learning (ML), and AI, which are strongly related to
each other, have often been used interchangeably in practice. As stated by [6], the differ-
ences between these terms are not fully clear in the literature. This is also suggested by the authors of [7], who observe that there is little consensus around concepts such as big
data. One reason for this may be the fact that some of these ideas have rapidly evolved
in short time-lapses, making them difficult to define. Additionally, the development of
the data science field has been significantly influenced by marketing activities [7], which
may also explain existing trends in the usage of different terminologies. For instance, the
key concept of AI has been increasingly used over the past three years and sometimes
even morphed with the terms data science and ML [8]. Yet, a wrong use of concepts is
likely to hinder the adoption of BDA in companies, as the understanding of fundamental
concepts and tools of BDA is an essential prerequisite for implementation.
In this context, authors above outlined the need for fundamental statements and a
coherent nomenclature of data science. As the related concepts rapidly evolve, the time
dimension should be considered when exploring these definitions. Related work suggests
that data-related concepts have not been defined nor disambiguated yet through a text
mining approach in the context of smart manufacturing and supply chain. However, the
literature on data science should contain valuable information on the way the scientific
community understands and perceives the different data-related concepts. Therefore, this
paper has two main research objectives: first, to analyze the related literature based on a
text mining approach to build a consistent understanding of seven data-related concepts
in the context of smart manufacturing and supply chain management; and secondly, to
identify future trends in these two domains from the text mining analysis. The seven
data-related concepts (presented in extenso for the sake of clarity) that will be analysed
are the following: machine learning, data analytics, big data analytics, data mining,
artificial intelligence, data engineering, and data management.
The remainder of this article is organized as follows: Sect. 2 reviews related work and
explains the contribution of this study. Section 3 illustrates the research methodology.
Section 4 presents the results. Finally, Sect. 5 concludes this research and proposes future
work perspectives.

2 Related Work and Contribution

In the past, few authors have tried to define and analyse the relations between the various
concepts of data science from different perspectives. [7] defined big data and analytics
with a particular focus on unstructured data such as text, audio, video, and social media.
[9] proposed definitions of several key concepts such as data science, DM, ML, AI,
big data, and analytics. The relations between these concepts and how they have been
understood and used over years were also explored based on Google trends [8, 9]. For
instance, the authors outlined that the term DM was more popular than data science until
2016 and had then evolved to become “a split concept between ML and data science
itself” [9]. Recently, [6] proposed a definition and disambiguation of the key concepts
of AI, DM, ML and statistics, based on their fundamental objectives. However, related
work suggests that data-related concepts have not been defined nor disambiguated yet
through a text mining approach in the context of smart manufacturing and supply chain.
Text mining can be defined as the use of statistical analysis, computational linguistics
and ML to extract information from text data [7]. It has been already used to explore
the literature of data-related topics applied in smart manufacturing or supply chain.
For instance, [10] used natural language processing to analyse about 4000 technical
abstracts, identify groups of topics, perspectives and research interest through years.
[11] performed a study using the VOSviewer to recognize research trends regarding the
field of supply chain resilience through the analysis of around 3000 research papers.
In their literature review, [3] employed VOSviewer to identify the keywords related to
most recent results on ML applied to production planning and control. Nevertheless,
even if these papers used text mining, none of them focused into providing definitions
or disambiguation for data-related concepts.
This paper builds on the hypothesis that the way in which authors have used the
data-related terminology over time may contain valuable information on key concepts
and trends. Thus, it aims to analyse the related literature based on a text mining approach
to build a consistent understanding of data-related terms. Additionally, future trends in
the context of smart manufacturing and supply chain are identified.

3 Methodology

The research approach adopted in this paper is detailed in Fig. 1 and consists of the following steps: collect the research material; define the research objectives; perform the analysis on a sample of 3858 scientific articles using text mining.
The research material was collected using the method proposed by [12] to perform
literature reviews. This method has been successfully applied in other studies whose
objective was to derive knowledge from scientific literature. For instance, [3] employed
it to select 93 articles to assess the state of the art of ML in production planning and
control; [13] analysed 23 case studies to review the state of Industry 4.0 in small and
medium enterprises, and [14] explored the integration of ERP systems in and between
organizations by reviewing 35 papers. However, these studies have all performed a full
text analysis of the paper sample, which greatly limits the number of papers that can
be considered. Instead, text mining allows handling a much larger number of articles.
The three scientific databases SCOPUS, ScienceDirect, and IEEE were surveyed during
the period 20–21 March 2020 using the following string chain in titles, abstracts, and
keywords: (“machine learning” OR “data analytics” OR “big data analytics” OR “data
mining” OR “artificial intelligence” OR “data engineering” OR “data management”)
AND (“supply chain” OR “Industry 4.0” OR “smart manufacturing”). As Industry 4.0
was first introduced in the Hannover Fair in 2011, only papers published from 2011
Articles” or “Book Chapters in ScienceDirect (RSd) were considered. Finally, results
were merged, and duplicates were removed, resulting in a final sample of 3858 articles.

Fig. 1. Research approach adopted

This paper has two main objectives: first, to derive a common understanding of the
concepts of machine learning, data analytics, big data, data mining, artificial intelligence,
data engineering and data management, based on the way authors have used them; sec-
ond, to identify trends and research opportunities in the context of smart manufacturing
and supply chain. However, the term data engineering was eventually not analysed as
no paper using it in its title, abstract, or keywords was found.
The VOSviewer software was employed to analyse and visualize the text metadata
(i.e. titles, abstracts, and keywords) of the final sample of articles. It allowed defining a
lexical field of data science composed of 47 terms. It also enabled the identification of new
research trends in smart manufacturing and supply chain based on average publication
years. Additionally, the relatedness (based on the Jaccard coefficient and terms co-
occurrences in titles, abstracts and keywords) between the six concepts herein analysed
and the 47 data science terms was computed and the usage frequencies of the concepts
over time were also compared.

4 Results

4.1 Disambiguating Data-Related Concepts

This subsection analyses how the key concepts of ML, data analytics, big data, DM, AI,
and data management have been used and understood in the scientific literature from
2011 to 2020. Figure 2 displays the usage frequency in related publications of the six
concepts analysed. It enables identifying trends in the terminology of data science.

Fig. 2. Usage frequency of ML, data analytics, big data, DM, AI, and data management from
2011 to 2020

Table 1 provides for each of these concepts the ten most related notions in the lexi-
cal field of data in industrial management. The relatedness (Rel.) between two terms is
obtained by computing the Jaccard coefficient based on terms co-occurrences in titles,
abstracts, and keywords. It enables disambiguating the concepts by identifying key dif-
ferences between them. Thus, the analysis of Fig. 2 combined with Table 1 allows
understanding how the six different key concepts of data science have evolved over
time.
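The relatedness measure can be illustrated with a small sketch. This is our own implementation of the Jaccard coefficient over term co-occurrences in titles, abstracts and keywords (VOSviewer's internal computation and normalization may differ):

```python
def jaccard_relatedness(term_a, term_b, documents):
    """Jaccard coefficient between two terms over a corpus: |A ∩ B| / |A ∪ B|,
    where A (resp. B) is the set of documents whose text (title, abstract,
    keywords concatenated) contains term_a (resp. term_b)."""
    A = {i for i, doc in enumerate(documents) if term_a in doc}
    B = {i for i, doc in enumerate(documents) if term_b in doc}
    union = A | B
    return len(A & B) / len(union) if union else 0.0
```

For example, with three paper abstracts of which one mentions both "machine learning" and "big data", one only "big data", and one only "machine learning", the relatedness of the two terms is 1/3.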

Machine Learning
The concept of ML encapsulates several common notions with DM, which may explain
why both have often been used synonymously [6]. These notions, namely prediction,
classification, neural network, big data, and optimization refer to tasks and techniques
that enable deriving knowledge from large amounts of data. However, ML has been
more often used together with data sources (e.g. internet of things, sensors) while DM
has been rather associated with the notion of decision-making. This suggests that ML
refers to the application of techniques while DM rather refers to a decision-support
process which is in accordance with the definition proposed by [6]. In addition, Fig. 2
Understanding Data-Related Concepts in Smart Manufacturing and Supply Chain 513

Table 1. Top ten most related notions to each concept

Machine learning                Data analytics                  Big data
Item                     Rel.   Item                     Rel.   Item                     Rel.
prediction               0.122  big data                 0.372  data analytics           0.372
classification           0.113  internet of things       0.123  internet of things       0.195
neural network           0.109  cloud computing          0.079  cloud computing          0.136
artificial intelligence  0.107  sensors                  0.075  artificial intelligence  0.116
big data                 0.098  decision-making          0.062  machine learning         0.098
internet of things       0.098  machine learning         0.059  security                 0.074
sensors                  0.078  optimization             0.059  sensors                  0.061
optimization             0.070  security                 0.055  decision-making          0.060
deep learning            0.068  automation               0.054  automation               0.058
security                 0.066  simulation               0.042  data management          0.055

Data mining                     Artificial intelligence         Data management
Item                     Rel.   Item                     Rel.   Item                     Rel.
classification           0.088  internet of things       0.169  rfid                     0.076
clustering               0.085  big data                 0.116  security                 0.058
prediction               0.077  machine learning         0.107  internet of things       0.057
neural network           0.059  robotics                 0.089  big data                 0.055
optimization             0.058  automation               0.088  blockchain               0.054
simulation               0.058  security                 0.076  automation               0.043
forecasting              0.055  cloud computing          0.074  sensors                  0.041
big data                 0.053  blockchain               0.068  decision making          0.039
rfid                     0.051  neural network           0.066  cloud computing          0.038
decision making          0.047  optimization             0.065  clustering               0.027

highlights that the usage of the term ML has been continuously increasing over the past
decade and was the most popular in 2019 among the six concepts herein analysed.

Data Analytics
In the 2010s, the terms big data and data analytics have often been used together to form
the concept of “big data analytics”, as Table 1 attests. Also, they have been widely
associated with terms that relate to the generation or processing of large amounts of data
by new technologies (e.g. internet of things, cloud-computing, sensors). While big data
refers to these data streams, analytics rather refers to how to leverage them for decision
support, which is expressed by the terms decision-making and optimization.

Big Data
Since the term big data refers to the data streams that are generated by new technologies
such as the Internet of Things (IoT) and sensors, it is a core concept strongly related
to data analytics, AI, ML, data management and DM. Also, cloud-computing is found
to be close to big data, as it enables the processing of massive datasets. The number
of articles using this term in their title, abstract, or keywords has soared from 2011
onwards, exceeding 25% of the total number of articles in 2018. However, the concept
seems to have become less popular in 2019, which highlights a recent evolution of the
terminology used to describe these data streams.

Data Mining
The analysis of Table 1 suggests that the concept of DM has close meaning to data
analytics and ML. A key difference lies in the fact that DM and data analytics serve a
specific purpose, namely decision-making, while ML relates to techniques for performing
tasks (e.g. prediction, classification). Figure 2 highlights that the term DM has
significantly lost popularity from 2015 onwards while the term data analytics has been
increasingly used in publications. This suggests that the two concepts have been under-
stood as synonyms and that data analytics has progressively replaced DM. Nevertheless,
it is noteworthy that the concept of data analytics is more likely to be associated with
technologies such as cloud-computing and the Internet of Things. Hence, it seems that
concepts related to DM mainly raise theoretical considerations, while data analytics is
closer to pragmatic concepts.

Artificial Intelligence
Table 1 highlights that AI is a transversal concept that includes the notions of ML,
information systems security (e.g. blockchain, security), and data technologies (e.g.
cloud computing). As such, it can be considered as the broadest concept that encapsulates
ML, data analytics, big data, DM, and data management.

Data Management
Data management is generally defined as the process that collects, stores, prepares, and
retrieves data in a secure way and ensures data quality [7]. This is also suggested by
its high relatedness to the notions of data sources (e.g. rfid, sensors) and security (e.g.
security, blockchain). It is therefore a complementary concept to DM and data analytics,
which refer to processes that analyse and extract knowledge from data. However, the
decline in use of this term as well as the high relatedness between AI and the notions of
data sources, security, and information technology solutions suggest that the concept of
data management has been in practice included and replaced by the term AI.

4.2 New Trends in Smart Manufacturing and Supply Chain Management


The VOSviewer software allowed analysing the network of author keywords of the article
sample and identifying new trends and research opportunities in Industry 4.0 and supply
chain. Figure 3 provides a visualization of the network of the top 50 most frequent terms.
In such a network, each concept is represented by a bubble. The keyword frequency is
depicted by the size of the bubble and its label. Co-occurrence of keywords is represented
by lines, where thicker lines suggest higher co-occurrence frequencies. Regarding the
spatial distribution of terms in the network, keywords which are normally used together
appear closer together in the network. Finally, the scale at the bottom of the image maps the
colour of concepts to their average publication year (AvgY). Additionally, the concepts
used in the queries were highlighted with a red frame. Findings suggest that terms such
as blockchain (AvgY 2019.04), digital twin (AvgY 2018.91), the Industrial Internet of
Things or IIoT (AvgY 2018.90) are the most recent topics in research.

Fig. 3. Network visualization for the top 50 most common author keywords
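The statistics such a network encodes (bubble size, link thickness, colour scale) reduce to simple counting; the records below are hypothetical stand-ins for the article sample:

```python
from collections import Counter
from itertools import combinations

# Hypothetical (keywords, year) records standing in for the article sample.
records = [
    ({"blockchain", "traceability"}, 2019),
    ({"blockchain", "supply chain"}, 2019),
    ({"digital twin", "industry 4.0"}, 2019),
    ({"digital twin", "machine learning"}, 2018),
    ({"supply chain", "industry 4.0"}, 2016),
]

freq = Counter()       # bubble size: keyword frequency
edges = Counter()      # line thickness: keyword co-occurrence counts
year_sum = Counter()   # running sum of publication years per keyword

for keywords, year in records:
    for kw in keywords:
        freq[kw] += 1
        year_sum[kw] += year
    for pair in combinations(sorted(keywords), 2):
        edges[pair] += 1

# Colour scale: average publication year (AvgY) per keyword.
avg_year = {kw: year_sum[kw] / freq[kw] for kw in freq}
```

A recent topic such as "blockchain" then shows a high average year, while "supply chain" averages lower, mirroring the colour gradient in Fig. 3.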

Blockchain presents itself as the most recent topic. Additionally, it has a direct link
with the term traceability. This suggests that there may be a growing research interest
into the use of blockchain to improve the traceability through the supply chain or the
production process.
IIoT and digital twin being recent topics support findings in the literature review
performed by the authors of [3], which suggests that research in ML applied to
manufacturing rarely uses IoT technologies to collect data. The authors also state
that there are few studies exploring the adaptation of ML models to the dynamics of
the manufacturing system through new data. Hence, the fact that recent trends focus on
IIoT and digital twins suggests that researchers are working on the integration of IoT
technologies and ML as well as on the adaptation of ML models by using digital twins
to obtain updated data of the production process.
Predictive maintenance (AvgY 2018.74) was found to be a recent trend too. This
indicates that research in manufacturing is still strongly focusing on harnessing main-
tenance data to improve the overall production process. Furthermore, deep learning
(AvgY 2018.69) still being a recent topic points out a growing interest in the use of neu-
ral network-based architectures to solve industrial problems. This is probably because
deep learning architectures normally excel at complex tasks in domains such as computer
vision or natural language processing.
Finally, it seems that concepts related to Industry 4.0 (AvgY 2018.52) and smart manu-
facturing (AvgY 2018.25) are, by far, more recent than those related to supply chain
management (AvgY 2016.38) and supply chain (AvgY 2015.96). This may suggest
that researchers are widening the scope of concepts such as Industry 4.0 and smart
manufacturing to encompass topics related to supply chain.

5 Discussion

To further explore the disambiguation of these concepts, this section provides a summary
table with the similarities and differences that have been identified through the text
mining approach presented above. In Fig. 4, the cells that are coloured in green describe
similarities between the six terms analysed while those coloured in red outline key
differences between them. Additionally, as this research aims at enhancing the
understanding of data science concepts, the cells on the diagonal of Fig. 4 also provide
proposed definitions of the six terms, based on key references in the literature [7, 12–14].
This representation was inspired by the study performed in [6].
From the results, it appears that terms such as data analytics, ML, AI, and DM
have very close meanings. Nevertheless, some differences can be identified: it seems
that the term data analytics is better suited for applications of techniques that harness
data to serve a specific purpose of decision-making. For instance, a system able to
improve customer service through the analysis of calls records should be referred to
as analytics [7]. Furthermore, data analytics employs a broad set of multidisciplinary
techniques to achieve its objective. Machine learning should be employed when referring
to algorithms that are able to improve performance on a specific task (e.g. using support
vector machines to pick the most appropriate type of production plan rescheduling [15]).
AI rather relates to systems able to behave like humans with respect to a specific task (e.g.
a conversational agent or chatbot aiming to provide human-like answers to questions
[16]). Finally, DM is mainly related to the use of statistical models or algorithms to
discover hidden patterns and generate knowledge (e.g. using association rule mining to
harness textual maintenance reports [17]).
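The pattern-discovery flavour of DM can be illustrated with a from-scratch support/confidence rule miner; this is a generic sketch, not the method of [17], and the maintenance-report itemsets are invented:

```python
from itertools import combinations

# Invented keyword itemsets extracted from textual maintenance reports.
reports = [
    {"vibration", "bearing", "replacement"},
    {"vibration", "bearing", "lubrication"},
    {"overheating", "sensor", "replacement"},
    {"vibration", "bearing", "replacement"},
]

def support(itemset, transactions):
    """Fraction of transactions containing the whole itemset."""
    return sum(itemset <= t for t in transactions) / len(transactions)

def rules(transactions, min_support=0.5, min_confidence=0.8):
    """Single-item association rules as (antecedent, consequent, support, confidence)."""
    items = sorted(set().union(*transactions))
    found = []
    for a, b in combinations(items, 2):
        for ante, cons in ((a, b), (b, a)):
            sup = support({ante, cons}, transactions)
            base = support({ante}, transactions)
            conf = sup / base if base else 0.0
            if sup >= min_support and conf >= min_confidence:
                found.append((ante, cons, sup, conf))
    return found
```

On these toy reports, the only surviving rules link "vibration" and "bearing", each with full confidence, which is the kind of hidden pattern such mining surfaces.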

Fig. 4. Proposed similarities, differences, and definitions for the explored data-related concepts

6 Conclusion and Future Work

This study employed a text mining approach and the method of a systematic literature
review to analyse 3858 scientific articles on data science in smart manufacturing and
supply chain. This analysis has allowed deriving a common understanding and disam-
biguation of six key concepts in data science, namely machine learning, data analytics,
big data, data mining, artificial intelligence and data management from the way authors
have used them in the literature. Furthermore, recent research trends in Industry 4.0 and
supply chain management with data-driven approaches were identified.
Regarding the disambiguation of data-related concepts, it was found that authors
have progressively used broader terms such as AI, which encapsulate several other key
concepts (e.g. machine learning and data management). The difference between ML,
DM and data analytics was discussed. The former was found to be strongly related to
algorithmic techniques such as neural networks to perform tasks such as classification
or clustering, while the two others were associated with decision-making processes.

Recent literature on Industry 4.0 and supply chain management has extensively
studied the use of blockchain to improve traceability, as well as IIoT and digital twins. Also,
predictive maintenance and deep learning applications were found to be still attracting
research interest.
Future work will mainly focus on the extension of this method to a general study
not exclusively focusing on Industry 4.0, smart manufacturing, and supply chain. Addi-
tionally, including industrial sources in the research material should provide further
information on global trends in using data-related concepts. For instance, this study has
revealed that the term data engineering enjoys little popularity in academia, yet it
is considered by industrial practitioners as an essential component of data science [18].

References
1. Ruessmann, M., Lorenz, M., Gerbert, P., Waldner, M., Justus, J., Engel, P., Harnisch, M.:
Industry 4.0: the future of productivity and growth in manufacturing. Boston Consult. Group
9, 54–89 (2015)
2. Tao, F., Qi, Q., Liu, A., Kusiak, A.: Data-driven smart manufacturing. J. Manuf. Syst. 48,
157–169 (2018). https://doi.org/10.1016/j.jmsy.2018.01.006
3. Usuga Cadavid, J.P., Lamouri, S., Grabot, B., Pellerin, R., Fortin, A.: Machine learning applied
in production planning and control: a state-of-the-art in the era of industry 4.0. J. Intell. Manuf.
31, 1531–1558 (2020). https://doi.org/10.1007/s10845-019-01531-7
4. Waller, M.A., Fawcett, S.E.: Data science, predictive analytics, and big data: a revolution that
will transform supply chain design and management. J. Bus. Logist. 34, 77–84 (2013).
https://doi.org/10.1111/jbl.12010
5. Rainer, C.: Data mining as technique to generate planning rules for manufacturing control in
a complex production system. Springer (2013). https://doi.org/10.1007/978-3-642-30749-2
6. Schuh, G., Reinhart, G., Prote, J.P., Sauermann, F., Horsthofer, J., Oppolzer, F., Knoll, D.:
Data mining definitions and applications for the management of production complexity. In:
52nd CIRP Conference on Manufacturing Systems, pp. 874–879. Elsevier B.V., Ljubljana
(2019). https://doi.org/10.1016/j.procir.2019.03.217
7. Gandomi, A., Haider, M.: Beyond the hype: big data concepts, methods, and analytics. Int.
J. Inf. Manage. 35, 137–144 (2015). https://doi.org/10.1016/j.ijinfomgt.2014.10.007
8. Mayo, M.: The Data Science Puzzle - 2020 edn. https://www.kdnuggets.com/2020/02/data-science-puzzle-2020-edition.html
9. Mayo, M.: The data science puzzle, explained. https://www.kdnuggets.com/2016/03/data-science-puzzle-explained.html/2
10. Sharp, M., Ak, R., Hedberg, T.: A survey of the advancing use and development of machine
learning in smart manufacturing. J. Manuf. Syst. 48, 170–179 (2018). https://doi.org/10.1016/j.jmsy.2018.02.004
11. Bevilacqua, M., Ciarapica, F.E., Marcucci, G.: Supply chain resilience research trends: a
literature overview. IFAC-PapersOnLine 52, 2821–2826 (2019). https://doi.org/10.1016/j.ifacol.2019.11.636
12. Mitchell, T.: Machine Learning. McGraw-Hill, New York (1997)
13. Russell, S., Norvig, P.: Artificial Intelligence: A Modern Approach. Prentice Hall Press,
Harlow (2009)
14. Tiwari, S., Wee, H.M., Daryanto, Y.: Big data analytics in supply chain management between
2010 and 2016: insights to industries. Comput. Ind. Eng. 115, 319–330 (2018). https://doi.org/10.1016/j.cie.2017.11.017
15. Wang, C., Jiang, P.: Manifold learning based rescheduling decision mechanism for recessive
disturbances in RFID-driven job shops. J. Intell. Manuf. 29, 1485–1500 (2018). https://doi.org/10.1007/s10845-016-1194-1
16. Leong, P.H., Goh, O.S., Kumar, Y.J.: An embodied conversational agent using retrieval-based
model and deep learning. Int. J. Innov. Technol. Explor. Eng. 8, 4138–4145 (2019). https://doi.org/10.35940/ijitee.L3650.1081219
17. Grabot, B.: Rule mining in maintenance: analysing large knowledge bases. Comput. Ind. Eng.
139, 1–5 (2018). https://doi.org/10.1016/j.cie.2018.11.011
18. Dhungana, S.: On building effective data science teams. https://medium.com/craftdata-labs/
on-building-effective-data-science-teams-4813a4b82939. Accessed 16 May 2020
Benchmarking Simulation Software Capabilities
Against Distributed Control Requirements:
FlexSim vs AnyLogic

Ali Attajer1,2,3(B), Saber Darmoul1, Sondes Chaabane2, Fouad Riane1,3, and Yves Sallez2
1 Ecole Centrale Casablanca, Bouskoura Ville Verte, 27182 Casablanca, Morocco
{ali.attajer,saber.darmoul,fouad.riane}@centrale-casablanca.ma
2 LAMIH, UMR CNRS 8201, Université Polytechnique des Hauts-de-France, UPHF,
Le Mont Houy, 59313 Valenciennes, France
{sondes.chaabane,yves.sallez}@uphf.fr
3 LIMII, Hassan First University, Settat, Morocco

Abstract. Industry 4.0 communication and data management technologies enable
the development of distributed, product-driven control architectures, where
intelligent products can play active roles in manufacturing control processes.
Although simulation is a widespread practice to test, evaluate, compare and
validate different design alternatives, there is still a lack of papers that assess
and discuss the capabilities of available simulation software to meet and implement
the requirements of such distribution as a design alternative. This paper provides
an analysis of distributed, product-driven control requirements and benchmarks them
against the capabilities of two commercially available simulation software packages,
namely FlexSim and AnyLogic. A comparison of the strengths and weaknesses of each
package is provided through a case study.

Keywords: Simulation software · Distributed control · Intelligent product · Simulation benchmark

1 Introduction
The advent of the Industry 4.0 paradigm introduces a set of information and communica-
tion technologies that allow both information processing to be distributed, and decision-
making to be decentralized over several autonomous and intelligent production entities
including smart manufacturing assets (machines, robots, material handling devices, etc.),
augmented operators and intelligent products [1]. This distribution/decentralization
particularly encourages the design and development of distributed, product-driven control
architectures, where intelligent products can play more active roles in operational and
decision-making processes [1].
As manufacturers are often reluctant to experiment with new control architectures
directly on their production systems [2], mainly due to risk aversion considerations (loss

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021


T. Borangiu et al. (Eds.): SOHOMA 2020, SCI 952, pp. 520–531, 2021.
https://doi.org/10.1007/978-3-030-69373-2_38

of production capacity, functionality, quality, performance, etc.), they prefer first
assessing the control architecture using simulation before implementing it at full scale. Indeed,
simulation is a widespread practice that offers a methodology and a set of tools [3] to test,
evaluate, compare and validate different design alternatives at lower costs and almost
without risk. However, as will be discussed in Sect. 2, there is still a lack of papers
that provide guidelines to benchmark the capabilities of available simulation software
against the requirements of distributed product-driven control in order to select the
simulation software that offers the set of capabilities that best meets these requirements.
The aim of this paper is then to provide such guidelines based on the benchmark-
ing of two representatives of available simulation software: FlexSim and AnyLogic.
FlexSim (https://www.flexsim.com/) is considered due to its high ranking [4], and as a
representative of discrete event simulation software. AnyLogic (https://www.anylogic.
com/) is considered as a representative of multi-agent based modelling and simulation
software [5].
The remainder of the paper is organized as follows: Sect. 2 reviews the related works
with respect to the use of simulation in distributed manufacturing control. Then, Sect. 3
provides an analysis of distributed control requirements, which are illustrated on a case
study (cf. Sect. 4) and further matched against the capabilities of FlexSim (cf. Sect. 5.1)
and AnyLogic (cf. Sect. 5.2). Finally, a conclusion provides a comparison of the strengths
and weaknesses of each software, and some future works are presented.

2 Simulation in Distributed Control of Manufacturing Systems


In product-driven control architectures, intelligent products [6] can play active roles
in manufacturing control [1, 7, 8]. Although simulation is widely used to assess the
behaviour and performance of such architectures [2, 3, 9], authors seldom justify the
criteria by which they select the simulation software they use. Usually, authors
discuss the strengths and weaknesses of their control architectures, but not the
capabilities, ease of use, strengths and weaknesses of the simulation software they
used, nor do they report on their experience using that software to implement those
architectures. These are some of the key observations that
motivate this paper. As a matter of fact, if no guidelines are available to help select
simulation software before implementing a product-driven control architecture, then
misleading choices could be made or additional effort (e.g. programming) could be
spent using software that does not provide the necessary or satisfactory capabilities
to implement the sought-after requirements.
The Simulation Software Survey [10] is a useful source of information that summarizes
the main characteristics of a variety of simulation software packages available
on the market. Some references focused on ranking simulation software [4], comparing
their capabilities [11] and providing frameworks for simulation software selection [12].
However, all of this is made independently of the distribution/decentralization deci-
sions and independently of any targeted control architecture. Several studies described
requirements to develop benchmarking testbeds [13], particularly using simulation [14],
to assess the behaviour and performance of distributed control architectures. In [15], a
recent review of the benchmarking initiatives aimed at Holonic Manufacturing Systems
performance assessment shows that very few of them exist. None of the above-mentioned
references provides an analysis of the capabilities of available simulation software to
meet requirements of distributed product-driven control. Therefore, this paper is an effort
to fill in this gap. The paper compares the strengths and weaknesses of two available
simulation software packages with respect to the implementation of distributed, product-driven
control. FlexSim is considered due to its high ranking, and as a representative of discrete
event simulation software [4]. AnyLogic is considered as a representative of multi-agent
based modelling and simulation software [5]. Both packages provide trial and evaluation
versions online that can be used to implement the case study considered in
Sect. 4.

3 Requirements for Simulation in Distributed Control


Distribution decisions are design decisions, the behaviour and performance of which
can be evaluated using simulation. However, for a successful simulation and evaluation
of product-driven control architectures, the simulation software (i.e. simulators) have to
satisfy several types of requirements, as shown in Fig. 1.

Fig. 1. Illustration of the requirements for simulation

1. Production entities (item numbered (1) in Fig. 1): simulators shall be able to
consider several features of several types of production entities:

(a) To achieve product intelligence, simulators should enable products to be aware
of their (design, operational and customer order) specifications, context and set
of required services to be manufactured.
(b) The resources that offer different services to the products (e.g. transformation
services for machines, maintenance and quality control as support services,
transport and storage services for material handling systems);
(c) Decision entities, human and/or artificial, to synchronize, coordinate and
perform analysis and decision-making processes.

2. Informational structures (item numbered (2) in Fig. 1): simulators shall enable
considering entity attributes and properties related to product and process specifi-
cations (e.g. bills of materials, routings, machining and process parameter settings,
tolerances, services required to obtain a given product, etc.), as well as indicators and
descriptors of the evolution of manufacturing processes (e.g. key performance indica-
tors, statuses and reports describing normal, tolerable, satisfactory and/or abnormal
operating conditions). To achieve product driven control, simulators shall enable
intelligent products to handle informational structures that are compliant with a DIK
model [16]:

(a) D (for Data) represents the properties and statuses of production entities and
processes, as well as the events generated by, or occurring to, production enti-
ties in interaction with each other and within their environment. Data can be
considered as raw facts, without meaning, issued from measurements (e.g. data
acquisition from sensors, such as velocity, temperature, pressure, etc.);
(b) I (for Information) obtained by some data processing to add meaning to raw
data, for example to have answers to questions, such as “what” event happened,
“when” and “where” did it happen, “who” or “what” generated it, “how” is it
described and eventually “who” is in charge of dealing with it;
(c) K (for Knowledge) represents expertise and can be seen as groups of information
that are linked by semantic relations.

3. Interactions: To achieve product-driven control, simulators shall enable intelligent
products to support different types of interactions with production entities:

(a) Direct interaction (item numbered (4) in Fig. 1), for example using direct
communication channels and exchanges of messages.
(b) Indirect interaction (item numbered (5) in Fig. 1), using the environment and
communication channels such as blackboards or stigmergy [17].
(c) More complex interactions, such as negotiation protocols, should be enabled.

4. Decision-making (item numbered (3) in Fig. 1): To achieve product-driven control,
simulators shall enable intelligent products to integrate different decision-making
processes, i.e. the sets of activities, coordinated and synchronized within
business processes, that lead to the satisfaction of production objectives as well
as performance, behaviour and quality of service constraints and requirements.

(a) Depending on distribution design choices, the decision-making can be supported
either by products, or resources, or decision-makers, or else by any
combination of these entities.
(b) Decision-making processes use informational inputs to generate decisions and
informational outputs that will be stored in informational structures.

5. Functioning modes (item numbered (6) in Fig. 1): To achieve product-driven con-
trol, simulators shall enable intelligent products to represent and be aware of all
operational settings of the manufacturing system in terms of both normal, degraded
and disturbed operational conditions.

It is worth noting that production entities and informational structures are common
to many production systems and thus easily handled by simulators. However, decision-
making processes, interactions and functioning modes are rather business dependent,
specific to each production system, and particularly related to the distribution design
choices and mechanisms of the suggested control architectures. The implementation of
these requirements will challenge the capabilities of simulation software in terms of
ease of use, ease of configuration, ease of custom code programming and existence of
pre-built libraries. The case study of Sect. 4 is built to evaluate the above requirements
through different interactions between entities.
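The indirect interaction of requirement 3(b) can be sketched as a blackboard: resources write indicators into a shared store and products later read them, with no direct messaging. All keys, values and the tie-breaking rule are illustrative:

```python
class Blackboard:
    """Shared environment: entities interact only through posted indicators."""

    def __init__(self):
        self._entries = {}

    def post(self, author, key, value):
        """A resource (or product) leaves an indicator in the environment."""
        self._entries[key] = (author, value)

    def read(self, key, default=None):
        """Any entity can later read the indicator, with no direct message."""
        entry = self._entries.get(key)
        return entry[1] if entry else default

board = Blackboard()
board.post("M1", "M1/queue_length", 3)   # machines publish their state
board.post("M2", "M2/queue_length", 0)

def choose_machine(board):
    """A product at a decision point picks the machine with the shorter posted queue."""
    return min(("M1", "M2"), key=lambda m: board.read(f"{m}/queue_length", float("inf")))

choice = choose_machine(board)
```

Because writer and reader never exchange a message directly, the same pattern also covers stigmergy-like schemes where entities only mark and sense their environment.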

4 Case Study

Let us consider an automated manufacturing system such as the example shown in
Fig. 2. The system is composed of a main unidirectional conveyor loop (shown in blue
colour in Fig. 2) servicing production resources located aside secondary loops (shown
in black colour in Fig. 2), and two scrapping areas (shown in orange colour in Fig. 2).
The production resources include a raw material (RM) automated storage and retrieval

Fig. 2. Illustration of the case study


Benchmarking Simulation Software Capabilities 525

system (AS/RS), two equivalent machines M1 and M2 and an AS/RS to store work in
progress (WIP) and finished products. As the main purpose of the paper is not focused on
complex product design and manufacturing, product routings with only one operation (to
be performed interchangeably either on machine M1 or machine M2) are considered.
Machines are subject to failures, and products are subject to quality defects. The scrap
areas receive WIP products if they do not meet quality requirements. Decision and quality
control points are located on the main conveyor loop as milestones so that intelligent
products check updates about indicators and make decisions.
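A deliberately simplified discrete-event sketch of this layout (not a FlexSim or AnyLogic model) can make the dynamics concrete: products arrive on the loop, pick whichever of M1/M2 frees up first, and may be scrapped at quality control. The arrival gap, processing time and defect rate are invented parameters:

```python
import heapq
import random

def simulate(n_products=20, proc_time=4.0, arrival_gap=2.0, defect_rate=0.1, seed=1):
    """Toy discrete-event sketch of the case study: products arrive on the main loop,
    choose the machine (M1/M2) that frees up first, and may be scrapped on a defect."""
    rng = random.Random(seed)
    free_at = {"M1": 0.0, "M2": 0.0}    # time at which each machine is next idle
    finished, scrapped = 0, 0
    events = [(i * arrival_gap, i) for i in range(n_products)]  # (arrival time, product id)
    heapq.heapify(events)
    while events:
        t, pid = heapq.heappop(events)
        m = min(free_at, key=free_at.get)   # product decision at a decision point
        start = max(t, free_at[m])
        free_at[m] = start + proc_time
        if rng.random() < defect_rate:      # quality control after the machine
            scrapped += 1
        else:
            finished += 1
    return finished, scrapped, max(free_at.values())
```

With the default parameters the two machines exactly absorb the arrival rate, so the last product (arriving at t = 38) finishes at t = 42; the real benchmark adds failures, rework loops and AS/RS storage on top of this skeleton.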

4.1 Product Decisions


Initially, a product leaves the raw material AS/RS without having a schedule. The product
moves on the conveyor and crosses decision point D1 , where it acquires its decision
indicators (cf. Sect. 4.2). According to this data, and using a decision mechanism such
as the one described in Sect. 4.2, the product selects the machine that will perform the
next operation in its routing. Then, the product moves on the conveyor. When it crosses
decision point D2 , it acquires the decision indicators and updates the decision it made
earlier accordingly. As a generalization, the product can update its decisions based on
indicators each time it crosses a decision or a quality control point. At decision points,
a product can make one among four possible product decisions (PD):

– PD1. Enter the resource loop;
– PD2. Stay on the main conveyor to wait until the resource is available;
– PD3. Go to the alternative resource loop;
– PD4. Return to stock and wait for the next production horizon.

When a product leaves a production resource (M1 or M2), it crosses a quality control
point where it acquires indicators about its quality. According to this data, and using a
decision mechanism (cf. Sect. 4.2), the product can make one of four possible product
decisions (PD) at quality control points:

– PD5. Go to finished products inventory if quality indicators are acceptable;
– PD6. Rework on either machine M1 or M2 if quality indicators are tolerable and
machines are available and reliable (the rework machine has to be selected);
– PD7. Go to WIP inventory and wait for rework on either machine M1 or M2 if quality
indicators are tolerable and machines are either unavailable or unreliable (wait for a
pre-specified period of time before updating the decision);
– PD8. Go to scrap otherwise.

These decisions are taken using a mechanism such as the one detailed in the next
paragraph.
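Before any weighting mechanism is applied, the branching among PD5–PD8 can be written as a small rule-based function; the quality levels "acceptable"/"tolerable"/"rejected" are illustrative stand-ins for the paper's quality indicators:

```python
def quality_decision(quality, machine_available, machine_reliable):
    """Select among PD5-PD8 at a quality control point.

    `quality` is one of 'acceptable', 'tolerable' or 'rejected'
    (illustrative levels standing in for the quality indicators)."""
    if quality == "acceptable":
        return "PD5: go to finished products inventory"
    if quality == "tolerable":
        if machine_available and machine_reliable:
            return "PD6: rework on machine M1 or M2"
        return "PD7: go to WIP inventory and wait for rework"
    return "PD8: go to scrap"
```

A weighting mechanism such as AHP is then needed within PD6 to pick the rework machine, and at decision points to rank PD1–PD4.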

4.2 AHP-Based Decision Mechanism


The Analytic Hierarchy Process (AHP) described in [18] was adapted to the purpose of
this paper to enable products to make decisions and consequently update their next step
at each decision and quality control point in reaction to availability and reliability distur-
bances. Starting from the raw material AS/RS, the global objective for each product is
to reach the finished products AS/RS. As in [18], three types of criteria are considered,
related to: a) production costs, b) processing and transportation times and c) machine
reliability. Each criterion type is associated with a set of indicators. A product applies
AHP at decision points to select a decision among PD1 to PD4, and at quality control
points to select a decision among PD5 to PD8. First, the product acquires the indicators
associated with the type of decision point. Then it performs pairwise comparisons
between decisions according to each indicator. These comparisons of decisions
according to indicators are aggregated into comparisons of decisions according to criteria.
Finally, the decision that best suits the global objective is selected. The reader is referred
to [18] for more details on the different implementation steps.
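As an illustration, the selection step can be sketched as follows. The candidate decisions, indicator values and criterion weights are hypothetical, and the pairwise-judgement step is simplified to ratios of indicator values (the full method in [18] uses Saaty's judgement scale):

```python
# A simplified AHP selection step (pure Python, no external libraries).
def priority_vector(pairwise):
    """Approximate AHP priorities: normalize each column, then average rows."""
    n = len(pairwise)
    col_sums = [sum(pairwise[i][j] for i in range(n)) for j in range(n)]
    return [sum(pairwise[i][j] / col_sums[j] for j in range(n)) / n
            for i in range(n)]

def ahp_select(decisions, indicators, weights, lower_is_better):
    """Aggregate weighted priorities per criterion and pick the best decision."""
    scores = [0.0] * len(decisions)
    for criterion, weight in weights.items():
        values = [indicators[d][criterion] for d in decisions]
        if lower_is_better[criterion]:
            values = [1.0 / v for v in values]
        # Ratio-based pairwise comparison matrix (simplified judgement step).
        pairwise = [[vi / vj for vj in values] for vi in values]
        for i, p in enumerate(priority_vector(pairwise)):
            scores[i] += weight * p
    return decisions[scores.index(max(scores))]

# Hypothetical indicator values for the quality-control decisions PD5..PD8:
decisions = ["PD5", "PD6", "PD7", "PD8"]
indicators = {"PD5": {"cost": 1.0, "time": 1.0}, "PD6": {"cost": 4.0, "time": 3.0},
              "PD7": {"cost": 3.0, "time": 6.0}, "PD8": {"cost": 8.0, "time": 1.0}}
weights = {"cost": 0.6, "time": 0.4}            # criterion weights (supervisor-set)
lower_is_better = {"cost": True, "time": True}
print(ahp_select(decisions, indicators, weights, lower_is_better))  # -> PD5
```

Note that for a ratio-based (perfectly consistent) matrix the priority vector reduces to the normalized indicator values; the full AHP judgement scale matters when comparisons come from human experts.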

5 Capabilities Benchmarked Against Requirements


The case study is implemented in FlexSim and AnyLogic to evaluate how well each
package supports a product-driven control model.

5.1 FlexSim Capabilities


FlexSim offers a user-friendly interface and a wide library of standard objects that
enable building a simulation model quickly. Different library objects are used to build
a simulation model (see Fig. 3) that corresponds to the case study illustrated in Fig. 2.

Fig. 3. Simulation of the case study using FlexSim


Benchmarking Simulation Software Capabilities 527

Production Entities: FlexSim offers a rich and user-friendly library containing
simulation model objects that can be used to design simulation models. A source node is
used to simulate and customize the product arrivals, and to assign the processing cost
and time indicators to products as flow items. Three queuing objects for raw material,
WIP and finished product inventories are constructed for the waiting areas. Conveyors
are constructed to move the flow item. Machines are simulated using two production
servers. Two sink nodes are used to simulate the scrapping areas 1 and 2.
Informational Structures: FlexSim allows different ways to store and process data
and information. It can route items through different resources based on data embedded
in a flow item since its creation at a source node. It can connect with external data
sources (e.g. MS Excel spreadsheets) and databases, as well as ERP, MES, HMI, PLC and
OPC servers, and exchange data using an Open Database Connectivity (ODBC)1 connection.
FlexSim defines specific indicators for each model object. For example, the holding
costs associated with inventory are defined in the queuing objects. All indicators are
communicated by objects and stored in two global tables named “Indicators at Di” and
“Indicators at Qj”. The role of the tables is to provide data to decision and quality
control points in order to perform the AHP calculations. Global tables enable indirect
communication between products and production resources. When a decision is assigned
to a product, FlexSim stores this decision in a global list that can be exported to Excel.
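The global-table pattern can be sketched as a shared blackboard. The table names follow the paper, but the functions and fields below are illustrative, not FlexSim API calls:

```python
# Global tables as a shared blackboard (names follow the paper; the
# functions below are an illustrative sketch, not FlexSim API calls).
global_tables = {"Indicators at Di": {}, "Indicators at Qj": {}}

def publish(table, resource, indicators):
    """A resource (e.g. machine M1) posts its current indicator values."""
    global_tables[table][resource] = dict(indicators)

def read(table, resource):
    """A decision point reads a resource's indicators before running AHP."""
    return global_tables[table].get(resource, {})

publish("Indicators at Di", "M1", {"queue_length": 2, "reliability": 0.93})
publish("Indicators at Di", "M2", {"queue_length": 5, "reliability": 0.88})

# At decision point D2, the routing logic consults the table indirectly,
# without the product and the machine exchanging messages:
print(read("Indicators at Di", "M1")["reliability"])  # -> 0.93
```

The table decouples writers from readers, which is what makes the product-resource communication indirect.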
Interactions: In FlexSim, library objects are connected to define different process flows
and to allow the exchange of physical flows between model objects. The flow of
information between objects can be implemented by sending direct messages on state conditions.
Custom communication protocols between objects can be programmed on the FlexSim
snippet using either FlexScript (FlexSim’s proprietary language) or C++. Products are
considered as inert flow items that move through the model objects according to their
predefined routings, without having active roles or interactions with other entities. At
decision and quality control points, the supervisor can change the criteria and weightings
of the AHP mechanism using FlexSim interfaces.
Decision-Making: As products are inert flow items, they cannot directly process
information or perform calculations, and consequently cannot be directly endowed with
decision mechanisms. To solve this problem, the AHP mechanism is implemented on
decision and quality control points (i.e. outside the product). When a product reaches a
decision or quality control point, customized logic encoded on each point updates the
product routing. Such logic can be programmed using either FlexScript (FlexSim's
proprietary language) or C++. FlexSim can directly compile custom C++ code via its
snippet editor, create .dll2 files in C++ and link them to FlexSim, and connect with
other programming environments and languages, such as Python and R.
Functioning Modes: FlexSim enables defining customized indicators to represent
product quality, and customized routines to change the values of these indicators. For
1 ODBC is a standard application programming interface (API) for accessing database
management systems (DBMS).
2 Dynamic-link library (DLL) is Microsoft’s implementation of the shared library concept in the
Microsoft Windows and OS/2 operating systems.

example, the value qp is created to represent a measurement of a product dimension.


This quality parameter is susceptible to random events that can change its value (to
model product defects). The quality indicator can be directly consulted by the various
FlexSim objects. Machine failures can be generated by the MTBF/MTTR fault profile
in the FlexSim “toolbox”. Using probability distributions, FlexSim can model the first
failure time, the down time, and the up time of the machine. Based on the machine's mean
up time and its operating time, the machine reliability is calculated using the
exponential distribution.
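Under this exponential assumption, the reliability computation can be sketched as follows (the numeric figures are hypothetical):

```python
import math

def reliability(operating_time, mean_up_time):
    """Exponential reliability model: probability that the machine survives
    `operating_time` hours, given its mean up time (MTBF)."""
    return math.exp(-operating_time / mean_up_time)

# Hypothetical figures: MTBF of 200 h, 50 h elapsed since the last repair.
print(round(reliability(50.0, 200.0), 3))  # -> 0.779
```

This reliability value is one of the machine-related indicators the AHP mechanism compares at decision points.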

5.2 AnyLogic Capabilities

AnyLogic enables modelling all entities (products, resources, storage, and scrapping
areas) as agents using agent-based modelling.

Fig. 4. Simulation of the case study using AnyLogic

Production Entities: The Process Modeling Library of AnyLogic is used to build a


simulation model that corresponds to Fig. 2: a source that generates productAgent(s);
three queuing agents for raw material, WIP and finished product inventories; a conveyor
agent that moves productAgent(s) at a certain speed, preserving order and space
between them; delay agents associated with machines M1 and M2; and two agents
associated with the sink nodes that represent the scrapping areas 1 and 2. Their role is to
destroy incoming productAgent(s). A systemProductionAgent embeds all other agents.
AnyLogic enables defining positions on the conveyor where product agents can make
calculations. Four decision points Di (red points in Fig. 4), and two quality control points
Qj (green points in Fig. 4) are created to enable such calculations.

Informational Structures: Product cost and time indicators are defined as parameters
of the productAgent(s) from their creation. Each product knows its production cost
and processing time on M1 and M2. Before a product is processed on a machine, the
quality indicator qp does not contain any value. AnyLogic assigns the value of the
indicator qp embedded in productAgents using a set of statistical distribution functions
to simulate product defects. ProductAgents can consult the other indicators from all
agents at any time. The native Java environment supports custom Java code, external
libraries, and external data sources.
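A sketch of such a defect model: the nominal dimension, spread and tolerance bands are hypothetical, with a normal distribution standing in for AnyLogic's distribution functions:

```python
import random

NOMINAL = 100.0      # target product dimension (hypothetical units)
ACCEPTABLE = 0.5     # half-width of the finished-product band
TOLERABLE = 1.5      # half-width of the rework band

def measure_quality(rng):
    """Draw qp around the nominal dimension; the spread models defects."""
    return rng.gauss(NOMINAL, 0.8)

def classify(qp):
    """Map the quality indicator to the quality-control outcome classes."""
    deviation = abs(qp - NOMINAL)
    if deviation <= ACCEPTABLE:
        return "acceptable"   # candidate for PD5
    if deviation <= TOLERABLE:
        return "tolerable"    # candidate for PD6/PD7
    return "scrap"            # PD8

qp = measure_quality(random.Random(42))
print(classify(qp))
```

The classification bands correspond to the acceptable/tolerable/scrap outcomes used by decisions PD5 to PD8.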

Interactions: AnyLogic is a multi-paradigm simulation software, which features


Agent-Based Modeling (ABM). In ABM, the primary focus is on individual
agents, their rules, behaviours and interactions with each other and with the environment.
Agents living in one environment can communicate directly by sending messages to
each other. The simulation runs on one computer system, which means all agents share
the same ontology and the use of an Agent Communication Language is not necessary.
For example, when the product reaches the decision point D2, if machine 1 is in the
product routing, the machine1Agent sends a message to the productAgent to inform it
of its availability.
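Direct messaging between agents can be sketched as follows; the class and message fields are illustrative, since AnyLogic agents exchange messages through built-in constructs rather than this API:

```python
# Minimal direct message passing between two agents.
class Agent:
    def __init__(self, name):
        self.name = name
        self.inbox = []

    def send(self, receiver, message):
        """Deliver a message straight into the receiver's inbox."""
        receiver.inbox.append((self.name, message))

    def receive(self):
        return self.inbox.pop(0) if self.inbox else None

product = Agent("productAgent")
machine1 = Agent("machine1Agent")

# The product reaches D2; machine 1 is on its routing, so it reports its state.
machine1.send(product, {"event": "availability", "available": True})
sender, msg = product.receive()
print(sender, msg["available"])  # -> machine1Agent True
```

Because both agents run in one process and share the same ontology, a plain dictionary suffices as message payload.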

Decision-Making: the AHP mechanism is implemented using the Java programming


language. It is embedded directly on productAgents, and is only triggered when product
agents reach decision or quality control points. AnyLogic can work with the R programming
language3 through the Java library "RCaller", which increases its data analytics
capability. It can also integrate Artificial Intelligence in simulation models using a link
to the Python language or Skymind's library to enable reinforcement learning. The artificial
intelligence models can be constructed externally (e.g. in Python) and may be trained
before their integration into AnyLogic.

Functioning Modes: Machine failures can be generated by the “ResourcePool” block.


Recurrent downtime and maintenance activities are scheduled by the different types
of triggers or using the AnyLogic Schedule element. In our case, the downtime and
maintenance tasks are defined using the properties of the “ResourcePool” block.
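The recurrent downtime pattern can be sketched by sampling alternating up and down intervals; the mean up/down times and the exponential choice below are illustrative, not AnyLogic "ResourcePool" defaults:

```python
import random

def downtime_schedule(horizon, mean_up, mean_down, rng):
    """Sample alternating (failure_start, repair_end) intervals up to
    `horizon`, with exponentially distributed up and down times."""
    events, t = [], 0.0
    while True:
        t += rng.expovariate(1.0 / mean_up)      # run until the next failure
        if t >= horizon:
            break
        down = rng.expovariate(1.0 / mean_down)  # repair duration
        events.append((t, min(t + down, horizon)))
        t += down
    return events

for start, end in downtime_schedule(500.0, 120.0, 8.0, random.Random(7)):
    print(f"machine down from {start:.1f} h to {end:.1f} h")
```

Such a sampled schedule is what the machine-disturbance triggers replay during a simulation run.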

6 Conclusion and Perspectives


This paper discussed the capabilities of two available simulation software packages to
implement the requirements of distributed product-driven control. A case study provided
the product with some intelligence, enabling it to play an active role in the decision-making
processes using an AHP mechanism. The implementation of the case study in AnyLogic
and FlexSim made it possible to examine the capabilities of these packages against the
requirements of distributed product-driven control. To this end, a matrix is established
(Table 1) to map the simulation software capabilities to the case study requirements.
From Table 1 it can be noticed that AnyLogic is a suitable simulation software
for implementing and modelling distributed product-driven control in an industrial context, due
3 R is a programming language and free software environment for statistical computing and
graphics.

to the conjunction of multi-paradigm simulation and high capabilities in data analytics.


AnyLogic offers good interoperability with standard programming languages, such as
Java, Python and R, which extends its core capabilities to those of the rich libraries
available in these languages. Agent based modelling can be combined with discrete event
simulation, which further extends the capabilities of simulation using AnyLogic. Agent
based modelling enables achieving product intelligence in terms of data processing,
communication, interactions and decision-making.

Table 1. Summary of simulation software capabilities for product driven distributed control
requirements. (Legend: Good, Fair, Poor)

AnyLogic FlexSim
Production entities
Products
Resources
Decision-making entities (human and/or artificial)
Interactions
Product-Product
Product-Production resource
Product-Human
Production resource - Production resource
Functioning modes and disturbances
Machine disturbances
Product disturbances
Intelligence level of the entities (except product for FlexSim)
Associate informational structures with production entities (except
product for FlexSim)
Decision-making (except product for FlexSim)

On the other hand, FlexSim is very strong in 3D animation and is characterized


by its user-friendly interface and ease of use. Compared with AnyLogic, which offers
multi-paradigm simulation, FlexSim offers only discrete event simulation. In this type
of simulation, products are represented and handled as flow items, which introduces
limitations with respect to implementing product-based decision-making and interactions.
Custom developments and extra programming are needed to overcome these
limitations and achieve product intelligence. FlexSim also offers less interoperability
with other programming languages than AnyLogic, which is a further limitation.
Because of the differences between packages, none of them is suitable for use with
every type of manufacturing problem [11]. The most appropriate simulation software
should be selected for the specific application being studied.

We are considering an extension of this work to take into account several types of
disruptions in the production environment (e.g., late delivery of raw materials, conveyor
breakdown) in order to further develop our model. Several products could also be
interconnected (a network of products able to communicate with each other); in that
case the products could share their experiences when making decisions and update
their sets of actions.

References
1. Derigent, W., Cardin, O., Trentesaux, D.: Industry 4.0: contributions of holonic manufacturing
control architectures and future challenges. J. Intell. Manuf., 1–22 (2020)
2. Leitão, P., Mařík, V., Vrba, P.: Past, present, and future of industrial agent applications. IEEE
Trans. Ind. Inf. 9(4), 2360–2372 (2013)
3. Mourtzis, D.: Simulation in the design and operation of manufacturing systems: state of the
art and new trends. Int. J. Prod. Res. 58, 1–23 (2019)
4. Dias, L.M.S., Vieira, A.A.C., Pereira, G.A.B., Oliveira, J.A.: Discrete simulation software
ranking - a top list of the worldwide most popular and used tools. In: Proceedings of the 2016
Winter Simulation Conference, pp. 1060–1071 (2016)
5. Abar, S., Theodoropoulos, G.K., Lemarinier, P., O’Hare, G.M.P.: Agent based modelling and
simulation tools: a review of the state-of-art software. Comput. Sci. Rev. 24, 13–33 (2017)
6. Meyer, G.G., Framling, K., Holmstrom, J.: Intelligent products: a survey. Comput. Ind. 60,
137–148 (2009)
7. Kovalenko, I., Tilbury, D., Barton, K.: The model-based product agent: a control oriented
architecture for intelligent products in multi-agent manufacturing systems. Control Eng. Pract.
86, 105–117 (2019)
8. Dias-Ferreira, J., Ribeiro, L., Akillioglu, H., Neves, P., Onori, M.: BIOSOARM: a bio-inspired
self-organising architecture for manufacturing cyber-physical shopfloors. J. Intell. Manuf.
29(7), 1659–1682 (2018)
9. Zhang, L., Zhou, L., Ren, L., Laili, Y.: Modeling and simulation in intelligent manufacturing.
Comput. Ind. 112, 103123 (2019)
10. Swain, J.J.: 2019 Simulation Software Survey, Software Survey (2019). https://pubsonline.
informs.org/do/10.1287/orms.2019.05.10/full/. Accessed 18 Apr 2020
11. Guimarães, A.M.C., Leal, J.E., Mendes, P.: Discrete-event simulation software selection for
manufacturing based on the maturity model. Comput. Ind. 103, 14–27 (2018)
12. Fumagalli, L., Polenghi, A., Negri, E., Roda, I.: Framework for simulation software selection.
J. Simul. 13(4), 286–303 (2019)
13. Schreiber, S., Fay, A.: Requirements for the benchmarking of decentralized manufacturing
control systems. In: IEEE International Conference on Emerging Technologies and Factory
Automation, ETFA (2011)
14. Mönch, L.: Simulation-based benchmarking of production control schemes for complex
manufacturing systems. Control Eng. Pract. 15(11), 1381–1393 (2007)
15. Cardin, O., L’Anton, A.: Proposition of an implementation framework enabling benchmark-
ing of Holonic manufacturing systems. In: Studies in Computational Intelligence, vol. 762,
pp. 267–280 (2018)
16. Ackoff, R.: From data to wisdom. J. Appl. Syst. Anal. 16(1), 3–9 (1989)
17. Valckenaers, P., Kollingbaum, M., Van Brussel, H.: Multi-agent coordination and control
using stigmergy. Comput. Ind. 53(1), 75–96 (2004)
18. Ounnar, F., Ladet, P.: Consideration of machine breakdown in the control of flexible production
systems. Int. J. Comput. Integr. Manuf. 17(1), 69–82 (2004)
A Proposal to Model the Monitoring
Architecture of a Complex Transportation
System

Issam Mallouk1,2(B) , Thierry Berger1 , Badr Abou El Majd2 , and Yves Sallez1
1 Université Polytechnique Hauts-de-France, LAMIH UMR CNRS n°8201,
59313 Valenciennes, France
{Issam.Mallouk,Thierry.Berger,Yves.Sallez}@uphf.fr
2 Mohamed V University, FSR, CeReMAR, LMSA Lab, Rabat, Morocco

Issam_Mallouk@um5.ac.ma, Abouelmajd@fsr.ac.ma

Abstract. Enterprises of the transportation sector face fierce competition, so
efficient vehicle fleet management is crucial. The present work proposes
a generic model adapted to the monitoring of a fleet of vehicles. This model
is able to describe the information chain and the different decisional processes
associated with the monitoring architectures. On a first "vehicle" level, each vehicle
and also its context (cargo, user, environment and task) are considered. The vehicle
composition is modelled according to a holonic hierarchy. On a second "fleet"
level, data collected from all vehicles are analysed. The model is then applied to
the monitoring of truck tyres for a dangerous substances transport application.

Keywords: Transportation · Monitoring architectures · Holonic modelling ·
Tyre monitoring

1 Introduction

Nowadays, enterprises in the transportation sector must face huge competition and must
deal with important societal, economic and environmental issues [1]. Fleet operators aim
to maintain and increase the availability and reliability of the fleet to optimize the ratio
between operation and maintenance. Many studies have concluded that maintenance
accounts for more than 60% of the overall life cycle costs of complex moving systems
(e.g., planes, trains, ships…) [1]. In this context, CBM (Condition-based Maintenance)
and PHM (Prognostics and Health Management) are essential approaches to improve
the performance of a fleet [2]. These two approaches rely on the exploitation of an
efficient monitoring architecture, diagnosing the failures and degradation of vehicles'
subsystems and evaluating their impact on the availability and reliability of the fleet.
Although many works propose monitoring architectures in different domains
(manufacturing, transportation…), they often use dedicated tools and there is a lack of
generic models. To address this issue, this paper proposes a generic model to design
efficient monitoring architectures.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021


T. Borangiu et al. (Eds.): SOHOMA 2020, SCI 952, pp. 532–542, 2021.
https://doi.org/10.1007/978-3-030-69373-2_39

The paper is organized as follows. First, a short literature review of
monitoring architectures is presented. The next section is dedicated to the proposition
of a holonic model able to describe the information chain and the different decisional
processes associated with the monitoring architectures. This model is then
applied to the monitoring of a truck fleet. Finally, conclusions and prospects are offered
in the last section.

2 Motivations
This part presents a brief analysis of the existing literature in the domain of monitoring
architectures. A monitoring architecture aims to detect, localize and identify the problems
that occur in a system [3]. Two levels are generally considered: the “vehicle” level
relative to the moving systems (i.e. vehicles) and the “fleet” level (e.g. maintenance
centre) where stakeholders (e.g., fleet manager, fleet maintainer) can take decisions to
improve the value chain associated with the transportation process [4]. In the rest of
the paper, it is assumed that a fleet is composed of vehicles, which are complex moving
systems that can be decomposed into subsystems (which can themselves be decomposed
into subsystems).
In the domain of condition monitoring and diagnostics of machines, the ISO
13374 standard provides a generic model of a monitoring architecture, using 6-layer
processing blocks [5]. These layers progress from raw data acquisition to maintenance
advisories. A fundamental part of any monitoring architecture is the "fault diagnosis"
stage, responsible for detecting faults and isolating the faulty components to be repaired.
This standard is used as a guideline to monitor both stationary machines in industrial
processes and vehicles [6]. Depending on the distribution of these six layers between the
"vehicle" level and the "fleet" level, several monitoring architectures can be considered.
In [7], the author proposes a typology with four types of architectures: “centralized”,
“edge-centralized”, “decentralized” and “decentralized and cooperative”.
In centralized architectures, the six layers introduced before are located in the “fleet”
level, which is responsible for collecting and processing all raw data from the vehicles
[7]. Most of these architectures use Big Data technologies [8]. However, this type of
architecture (primarily due to latency and data throughput) does not allow diagnosis
algorithms to be executed in real-time and (to limit data volume) does not consider rele-
vant information on the local context of the vehicle (e.g., external temperature, specific
functioning mode) [9].
The edge-centralized architecture is more recent and characterized by the
introduction of intermediate "cloud node networks" for computing and communication
using edge-computing technologies [10]. This architecture relies on the creation of an
intermediate level (between the vehicle level and the fleet level) ensuring storage and
processing of raw data. This type of architecture can be considered as a technological
evolution of the previous centralized architecture.
In decentralized architectures, embedded diagnosis units (i.e. located on the vehicle)
support the 4th ISO layer and realize fault diagnosis of some vehicle’s sub-systems.
However these units, operating independently, use only limited observations of their
subsystems and do not communicate with each other [11]. Such architectures are then
534 I. Mallouk et al.

unable to perform cross analysis for error discrimination and can lead to erroneous
diagnoses [7].
In the decentralized and cooperative architecture, the embedded diagnosis units
at the same level cooperate with each other to enrich their local observations, take into
account the context of the subsystems and thus provide more robust diagnoses [12]. This
last architecture was applied successfully by our team to the fault diagnosis of door
systems of a railway transportation system [13]. In [7], this architecture was also
applied to the monitoring of a fleet of trains.
In recent works [14], our team has proposed an approach to model the information
chain from “product” to “stakeholders”. This approach is based on a holonic view of the
product and a decision-making view of the processing of DIK (i.e. data, information, and
knowledge) collected on the product and its operational context. This approach
(technologically agnostic) focuses on the final decision maker (i.e. the stakeholder), so the
choice of a specific type of architecture (i.e., centralized, decentralized…) can be considered
in a second stage. In the next section, the generic model associated with this approach is
adapted to support the monitoring architecture of a fleet of vehicles.

3 Proposition

The proposition addresses the following requirements:

– Requirement #1: The model must be able to deal with the complexity of a fleet of
vehicles (e.g., trucks, trains, planes…) composed of several subsystems;
– Requirement #2: The model must be sufficiently generic to deal with the various needs
of the stakeholders implied in the fleet management;
– Requirement #3: The model must be technologically independent.

The next paragraphs propose a modelling approach able to fulfil these requirements.

3.1 The Vehicle and the Associated Functions


In [14, 15], a modelling approach based on primary and secondary functions was
proposed. Primary functions are representative of the core activity of the transportation
fleet. For example, at the vehicle level, the most important primary function FP1
"transport cargo from A to B" concerns the physical transportation process performed
by the vehicle.
As depicted in Fig. 1, the vehicle (1), denoted Vi, is immersed in a context Ci (2)
composed of the following entities:

– The cargo on which the process is applied (e.g. freight transported from A to B by a
truck);
– The human in the vehicle, generally called the 'driver', plays several roles during
the transportation process: the main role is obviously driving the vehicle;
a second role concerns the loading/unloading of the cargo; a third role, as a local
stakeholder, concerns interventions on the vehicle (i.e. basic maintenance operations).

– The task that defines how the process is managed for a specific function FPj . It is
characterized by prescribed procedures (e.g., how the cargo must be transported,
loaded and unloaded) and some performance criteria (e.g., transportation time, energy
consumption, safety of the cargo);
– The environment in which the transportation process occurs. Two main facets are
considered: physical (e.g., external temperature, hygrometry) and non-physical (e.g.
freight transportation legislation).

The secondary functions aim to enhance performance criteria (e.g., fleet availability
via predictive maintenance of the vehicles, monitoring of the freight). As described in
Fig. 1, these functions are handled by support systems (3) located on “vehicle” (i.e.
internal support system) and “fleet” (i.e. external support system) levels.

Fig. 1. Vehicle and the associated functions

At the "vehicle" level, internal support systems exploit the raw data flow collected
on the transportation process by sensors and instrumentation associated with the different
subsystems (e.g. data sent by the Electronic Control Unit associated with the engine).
As previously explained, in the Decentralized and Cooperative architecture, the internal
support systems can provide refined and accurate information (i.e. diagnoses) to external
support systems (e.g. the remote maintenance centre).
At the "fleet" level, external support systems composed of diagnosis expert resources
(human and/or artificial) provide expertise results to the involved stakeholders (4).
External support systems are immersed in a context relevant to the "fleet" level (i.e.,
transportation regulations, financial aspects). Taking into account the generated expertise results,

the stakeholders can then schedule interventions on the vehicle (e.g. replacement of a
part that is wearing out).
The generic characteristics of the primary and secondary functions allow fulfilling
Req. #2. The modelling of the secondary functions is detailed in the next section.

3.2 Holonic Modelling of the Secondary Functions


To fulfil the first requirement (Req. #1), a holonic architecture is retained [16] to
encapsulate the activities associated with the secondary functions (from the "fleet" level
to the lower level of the vehicle subsystems), see Fig. 2.

Fig. 2. Example of holarchy (legend: Hi = holon i; Vi = vehicle/system i; Ci = context of Vi)

As depicted in Fig. 2, at each level of the decomposition, a triplet (Fsi, Vi, Ci) is
associated with each holon Hi. The head of the holon Hi contains the considered
secondary function fsin (fsin ∈ Fsi, the set of all secondary functions of Hi) associated
with the system Vi. The latter and its context Ci constitute the holon body.
At each level i, a holon autonomously supports a decisional process relative to the
considered secondary function. In addition, the holon interacts with other holons at the
same level and (when relevant) with holons located on i + 1 and i − 1 levels, through
a collaboration space. Within this space, only the concerned holons (on the same level and
on upper and lower levels) can optimize their decisions. The optimization strategy can be
based on numerous principles from the literature, e.g. from the multi-agent community.
Depending on the complexity, this cooperation can be realized through, for example,
simple data sharing or a more complex distributed decision-making process.
In the holarchy represented in Fig. 2 the vehicle is composed of several sub-systems
(e.g., engine, wheels, undercarriage…) and each holon Hi supports a diagnosis process
through a secondary function fsin, giving rise to recursive diagnostic structures.
To perform its activity, each holon exploits the data collected on its level, on the vehicle
subsystems and on their respective contexts, and applies a decisional process; this process
is detailed in the next paragraph.
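The recursive structure can be sketched as follows; the holon names echo Fig. 2, while the diagnosis functions are illustrative placeholders:

```python
# Each holon applies its own secondary function and aggregates the
# diagnoses of its sub-holons, mirroring the recursion of the holarchy.
class Holon:
    def __init__(self, name, diagnose_local, children=()):
        self.name = name
        self.diagnose_local = diagnose_local   # plays the role of fsin
        self.children = list(children)

    def diagnose(self, context):
        report = {self.name: self.diagnose_local(context)}
        for child in self.children:            # recurse down the holarchy
            report.update(child.diagnose(context))
        return report

ok = lambda ctx: "OK"
tyre_check = lambda ctx: "ALARM" if ctx["tyre_pressure"] < 7.0 else "OK"

h111 = Holon("H111 (tyre)", tyre_check)
h11 = Holon("H11 (wheels)", ok, [h111])
h1 = Holon("H1 (vehicle #1)", ok, [h11])

print(h1.diagnose({"tyre_pressure": 6.2}))
```

In a full implementation the parent would not merely collect child reports but could refine them using its own context, which is where the collaboration space comes into play.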

3.3 Decisional Process


To each secondary function a behaviour is associated. In a general approach, this
behaviour can be characterized by the semantic of processed data and treatments’ cog-
nitive levels. In previous works [14], both the DIK [17] and the Rasmussen models [18]
were considered as reference models. According to the DIK model:

– D (for Data) are raw facts without meaning issued from measurements (e.g.,
temperature of the environment, pressure of the vehicle tyres).
– I (for Information) are obtained by adding informative details to data (D), such as
“when”, “where”, “who”, “how”, “what” (e.g., vehicle (what); in a warehouse (where);
at a specific time (when)).
– K (for Knowledge) can be seen as groups of information (I) that are linked by semantic
relations (e.g. engine V12 used in vehicle V1 in an operational context where the
external temperature of the environment is 35 °C).
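The D-to-I-to-K enrichment can be sketched with simple data structures; the field names and example values are illustrative:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Information:
    """A raw datum (D) enriched with what/where/when details (I)."""
    what: str
    where: str
    when: datetime
    value: float

def to_knowledge(*items):
    """Group related information items under a semantic relation (K)."""
    return {"observed_together": list(items)}

pressure = Information("tyre pressure of V111", "road N1",
                       datetime(2020, 7, 1, 14, 0), 8.2)
temperature = Information("external temperature", "road N1",
                          datetime(2020, 7, 1, 14, 0), 35.0)
knowledge = to_knowledge(pressure, temperature)
print(len(knowledge["observed_together"]))  # -> 2
```

Linking the pressure reading to the ambient temperature observed at the same time and place is exactly the kind of semantic relation that turns information into knowledge.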

According to Rasmussen's approach [18], three levels can be distinguished for the
decision-making processes:

– At the lower level, a reactive (or skill-based) behaviour can be used to generate
alarms by exploiting raw data (D). For example, a vehicle can send an alarm in case
of a problem with the cargo (e.g. a break in the cold chain for perishable goods).
– At the mid-level, a rule-based behaviour can exploit different information (I) sources to
generate refined information. For example, a model-based approach can be used by
an embedded diagnosis system to identify suspect components if a failure occurs on
vehicle equipment.
– At the higher level, knowledge (K) can be used to improve understanding of the situation.
For example, a detailed analysis of the transportation context can lead a human expert
to explain a problem on the vehicle.
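The first two levels can be sketched as follows; the thresholds and rules are hypothetical examples, not taken from a real diagnosis system:

```python
def skill_based_alarm(cargo_temp_c):
    """Reactive level: raise an alarm directly from raw data (D),
    e.g. a break in the cold chain for perishable goods."""
    return cargo_temp_c > 8.0

def rule_based_suspects(symptoms):
    """Rule level: map observed symptoms (I) to suspect components."""
    rules = {
        frozenset({"door_blocked", "motor_current_high"}): {"door actuator"},
        frozenset({"pressure_low"}): {"tyre", "valve"},
    }
    suspects = set()
    for symptom_set, components in rules.items():
        if symptom_set <= symptoms:   # all symptoms of the rule observed
            suspects |= components
    return suspects

print(skill_based_alarm(10.5))                       # -> True
print(sorted(rule_based_suspects({"pressure_low"}))) # -> ['tyre', 'valve']
```

The knowledge level has no such compact sketch: it typically involves a human expert (or a knowledge-based system) reasoning over the linked information.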

The two previous models (i.e. DIK and Rasmussen) taken as references are generic
and technologically agnostic, thus fulfilling the third requirement (Req. #3). They make
it possible to define both the nature of the informative details (DIK) and the cognitive
levels of the decisional process. In this context, it can be envisaged to exploit matching
theoretical fields (e.g., AI, analytics, ontology…) and associated technologies (e.g.
embedded computing, edge computing, cloud computing…).
In the next section, the proposed model is instantiated through a use case of
monitoring of a trucks fleet.

4 Use Case

The use case concerns a fleet of vehicles transporting hazardous substances. The
operational information relative to the transport of dangerous substances is provided by
STMF [19], a leading Moroccan company in this sector. The project is currently
ongoing; only an overview of the planned developments is given and no implementation
details are provided.
Ensuring a high level of vehicle reliability is very important
in order to properly accomplish the delivery missions. A minor incident such as a tyre burst
can cause the detachment of the tank containing the hazardous substance, with dramatic
consequences for the transport company, the other road users and the environment.
Ignoring or failing to correctly set the tyre pressure may lead to accidents and can affect
the vehicle's fuel consumption and tyre lifespan [20].
With respect to the modelling approach presented in the previous section, the primary
function FP1 "transport cargo from A to B" is considered with:

– The vehicle Vi and its hierarchical decomposition in subsystems, as in Fig. 2;


– The context Ci of the vehicle defined by:

• The type of cargo (e.g., dangerousness level of the substance),


• The task, defined by an initial itinerary established at the fleet level. The definition
of the itinerary must take into account the health state of the vehicle, the driver
status (e.g., habilitation for dangerous substances) and other factors (e.g., weather
information, road characteristics).
• The driver, considered as a decision-maker (i.e. local stakeholder) assisted by the
internal support system to take autonomous decisions (e.g., modification of the
initial itinerary, stopping at a repair shop).
• The environment (e.g., road characteristics - mountain, plain, external temperature).

In this application, the focus is placed on the secondary functions relative to the vehicle
tyres. According to Fig. 2 and the holonic approach, several levels are considered, as
exhibited in Fig. 3.
At the “tyre” level (Level #3), for each tyre a secondary function (e.g., fs111n for the
holon H111) performs monitoring via temperature/pressure raw data. The direct tyre
pressure monitoring system employs temperature/pressure sensors on each wheel. The
current TPMS (Tyre Pressure Monitoring Systems) are based on pressure thresholds
A Proposal to Model the Monitoring Architecture of a Complex Transportation System 539

[21]. However, the internal temperature varies with the pressure and can generate
false alarms. In fact, the variation of the tyre internal temperature can be considerably high,
especially in the hot Moroccan climate. The pressure can then exceed the fixed threshold
and induce a false fault alarm. It is therefore imperative to dynamically determine the appropriate
pressure at a given temperature, both to avoid false alarms and to detect tyre defects early.
In [21], a rule-based approach mixing temperature and pressure data is used to evaluate
the tyre status and generate alarms. The future developments will take inspiration from this
approach.
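By way of illustration, such a temperature-compensated rule can be sketched as follows. This is a minimal hypothetical sketch, not the rules of [21]: the function name, nominal values and tolerance are assumptions. The expected pressure is rescaled to the measured internal temperature using Gay-Lussac’s law (P/T constant at fixed volume, with T in kelvin), so a hot tyre is compared against a temperature-adjusted threshold rather than a fixed one.

```python
def tyre_status(pressure_kpa, temp_c,
                nominal_kpa=900.0, nominal_temp_c=20.0, tolerance=0.10):
    """Temperature-compensated rule-based tyre check (illustrative).

    The expected pressure at the measured internal temperature is
    derived from Gay-Lussac's law, which avoids the false alarms
    a fixed threshold would raise in hot climates.
    """
    expected_kpa = nominal_kpa * (temp_c + 273.15) / (nominal_temp_c + 273.15)
    deviation = (pressure_kpa - expected_kpa) / expected_kpa
    if deviation < -tolerance:
        return "UNDER_INFLATED"
    if deviation > tolerance:
        return "OVER_INFLATED"
    return "OK"
```

For instance, a reading of 975 kPa at 45 °C is still classified as OK, because the expected pressure itself rises with the internal temperature.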
At the “rolling undercarriage” level (Level #2), a secondary function (e.g., fs11n
for the holon H11) takes into account the status of all the vehicle tyres (i.e. those of the
trailer and those of the tractor) and evaluates the possible consequences on the vehicle
handling. First, false alarms can be filtered taking into account the context and the
condition of the other tyres. For example, depending on the payload and on the vehicle
speed, in a left bend, if all the right-hand wheels show a pressure different from that of
the left-hand wheels, the pressure deviation can be explained by the context. This filtering
approach was successfully applied by our team for diagnosis in railway applications [13].
Secondly, a vehicle handling status can be obtained via precise modelling and simulation
[22], taking into account tyre wear rates, temperatures, pressures and the position of the
tyres in the whole undercarriage. This vehicle handling status assessment is done using
a rule-based approach and can be displayed to the on-board stakeholder (i.e. the
driver), detailing the truck and trailer status.
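The contextual filtering of the left-bend example can be sketched as a simple rule. The function name, the bend and speed conditions and the 50 km/h figure are illustrative assumptions of this sketch, not the actual rules of [13]:

```python
def confirm_alarm(raw_alarm_side, left_kpa, right_kpa,
                  in_left_bend, speed_kmh):
    """Undercarriage-level filtering of a raw tyre-pressure alarm.

    In a left bend taken at speed, lateral load transfer legitimately
    raises the pressure of the outer (right-hand) wheels; if the
    shift is uniform across that side, the alarm is attributed to
    the driving context and filtered out.
    """
    if raw_alarm_side == "right" and in_left_bend and speed_kmh > 50.0:
        uniform_shift = all(r > l for l, r in zip(left_kpa, right_kpa))
        if uniform_shift:
            return False  # explained by the context: do not escalate
    return True  # alarm confirmed: escalate to the vehicle level
```

A confirmed alarm (return value True) would then be passed to the vehicle-level function for mission-impact assessment.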
At the “vehicle” level (Level #1), a secondary function (e.g., fs1n for the holon
H1) determines the impact of the vehicle handling status on the transport mission. A
knowledge-based approach is used to integrate multiple factors: the vehicle handling
status determined by the previous fs11n function, information on the current mission
(e.g., distance remaining to be covered, urgency associated with the freight) and on the
vehicle context (e.g., outside temperature, type of road). This impact is communicated
to the driver, who must take a decision (e.g., stop and change the tyre, go to the nearest
repair shop, continue the mission while reducing the speed or changing the itinerary).
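The vehicle-level integration can be sketched as a small decision table. The statuses, arguments and rules below are hypothetical placeholders for the actual knowledge base, chosen only to illustrate how handling status and mission context combine into a driver recommendation:

```python
def mission_recommendation(handling_status, remaining_km,
                           freight_urgent, repair_shop_km):
    """Vehicle-level knowledge-based sketch (hypothetical rules):
    map the handling status and mission context to one of the
    driver actions mentioned in the text."""
    if handling_status == "CRITICAL":
        return "STOP_AND_CHANGE_TYRE"
    if handling_status == "DEGRADED":
        if repair_shop_km < remaining_km:
            # A repair shop is closer than the destination: divert.
            return "GO_TO_NEAREST_REPAIR_SHOP"
        # Otherwise favour safety, unless the freight is urgent.
        return "REDUCE_SPEED" if freight_urgent else "GO_TO_NEAREST_REPAIR_SHOP"
    return "CONTINUE_MISSION"
```
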
The previous secondary functions will exploit modelling and simulation support
organized in a vehicle digital twin [23]. As depicted in Fig. 3, the raw data collected on
the tyres are used to update the digital twin.
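The update loop from raw data to twin state can be sketched minimally as follows; the class and attribute names are assumptions of this sketch, not taken from [23]:

```python
from dataclasses import dataclass, field

@dataclass
class TyreTwin:
    """Minimal sketch of the tyre portion of a vehicle digital twin:
    raw on-board measurements update the twin state, which the
    secondary functions then query for simulation and diagnosis."""
    pressure_kpa: float = 0.0
    temp_c: float = 0.0
    history: list = field(default_factory=list)  # (pressure, temp) samples

    def update(self, pressure_kpa, temp_c):
        # Each raw reading refreshes the current state and is kept
        # for the fleet-level comparative analysis.
        self.pressure_kpa, self.temp_c = pressure_kpa, temp_c
        self.history.append((pressure_kpa, temp_c))
```
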
At the “fleet” level (Level #0), a secondary function fs0n, supported by an external
system, performs a global comparative analysis of the tyre data provided by the different
vehicles. A predictive analysis of the collected data can generate information for
condition-based maintenance operations and suggestions on driving patterns for
various road conditions, addressed to both the driver and the fleet owner [24]. At the end of this analysis,
a view of the health of the vehicles’ tyres (i.e. remaining useful life (RUL)) and recom-
mendations (e.g., swapping tyres on axles) are presented to the concerned stakeholders
(i.e., maintenance expert, fleet manager). As outlined in [25], maintenance predictions
can be enhanced by combining the deviations in on-board data (from internal support
systems) with off-board data sources (from the external support system) such as mainte-
nance records and failure statistics. At this level, Big Data analytics tools are classically
used [24]. For example, approaches to classify time series and to detect anomalies can
be useful. In this context, deep learning and more precisely LSTM (Long Short-Term
Memory) models are promising candidates.
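As a simple statistical stand-in for the LSTM-based analysis envisaged above (the names and threshold are assumptions of this sketch), the fleet-level comparative screening can be illustrated by flagging vehicles whose tyre pressure-loss rate deviates strongly from the fleet distribution:

```python
import statistics

def flag_outlier_vehicles(pressure_loss_rates, z_threshold=2.0):
    """Fleet-level comparative screening (illustrative baseline):
    flag vehicles whose tyre pressure-loss rate deviates from the
    fleet mean by more than z_threshold standard deviations, as
    candidates for condition-based maintenance."""
    rates = list(pressure_loss_rates.values())
    mu = statistics.mean(rates)
    sigma = statistics.pstdev(rates)
    if sigma == 0:
        return []  # perfectly uniform fleet: nothing to flag
    return sorted(vehicle for vehicle, rate in pressure_loss_rates.items()
                  if abs(rate - mu) / sigma > z_threshold)
```

In a production setting, this per-vehicle score would be replaced by the sequence models discussed in the text, which can exploit the temporal structure of the collected data.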

[Figure 3 summarizes the four holonic monitoring levels: Level #0 (fleet) – fs0n performs collective monitoring of the tyres and delivers a health view of the vehicles’ tyres and recommendations to the fleet maintainer; Level #1 (vehicle) – fs1n performs a knowledge-based transport mission prognosis, supported by the vehicle digital twin, and delivers predictions to the vehicle driver; Level #2 (rolling undercarriage) – fs11n performs a rule-based vehicle handling diagnosis from the tractor and trailer tyre statuses, supported by vehicle dynamics modelling and simulation; Level #3 (tyre) – fs111n performs rule-based tyre monitoring from raw temperature/pressure data. Experience on the tyre behaviour feeds back into the digital twin.]

Fig. 3. Application to the tyres monitoring

The result of the global analysis allows updating the information (e.g., tyre wear rate) and
knowledge (e.g., tyre grip behaviour depending on the road surface) used in the vehicle’s
digital twin.

5 Conclusion

This paper has proposed a model of the informational chain of systems in the field of
transportation that can be customized for the concerned stakeholders. The model con-
siders each subsystem and its operating context. The system composition is modelled as
a holonic hierarchy. To each holon head are associated one or several secondary functions
that provide the diagnosis of the health status of the targeted system in the holon’s body.
A use case illustrates how the proposed system can be used for tyre monitoring,
in the context of a fleet of vehicles transporting hazardous substances. Four levels of
monitoring, analysis and decision were proposed: tyre, set of tyres, vehicle and fleet of
vehicles.
Future work will be devoted to completing the use case and to detailing how to
choose the candidate architecture and technologies to implement this preliminary system
architecture. More particularly, the focus will be put on the different analytics tools used
on-board (at the vehicle level) and off-board (at the fleet level).

References
1. Mbuli, J.W.: A multi-agent system for the reactive fleet maintenance support planning of a fleet
of mobile cyber-physical systems: application to rail transport industry. Doctoral dissertation.
Université Polytechnique Hauts-de-France (2019)
2. Trentesaux, D., Branger, G.: Data management architectures for the improvement of the
availability and maintainability of a fleet of complex transportation systems: a state-of-the-
art review. In: Service Orientation in Holonic and Multi-Agent Manufacturing, pp. 93–110.
Springer, Cham (2018)
3. Pencolé, Y., Cordier, M.O.: A formal framework for the decentralised diagnosis of large
scale discrete event systems and its application to telecommunication networks. Artif. Intell.
164(1–2), 121–170 (2005)
4. Bengtsson, M.: Condition Based Maintenance on Rail Vehicles-Possibilities for a more
effective maintenance strategy (2003)
5. ISO 13374-1:2003 - Condition monitoring and diagnostics of machines - Data processing,
communication and presentation - Part 1: General guidelines. https://www.iso.org/cms/ren
der/live/en/sites/isoorg/contents/data/standard/02/18/21832.html
6. Alanen, J., Haataja, K., Laurila, O., Peltola, J., Aho, I.: Diagnostics of mobile work machines
(2006)
7. Adoum, A.F.: An intelligent agent-based monitoring architecture to help the proactive main-
tenance of a fleet of mobile systems : application to the railway field, Doctoral dissertation.
Université de Valenciennes et du Hainaut-Cambrésis (2019)
8. Chen, J., Lyu, Z., Liu, Y., Huang, J., Zhang, G., Wang, J., Chen, X.: A big data analysis and
application platform for civil aircraft health management. In: 2016 IEEE Second International
Conference on Multimedia Big Data (BigMM), pp. 404–409. IEEE (2016)
9. Jianjun, C., Peilin, Z., Guoquan, R., Jianping, F.: Decentralized and overall condition moni-
toring system for large-scale mobile and complex equipment. J. Syst. Eng. Electron. 18(4),
758–763 (2007)
10. Klas, G.: Edge computing and the role of cellular networks. Computer 50(10), 40–49 (2017)
11. Qiu, W., Kumar, R.: Decentralized failure diagnosis of discrete event systems. IEEE Trans.
Syst. Man Cybern.-Part A: Syst. Hum. 36(2), 384–395 (2006)
12. Zhang, Q., Zhang, X.: Distributed sensor fault diagnosis in a class of interconnected nonlinear
uncertain systems. Ann. Rev. Control 37(1), 170–179 (2013)
13. Le Mortellec, A., Clarhaut, J., Sallez, Y., Berger, T., Trentesaux, D.: Embedded holonic fault
diagnosis of complex transportation systems. Eng. Appl. Artif. Intell. 26(1), 227–240 (2013)
14. Basselot, V., Berger, T., Sallez, Y.: Information chain modeling from product to stakeholder
in the use phase - application to diagnoses in railway transportation. Manuf. Lett. 20, 22–26
(2019)
15. Sallez, Y., Berger, T., Deneux, D., Trentesaux, D.: The lifecycle of active and intelligent
products: the augmentation concept. Int. J. Comput. Integr. Manuf. 23(10), 905–924 (2010)
16. Koestler, A.: The ghost in the machine (1967)
17. Ackoff, R.L.: From data to wisdom. J. Appl. Syst. Anal. 16(1), 3–9 (1989)
18. Rasmussen, J.: Skills, rules, and knowledge; signals, signs, and symbols, and other distinctions
in human performance models. IEEE Trans. Syst. Man Cybern. 3, 257–266 (1983)
19. STMF. https://www.stmf.pro/
20. Mallouk, I., El Majd, B.A., Sallez, Y.: Optimization of the maintenance planning of a multi-
component system. In: MATEC Web of Conferences, vol. 200, p. 00011. EDP Sciences
(2018)
21. Egaji, O.A., Chakhar, S., Brown, D.: An innovative decision rule approach to tyre pressure
monitoring. Expert Syst. Appl. 124, 252–270 (2019)
22. Domprobst, F.: Heavy truck vehicle dynamics model and impact of the tire. In: HVTT14:
14th International Symposium on Heavy Vehicle Transport Technology, Rotorua, New
Zealand (2016)
23. Damjanovic-Behrendt, V.: A digital twin-based privacy enhancement mechanism for the auto-
motive industry. In: 2018 International Conference on Intelligent Systems, pp. 272–279. IEEE
(2018)
24. Preethi, V., Sasi, R.S., Rohit, J.M.: Predictive analysis using big data analytics for sensors
used in fleet truck monitoring. Int. J. Eng. Technol. 8(2), 6 (2016)
25. Prytz, R.: Machine learning methods for vehicle predictive maintenance using off-board and
on-board data. Doctoral dissertation, Halmstad University Press (2014)
Author Index

A
Abdoune, Farah, 123
Ahmad, Bilal, 99
Allaoui, Hamid, 460
Alvarado-Valencia, Jorge Andrés, 151
André, Pascal, 385
Anton, Florin, 3, 53, 66
Anton, Silvia, 3, 66
Antons, Oliver, 193
Arias-Paredes, Gloria Juliana, 151
Arlinghaus, Julia C., 193
Attajer, Ali, 520
Azzi, Fawzi, 385

B
Babiceanu, Radu F., 3
Basson, Anton H., 81, 111, 135, 181, 299
Bekker, Anriëtte, 299
Bekrar, Abdelghani, 435, 460
Benelmir, Riad, 355
Berdal, Quentin, 313
Berger, Alexandre, 449
Berger, Thierry, 532
Berrah, Lamia, 231
Bertani, Filippo, 473
Bettayeb, Belgacem, 496
Bonte, Thérèse, 313
Borangiu, Theodor, 3, 53, 66
Bril El-Haouzi, Hind, 286, 355, 367
Brintrup, Alexandra, 421

C
Caillaud, Emmanuel, 246
Capawa Fotsoh, Erica, 169
Cardin, Olivier, 81, 123, 151, 169, 274, 385, 435
Castagna, Pierre, 123, 169
Chaabane, Sondès, 343, 520
Chargui, Tarik, 460
Chauvin, Christine, 313

D
Darmoul, Saber, 520
David, M., 398
Demartini, Melissa, 473
Demesure, Guillaume, 286
Derigent, William, 367, 398
Dosoftei, Cătălin, 41

E
Ebuy, Habtamu Tkubet, 355
Edouard, Aurélie, 449
El Majd, Badr Abou, 532
Essghaier, Fatma, 460

F
Fernandes, Florbela P., 262
Fortineau, Virginie, 449

G
Gely, Corentin, 327
Geraldes, Carla A. S., 262
Giret, Adriana, 435
Goncalves, Gilles, 460
Gonzalez-Neira, Eliana, 151
Grabot, Bernard, 508

© The Editor(s) (if applicable) and The Author(s), under exclusive license
to Springer Nature Switzerland AG 2021
T. Borangiu et al. (Eds.): SOHOMA 2020, SCI 952, pp. 543–544, 2021.
https://doi.org/10.1007/978-3-030-69373-2
H
Herakovič, Niko, 409, 485
Herrera, Manuel, 421
Huftier, Arnaud, 246
Human, C., 111

J
Jankovič, Denis, 409
Jimenez, Jose-Fernando, 151, 203
Joseph, A. J., 135

K
Klement, Nathalie, 496
Kolski, Christophe, 343
Kozhevnikov, Sergey, 215
Kruger, Karel, 81, 111, 135, 169, 181, 299

L
Lamouri, Samir, 449, 508
Lebrun, Yoann, 343
Leitão, Paulo, 99, 262
Lepreux, Sophie, 343
Louis, Anne, 496

M
Mallouk, Issam, 532
Mazar, Merouane, 496
McFarlane, Duncan, 367
Meza, Sebastian-Mateo, 203
Mohafid, Abdelmoula, 274
Morariu, Cristina, 3
Morariu, Octavian, 3
Murcia, Nicolas, 274

N
Nguyen, Angie, 508
Nouiri, Maroua, 123, 435

P
Pacaux-Lemoine, Marie-Pierre, 313, 327
Pănescu, Doru, 41
Pannequin, Rémi, 355
Parlikad, Ajith Kumar, 421
Pascal, Carlos, 41
Passano, Gianluca, 473
Pellerin, Robert, 508
Pires, Flávia, 99
Pontes, Joseane, 262
Proselkov, Yaniv, 421

R
Răileanu, Silviu, 3, 53, 66
Rault, Raphaël, 246
Redelinghuys, A. J. H., 81
Riane, Fouad, 520
Rossouw, Johan J., 181
Ruiz-Cruz, Carlos Rodrigo, 203

S
Sahnoun, M’hammed, 496
Sakurada, Lucas, 262
Sallez, Yves, 449, 520, 532
Sénéchal, Olivier, 327
Šimic, Marko, 409, 485
Skobelev, Petr, 215
Souza, Matheus, 99
Sparrow, Dale, 299
Svitek, Miroslav, 215

T
Taylor, Nicole, 299
Tonelli, Flavio, 473
Trentesaux, Damien, 151, 231, 246, 313, 327, 435, 460

U
Usuga-Cadavid, Juan Pablo, 508

V
Valette, Etienne, 286
Vispi, Nicolas, 343

W
Wan, H., 398

Z
Zupan, Hugo, 485
