Advances in Intelligent Systems and Computing 835
Intelligent Systems in Production Engineering and Maintenance
Advances in Intelligent Systems and Computing
Volume 835
Series editor
Janusz Kacprzyk, Polish Academy of Sciences, Warsaw, Poland
e-mail: kacprzyk@ibspan.waw.pl
The series “Advances in Intelligent Systems and Computing” contains publications on the theory, applications, and design methods of intelligent systems and intelligent computing. Virtually all disciplines are covered, such as engineering, the natural sciences, computer and information science, ICT, economics, business, e-commerce, the environment, healthcare, and the life sciences. The list of topics spans all areas of modern intelligent systems and computing, such as: computational intelligence; soft computing, including neural networks, fuzzy systems, evolutionary computing and the fusion of these paradigms; social intelligence; ambient intelligence; computational neuroscience; artificial life; virtual worlds and society; cognitive science and systems; perception and vision; DNA- and immune-based systems; self-organizing and adaptive systems; e-learning and teaching; human-centered and human-centric computing; recommender systems; intelligent control; robotics and mechatronics, including human-machine teaming; knowledge-based paradigms; learning paradigms; machine ethics; intelligent data analysis; knowledge management; intelligent agents; intelligent decision making and support; intelligent network security; trust management; interactive entertainment; and Web intelligence and multimedia.
The publications within “Advances in Intelligent Systems and Computing” are primarily proceedings of important conferences, symposia and congresses. They cover significant recent developments in the field, of both a foundational and an applicable character. An important characteristic of the series is its short publication time and worldwide distribution, which permits a rapid and broad dissemination of research results.
Advisory Board
Chairman
Nikhil R. Pal, Indian Statistical Institute, Kolkata, India
e-mail: nikhil@isical.ac.in
Members
Rafael Bello Perez, Universidad Central “Marta Abreu” de Las Villas, Santa Clara, Cuba
e-mail: rbellop@uclv.edu.cu
Emilio S. Corchado, University of Salamanca, Salamanca, Spain
e-mail: escorchado@usal.es
Hani Hagras, University of Essex, Colchester, UK
e-mail: hani@essex.ac.uk
László T. Kóczy, Széchenyi István University, Győr, Hungary
e-mail: koczy@sze.hu
Vladik Kreinovich, University of Texas at El Paso, El Paso, USA
e-mail: vladik@utep.edu
Chin-Teng Lin, National Chiao Tung University, Hsinchu, Taiwan
e-mail: ctlin@mail.nctu.edu.tw
Jie Lu, University of Technology, Sydney, Australia
e-mail: Jie.Lu@uts.edu.au
Patricia Melin, Tijuana Institute of Technology, Tijuana, Mexico
e-mail: epmelin@hafsamx.org
Nadia Nedjah, State University of Rio de Janeiro, Rio de Janeiro, Brazil
e-mail: nadia@eng.uerj.br
Ngoc Thanh Nguyen, Wroclaw University of Technology, Wroclaw, Poland
e-mail: Ngoc-Thanh.Nguyen@pwr.edu.pl
Jun Wang, The Chinese University of Hong Kong, Shatin, Hong Kong
e-mail: jwang@mae.cuhk.edu.hk
Editors
Intelligent Systems in Production Engineering and Maintenance
Editors

Anna Burduk
Faculty of Mechanical Engineering
Wrocław University of Science and Technology
Wrocław, Poland

Tomasz Nowakowski
Faculty of Mechanical Engineering
Wrocław University of Science and Technology
Wrocław, Poland
This Springer imprint is published by the registered company Springer Nature Switzerland AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Preface
We would like to thank all the authors for their presentations and the representatives of both industry and academia for participating in the lively discussions. Special thanks go to the members of the Program Committee for the reliable reviewing of the papers. We are also grateful to the Organizing Committee for their hard work during the conference preparation stage.
Executive Committee
Honorary Chair
General Chairs
Co-chairs
Organization
Program Committee
Organizing Committee
Organized by
Contents
1 Introduction
In the context of industrial mass production, e.g. the automotive industry, the term “prototype” describes an abstracted physical or digital model or a mock-up depicting selected properties of a specific product. Prototypes serve to assess and validate the respective product design in terms of functional, geometrical and aesthetic aspects. The feedback from testing with prototypes helps to adjust the initial product design at an early stage and thus minimizes the risk of costly design changes after the start of mass production. As the properties of a prototype may differ widely from those of the sellable product, the manufacturing processes and the utilized materials may vary as well, in order to be cost-efficient and rapidly available [1, 2].
Since the emergence of terms like “digital mockup” and “virtual prototyping” [3], the digitalization of prototype construction has become even more important, driven by recent technical improvements in virtual and mixed reality technologies (VR, MR) and their increasing acceptance in both private and professional contexts. These technologies visualize digital content to different extents and allow the user to interact with the displayed objects [4]. Since the construction and adaptation of physical prototypes is costly and time-consuming, numerous applications in academia and industry aim at reducing the number of physical prototypes by creating virtual prototypes with VR and MR technologies. Examples of the proven feasibility of these approaches are described by Rademacher [5] and Schreiber [6]. However, physical prototypes are still essential in today’s production facilities and are regarded as indispensable even beyond 2040 by Winkelhake [7].
Prototypes are built at different stages of the product development process and for different purposes, but prototype construction usually takes place before the start of serial production. It commonly involves a small-batch production of multiple versions of the same digital 3D component to build several slightly different pre-production parts or assemblies for assessment and validation. The prototype parts are usually manufactured either in-house or by an external supplier. In both cases, the parts are spot-checked in a randomized quality inspection on receipt, before further processing.
A critical factor for the quality inspection is process speed, as the individual parts must be rapidly available for further operations. Based on a project conducted with an automotive supplier, this paper addresses problems in the quality inspection of received parts in prototype construction facilities.
An initial step of the quality assessment is the comparison of the target geometry (as designed) with the part’s actual geometry (as manufactured) to detect potential deviations, e.g. a missing drill hole. In the considered scenario, inspection speed matters more than recognition accuracy. Therefore, an automated visual inspection of previously defined characteristic points is preferred over a reverse engineering approach, i.e. scanning and reconstructing the entirety of the part’s surface, or using mechanical probes [8].
An essential prerequisite for the visual inspection is the detection of defined characteristics on the physical prototype. In this context, the research field of computer vision (CV) addresses techniques and theoretical approaches covering image acquisition, processing and analysis, with one goal being the comparison of a given image with a reference image. This is achieved by recognizing elementary features, such as edges, corners, circles or straight lines, in the given image [9]. Following the detection of the features within an image, the interest locations are described through a feature descriptor for the subsequent comparison, in order to find a relationship between the given and the reference image. The detection and description of these salient points rest upon different mathematical methods, which can be computed by different algorithms, e.g. Features from Accelerated Segment Test (FAST), Speeded-up Robust Features (SURF), Scale-Invariant Feature Transform (SIFT) or Oriented FAST and Rotated BRIEF (ORB). The actual comparison of features in the given image and the reference is performed by feature matching algorithms, such as the Fast Library for Approximate Nearest Neighbors (FLANN) or Brute Force (BF) matching [10]. Each of these algorithms has strengths and weaknesses, which must be considered for the respective application.
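To make the matching step concrete, the following sketch implements brute-force (BF) matching with a Hamming distance over binary descriptors, as produced by BRIEF-style detectors such as ORB, together with Lowe’s ratio test for rejecting ambiguous matches. This is a minimal pure-Python illustration, not the authors’ implementation; the descriptor length and ratio threshold are illustrative assumptions.

```python
def hamming(d1: bytes, d2: bytes) -> int:
    """Hamming distance: number of differing bits between two descriptors."""
    return sum(bin(a ^ b).count("1") for a, b in zip(d1, d2))

def bf_match(query_descs, ref_descs, ratio=0.75):
    """Brute-force matching of binary feature descriptors (ORB/BRIEF style).

    query_descs, ref_descs: lists of equal-length byte strings, one 256-bit
    descriptor per detected feature. Returns (query_idx, ref_idx) pairs that
    pass Lowe's ratio test, i.e. whose best match is clearly better than the
    second-best (assumes at least two reference descriptors).
    """
    matches = []
    for qi, q in enumerate(query_descs):
        # Distance of this query descriptor to every reference descriptor.
        dists = sorted((hamming(q, r), ri) for ri, r in enumerate(ref_descs))
        (best_d, best_ri), (second_d, _) = dists[0], dists[1]
        # Ratio test: accept only unambiguous matches.
        if best_d < ratio * second_d:
            matches.append((qi, best_ri))
    return matches
```

In a library such as OpenCV, the same idea is provided ready-made (e.g. a BF matcher configured for Hamming distance); the loop above only spells out what such a matcher computes.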
With a clear focus on automated visual inspection in prototype construction facilities, this paper elaborates a concept for the detection of previously defined characteristics by means of CV. These characteristics may refer to product and manufacturing information (PMI) stored in the prototype’s 3D CAD document. In general, PMI is attached to a CAD model and documents a product with regard to design, manufacturing and inspection, including data about, e.g., dimensions, tolerances, manufacturing information or generic annotations [11]. As mixed reality is a proven facilitator for the context-sensitive visualization of and interaction with displayed information [12], this technology is considered as part of the proposed overall concept.
3D Geometry Recognition for a PMI-Based Mixed Reality 5
The related use case is derived from a project conducted with the prototype construction facility at Adient Ltd. Co. KG (Burscheid, Germany). Adient’s IT infrastructure supports and accelerates business processes, especially in production, which is imperative for an automotive supplier. The project entails investigations into digitalizing the prototype part receiving in the corresponding facility, with the purpose of reducing errors and speeding up the prototype construction process.
As mentioned above, the received prototype parts require a quality inspection before further processing, e.g. assembly and testing. Supported by common image processing and analysis methods, an automated visual inspection process is able to recognize objects and their positioning, to check completeness, shape, geometry and dimensions as well as the surface, and to detect defects [8].
The visual inspection process within the proposed overall concept focuses on the recognition of the actually received prototype parts. Therefore, the aim of the paper at hand is to find a way to automatically identify the actual part to be inspected, in order to provide the corresponding quality and inspection information for the specific prototype part.
Since the use case is derived from an industrial project, the general requirement is real-world usability. This implies a setup for the prototypical object recognition comparable to the productive one. Therefore, an installation with cameras mounted over a conveyor belt is ideal. In addition, specific requirements for the part recognition arise. The first requirement concerns the clear identification of a prototype part by reference to a previously defined set of potential parts. The second requirement is that the recognition should ideally be carried out based on 3D CAD geometry, or alternatively with reference camera images. Following a successful identification, the third requirement includes the automatic retrieval of a part’s respective data, as this is necessary for the subsequent visual quality inspection. Finally, high detection precision, repeat accuracy and a sufficient level of efficiency are mandatory, as the detection should offer real-time results.
3 Concept
The overall concept for the PMI-based mixed reality assistant system consists of several distinct modules. The main goal of the presented approach is an accelerated visual quality inspection of received parts in a prototype construction facility by means of computer vision (CV). To enable a quick analysis of the actual shape against the target shape, the inspection is carried out based on special, previously defined characteristics of the part’s geometry, instead of scanning and analyzing the entire surface of a part. As described in the introductory section, existing CV algorithms are able to detect salient elements within an image and compare them against a reference image. Therefore, the geometrical characteristics must be visible when transferred to the reference image; conversely, covered and hidden features are not suitable for a visual inspection. The concluding step of the overall concept is the visualization of the inspection results to the prototyping worker for a qualitative assessment.
6 M. Neges et al.
For a better overview, Fig. 1 illustrates only the major components of the overall concept. The previously described use case consists of four major process steps or modules, which, in turn, comprise several activities. This paper focuses on the partial aspects of the part recognition, labelled as module 1 and module 2 (Fig. 1).
The first step (Fig. 1) begins with the image acquisition of the specific prototype part. Accordingly, the physical input for this module is the respective part itself. As mentioned in the requirements section, the part recognition works on an installation equipped with cameras and mounted over a conveyor belt for an in-line supply of the prototype parts. The output of this step is an image of the actual part.
Whereas step 1 can be considered a preparation, the second module entails the main activities for the CV-based part recognition. This step requires comparative data, i.e. different images, for the evaluation of similarities within these pictures. On the one hand, there are the previously taken pictures of the actual prototype part from module 1; on the other hand, 3D geometry models or, alternatively, reference images serve as potential “objects” to be identified (Fig. 1). The desired result of this module is the successful identification of the actual prototype part, including an exact mapping to the part-identifying information, e.g. a prototype part identification number (ID).
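The mapping from feature matches to a part ID can be sketched as a simple scoring rule: count the good matches against each candidate part’s reference data and accept the leader only if it is unambiguous. The thresholds below are illustrative assumptions, not values from the paper.

```python
def identify_part(match_counts, min_matches=10, margin=1.5):
    """Resolve an acquired image to one prototype part ID.

    match_counts: dict mapping a candidate part ID to the number of good
    feature matches between the acquired image and that part's reference
    data (camera images or rendered views of the 3D CAD geometry).
    Returns the winning part ID, or None when the result is ambiguous:
    too few matches overall, or no clear lead over the runner-up.
    """
    ranked = sorted(match_counts.items(), key=lambda kv: kv[1], reverse=True)
    if not ranked or ranked[0][1] < min_matches:
        return None
    if len(ranked) > 1 and ranked[0][1] < margin * ranked[1][1]:
        return None  # runner-up too close: refuse rather than misidentify
    return ranked[0][0]
```

Refusing near-ties is a deliberate choice here: in a receiving scenario it is cheaper to re-image an ambiguous part than to attach the wrong inspection data to it.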
Based on the identification (step 2), the correlating input documents for the third module are provided. This novel approach to visual quality inspection utilizes both geometric data and product and manufacturing information (PMI) derived from 3D CAD documents (Fig. 1). These characteristics serve as reference points, which need to be detected in the previously acquired images and checked for the purpose of an automated visual inspection. An example of a geometric reference in this context may be a drill hole feature in the respective 3D CAD model, which must exist in the physical part as well. Additional reference points can be extracted from the annotations and inspection information in the model’s PMI data. A precondition for this approach is the prior enrichment of the 3D CAD models with the required information in a suitable way, e.g. through a PLM system. The result of this module is information about deviations between the physical part and the 3D CAD model, e.g. a missing drill hole.
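A minimal sketch of this deviation check, assuming the PMI-derived reference points and the detected features have already been projected into a common 2D image coordinate frame (a registration step the paper does not detail); the characteristic IDs and the pixel tolerance are illustrative assumptions:

```python
import math

def find_missing_characteristics(expected, detected, tol=2.0):
    """Report PMI reference points with no matching detection.

    expected: dict mapping a characteristic ID (e.g. a drill-hole label from
    the CAD model's PMI) to its expected (x, y) position in the image.
    detected: iterable of (x, y) feature positions found by the vision step.
    tol: acceptance radius in pixels (an illustrative assumption).
    Returns the IDs of characteristics missing on the physical part.
    """
    detected = list(detected)
    missing = []
    for cid, (ex, ey) in expected.items():
        # A characteristic counts as present if any detection lies within tol.
        hit = any(math.hypot(ex - dx, ey - dy) <= tol for dx, dy in detected)
        if not hit:
            missing.append(cid)
    return missing
```

The returned ID list is exactly the deviation information this module hands to the MR visualization in module 4.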
The fourth module (Fig. 1) addresses an assistant system using mixed reality (MR) visualization to support the prototyping worker in the quality assessment and decision-making in an intuitive and interactive way. The deviation information resulting from step 3 is used to visualize the delta between the part and the CAD model. The worker can then generate a deviation report that bundles all gathered information.
As one requirement for this approach concerns real-world usability, the integration into an existing IT infrastructure is mandatory. Therefore, the necessary data for steps 2 to 4 are provided by a central data source, e.g. a PLM system.
As mentioned before, the goal of the paper at hand is the creation of a module for the recognition of three-dimensional objects as part of a holistic concept for automated visual inspection and measurement data management in prototype construction.
For this purpose, the authors created the following demonstrator hardware configuration in the laboratory, visible in Fig. 2. The demonstrator consists of four Raspberry Pi 3 units, three of them with Raspberry Pi Camera Modules v2 acting as autonomous camera systems and the fourth as the host for the computation and desktop application. The authors mounted the three camera systems in a tunnel spanning the conveyor belt, which transports the prototype parts into the cameras’ field of view. The camera systems provide a live video stream to the computation unit. The prototype parts in this demonstrator scenario are randomly picked items from the university’s workshops, consisting of cogwheels, shafts and other mechanical parts or assemblies.
4.3 Evaluation
Following the hardware installation and the prototypical software implementation, the authors tested the entire recognition process, i.e. the detection of the actual part or assembly. Besides accurate part recognition, the run time of the recognition process is a significant value that should be minimal in order to offer real-time results. As mentioned in the introductory section, the different detection, description and matching algorithms should be evaluated against each other within the given use case, considering recognition accuracy and speed in particular. Therefore, the recognition of one out of five different prototype parts, with ten reference images each, was tested with the following feature detection and description algorithms regarding their run time:
• Features from Accelerated Segment Test (FAST),
• Speeded-up Robust Features (SURF),
• Scale-Invariant Feature Transform (SIFT),
• Oriented FAST and Rotated BRIEF (ORB).
Based on these algorithms, the authors also tested the following matching algorithms regarding their run time and the overall detection accuracy:
• Fast Library for Approximate Nearest Neighbors (FLANN),
• Brute Force (BF).
Table 1 summarizes the results of the performed tests, including the measured run times and the accuracy of the used feature matching algorithms. As indicated in the table, the FAST algorithm does not support usage as a feature descriptor. Regarding the run times for feature detection and description, ORB and SURF stand out, though the feature matching and overall accuracy must be considered as well. The lowest run time of the feature matching algorithms is achieved with ORB, but its recognition accuracy declines to under 50%, which is unacceptable for this use case. Therefore, the authors used a combination of the SURF and SIFT algorithms in the prototypical implementation for the detection and description.
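A run-time comparison of this kind can be reproduced with a small timing harness; the detector callables below are placeholders for library bindings of FAST, SURF, SIFT or ORB (hypothetical wiring, not the authors’ benchmark code):

```python
import time

def time_detector(detect, images, repeats=3):
    """Best-of-N wall-clock time per image for one detector.

    detect: a callable taking one image, e.g. a wrapper around a feature
    detector from an imaging library (placeholder here).
    Taking the best of several repeats damps scheduling noise.
    """
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        for img in images:
            detect(img)
        best = min(best, time.perf_counter() - start)
    return best / len(images)

def rank_detectors(detectors, images):
    """Return (name, seconds_per_image) pairs, fastest first."""
    results = [(name, time_detector(fn, images))
               for name, fn in detectors.items()]
    return sorted(results, key=lambda r: r[1])
```

Feeding the same reference-image set to every detector, as the authors did, keeps the per-image averages directly comparable.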
The subsequent step is the evaluation of the four above-mentioned use cases based on the same algorithm combination. Scenarios 1 and 2 (recognition by 3D CAD geometry and by reference image) do not show significant differences and deliver nearly the same detection accuracy. In scenario 3 (repetition accuracy), the same part was placed under the camera ten times; each time, the part was successfully detected. In the last scenario, regarding precision and exactitude, cogwheels of different sizes were placed under the camera. Due to the utilized edge detection, the particular cogwheel was detected correctly.
5 Conclusion
The authors have shown that the approach at hand enables three-dimensional object recognition by means of existing computer vision algorithms, utilizing both 3D CAD geometry and camera images as reference objects. The presented approach is only the foundational part of a holistic concept for automated visual inspection and measurement data management in prototype construction. Future work will consider the extraction of product and manufacturing information (PMI) and geometrical characteristics from a 3D CAD model as reference points for an automated visual quality inspection within prototype construction. Requirements and interfaces for the viability of the concept, such as a suitable enrichment of the models with information, will also be considered in future work.
Furthermore, the authors seek to implement a mixed-reality-based assistant system for the interactive visualization of the inspection results, such as deviations, to enable the prototyping shop worker to perform a qualitative assessment of the prototype parts.
References