Semester VII
Chapter Name: Artificial Intelligence and Data Analytics for Manufacturing
In general, AI systems work by ingesting large amounts of labeled training data, analyzing the data
for correlations and patterns, and using these patterns to make predictions about future states. In
this way, a chatbot that is fed examples of text chats can learn to produce lifelike exchanges with
people, or an image recognition tool can learn to identify and describe objects in images by
reviewing millions of examples.
Learning processes. This aspect of AI programming focuses on acquiring data and creating rules
for how to turn the data into actionable information. The rules, which are called algorithms,
provide computing devices with step-by-step instructions for how to complete a specific task.
Reasoning processes. This aspect of AI programming focuses on choosing the right algorithm to
reach a desired outcome.
While the huge volume of data being created on a daily basis would bury a human researcher, AI
applications that use machine learning can take that data and quickly turn it into actionable
information. As of this writing, the primary disadvantage of using AI is that it is expensive to
process the large amounts of data that AI programming requires.
Advantages
Good at detail-oriented jobs;
Reduced time for data-heavy tasks;
Delivers consistent results; and
AI-powered virtual agents are always available.
Disadvantages
Expensive;
Requires deep technical expertise;
Limited supply of qualified workers to build AI tools;
Only knows what it's been shown; and
Lack of ability to generalize from one task to another.
Machine vision. This technology gives a machine the ability to see. Machine
vision captures and analyzes visual information using a camera, analog-to-digital conversion
and digital signal processing. It is often compared to human eyesight, but machine vision isn't
bound by biology and can be programmed to see through walls, for example. It is used in a
range of applications from signature identification to medical image analysis. Computer
vision, which is focused on machine-based image processing, is often conflated with machine
vision.
Natural language processing (NLP). This is the processing of human language by a
computer program. One of the older and best-known examples of NLP is spam detection,
which looks at the subject line and text of an email and decides if it's junk. Current approaches
to NLP are based on machine learning. NLP tasks include text translation, sentiment analysis
and speech recognition.
Robotics. This field of engineering focuses on the design and manufacturing of robots.
Robots are often used to perform tasks that are difficult for humans to perform or perform
consistently. For example, robots are used in assembly lines for car production or by NASA to
move large objects in space. Researchers are also using machine learning to build robots that
can interact in social settings.
Self-driving cars. Autonomous vehicles use a combination of computer vision, image
recognition and deep learning to build automated skill at piloting a vehicle while staying in a
given lane and avoiding unexpected obstructions, such as pedestrians.
AI in healthcare. The biggest bets are on improving patient outcomes and reducing costs.
Companies are applying machine learning to make better and faster diagnoses than humans. One
of the best-known healthcare technologies is IBM Watson. It understands natural language and can
respond to questions asked of it. The system mines patient data and other available data sources to
form a hypothesis, which it then presents with a confidence scoring schema. Other AI applications
include using online virtual health assistants and chatbots to help patients and healthcare
customers find medical information, schedule appointments, understand the billing process and
complete other administrative processes. An array of AI technologies is also being used to predict,
fight and understand pandemics such as COVID-19.
AI in business. Machine learning algorithms are being integrated into analytics and customer
relationship management (CRM) platforms to uncover information on how to better serve
customers. Chatbots have been incorporated into websites to provide immediate service to
customers. Automation of job positions has also become a talking point among academics and IT
analysts.
AI in education. AI can automate grading, giving educators more time. It can assess students and
adapt to their needs, helping them work at their own pace. AI tutors can provide additional support
to students, ensuring they stay on track. And it could change where and how students learn,
perhaps even replacing some teachers.
AI in law. The discovery process -- sifting through documents -- in law is often overwhelming for
humans. Using AI to help automate the legal industry's labor-intensive processes is saving time
and improving client service. Law firms are using machine learning to describe data and predict
outcomes, computer vision to classify and extract information from documents and natural
language processing to interpret requests for information.
AI in banking. Banks are successfully employing chatbots to make their customers aware of
services and offerings and to handle transactions that don't require human intervention. AI virtual
assistants are being used to improve and cut the costs of compliance with banking regulations.
Banking organizations are also using AI to improve their decision-making for loans, and to set
credit limits and identify investment opportunities.
AI in transportation. In addition to AI's fundamental role in operating autonomous vehicles, AI
technologies are used in transportation to manage traffic, predict flight delays, and make ocean
shipping safer and more efficient.
Some researchers maintain that such technologies will play a role in making AGI a reality and that we should reserve the use of the term AI for this kind of general intelligence.
The fusion of technologies has added new dimensions to innovation, fueled by the industry's need to stay ahead and offer customers something new. IIoT and augmented reality are a match made in heaven.
IoT is all about gathering data and turning it into information, and augmented reality is all about delivering that information to the people doing the work in a form that lets them do more with it. IIoT in conjunction with big data analytics results in a more effective and efficient use of resources. Service companies can leverage IIoT-based solutions by helping their technicians monitor and assess issues before visiting the customer's location. IIoT can help bridge the demand-supply gap for businesses, especially small and medium enterprises, through the integration of inventory management and customer relationship management systems. To bridge that gap, companies will have to bring organized data into the business systems already in use, so the data can easily be accessed and analyzed.
IIoT and augmented reality have opened a domain of their own, and the applications are widespread across industries. IIoT and augmented reality can be applied to any industry:
Automotive – A digital helmet displays a health checkup of the automotive components used in the industry. Caterpillar is using augmented reality technology for predictive maintenance.
Media and Telecom – IIoT and augmented reality open a new avenue for innovative media advertisement. Augmented reality holds something for people of every age. The telecom industry also uses IIoT and augmented reality for predictive maintenance.
Healthcare – Pharma companies can provide more innovative drug information. Nurses can find veins more easily in obese patients with augmented reality. Patients can describe their symptoms better through AR.
Retail & Consumer – AR and IIoT add proximity, presence, and interaction to the buying experience. Seeing the various color options and other modifications helps customers modify or customize their selections. Visualizing and understanding products and features also adds to the benefits of IIoT and AR.
Real Estate and Public Sector – Builders and civil engineers use it to design their buildings exactly as presented in AR. Builders use it to attract customers by giving them a real-time feel of their projects. It is widely used in the public sector for city planning and construction.
Energy and Mining - Augmented and virtual reality and IIOT technologies have the
potential to improve mine productivity, reduce equipment maintenance costs, and protect
personnel.
Financial Services – The existing brick-and-mortar bank branch services can be enhanced with AR and IIoT. 3D visualization and mixed reality enrich financial traders' user experience. VR provides an immersive learning experience to educate children about money.
Hospitality and Leisure – AR and IIoT can help people decide on their next travel and tourism destination by visiting the place virtually, and can help them decide which places to visit according to their liking.
Manufacturing – IIoT enables the acquisition, transmission, and accessibility of far greater amounts of data, at far greater speeds, and far more efficiently than before. This can be a great support for modern manufacturing units. Some companies have already started to implement the IIoT by leveraging intelligent, connected devices in their factories.
The terms “intelligent product” or “intelligent object” are commonly used within the context of
industrial production.
The Oxford Dictionary defines an intelligent device as having the ability "to vary its state or action in response to varying situations, varying requirements, and past experience." An intelligent product is typically characterized by:
• unique identification,
• the capability to communicate with its environment,
• the ability to store and retain data,
• the use of a language to display its properties and requirements, and
• the capability to participate in decision-making about itself.
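Purely as an illustration, the five properties above can be sketched as a minimal Python class; all names and behaviors are hypothetical stand-ins, not an actual industrial implementation:

```python
from dataclasses import dataclass, field

@dataclass
class IntelligentProduct:
    """Hypothetical model of an intelligent object with the five properties above."""
    uid: str                                    # unique identification
    memory: dict = field(default_factory=dict)  # ability to store and retain data

    def communicate(self, message: str) -> str:
        # capability to communicate with the environment (stubbed as an echo)
        return f"{self.uid}: {message}"

    def describe(self) -> dict:
        # a "language" to display its own properties and requirements
        return {"id": self.uid, "stored_keys": sorted(self.memory)}

    def decide(self, options: list) -> str:
        # participation in decision-making about itself (trivial policy: first option)
        return options[0]
```

A real intelligent product would replace these stubs with fieldbus or network communication and an actual decision policy.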
Three axes with different dimensions are defined in the figure: the level of intelligence, the location of intelligence, and the aggregation level of intelligence.
The level of intelligence comprises three categories, from information handling and
problem notification up to decision making. The dimension of the aggregation level opens
a new perspective on product intelligence. A differentiation between an intelligent item
(knowledge only about the object itself) and an intelligent container (knowledge about the
included components) is represented in this axis. The third dimension comprises the location of the intelligence and distinguishes between intelligence residing in the network and intelligence at the object itself.
A real-time system is therefore a system that is able to process a specific task within a specific time window. In this case, it is not crucial to finish a task as soon as possible; it is more important to finish by a fixed, predefined time constraint. The execution of a task is fulfilled in time if the corresponding time requirement has been complied with.
In addition, a distinction is made between hard and soft real-time requirements, as seen in the figure. In a periphery with a hard real-time condition (A), an unmet target date (d) leads to an unacceptable error, see Fig. 3. With respect to production, hard real-time conditions, such as turning, can be found at the shop floor level. The sum of the start date (r) and the delta (Δe) must always be less than or equal to the target date (d) in order to meet the hard real-time requirement, as shown in the formula:
Formula: A = r + Δe ≤ d
Soft real-time conditions allow the time limit given by the target date (d) to be exceeded. The system tolerates this condition and will continue to function, see Fig.
The longer the time limit is exceeded, the greater the negative impact on the system.
In production, we find some soft real-time conditions. One example is when a user starts a process and there is a delay between the user start time and the process start time.
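The hard and soft real-time conditions can be expressed directly in code. The following minimal sketch checks the inequality r + Δe ≤ d and models a soft-real-time penalty that grows with the exceedance; the function names are hypothetical:

```python
def meets_hard_deadline(r: float, delta_e: float, d: float) -> bool:
    """Hard real-time: a task started at r and taking delta_e must finish
    no later than the target date d, i.e. r + delta_e <= d."""
    return r + delta_e <= d

def soft_penalty(r: float, delta_e: float, d: float) -> float:
    """Soft real-time: exceeding d is tolerated, but the negative impact
    grows with the amount by which the deadline is missed."""
    return max(0.0, (r + delta_e) - d)
```

For example, a task starting at r = 0 with Δe = 5 meets a deadline of d = 10, while one with Δe = 12 misses it by 2 time units under the soft condition.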
Functional requirements:
Characteristics:
An embedded system must determine the information flow and provide an essential function in the process.
Either horizontal or vertical communication needs to be implemented with the CPS. This means the exchange of information is not limited to the CPS itself; communication with other devices on the shop floor or with top-level systems such as the PLC and MES is also required.
Decentralized control of sensors and actuators must be implemented.
Self-description, i.e. knowledge about its own status, possible reference functionality, and its internal data, must be incorporated in the CPS.
Classification scheme:
All three dimensions of the classification model must be applicable to the CPS.
Real-time characteristics:
Adequate communication technologies must be chosen in accordance with the real-time
requirements of the process.
Intelligent Chuck for Turning Machine
With the intelligent chuck, a direct data exchange between product and chuck can be implemented. As a result, setup times can be reduced. Furthermore, this self-regulated turning process allows higher product quality and minimal component damage. In the vertical hierarchy, the intelligent chuck is fully integrated into the ME system through the use of OPC UA (Open Platform Communications Unified Architecture) and Ethernet. The ME system thus represents the centralized information platform that enables situational control decisions and order scheduling.
9. Explain CPPS for Digital Production.
Cyber-Physical Production Systems (CPPS) consist of autonomous and cooperative elements and
subsystems that are connected based on the context within and across all levels of production,
from processes through machines up to production and logistics networks (Monostori et al., 2016),
with three main characteristics that describe them:
• Intelligence (smartness), i.e. the elements are able to acquire information from their surroundings
and act autonomously and in a goal-directed manner
• Connectedness, i.e. the ability to set up and use connections to the other elements of the system –
including human beings – for cooperation and collaboration, and to the knowledge and services
available on the Internet
• Responsiveness towards internal and external changes.
CPPS (see Figure) possess a number of properties that are common to ubiquitous context-aware systems, and recognize that human-machine-process-logistics connectivity across all levels of production can lead to a variety of key applications.
CPPS are independent systems that are able to autonomously gather and process data,
communicate them to their environment, and make decisions based upon this information. Cyber-physical sensor systems convert a physically measured quantity into digital information, allowing the signal to be processed by an algorithm under the influence of external, time-variable information.
The cyber-physical product (CPP) has two parts: the mechanical part and the virtual part. The mechanical part is the actual physical system that is manufactured to perform a function, such as an automobile transmission, an air bag, an ABS system, a smart energy meter, or a traffic light controller. The products are equipped with a single-chip microcontroller that is embedded within the product. The controller has sensors, actuators, a unique address, and an access code. It can be accessed remotely from anywhere at any time to monitor, control, and track the product from the day of manufacturing to the day of recycling.
The virtual part is the digital twin. A tiny microcontroller attached to the product collects the location and the dynamic status of the product, enabling the manufacturer to track the products' location and health status for better performance and predictive maintenance.
Besides the actual process monitoring, each cutting process also needs consideration of supporting
activities and all items that ensure this support. They can be classified as consumables.
Consumables include water-mixed cooling lubricants, oils in cutting and forming operations, dielectrics in electrical discharge machining, and electrolytes in electrochemical machining operations conducted on metallic materials. They can be characterized in laboratories and tested either automatically or manually, at regular or irregular intervals. However, this
characterization is usually done offline and not continuously, so that a direct feedback of its
condition to the part quality is not possible. Yet there is ample evidence that the condition of those
consumables has tremendous effect on the quality of the manufactured products and on the
productivity of the manufacturing process. Status data are not available for direct processing in
models. One reason can be seen in the difficult and costly gathering of this data. A solution is
given by the use of model-based, miniaturized analysis systems, such as lab-on-a-chip systems.
They permit consumables to be characterized in terms of age, chemical composition or
contamination, and are currently under development. As they work online and in parallel to the running process, they might be used in a real production environment and allow a direct link to the influence the consumable exerts on the production system. As all sensors are integrated onto one platform with an internal data-processing unit, they can be defined as CPPS.
An advantage of technology apps for mobile devices is their broad acceptance in the workforce. Mobile devices are so ubiquitous in society that virtually all employees are accustomed to using apps. This in itself provides enormous potential for improving communication in a digital production environment. Using those IT tools supports the distribution as well as the exchange of information in the production environment. The utility of social apps lies largely in their intuitive use and in their being designed for specific purposes. To
make tech apps as powerful as social apps, these aspects need to be considered when designing
them. Apps should have a slim user interface and be designed for specific technological purposes.
However, to ensure that the worker is not overwhelmed with manifold apps, different
technological aspects may be merged into the same app, as long as they can be called upon via the
same user interface.
While there is a collection of models that calculate the cutting force in a process with a geometrically defined cutting edge, complex processes cannot be reproduced using simple tools. Machine operators certainly cannot be expected to work out which process boundary conditions prevail at a given time, or to draw conclusions about the required cutting force or torque, without suitable models and software support.
Tech apps, however, can solve this problem very efficiently. They generate enormous added value by drawing on complex models stored in a database or in a cloud. These values provide set points for designing the process, which the installed CPS can compare with the data supplied by its integrated sensors. With a suitable visualization app, this comparison can be transferred directly to the worker, who can then exploit this information.
Several components of a CPS represent its foundational domain knowledge and the information available at run-time. By our definition, the task-level executive is what makes a CPS an intelligent system and provides the necessary flexibility to execute the suitable action in different situations.
Domain Model. The domain model encodes everything known a priori about the environment
(e.g., expected objects, places) and agents within (e.g., possible tasks or actions). A domain model
is particularly relevant for knowledge-based and planning systems.
World Model. To reason about the current situation, to plan into the future what steps to take to
achieve a certain goal, and to determine what to do next, the current knowledge about the
environment must be represented in a world model (WM).
Task Determination. With the world model as a basis, the task-level executive can distinguish different situations in order to choose a suitable task. Thus, certain situations have to be encoded in terms of the knowledge in the world model, determining what goals to pursue, which tasks to achieve, and which actions to take.
Multi-Robot Coordination. Regardless of which concept is used for task determination, it is important to consider parallel execution by multiple mobile robots, as it allows the production process to be sped up and scaled by increasing the number of robots. Furthermore, a group of robots is more flexible and robust than a single one because it can, for example, compensate for a robot that needs maintenance and is thus unavailable.
When considering a group of robots, their task determination and execution has to be coordinated.
Some method of synchronization is required to avoid conflicts. First, robots need to communicate
their current intentions to avoid redundant work of multiple robots trying to achieve the same goal.
Second, robots need to allocate resources like processing stations for exclusive use by a certain
robot during a specified time interval.
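The second coordination requirement, allocating a station for exclusive use during a time interval, can be sketched as a simple reservation table. The class and method names below are hypothetical; a real system would distribute this state over the wifi link rather than keep it in one process:

```python
import threading

class StationAllocator:
    """Sketch of exclusive resource allocation: a robot reserves a processing
    station for a time interval; overlapping requests are rejected."""

    def __init__(self):
        self._lock = threading.Lock()
        self._reservations = {}   # station -> list of (robot, start, end)

    def reserve(self, station, robot, start, end):
        with self._lock:          # serialize concurrent reservation attempts
            for _, s, e in self._reservations.get(station, []):
                if start < e and s < end:   # the intervals overlap
                    return False
            self._reservations.setdefault(station, []).append((robot, start, end))
            return True
```

For example, once robot R1 holds station CS1 for the interval [0, 10), a request by R2 for [5, 15) is rejected, while [10, 20) succeeds.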
Fault Tolerance. To preserve the autonomy of a CPS in case of faults, which are usually
impossible to preclude completely, it is important to have fault tolerance in the task-level
executive. Failures that can arise are an incorrect world model, e.g., due to uncertainty in sensor
data or wrong perception results, or unrecognized problems in the physical actions of the robot,
e.g., a robot dropping a work-piece while driving. The fault has to be recognized as soon as
possible to prevent a tail of counterproductive decisions based on wrong knowledge; early
recognition simplifies the identification of what went wrong. Fault tolerance also means to be able
to compensate for robots undergoing maintenance or being removed from the fleet.
The RCLL competition takes place on a field of 12 m × 6 m partially surrounded by walls (Fig.).
Two teams of up to three robots each are playing at the same time. The game is controlled by the
referee box (refbox), a central software component for sending orders to the robots, controlling the
machines and collecting teams’ points. Additionally, log messages and game reports are sent to the
refbox, which allows for detailed game analysis and benchmarks. It is also used in a simulation of
the RCLL. After the game is started, no manual interference is allowed, robots receive instructions
only from the refbox and must act fully autonomously. The robots must plan and coordinate their
actions to efficiently fulfill their mission (cf. for a characterization of the RCLL as a planning
domain). The robots communicate among each other and with the refbox through wifi.
Communication delays and interruptions are common and must be handled gracefully.
Each team has an exclusive set of six machines of four different types of Modular Production
System (MPS) stations. The refbox randomly assigns a zone of 2 m × 1.5 m to each station
(position and orientation also are random within the zone). Each station accepts input on one side
and provides processed work pieces on the opposite side. Machines are equipped with markers that
uniquely identify the station and side. A signal light indicates the current status, such as “ready,”
“producing,” and “out-of-order.”
Fig. Teams Carologistics (robots with additional laptop) and Solidus (pink parts) during the RCLL finals at RoboCup 2015
CLIPS-Based Agent Program
The first and most sophisticated approach is based on the rule-based system CLIPS. The agent has been used successfully for several years in the RCLL. It evaluates available information, decides on actions to take to achieve the desired goal, and executes and monitors those actions. In general, it encodes certain situations; whenever a robot is idle, it evaluates whether any of them applies and then decides on the next action to take. It bases this decision on its internal world model, which is shared with the other robots.
CLIPS is a rule-based production system using forward-chaining inference based on the Rete algorithm, consisting of three building blocks: a fact base or working memory, the knowledge base, and an inference engine.
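CLIPS rules are written in CLIPS's own rule language; purely to illustrate the three building blocks (fact base, knowledge base, inference engine), here is a minimal forward-chaining sketch in Python with invented facts and a single invented rule:

```python
# Fact base (working memory): what is currently believed about the world.
facts = {("robot", "R1", "idle"), ("order", "O1", "pending")}

# Knowledge base: each rule pairs a condition over the fact base with
# the facts to assert when it fires.
rules = [
    (lambda f: ("robot", "R1", "idle") in f and ("order", "O1", "pending") in f,
     {("task", "R1", "O1")}),
]

# Inference engine: fire rules until no new facts can be derived.
changed = True
while changed:
    changed = False
    for condition, conclusions in rules:
        if condition(facts) and not conclusions <= facts:
            facts |= conclusions
            changed = True
```

A production-quality engine such as CLIPS avoids re-evaluating every rule on every cycle by using the Rete algorithm, which incrementally matches rules against changes in the fact base.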
Open PRS
The Procedural Reasoning System (PRS) is a high-level control and supervision framework to
represent and execute plans and procedures in dynamic environments. It is based on the belief-
desire-intention (BDI) model. A PRS kernel has three main elements: a database containing facts
representing the belief about the world, a library of plans (or procedures) that describe a particular
sequence or policy to achieve a certain (sub-) goal, and a task graph which is a dynamic set of
tasks currently executing.
As part of a lab course two strategies were implemented using Open PRS. Students were given the
rules of the game, the simulation, a basic Open PRS integration, and the CLIPS agent as an
example. The task was then to design and implement the overall behavior and coordination of the
fleet.
Big Data and Machine Learning have become the reason behind the success of various industries. Both technologies are becoming more popular every day among data scientists and professionals. Big data is a term used to describe large, hard-to-manage volumes of structured and unstructured data, whereas Machine Learning is a subfield of Artificial Intelligence that enables machines to learn and improve automatically from experience and past data.
Most companies use Machine Learning and big data technologies together, because it is difficult for companies to manage, store, and process the collected data efficiently; Machine Learning helps them do exactly that.
Before going deeper into these two technologies, we will give a quick introduction to Big Data and Machine Learning and then discuss the relationship between them.
Big Data is defined as large or voluminous data that is difficult to store and also cannot be
handled manually with traditional database systems. It is a collection of structured as well as
unstructured data.
Big data is a very vast field for anyone who is looking to make a career in the IT industry.
Machine Learning is one of the most crucial subsets of Artificial Intelligence in the computer science field. It refers to the study of automated data-processing and decision-making algorithms that improve themselves automatically based on experience.
It makes systems capable of learning automatically and improves from experience without being
explicitly programmed. The primary aim of a machine learning model is to develop computer
programs that can access data and use it for learning purposes.
With the rise in Big Data, Machine Learning has become a key player in solving problems in
various areas such as:
o Image recognition
o Speech Recognition
o Healthcare
o Finance and Banking industry
o Computational Biology
o Energy production
o Automation
o Self-driven vehicle
o Natural Language Processing (NLP)
o Personal virtual assistance
o Marketing and Trading
o The education sector, etc.
Machine Learning:
Machine learning is a field in which machines can 'learn' without explicit programming.
Evolved from the study of pattern recognition and computational learning theory in artificial intelligence, machine learning explores the study and construction of algorithms that can learn from and make predictions on data. Such algorithms overcome strictly static program instructions by making data-driven predictions or decisions through building a model from sample inputs.
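To make "building a model from sample inputs" concrete, here is a minimal sketch that fits a straight line to toy data by ordinary least squares; the data and variable names are invented purely for illustration:

```python
# Sample inputs: observations of y = 2x + 1 (noiseless, for clarity).
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [2.0 * x + 1.0 for x in xs]

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# "Building a model from sample inputs": ordinary least squares
# recovers the slope and intercept from the data alone.
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
intercept = mean_y - slope * mean_x

# The fitted model now makes data-driven predictions on unseen inputs.
prediction = slope * 5.0 + intercept   # expect 11.0 for this noiseless data
```

Real machine learning models are far richer than a fitted line, but the principle is the same: parameters are estimated from data rather than hand-coded as static instructions.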
Machine Learning solutions for Cyber-Physical Systems (CPSs) in a Smart Factory are outlined using production plants as an example. The increasing complexity of production plants is still a pressing issue within the industry. Due to an increasing number of product variants, increasing product complexity, and increasing pressure for efficiency in a distributed and globalized production chain, production systems are evolving rapidly: they are becoming modular, can be parameterized, and comprise a growing set of sensors.
Due to the enhancements of the Internet of Things (IoT) and sensor deployments, the production of big data in the Industrial Internet of Things (IIoT) has increased. Accessing and processing big data becomes a challenging issue due to limited storage space, computational time, networking, and constraints at the IoT device end. IoT and big data are considered key concepts when describing new information architecture projects. The techniques, tools, and methods that provide better solutions for IoT and big data can play an important role in the architecture of a business. Different approaches have been practiced in the literature for evaluating the role of big data in IIoT, but these techniques do not handle situations where complex dependencies arise among the parameters of the alternatives. The proposed research uses the Analytic Network Process (ANP) approach for evaluating the role of big data in IIoT.
Werner Vogels, Amazon’s chief technology officer, told the BBC: "You can never have
too much data—bigger is definitely better. The more data you can collect the finer-grained
the results can be.”
For companies, this means diverse challenges. One challenge is the interconnection of the data: a huge share of the data volumes exists without any connection to other data. A challenge for Big Data concepts is therefore to connect data in order to gain competitive advantages and savings, and to form new business.
The best-known applications of Big Data involve customer data on the internet (Google, Facebook, Amazon, and others), but Big Data in manufacturing holds similar potential. Every kilowatt hour used, every screw produced, every car, even each switching of a proximity sensor and each change of a temperature sensor generates raw data that holds enormous potential if it is stored and made available for intelligent analysis. The acquisition, handling, and analysis of these data present several challenges.
One of the most used Big Data platforms is the Hadoop ecosystem. A typical Big Data
platform is structured as follows: The CPS is connected via a standardized interface (e.g.
by OPC UA) with a Hadoop ecosystem. Hadoop itself is a software framework for
scalable, distributed computing. The process data is stored in a non-relational database
(HBase), which is based on a Hadoop Distributed File System (HDFS). In addition to
HBase, a time series database OpenTSDB serves as an interface for data analysis. This
database provides simple statistical functions such as mean values, sums or differences
that are usually not available in a non-relational data storage.
The interfaces of OpenTSDB or Hadoop thus enable data analysis directly on the storage
system. Because the algorithms can process the data locally, an entire historical dataset does not need to be loaded into a single computer system. Via a web interface, both
the data and the calculated results can be visualized (e.g. using Grafana). Figure 1
illustrates the architecture of a Big Data platform using a Hadoop ecosystem.
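As a sketch of how such an analysis interface is typically queried, the snippet below builds a request payload in the shape OpenTSDB's /api/query endpoint expects; the metric and tag names are hypothetical, and the HTTP call itself is only described in a comment:

```python
import json

# Hypothetical metric/tag names; the payload shape follows OpenTSDB's /api/query.
query = {
    "start": "1h-ago",                    # relative start of the time window
    "queries": [{
        "aggregator": "avg",              # simple statistical function on the series
        "metric": "plant.line1.power_kw", # hypothetical process metric
        "tags": {"machine": "press-07"},  # hypothetical tag filter
    }],
}

payload = json.dumps(query)
# An HTTP POST of `payload` to http://<tsdb-host>:4242/api/query would return the
# aggregated time series, which a tool such as Grafana can then visualize.
```

The point of the sketch is that the aggregation (here, the average) runs on the storage side, so only the reduced result crosses the network.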
Such a platform enables the use of machine learning techniques, which autonomously find relationships in the data and use these, for instance, for condition monitoring.
Principal Component Analysis (PCA) is a commonly used method to reduce the dimensionality of the input data. The method basically assumes that features with a low variance contribute little to the final model and can therefore be neglected. To minimize the information loss, PCA computes the principal components, which are new features that are mutually uncorrelated. The dimensionality of the dataset is then reduced by using only the most relevant of the principal components (see step six in the subsequent procedure for calculating the PCA) to describe the dataset. When PCA is used for visualization purposes, the first three principal components are selected, creating a three-dimensional figure. Neglecting the remaining principal components is possible because most of the variance of the original dataset, i.e. the information, is represented by the first few principal components.
The principal components are determined by performing the following steps:
1. Standardize the dataset.
2. Construct the covariance matrix of the data.
3. Decompose the covariance matrix into its eigenvectors and eigenvalues.
4. Sort the eigenvalues in decreasing order.
5. Select the eigenvectors corresponding to the largest eigenvalues, which reconstruct a large fraction of the variance of the original data.
6. Construct a projection matrix from the selected eigenvectors and transform the dataset onto the reduced subspace.
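As a hedged illustration of this procedure, the sketch below computes the principal components of a randomly generated toy dataset with NumPy; the dataset and all variable names are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy dataset: 200 samples, 3 features; the second feature is correlated
# with the first, so most variance lives in a 2-dimensional subspace.
base = rng.normal(size=(200, 1))
X = np.hstack([base,
               0.5 * base + 0.1 * rng.normal(size=(200, 1)),
               rng.normal(size=(200, 1))])

X = X - X.mean(axis=0)                  # center the data
cov = np.cov(X, rowvar=False)           # covariance matrix of the data
eigvals, eigvecs = np.linalg.eigh(cov)  # eigen decomposition (ascending order)
order = np.argsort(eigvals)[::-1]       # sort eigenvalues in decreasing order
W = eigvecs[:, order[:2]]               # projection matrix: 2 dominant eigenvectors
X_reduced = X @ W                       # transform onto the reduced subspace
```

For this toy data, the two retained components capture well over 80% of the total variance, which is why discarding the third component loses little information.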
System Optimization
Another application of smart services in manufacturing is the self-optimization of industrial
processes. Optimization can be carried out regarding different influencing variables (e.g. time or
speed), but in this section we focus on the optimization of the energy consumption. The goal is the
analysis and improvement of the performance and efficiency of a manufacturing plant, leading to
an optimized operation. Due to increasing energy prices, a special focus for this smart service is
the optimization of the energy efficiency in industrial automation systems.
Typically, the optimization of energy efficiency is a manual process performed by experts of the plant, who exchange old and inefficient drives for new and efficient ones. This is a useful and necessary step; however, it still requires manpower and financial investment. Further methods require manual time planning of the production steps in the Manufacturing Execution System (MES) to obtain an energy-efficient process, or special energy controllers that are typically located at the energy meter and monitor the trend of the energy consumption. If the trend points to unwanted levels, the controller switches off equipment based on certain priorities and other rules. Typical time periods are in the range of 15–30 min.
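The priority-based switch-off behavior of such an energy controller can be sketched as follows; the function name, the priorities, and the load figures are all hypothetical:

```python
def energy_controller(trend_kw, limit_kw, equipment):
    """Sketch of a peak-load controller: if the consumption trend points above
    the limit, switch off equipment in ascending priority order until the
    projected load is back under the limit.
    `equipment` maps name -> (priority, load_kw); lower priority is shed first."""
    switched_off = []
    projected = trend_kw
    for name, (_, load) in sorted(equipment.items(), key=lambda kv: kv[1][0]):
        if projected <= limit_kw:
            break                      # trend is back under the limit
        switched_off.append(name)      # shed this load
        projected -= load
    return switched_off, projected
```

For instance, with a trend of 100 kW against a 90 kW limit and loads of 5 kW (fan, priority 1), 10 kW (pump, priority 2), and 50 kW (press, priority 3), the controller sheds the fan and the pump and leaves the press running.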