COMPUTER-BASED TECHNOLOGIES OF
CONTROL IN TECHNICAL SYSTEMS.
Lecture notes
St. Petersburg
2017
УДК 517.935 (07)
ББК З 973.23 - 018.2я7 + З 986 я7
S86
Approved
by the publishing Council of the University
as educational material
1. AUTOMATED CONTROL SYSTEMS. INTRODUCTION AND DEFINITIONS
1.1. Defining ‘computer technology’
In modern automation and control systems, the main instrument for processing information, carrying out calculations and forming setting and controlling actions is the computer. The integration of computers into science, production and control is marked by the addition of 'computer' as an adjective to each respective term. 'Computer technology' further includes communications technology, which is responsible for the flow of information within computer information networks. Modern literature uses several terms to refer to technologies in information processing and control, the most common of them being 'information technology'.
Note: The terms ‘computer technology’ and ‘information technology’ differ
in the sense that information can be processed without the help of a computer.
However, technological advancement, especially in such areas as automation of
production, product design and documentation, has made the computer a useful
and popular tool. Thus, in the context of this discipline, the terms shall be used
interchangeably as synonyms.
Computer technology is a process involving a number of means and
methods for data collection, processing and transmission with the aim of gaining
information on the state of a product, process or phenomenon.
Since this course is dedicated to computer technologies in automation and
control, and automation tasks are solved with the use of automation systems, we
may proceed to discussing and defining this term.
System organization is the internal order of interaction between the elements
of a system, defined by, among other things, a restriction on element variety within
the system.
The structure of a system is the content, the order and interaction principles
which define its main features. If the separate elements of a system have internal
connections, such a structure is referred to as hierarchic.
Note: The representation of a system in the form of elements and subsystems
depends on the level and the granularity of detail. This is especially true for more
complex systems. For instance, the top level of detail in a production control
system involves the following subsystems: economic, logistical, production and
power. At the same time, each of these subsystems can be viewed as a self-
contained system. On the lower levels of detail, the security alarm system and the
waste reclamation system can also be viewed as self-contained.
An automated system (AS) is the combination of personnel, technical means, software, mathematical methods and organizational complexes which help to rationally manage a complex object or process according to a set objective.
The AS consists of:
– the main part, which includes information support, technical support and mathematical support;
– the functional part, which includes interconnected programmes automating control functions.
In general automated systems are defined by the following features:
1) Building an AS requires a systemic approach
2) Any AS can be analysed, built and controlled based on system control
theory
3) An AS has to include a principle for further development and expansion
(extensibility and scalability)
4) The output product of an AS is information, on which decision making is
based
5) An automated system should be considered as a human-machine system for
information processing and control.
2 Processing of input information and representing it in a convenient (required) form;
3 Storage of information in the form of databases, information arrays and files;
4 Outputting information to customers or transferring it to another system;
5 Altering the input data according to a given law (rule).
There are two types of internal processes in automated systems.
Information processes:
– formalized processes whose implementation does not alter the data processing algorithm, which remains fixed (searching, registering, storing, data transmission, document printing, simulator study, execution unit control algorithms);
– unformalized procedures which lead to the creation of new, unique information, where the source information processing algorithm is unknown (forming a number of alternatives from which one is chosen);
– poorly formalized procedures where the data processing algorithm can alter and is not clearly defined (planning tasks, efficiency evaluation, etc.).
Processes of creating and supporting automated systems:
– developing and setting up a system for solving a certain type of tasks, administrating (supporting access service and user rights) and processing requests;
– supporting the integrity and safety of information;
– periodical revision of information;
– automation of data indexing, etc.
Automated systems can be an effective means for solving the following tasks:
1 Achieving more rational ways of solving management tasks through the integration of mathematical methods;
2 Automation of manual labour;
3 Increasing the reliability of the data on which decision making is based;
4 Improving the structure of data flows (including document circulation);
5 Cutting production expenses (including information costs).
Note: It should be mentioned that the development and integration of
automated systems, especially at the initial stages, is a highly expensive process.
This is due to the necessary purchase of calculating machines and software, taking
on new staff and providing re-training.
By application area, automated systems include:
– Automated process control systems
– Automated control systems for technical resources
Automated process control systems (APCS) are aimed at automating
production personnel functions. These systems control and use data which
determines the state of technological equipment and provide the necessary mode
for technological processes. They are often referred to as industrial automation
systems. The SCADA system (Supervisory Control and Data Acquisition) is
incorporated into the APCS. Direct software control of technological equipment is
carried out through the CNC system (Computer Numerical Control) based on
controllers (specialized industrial computers) built into the equipment.
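To make this concrete, the Python sketch below shows the kind of supervisory cycle such systems run: acquire data on the equipment state, compare it with the required mode, and issue a controlling action. It is purely illustrative; all names (read_sensor, set_valve, SETPOINT) are invented placeholders, not part of any real SCADA or CNC interface.

SETPOINT = 72.0      # desired process temperature, degrees C (assumed value)
GAIN = 0.8           # proportional gain (assumed value)

def read_sensor() -> float:
    # Stub: a real APCS would poll a controller over an industrial bus here.
    return 69.5

def set_valve(position: float) -> None:
    # Stub: a real APCS would command an actuator via a built-in controller.
    print(f"valve -> {position:.2f}")

def control_step() -> None:
    # The supervisory cycle: acquire the equipment state, compare it with
    # the required mode, issue a controlling action.
    measured = read_sensor()
    error = SETPOINT - measured
    set_valve(max(0.0, min(1.0, GAIN * error)))

if __name__ == "__main__":
    control_step()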
Automated design systems are aimed at automating the functions of design
engineers, constructors, architects and designers in the creation of new equipment
or technology. The system’s main functions are engineering calculations, creating
graphics (drawings, schemes and plans) and project documentation, modelling the
objects to be designed.
Integrated (corporate) automated control systems are used to automate the
main functions of an enterprise, encompassing the whole operation cycle from
project design to distribution or even waste management. Creating these systems
can be a complicated task as it requires a systematic approach with consideration
of the main objectives, like gaining profit and control over the market. Such an
approach can lead to significant changes in the structure of the enterprise, which
makes it a difficult decision for many enterprise managers.
Fig. – The structure of an automated control system.
Software includes system-wide and special programmes as well as technical
documentation.
System-wide software refers to user-oriented programme complexes aimed at
solving universal tasks in information processing and management. These are used
to broaden the functional capacity of computers, to provide control and manage
data processing.
Special software refers to a group of programmes developed during the
creation of a specific automated system. This includes packets of applied
programmes, which implement the developed models of varying levels of
adequacy representing the functioning of a real object.
The technical documentation for software includes a description of the tasks, their algorithmisation, the economic and mathematical models of the tasks, and worked examples.
Fig. – Software is divided into general (system-wide) and special software.
Legal support is a set of legal provisions which regulate the creation, legal status and functioning of information management systems, as well as how the information is acquired, converted and used. The main objective of legal support is to ensure lawfulness. It is based on laws, orders, government decrees, instructions and other regulatory documents issued by various public authorities.
Legal support consists of a) a general part, regulating the functioning of any
ACS as an information management system, and b) a local part, regulating the
activity of a specific subsystem.
Legal support includes:
– System status
– Rights and obligations of the developer (or supplier) and the
customer
– Rights, obligations and responsibility of the personnel
– The legal status of separate types of the control process and the order of creating and using information
Fig. 2.1. – The main stages of a product's lifecycle and the automated systems supporting each stage:
– Design: CAD, CAM, CAE, PDM
– Preproduction engineering: SCM, ERP, CPC
– Manufacturing: MES, SCADA, CNC
– Distribution: CRM
– Operation: IETM
– Disposal
Furthermore, it is possible to coordinate the operation of several partner
enterprises using Internet technologies in the integrated information environment
called CPC (Collaborative Product Commerce).
At the stage of product distribution, it is essential to carry out customer and
supplier management and market analysis and to determine the demand for the
product. These functions are performed by the CRM (Customer Relationship
Management) system.
Personnel training is performed by the IETM (Interactive Electronic
Technical Manuals). These help conduct diagnostic operations, search for
malfunctioning components, order replacement parts and other operations at the
operation stage.
Data control within the universal information environment is performed by the PLM system (Product Lifecycle Management) at all stages of the cycle. One important feature of this system is that it supports the interaction of various automated systems at various enterprises; in other words, PLM technology (including CPC) is the basis for integrating the information environment in which all the automated systems of the several enterprises function.
Fig. 2.3. General structure of enterprise control
intermediate processing or without it; the application triggers the controlling action. This process is present in all control systems. An information system is an
information environment that makes it possible to determine when, where and in
what circumstances the event occurred.
Using this information to form controlling actions requires an automated information system. Such a system is artificial, i.e. created by humans.
An automated information system generally performs the following operations:
1 Collection, initial processing and verification of information
2 Conversion of information, i.e. recoding and rerecording when the
information presentation method or the carrier is incompatible with the
usage unit
3 Transmission of information to the storage unit and storage
4 Secondary processing (when the information received cannot be used
directly, i.e. when it cannot trigger the required controlling action in its
present state)
5 Output of information to the user (information presentation)
6 Providing computer support for decision-making
7 Providing information to be used by the decision maker (a person) for the
purpose of solving control tasks.
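As an illustration of how these operations chain together, consider the minimal Python sketch below. All function names and data are invented for the example; a real automated information system is, of course, far richer.

def collect() -> list[str]:
    # 1. Collection, initial processing and verification of raw records (stubbed).
    raw = ["temp=21.4", "temp=bad", "temp=22.0"]
    return [r for r in raw if r.split("=")[1].replace(".", "").isdigit()]

def convert(records: list[str]) -> list[float]:
    # 2. Conversion: recode textual records into a form usable downstream.
    return [float(r.split("=")[1]) for r in records]

def store(values: list[float], db: list[float]) -> None:
    # 3. Transmission to the storage unit and storage.
    db.extend(values)

def secondary_processing(db: list[float]) -> float:
    # 4. Secondary processing: derive a quantity that can drive a decision.
    return sum(db) / len(db)

database: list[float] = []
store(convert(collect()), database)
# 5-7. Presentation of the result to the decision maker.
print(f"mean temperature: {secondary_processing(database):.2f}")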
It should be noted that different types of models have very distinct and
important roles at every stage of the cycle. Practice shows that effective
implementation of automated information systems is only possible with the use of
adequate models of different types (mathematical models of technological
processes, knowledge models, data models, etc.).
Note: there is a difference between computer systems and automated systems.
Computers equipped with specialized software serve as a technical base for
automated information systems. An automated information system also includes
the personnel that interacts with the computers.
Due to the complexity of most modern technology, managing such equipment
without human involvement is economically imprudent or in some cases simply
impossible even with a high level of computerization. For this reason, human
involvement in modern automated information systems is vital, especially when
important decisions are made.
Information systems are divided into factual systems and document
databases.
Factual automated systems register facts – specific values of data on physical
objects of the real world. The information in factual systems is structured so as to
provide a definite answer to a question like “What amount of goods of type Y has
enterprise X produced during the last working shift?” or “What is the state of the
managed process, based on parameter X or parameter Y?”.
Document databases serve a different type of task that does not require a definite solution or answer. These systems deal with unstructured text documents (articles, books, laws, etc.) and have a formalized data retrieval system. Upon the user's request, the document database simply finds a list of documents
which in some way satisfy the requirements of the user. For instance, it can
provide a full list of articles containing the phrase “automation of printed-circuit
wiring”.
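The difference is easy to see in a minimal Python sketch that echoes the two questions above (the data and function names are invented for the example):

production = {("X", "Y"): 1200}   # (enterprise, product type) -> units per shift

def factual_query(enterprise: str, product: str) -> int:
    # A factual system gives a definite answer to a definite question.
    return production[(enterprise, product)]

documents = {
    "doc1": "automation of printed-circuit wiring in modern CAD",
    "doc2": "history of enterprise management",
}

def document_query(phrase: str) -> list[str]:
    # A document database merely returns documents satisfying the request.
    return [name for name, text in documents.items() if phrase in text]

print(factual_query("X", "Y"))                                  # -> 1200
print(document_query("automation of printed-circuit wiring"))   # -> ['doc1']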
The activities of any industrial enterprise can be viewed as having two major
aspects: a) the production process itself and b) the enterprise’s financial and
economic activity. Information systems for financial and economic activities have
their own specific requirements which we shall not discuss in great detail,
considering them in the general context of control tasks. The production process at
a large enterprise involves a great number of technological cycles. Moreover, the
cycles involve the use of various kinds of material (both raw and intermediate) and
require control at every stage. Any failures within one technological cycle can have
serious financial consequences or lead to unpleasant accidents. This means that
control over production has to be provided constantly and in real time. This accounts for the high requirements placed on automated information systems in terms of efficiency, quality and safety. Naturally, quality and safety are also required in
financial and economic activities as the volume of incoming and outgoing financial
flows, as well as money circulating within the enterprise, is quite substantial.
Any serious enterprise is in effect a conglomeration of several production facilities that are, to a certain extent, independent. Depending on the size of the enterprise
and the area in which it is specialized, the number of production facilities will
vary. The independence of these production facilities does not undermine their
coordinated performance and interrelation of technological cycles. For this reason
it is essential to have a number of independent automated subsystems which
interact closely with one another.
2.4. The principles of constructing complex automation systems.
Characteristic features of man-machine systems.
The hierarchy of a complex automation system is defined by a necessity of
structured control in complex systems with the aim of acquiring a finite number of
possible solutions, from which the best decision is chosen. This decision is then
realized in the system in the form of decentralized and coordinated controlling
actions with various levels of responsibility. Representing the source control
system as a hierarchic system is done through the functional decomposition of the
system.
It should be noted that this hierarchic nature of the system accounts for
certain specific features. Firstly, the system is represented as a set of subordinated
subsystems of various hierarchic levels. Secondly, subsystems of a higher level use
aggregate coordinates in decision-making, which are functions of lower
subsystems’ output coordinates, and form directive controlling actions for these
subsystems. An important feature of hierarchic control systems is the inaccessibility of the full state vector of the lower-level subsystems to the higher-level subsystem.
It is important that the higher level formulates a control objective in aggregate
coordinate terms and chooses an aggregate control action to achieve the objective.
Solving the task at the higher level does not define the state of the system as it is
formulated in aggregated coordinate terms.
In order to define the state vector of the source system, the lower level is
used. The control objective at the lower level is formulated in source variables
terms, but the control action itself remains the same as defined on the higher level
in aggregated coordinate terms. This means that the decisions of the subsystem on
the higher level are necessarily implemented at the lower level. Consequently, the
hierarchic architecture of the control system always narrows down the possible
controlling actions due both to the aggregated coordinates and to structural
limitations.
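A minimal numeric sketch of this two-level arrangement follows (the aggregation rule and the allocation rule are invented for the example). The upper level sees only an aggregate of the lower level's output coordinates and issues a directive in aggregate terms; the lower level resolves that directive in source variables.

lower_outputs = [3.0, 5.0, 2.0]   # full state: visible only at the lower level

def aggregate(x: list[float]) -> float:
    # The upper level works with a function of the lower outputs, not with x itself.
    return sum(x)

def upper_level(y: float, target: float) -> float:
    # Directive controlling action formulated in aggregate coordinate terms.
    return target - y              # "increase total output by this much"

def lower_level(x: list[float], directive: float) -> list[float]:
    # The lower level decides how to realize the directive in source variables;
    # here it simply spreads the required change evenly (an invented rule).
    delta = directive / len(x)
    return [xi + delta for xi in x]

y = aggregate(lower_outputs)       # the upper level observes 10.0, not [3, 5, 2]
new_x = lower_level(lower_outputs, upper_level(y, target=13.0))
print(new_x)                       # -> [4.0, 6.0, 3.0]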
Distribution as a feature of the complex system provides coordination
between the control system topology and the principles of organizational and
technological control in physically and functionally distributed control objects and
eliminates excess information circulating within a system during real-time parallel
and asynchronous processing. At the same time, supporting the access of users to
prepared and formatted information becomes more rational and makes it possible
to accomplish various forms of redundancy in the system, with the aim of
providing a high level of security.
Distributed systems are generally ergatic: the control process is carried out jointly by a human operator (control personnel, crew, team, etc.) and technical means which vary in their principles and functional application. Human participation in
the control process accounts for a large number of special features in the system
and requires the solution of technical ergonomics tasks (at the system design stage)
in order to create comfortable conditions for the human operator, who has to make
quick decisions in a situation where information overloads and psychological and
physiological strain are inevitable. An ergatic system should be able to constantly
monitor the physiological state of the human operator and his or her ability to solve
the necessary functional tasks. If the human is temporarily overloaded and
consequently incapable of solving all the tasks to the fullest extent, the ergatic
system passes part of the tasks on to automated devices with the aim of
coordinating behavioral, technological, organizational and economical aspects of
control. Redistribution of functional tasks in the control process is caused by the
necessity of retaining control over the system despite information, psychological
and physiological overloads endured by the human operator and can lead to a drop
in control quality.
The task of identifying the physiological state of the human and his or her
ability to solve functional tasks is classified in the ergatic system as weakly
structured. Currently, such tasks are solved with the help of expert systems.
Human participation in the control process requires solving one more
important task – maintaining an adequate personnel qualification level. Indeed,
excess control automation leads to a reduction in the workload of the human
operator resulting in a decline in his or her professional skills, which can lead to
emergency situations and accidents. The task of maintaining these necessary skills
is solved in ergatic systems by the introduction of professional tests and problems
imitating pre-emergency or emergency situations into the system. The control
system analyses and documents the professional performance of the operator.
Based on the test results and analysis, every participant receives new tests and
problems to solve, which are tailored to the specific needs of every operator.
This class of complex systems functions as a rule with incomplete and
unreliable information on system coordinates and parameters and indeterminate
evaluations and indicators. It is therefore necessary to develop control systems in the form of an intellectual system which combines the operator's intellect and the expert system's AI. Combining intellects within one system leads to the necessity of
solving yet another problem: deciding which of the two gets priority in each
specific situation – the human operator or the expert system. If the operator's qualifications are higher, the expert system issues recommendations which are taken into account by the operator before making a decision. The control system in this case functions as a decision-making system. Otherwise, the decision is made by the expert system. Since the operators working in control systems are highly qualified professionals, these systems are designed and implemented as decision-making systems.
These systems are generally classified as continuous discrete systems, i.e.
systems whose state vector can change in a “leap” at discrete moments in time. An
instant change of the state vector of the system can be triggered by a discrete event
or when certain conditions are fulfilled with the interaction of continuous
coordinates. A change in the state vector depending on a certain parameter which
determines the system’s characteristics is called a process. In the present class of
systems – dynamic continuous discrete systems – this determining parameter is
time. Thus, the control process (P) is defined as a time oriented finite or infinite
sequence of discrete events, separated by uninterrupted time periods (t). The
process is always associated with some object entering the system and establishes a
link with this object as a means of functional decomposition of the system.
Formally, the process is described by the following sequence:
P = {t0, X, T, t},

where t is the time (the process parameter); t0 is the process activation time; X(t) is the state vector of the process; T = {ti} is the process protocol, i.e. the set of time moments at which the events triggering changes in functional tasks occur and, consequently, the state vector changes discretely. The process parameters (t0, X, T) depend on the state of the system S(t). For this reason, the control process in a distributed system is a nonstationary indeterminate object and is classified as an asynchronous real-time parallel vector process.
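The same description can be mirrored in a small data structure (a sketch under the definitions above; the field names simply follow the notation):

from dataclasses import dataclass, field

@dataclass
class Process:
    t0: float                                     # process activation time
    X: dict[float, list[float]]                   # state vector X(t), sampled at event times
    T: list[float] = field(default_factory=list)  # protocol {ti}: moments of discrete events

# A process activated at t0 = 0.0 whose state vector changes in a "leap"
# at the discrete moments t = 1.5 and t = 4.2:
p = Process(t0=0.0,
            X={0.0: [1.0, 0.0], 1.5: [1.0, 2.0], 4.2: [0.0, 2.0]},
            T=[1.5, 4.2])
print(p.T)   # the event moments separating uninterrupted time periods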
3. OLAP TECHNOLOGY
3.1. What is OLAP?
OLAP means many different things to different people, but the definitions
usually involve the terms "cubes", "multidimensional", "slicing & dicing" and
"speedy-response". OLAP is all of these things and more, but it is also a misused &
misunderstood term, in part because it covers such a broad range of subjects.
We will discuss the above terms in later sections; to begin with, we explain
the definition & origin of OLAP. OLAP is an acronym, standing for "On-Line
Analytical Processing". This, in itself, does not provide a very accurate description
of OLAP, but it does distinguish it from OLTP or "On-Line Transactional
Processing".
The term OLTP covers, as its name suggests, applications that work with
transactional or "atomic" data, the individual records contained within a database.
OLTP applications usually just retrieve groups of records and present them to the
end-user, for example, the list of computer software sold at a particular store
during one day. These applications typically use relational databases, with a fact or
data table containing individual transactions linked to meta tables that store data
about customers & product details.
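As a rough sketch of such a layout, the Python standard-library sqlite3 module can model a fact table linked to a meta table. The table and column names are invented here, and the data anticipates the store example used later in this chapter:

import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE products (id INTEGER PRIMARY KEY, name TEXT);   -- meta table
    CREATE TABLE sales (                                         -- fact table
        id INTEGER PRIMARY KEY,
        product_id INTEGER REFERENCES products(id),
        store TEXT,
        volume INTEGER
    );
    INSERT INTO products VALUES (1, 'Bulbs'), (2, 'Fuses');
    INSERT INTO sales VALUES (1, 1, 'Uptown', 12), (2, 2, 'Midtown', 31);
""")

# A typical OLTP request: retrieve a group of atomic records for the end user.
rows = con.execute("""
    SELECT p.name, s.store, s.volume
    FROM sales s JOIN products p ON p.id = s.product_id
    WHERE s.store = 'Midtown'
""").fetchall()
print(rows)   # -> [('Fuses', 'Midtown', 31)]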
OLAP applications present the end user with information rather than just data.
They make it easy for users to identify patterns or trends in the data very quickly,
without the need for them to search through mountains of "raw" data.
Typically this analysis is driven by the need to answer business questions
such as "How are our sales doing this month in North America?". From these
foundations, OLAP applications move into areas such as forecasting and data
mining, allowing users to answer questions such as "What are our predicted costs
for next year?" and "Show me our most successful salesman".
OLAP applications differ from OLTP applications in the way that they store
data, the way that they analyze data and the way that they present data to the end-
user. It is these fundamental differences (described in the following sections) that
allow OLAP applications to answer more sophisticated business questions.
3.2.1. Increasing data storage
The trend towards companies storing more & more data about their business
shows no sign of stopping. Retrieving many thousands of records for immediate
analysis is a time and resource consuming process, particularly when many users
are using an application at the same time. Database engines that can quickly
retrieve a few thousand records for half-a-dozen users struggle when forced to
return the results of large queries to a thousand concurrent users.
Caching frequently requested data in temporary tables & data stores can
relieve some of the symptoms, but only goes part of the way to solving the
problem, particularly if each user requires a slightly different set of data.
In a modern data warehouse where the required data might be spread across
multiple tables, the complexity of the query may also cause time delays & require
more system resources which means more money must be spent on database
servers in order to keep up with user demands.
Caching on this scale would require enormous sets of temporary tables and
enormous amounts of disk space to store them.
3.3.1. What is a cube?
The cube is the conceptual design for the data store at the center of all OLAP
applications. Although the underlying data might be stored using a number of
different methods, the cube is the logical design by which the data is referenced.
The easiest way to explain a cube is to compare storing data in a cube with
storing it in a database table.
Figure 3.1. shows a set of sales records from three electrical stores displayed
in a transactional database table. There are two field columns "Products" and
"Store" that contain textual information about each data record and a third value
column "Volume". This type of table layout is often called a "fact table". The
columns in a table define the data stored in the table. The rows of textual
information and numeric values are simply instances of data; each row is a single
data point. A larger data set would appear as a table with a greater number of rows.
Figure 3.2. shows the same data now arranged in a "cube". The term "cube" is
used somewhat loosely, as this is in fact a two-dimensional layout, often referred
to as "a spreadsheet view" as it resembles a typical spreadsheet.
The axes of the cube contain the identifiers from the field columns in the
database table. Each axis in a cube is referred to as a "dimension". In this cube, the
horizontal dimension contains the product names and is referred to as the "Products
dimension". The vertical dimension contains the store names and is referred to as
the "Store dimension".
In the database table, a single row represents a single data point. In the cube,
it is the intersection between fields that defines a data point. In this cube, the cell at
the intersection of Fuses and Midtown contains the number of fuses sold at the
midtown store (in this case, 31 boxes). There is no need to mention "Volume" as
the whole cube contains volume data.
This co-ordinate driven concept of finding data is the reason why we can’t
just ignore one of the dimensions in a cube. For example, the question "How many
bulbs did we sell?" has no direct meaning with this cube unless it is qualified by
asking for data from a particular store.
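This coordinate-driven lookup is easy to mimic in code. In the Python sketch below, the fact-table rows become a dictionary keyed by a (product, store) coordinate pair; the fuses figure matches the text, the remaining values are invented:

# Fact-table rows: (product, store, volume)
fact_rows = [
    ("Fuses", "Midtown", 31),
    ("Fuses", "Uptown", 18),
    ("Bulbs", "Midtown", 40),
    ("Bulbs", "Uptown", 25),
]

# The cube: each cell is addressed by the intersection of two dimension fields.
cube = {(product, store): volume for product, store, volume in fact_rows}

print(cube[("Fuses", "Midtown")])   # -> 31 boxes of fuses sold at Midtown

# "How many bulbs did we sell?" is ambiguous until qualified by a store,
# or explicitly summed over the whole Store dimension:
print(sum(v for (p, _), v in cube.items() if p == "Bulbs"))   # -> 65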
The term "field" is used to refer to individual members of a dimension, so for
example, Uptown is a field in the Store dimension. Notice that the two dimensions
contain apparently unrelated fields. Dimensions are usually comprised of the same class of objects; in this example all of the products are in one dimension and all of the stores are in another. Attempting to mix fields between the two dimensions would not work: it would not make sense, it would not be possible to create a unique cell for each data point, and the data could not be displayed at all.
Note that we have avoided using the terms row & column dimension.
Although a cube appears to have rows & columns just like a table, they are very
different from the rows & columns in a database. In a database, row & column refer to specific components of the data store; in a cube, they simply describe the way the cube is presenting the data. For example, the cube in figure 3.2. can also be displayed as in figure 3.3., with the dimensions reversed.
Both figures 3.2. and 3.3. are valid layouts; the important point is that the first diagram shows "Products by Store" and the second shows "Stores by Product".
This is one of the advantages of the cube as a data storage object; data can be
quickly rearranged to answer multiple business questions without the need to
perform any new calculations. A second advantage is that the data could be sorted
either vertically or horizontally, allowing the data to be sorted by store or product
regardless of the cube’s orientation.
From this simple two-dimensional cube, we can now explain some further
concepts.
3.3.2. Multidimensionality
In the previous section, we looked at a simple two-dimensional cube.
Although useful, this cube is only slightly more sophisticated than a standard
database table. The capabilities of a cube become more apparent when we extend
the design into more dimensions. Multidimensionality is perhaps the most "feared"
element of cube design as it is sometimes difficult to envisage. It is best explained
by beginning with a three-dimensional example.
Staying with the data set used in the previous section, we now bring in more
data, in the form of revenue & cost figures. Figures 3.3. & 3.4 show the different
ways that the new data could be stored in a table.
As can be seen, the degenerate layout results in a wider table with fewer rows
while the canonical model results in a narrower table with more rows. Neither
layout is particularly easy to read when viewed directly.
The simplest OLAP layout is to create three separate two-dimensional cubes
for each part of the data, one for the revenue figures, one for costs and one for
volumes. While useful, this layout misses out on the advantages gained by
combining the data into a three-dimensional cube. The three-dimensional cube is
built very simply by laying the three separate two-dimensional "sheets" (the
Volume, Cost & Revenue figures) on top of each other.
As can be seen from figure 3.5., the three-dimensional layout becomes
apparent as soon as the three layers are placed on top of each other. The third
dimension, "Measures" is visible as the third axis of the cube, with each sheet
corresponding to the relevant field (Volume, Cost or Revenue).
The actual data points are located by using a co-ordinate method as before. In
this example, each cell is a value for the revenue, cost or volume of a particular
product sold in a particular store.
As before, the data can be reoriented & rearranged, but this time, more
sophisticated data rearrangements can be made. For example, the view from the
right-hand face of the cube in figure 3.5. shows the revenue, cost & volume figures
for all products sold in the Downtown store. The view from the topmost face
shows the revenue, cost & volume figures for bulbs across all three stores.
This ability to view different faces of a cube allows business questions such
as "Best performing product in all stores" to be answered quickly by altering the
layout of the data rather than performing any new calculations, thus resulting in a
considerable performance improvement over the traditional relational database
table method.

Four dimensions and beyond

Although the word "cube" refers to a three-dimensional object, there is no reason why an OLAP cube should be restricted to three dimensions. Many OLAP applications use cube designs containing up to ten dimensions, but attempting to visualize a multidimensional cube can be very difficult. The first step is to understand why creating a cube with more than three dimensions is possible and what advantage it brings.
As we saw in the previous section, creating a three-dimensional cube was
fairly straightforward, particularly as we had a data set that lent itself to a
three-dimensional layout. Now imagine that we have several three-dimensional
cubes, each one containing the same product, region & measures dimensions as
before, but with each one holding data for a different day’s trading. How do we
combine them? We could just add all of the matching numbers together to get a
single three-dimensional cube, but then we could no longer refer to data for a
particular month. We could extend one of the dimensions, for example the
measures dimension could have the fields "Monday’s costs" and "Tuesday’s
costs", but this would not be an easy design to work with and would miss out on
the advantages of a multidimensional layout.
The answer is simple, we create a fourth dimension, in this case the
dimension "Days" and add it to the cube. Although we can’t easily draw such a
cube, it is easy to prove the integrity of the design. As stated before, each data
point is stored in a single cell that can be referred to uniquely. In our four-
dimensional design, we can still point to a specific value, for example the value for
revenue from bulbs sold Uptown on Monday. This is a four dimensional reference
as it requires a field from four dimensions to locate it:
1. The Revenue field from the Measures dimension.
2. The Bulbs field from the Product dimension.
3. The Uptown field from the Store dimension.
4. The Monday field from the Days dimension.
Without actually having to draw or visualize the whole cube, it is quite easy
to retrieve and work with a four-dimensional data set simply by thinking about the
specific data cells being requested.
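In code, such a four-dimensional reference is simply a four-part key, as in this Python sketch (the single stored value is invented):

# Each cell of the 4-D cube is addressed by one field from each dimension:
# (measure, product, store, day) -> value
cube4 = {
    ("Revenue", "Bulbs", "Uptown", "Monday"): 57.0,   # invented value
}

# The four-dimensional reference described in the text:
print(cube4[("Revenue", "Bulbs", "Uptown", "Monday")])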
The issue of visualizing the data set leads onto the second step in picturing the
cube. Although the cube might have four (or more) dimensions, most applications
only present a two-dimensional view of their data. In order to view only two
dimensions, the other dimensions must be "reduced". This is a process similar to
the concept of filtering when creating an SQL query.
Having designed a four-dimensional cube, a user might only want to see the
original two-dimensional layout from figure 3.6., Products by Store. In order to
display this view, we have to do something to the remaining dimensions Measures
& Days. It makes no sense just to discard them as they are used to locate the data.
Instead, we pick a single measure & day field, allowing us to present a single two-
dimensional view of the cube.
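Reducing the extra dimensions then amounts to fixing one field in each of them. The sketch below (values invented) recovers the two-dimensional Products-by-Store view from a four-dimensional cube by filtering on a chosen measure and day:

cube4 = {
    ("Volume", "Bulbs", "Uptown", "Monday"): 25,
    ("Volume", "Fuses", "Uptown", "Monday"): 18,
    ("Volume", "Bulbs", "Midtown", "Monday"): 40,
    ("Volume", "Fuses", "Midtown", "Monday"): 31,
    ("Revenue", "Bulbs", "Uptown", "Monday"): 57.0,
}

def slice_2d(cube: dict, measure: str, day: str) -> dict:
    # Keep only the cells matching the fixed Measure and Day fields,
    # then index the survivors by the two remaining dimensions.
    return {(p, s): v for (m, p, s, d), v in cube.items()
            if m == measure and d == day}

view = slice_2d(cube4, measure="Volume", day="Monday")
print(view[("Fuses", "Midtown")])   # -> 31: the familiar Products-by-Store view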
Fig. 3.6. Two-dimensional view of a four-dimensional structure.
displaying both revenue & cost data simultaneously and allowing direct
comparison to be made between them. This layout can be seen in figure 3.7.
HOLAP
Stands for "Hybrid OLAP". This term describes OLAP applications that store
high-level data in proprietary multidimensional data files, but leave the underlying
base data in the original data tables.
This method has the big advantage of not requiring duplication of the base
data, resulting in time & disk space savings.
The cube drives the multidimensional views, so the application requires a
robust link between the multidimensional data file and the relational table that
stores the base data beneath it.
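A toy version of this hybrid arrangement (purely illustrative; no real OLAP product works exactly this way) keeps pre-computed totals in a fast summary store and falls back to the base rows when detail is requested:

# Base data stays in the original "relational" rows.
base_rows = [
    ("Bulbs", "Uptown", 25), ("Bulbs", "Midtown", 40),
    ("Fuses", "Uptown", 18), ("Fuses", "Midtown", 31),
]

# High-level data lives in a separate multidimensional summary store.
summary: dict[str, int] = {}
for product, _store, volume in base_rows:
    summary[product] = summary.get(product, 0) + volume

def total_for(product: str) -> int:
    # High-level query: answered from the summary store, no base-data scan.
    return summary[product]

def detail_for(product: str) -> list[tuple]:
    # Drill-down: the hybrid link back to the underlying base rows.
    return [row for row in base_rows if row[0] == product]

print(total_for("Bulbs"))    # -> 65, from the multidimensional store
print(detail_for("Bulbs"))   # -> the original base records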
4.1. Basic Concepts and Definitions
There are several key terms that can help in understanding the importance and impact of ERP systems within industries and organizations. This is not a comprehensive list of terms; however, it will provide a foundation.
– Business intelligence is a computer-based technique to help with decision making by analyzing data.
– Business process is a logically related activity or group of activities that takes input, processes it to increase value, and provides output (Harrington, 1991).
– Business process integration is the assimilation of business processes together in a central system.
– Cloud computing is having a third party host the software and systems a business needs as a service through the use of the Internet.
– Data redundancy is when the same data is stored in multiple separate locations.
– Data repository is a location to store data.
– Information system refers to the interaction between information technology, business processes, and data for decision making.
– Information technology in the broadest sense refers to both the hardware and software used to store, retrieve, and manipulate information using computer systems and applications.
– Key performance indicators (KPIs) provide baseline metrics that companies use to measure how well the system and processes are performing.
– Legacy system: when a new system is identified for replacement, the older system is referred to as the legacy system.
– Lifecycle refers to the structure through which software applications such as ERP evolve and are integrated within business processes.
ERP systems bring corporate business processes and data access
together in an integrated way that significantly changes how they do business. The
ERP system implementation, an enormous capital expenditure, consumes many
corporate resources associated with a high level of risk and uncertainty. ERP
systems are an obvious choice for companies operating with disparate legacy
systems that do not communicate well with each other. These systems provide
significant inter-related information, greater information visibility, and accuracy on
a common database. Within the ERP systems are a standardized process to perform
the majority of business processes using industry best practices. ERP systems are
so widely diffused that they are commonly described as the de facto standard for
replacement of legacy systems in medium and large sized organizations. If today’s
company CIOs were asked about the importance and impact of ERP systems on
industries and organizations, more likely than not, they would say it is impossible
to work without an ERP system.
4.2. Benefits and Importance
There are many benefits to having an ERP system within the organization.
Information is readily available for the proper users, all data is kept in a central
repository, data redundancy is minimized, and there is a greater understanding of
the overall business picture. If a company does not have an ERP system and
employs separate standalone systems for functional areas of a business, the
company will not be running at its full potential.
Data may be compromised because it is stored in multiple locations. How would a user know which information is most current? When data is changed, is there a guarantee that it will be updated in all storage locations? Are processes taking longer to start and finish than necessary?
When a customer calls to inquire about an order, the customer may be
bounced around to numerous departments within the company because the
customer service representative does not have all the answers at his or her
fingertips. Here (see fig. 4.2) is an illustration of this type of scenario produced by
Hammer and Company.
With this illustration (fig. 4.2) the cycle has come full circle, back to the original starting point. How much easier would it have been for the customer if the
customer service representative had the answers to every question that the
customer asked? One of the most significant features of an ERP system is that all
of the information kept by a company, including within functional areas, is
retained in one central data repository, or in other words, the information is saved
in a single database.
By having the information in one location with authority levels for access in
place, a customer service representative would have been able to answer all the
questions posed by the customer instead of having to transfer the customer from
department to department.
All of the information is shifted from functional areas to the front-line, or in
other words, to the person the customer will first contact when communicating
with the company. From the above illustration, the importance of the correct employees having the correct information (in this case the customer service representative) is crucial to delivering exceptional customer service, and in turn serving the customer in the most valuable way.
The central repository of information will allow authorized users to access the
same information in one location using an ERP system. This feature allows for one
version of information to be used. With the central data repository comes the
decline of data redundancy. The data is kept in one location where all authorized
users have access. Data redundancy occurs when the same data is placed in two or
more separate systems (Shelly, Cashman, & Rosenblatt, 2005). For example,
referring back to our illustration before, the customer needed to change the ship-to address. If the company maintained separate functional area systems, the
customer’s ship-to address would have had to be updated in all the places it was
stored. Potential for human error becomes a factor at this point. The employee
could miss a location where the customer’s ship-to address needed to be changed,
or the employee could have mistyped the correct information in any one of the
change points. Having one central place for the information to be stored reduces
the likelihood of human error and not using the correct information for future
transactions. Ranganathan and Brown (2006) suggest that the use of a centralized
data repository in an ERP system will result “in an integrated database for multiple
functions and business units, providing management with direct access to real-time
information at the business process, business division, and enterprise levels” (p.
146). An ERP system allows users and the company to formulate a better
understanding of the overall business picture. With access to multiple functional
areas in one system, and the ability to generate any report necessary, the benefits of
an ERP system are endless. Management and executives can formulate better
business decisions because of all the data readily available within the system.
Business performance can improve since the ERP system integrates business
processes, that traverse multiple business functions, divisions, and geographical
locations (Ranganathan et al., 2006). Another benefit of ERP systems is their
ability to manage potential growth within the company and future e-commerce and
e-supply chain investments. IT costs can be significantly reduced when
implementing an ERP system (Fuß, Gmeiner, Schiereck, & Strahringer, 2007). For
the banking industry, merging banks can shorten post-merger integration time by
12 to 18 months, with a cost savings of potentially $60 to $80 million. Also, ERP
systems can assist banks with the continuous industry-specific pressures, such as
governmental regulations and globalization, faced by the banking industry. ERP
systems can help a global bank run smoothly and adhere to compliance. The construction industry faces its own challenges when implementing ERP (Chung, Skibniewski, Lucas & Kwak, 2008). Its industry processes are less standardized when compared to manufacturing. For example, each construction project has a
unique owner, project team, and specifications. When an ERP system is
implemented successfully in the construction industry, Chung et al. (2008) report benefits of improved efficiency and evident waste elimination. Fuß et al. (2007) have researched multiple articles and developed a list of anticipated benefits of ERP systems. The list includes the following benefits:
– Improved security and availability
– Increase in organizational flexibility
– Cost reduction
– Fast amortization of investment
– More efficient business processes
– Higher quality of business processes
– Improved integrability
– Reduced complexity and better harmonization of IT infrastructure
– Better information transparency and quality
– Better and faster compliance with legal requirements and frameworks
Bagranoff and Brewer (2003) wrote a case study based on a real company’s
ERP implementation. The authors use a fictitious company name, PMB
Investments, Inc., to protect the confidentiality of the real company. The
company’s Amscot division, located outside of Little Rock, Arkansas, was in
charge of printing, assembling, and distributing all printed materials for internal
and external customers interested in the company’s financial services and
investments. The Amscot office was created as a result of anticipated growth.
Amscot began with a hand-me-down legacy system named OSCAR, which came from the closing of two other plants to form the new Amscot plant. Unfortunately,
OSCAR could not handle the increased volume of transactions. The ability to
deliver to Amscot’s customers was compromised. A second system was connected
to OSCAR named KIM to help relieve the stress of the growth. “However, once
every few weeks the interface between KIM and OSCAR would go down between
12 to 18 hours resulting in customer orders literally disappearing into cyberspace
somewhere between KIM and OSCAR” (p. 86). Occasionally employees would
perform a manual count of warehouse inventory because they did not trust reports
produced by OSCAR, resulting in inventory being managed in multiple locations.
Amscot pursued the acquisition of an enterprise resource planning system to
handle the circumstances the company was facing (Bagranoff et al., 2003). Amscot
felt the long-run benefits to having an ERP system would be the consolidation of
financials, human resources, manufacturing, and distribution applications in one
central database system. Additionally, Amscot believed data redundancy and
integrity problems regarding the multiple information systems would be
eliminated. Decisions would be made more efficiently and effectively because of
real-time information generated from the ERP system. Fulfillment and delivery
would start automatically on receiving a customer order with the new system.
Having the entire supply chain coordinated would reduce printed material
inventories, minimize unnecessary shipping expenses, and streamline the receiving
of goods cycle time. The new system would allow Amscot to perform and operate
at peak efficiency. The estimated annual savings from the ERP system implementation were $30 million, which came from diminishing inventory obsolescence
4.3.1. IT value of ERP systems
When examining the value of ERP systems, investing in technology is only
half of what is needed to realize its benefit. According to SAP Executive Agenda,
“investment in IT without analogous improvements in the management practices
around IT will lead only to a slight increase in productivity”. It is suggested that
companies that invest in IT while enhancing management practices and
governances have experienced sustainable results in increased value and improved
productivity, in some instances as much as a 20% boost (Dorgan & Dowdy, 2004).
Research has demonstrated a circular cycle where one IT success gives rise to yet
another IT success more favorable than the first (sometimes referred to as the
“virtuous cycle”). The cycle typically gets started with an investment in core ERP
systems software generating the landscape to facilitate a homogeneous integrated
platform. Once the core ERP software demonstrates sound operational
performance, investments to extend and add value to processes such as customer
relationship management (CRM), supply chain management (SCM), and business
analytics components are examined.
use the Financial ERP to enable flexibility with financial and managerial reporting
across their organizational structures. This provides a real-time view of the
business to quickly read, evaluate, and respond to changing business conditions
with accurate, reconciled, and timely financial data. For a company’s financial
supply chain, potential value can be gained for improved cash flow, transparent
and real-time business intelligence, and reduced inventory levels, leading to shorter
cash-to-cash cycle times, and increased inventory turns across the network that can
lower overall costs. Companies can potentially make significant gains to reduce
overall finance costs, enabling greater collaboration with customers or suppliers,
and streamlining operations to reduce costs and resource demands (adapted from
SAP, Inc.). Companies can take advantage of an ERP financial system’s ability to
provide dynamic budgeting, forecasting, and planning to reduce overall financial
costs. Financial ERPs offer companies the ability to streamline accounting,
consolidation, process scheduling, workflow, and collaboration. By integrating
budget, cost, and performance, companies can capitalize on opportunities to
reallocate money to programs with proven impact; realigning resources where they
are most useful to maximize value to the organization. Treasury services in an ERP
system can help a company make smarter decisions by having the capability to
proactively monitor and adjust currency and interest rate exposure across the entire
enterprise while complying with internal risk policies. Additionally, visibility to
real-time data enables a company to make informed investing and borrowing
decisions on a timelier basis. Other treasury operations can be automated to simplify the administration of debt, investments, foreign exchange, equities, and derivatives while performing straight-through processing to enforce security and limit controls (adapted from SAP, Inc.). Oftentimes, companies
operate shared services with their subsidiary operations or centralized organization
functions. ERP systems provide shared services capabilities that can reduce a
company's costs by automating, centralizing, and standardizing global
transactional processes. In addition, ERP systems provide the ability to centralize
liquidity and act as an in-house bank to subsidiaries, administer inter-company
loans, and optimize excess funds across the enterprise. Different areas of the
company receive business value from the implementation of ERP systems. For
inbound logistics, ERP systems provide improved communication and integration
with suppliers, enhanced raw material management, and value-added management
of accounts payable (Davenport, Harris, & Cantrell, 2002). The system creates
transparency across a company’s entire purchasing process, including better
tracking of raw materials, improved inventory management, lot size planning
integration, and matching process documentation (Matolcsy, Booth, & Wieder,
2005). Accounts payable have automation tools to process vendor payments more
quickly by way of ERP systems. Marketing, sales, and distribution functional areas derive benefit and value from ERP systems through promotion and advertising activities integrated with item inventory levels and production schedules. These areas benefit because there is a better idea of what can be promised to the customer.
process of the cash-to-cash cycle. This is a prime case of business process
integration.
To achieve business process integration, it may be necessary to perform
business process re-engineering (BPR). BPR is an integral part of an ERP
implementation and represents a fundamental rethinking of the company’s current
way of doing business. BPR is defined by Hammer and Champy (1993) as “the
fundamental rethinking and radical redesign of business processes to achieve
dramatic improvements in critical, contemporary measures of performance, such as
cost, quality, service and speed” (p. 32). The essential features and benefits of a
bundled ERP packaged software application are already developed based on
industry best practices. For companies to take full advantage of the many benefits
offered by an ERP system, business process reengineering is required to address
the gaps in business practices, leveraging the functionality of the new ERP
packaged application. Most company business processes are procedurally similar
but industry uniqueness, distinct practices, and size play a significant role in the
gaps that a company must re-engineer for an ERP system implementation.
Research has found that successful ERP projects result when companies are
involved in BPR and BPR is included in the ERP selection (Tsai, Chen, Hwang, &
Hsu, 2010; Muscatello, Small, & Chen, 2003). Companies that adapt organization
processes to increase information flow across business organizations achieve
greater success with IT investments than if they had launched the ERP software
alone. By changing business processes to align with the new ERP system, a
company can dramatically change the value derived from the technology and scale operations profitably. The ERP system usually consists of several functional modules that are deployed and integrated, generally by business process (fig. 4.3).
The ERP implementation creates cross-module integration, data standardization,
and industry best practices, which are all combined into a timeline involving a
large number of resources.
The business process “as-is” state and information flows between various
business operations are examined for scope of the implementation. The “as-is”
process model is developed by examining the layers of the “as-is” process, and
focuses on the most important or major areas of concern (Ridgman, 1996). Often
processes evolve to solve an immediate customer issue, operational problem, or
some other concern that addresses the way a company conducts its business
(Okrent & Vokurka, 2004). An understanding of why a process is performed in a
particular way helps to identify the non-value added work for simplification of the
process and improved task workflow.
Fig. 4.3 – Several functional modules that are deployed and integrated generally by business
process
company accounts payable department. When goods are received or services are
performed, a confirmation transaction takes place to alert of completion. Matching
is done and a check is prepared and automatically sent to the vendor in the ERP
system. The automated process enables accuracy of information, and eliminates
redundancy of data and potential delay of payment. Due to the characteristic nature
of ERP system cross-module integration features, the more modules selected for
implementation, the greater the integration benefits. However, with the increased benefits come increased complexity and the need for care to ensure minimum risk in correctly mapping a company's business processes to the ERP system processes.
Implementing the processes incorrectly can lead to poor integration between
modules in the system, leading to significant operational deficiency. Additionally,
there exists considerable risk in changing multiple processes at a time
(Subramoniam, Tounsi, & Krishnankutty, 2009). The risk is certain to increase if a
fallback plan is non-existent. An industry best practice of streamlining and
simplifying business processes ahead of time may mitigate the risk. Prior research
has concluded that the higher a company’s process complexity, the higher the
radicalness of its ERP implementation to enable fundamental and radical change in
the company’s operational performance (Karimi, Somers, & Bhattacherjee, 2007).
However, many common business process challenges may be ameliorated if
addressed appropriately. Listed below (fig. 4.4) are a few typical ERP business process challenges faced in business process integration, together with suggested resolutions.
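The automated accounts-payable matching described at the start of this section can be sketched in Python as follows. It is purely illustrative: real ERP modules implement far richer three-way matching, and every record here is invented.

def match_and_pay(purchase_order: dict, goods_receipt: dict, invoice: dict):
    # Automated matching: release payment only when the order, the receipt
    # confirmation and the invoice agree on the essential fields.
    keys = ("vendor", "item", "quantity")
    if all(purchase_order[k] == goods_receipt[k] == invoice[k] for k in keys):
        return {"pay_to": invoice["vendor"], "amount": invoice["amount"]}
    return None   # mismatch: hold the payment for manual review instead

po      = {"vendor": "ACME", "item": "paper", "quantity": 100}
receipt = {"vendor": "ACME", "item": "paper", "quantity": 100}
invoice = {"vendor": "ACME", "item": "paper", "quantity": 100, "amount": 250.0}

print(match_and_pay(po, receipt, invoice))   # -> payment released automatically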
Previous research has indicated that an ERP system meets only 80% of the
company’s functional requirements (Subramoniam et al., 2009). A gap exists
between company requirements and the proposed ERP solution. What is practiced
by most companies is listed below, based on a survey by Forrester Research (Lamonica, 1998; O'Leary, 2000).
There are many enterprise application integration (EAI) tools, structured
methodologies, and systematic procedures available to facilitate business process
integration. Companies typically approach business process integration based on
their organizational needs and constraints (Subramoniam et al., 2009). Competitive
pressure and system compatibility in business processes significantly explain the
success of ERP systems (Elbertsen & Van Reekum, 2008). Organizations like
Owens Corning (Bancroft , Seip, & Sprengel 1998; Romei, 1996), the State of
Kentucky (Henry, 1998), Eastman Kodak (Stevens, 1997), and NEC Technologies
(Bancroft et al., 1998) have all effectively integrated business process into the
implementation of their ERP system. Owens Corning began its business process
integration efforts by establishing a global supply-chain perspective that would fit
all its business unit improvements (Bancroft et al., 1998; Romei, 1996; Anita,
1996). Design teams worked in parallel to address integration issues across process
boundaries. A standard business process integration methodology using benchmark
data to design the process integration was used. In another example, the State of
Kentucky’s (Henry, 1998) enterprise ERP solution included financial, budget, and
procurement functionality. Their business processes required radical changes,
using a technical tool to change business processes, streamline government
administrative procedures, and cut costs.
Academic institutions do not have the same skilled leaders, organizational
structures, processes, and size as commercial organizations, and may not
necessitate the same level of requirements for their ERP alignment.
After selecting the needed modules, SPI went live with an ERP implementation. SPI successfully
completed two years of ERP operational use without any disruption since
implementing in 2008. Now SPI can transact and process payments or receipts in
any currency. The company has a better view of its financials and expense data
than in the past. The ERP system has provided SPI with the ability to better
manage their customers and increase profits.
management, and tax collection (Effective information management is key to BI
success, 2010). The impact of BI on a company’s bottom line is so significant
that employers are requesting that more and more graduates have BI experience.
challenges of MES implementation while discussing the MES design process. The
studies also address the use of reference models (standards) in MES development
methodology. A study by Scholten & Schneider proposed using ISA-95 as a
guide in defining the requirements of an MES. Another study by Govindaraju et al.
developed a methodology for MES design utilizing ISA-95; that study focused
on how ISA-95 can be utilized for determining the MES requirement specification,
addressing different parts of the ISA-95 standards in executing different steps of
the MES design process. The purpose of the study reported in this paper is to
develop a methodology for an MES implementation project, covering the system
design and implementation (construction) stages, which is an extension of the
earlier study by Govindaraju et al. [8].
A reference model for MES implementation such as ISA-95 is needed [10], in
order to help manufacturing companies achieve the expected benefits mentioned above.
The models in ISA-95 which show the hierarchy of the physical assets of
enterprises engaged in manufacturing activities can be used to determine the
physical boundary of the MES system [7]. Figure 2 shows the hierarchical model
of equipment.
– Analyze MES functional requirements. Information on manufacturing
operations management (MOM) contained in the document ISA-95 part 3 can be
used as a guide to analyze the system functional requirements [7]. The MOM
model contains a description of the functional aspects of MES. The diagram that
can be used to analyze the functional requirements of the system is the use case
diagram [12].
• MES design. There are two activities performed at this step:
– Generic design. Generic design is divided into two parts: a generic
function model and a generic sequence diagram.
– Generic functional model. The activity models of manufacturing
operations management in ANSI/ISA-95 part 1 (Models and
terminology, 2000) help to identify the main manufacturing
operations management related activities. They also help to identify
the information flowing through the activities of the company. A
boundary is represented to differentiate between activities at level 3
and activities at level 4.
Only a few activities are carried out at both levels. IDEF0 is chosen to model
the functional requirements of the system. The detailed level of the modeling is
determined by the development team. A generic IDEF0 functional model is
defined, covering all level 3 activities and their communications with some of the
level 4 activities. With ISA-95, the functional model is developed in such a way
that it separates the business processes from the manufacturing processes. This way,
it allows changes in production processes (level 3) to take place without requiring
unnecessary changes at the business level (level 4).
Information about the order in which different activities are carried out in the
manufacturing process provides a behavioral perspective on the execution of the
activities. In this stage, UML sequence diagrams are used to show which message
transfers take place and how communication evolves among the different actors
involved to carry out each activity [12]. The generic sequence diagrams defined in
this step describe all information exchange between level 3 and 4 of the company,
taking into account the activities and objects previously identified in generic
IDEF0 diagrams. A detailed model description illustrating standard data flow
between the functions for production plants is described in ISA-95 (see Figure 4).
The dotted lines define the interface between levels 3 and 4. The arrows show the
flow of data between the levels.
– Specific design. Specific design is divided into two parts: a specific
function model and a specific sequence diagram.
Using the generic diagrams as a reference, specific UML sequence diagrams
(To-Be) are modeled in order to define clearly the information exchanges that are
desired to occur within the enterprise.
5.3. Methodology
In order to check the appropriateness of the methodology developed in this
study, an empirical investigation was done at a steel manufacturing company. The
investigation was done through in-depth interviews with the MES project manager
and a series of discussions with a number of key MES project members. From the
investigation, a number of findings were collected, explaining how the execution
of the steps and their sequence in the proposed methodology was considered
appropriate and recommended for a smooth MES implementation process.
Besides, findings related to important risks or problems to be anticipated in each
step of the methodology were also collected. Based on the findings,
recommendations for improvements in the developed MES implementation
methodology were generated. 5. Empirical Investigation at a Steel Manufacturing
Implementation Case Empirical investigations were done at a steel manufacturing
company in Indonesia (SteelCo). The company is currently in the process of
finishing its MES implementation project, which was started in the year 2012. The
scope of MES implementation project covers Production Operations Management,
Quality Management, and Inventory Management. As mentioned in the project
documentation, by implementing MES the company aims to support the
improvement of supply chain performance, through the use of a more integrated
solution with real time information, to enable realistic business decisions. At the
moment the investigation was done, the implementation process had entered the
deployment phase. Two important issues in the initial assessment stage are scoping
and defining the user requirements. The case company experience shows that it is
very important that management is able to define properly the scope and extent of
changes that are brought by MES implementation, before formally planning the
implementation project. The case also shows that in defining system requirements,
requirement elicitation is an important challenge. Requirement elicitation is the
activity of discovering and gathering relevant information from users, customers, and
other stakeholders who have direct or indirect influence on the performance of the
system [13]. An effective method is needed to support the company in finding the
right information from the right stakeholders (actors) involved. In the case study, a
series of workshops was executed for requirement elicitation purposes. Different
topics were discussed within big groups. The discussion topics were divided based
on the modules to be developed. For each module, workshops were executed to first
discuss the old systems and their problems, followed by the basic concepts of best
practices provided by the software vendor. The workshops were executed for all the
modules. The workshops were successfully executed, but the results seemed
unsatisfying. The key users from the case company were not able to see
clearly what gaps were to be filled by the MES implementation and what
functionalities the systems should provide in order to get a comprehensive solution
for the company’s problems.
The MES design stage is divided into two stages: basic design and detailed design.
The activities performed at the basic design stage are the documentation of the
system design using descriptions and flow chart diagrams. The activities carried
out in the detailed design stage are drafting detailed MES system and interface
requirements using UML diagrams. Design is generally done by using the ISA-95
standard. The design activities on this project are grouped slightly differently from
the phases of the proposed methodology. However, in general, the activities
carried out in the design phase are in line with the activities in the
MES design methodology developed. One important thing that needs to be
underlined related to the design of the system is the importance of clearly defining
the mapping between the system features (functionalities) and user groups (actors).
Developing use case diagrams in this case becomes an important part of the system
design, in addition to the manufacturing process activity mapping using IDEF and
the sequence of events mapping using sequence diagrams. The development of the
use case diagram is necessary to determine a division of tasks and actors, which is
needed to ensure that no conflicts arise from different users (subsystems)
performing the same functions, and also to assure that all functions are assigned to
certain user groups. From interviews and discussions during the investigation, it
was found that to smoothen the implementation process, it is important to add one
more step after the MES application is built and tested, before the implementation
process moves to final deployment step. The additional step is needed to create a
pilot case (pilot deployment) and do a comprehensive review on the pilot
deployment, before entering the full system deployment. A proper pilot system
covering end to end processes needs to be developed, in order to ensure success of
the overall MES deployment. Pilot deployment determines how well current
requirements fit into an MES and validates the integration strategy (to level 4
system as well as level 2 system), before overall deployment takes place. In order
to make sure that pilot deployment takes place in a proper way, different actors
need to be involved. They are: project leader, ERP experts (because of the
integration with ERP), automation experts, QA/QC, integration experts (XI
experts, etc.) and shop floor automation/SFA experts (because of the integration
with SFA systems). For SAP implementing companies such as SteelCo, ERP
experts to be involved are the experts for MM/PP/PI, QM and APO modules. With
the addition of pilot deployment step, change management needs to be executed at
the later step (deployment step), considering that change management should
consider the results of the comprehensive review of pilot deployment. Final data
migration needs to be finalized at the (final) deployment step, after all the
important logic of system integration has been tested through the pilot deployment.
Thus, the (final) deployment step will include cut-over preparation and testing,
final data migration, final change management, and training.
– A communications system used to transfer data between field data
interface devices and control units and the computers in the SCADA central host.
The system can be radio, telephone, cable, satellite, etc., or any combination of
these.
– A central host computer server or servers (sometimes called a SCADA
Center, master station, or Master Terminal Unit (MTU))
– A collection of standard and/or custom software [sometimes called
Human Machine Interface (HMI) software or Man Machine Interface (MMI)
software] systems used to provide the SCADA central host and operator terminal
application, support the communications system, and monitor and control remotely
located field data interface devices
Figure 6.1 shows a very basic SCADA system, while Figure 6.2 shows a
typical SCADA system. Each of the above system components will be discussed in
detail in the next sections.
6.1. Field Data Interface Devices
Field data interface devices form the "eyes and ears" of a SCADA system.
Devices such as reservoir level meters, water flow meters, valve position
transmitters, temperature transmitters, power consumption meters, and pressure
meters all provide information that can tell an experienced operator how well a
water distribution system is performing. In addition, equipment such as electric
valve actuators, motor control switchboards, and electronic chemical dosing
facilities can be used to form the "hands" of the SCADA system and assist in
automating the process of distributing water.
However, before any automation or remote monitoring can be achieved, the
information that is passed to and from the field data interface devices must be
converted to a form that is compatible with the language of the SCADA system.
To achieve this, some form of electronic field data interface is required. RTUs,
also known as Remote Telemetry Units, provide this interface. They are primarily
used to convert electronic signals received from field interface devices into the
language (known as the communication protocol) used to transmit the data over a
communication channel.
The instructions for the automation of field data interface devices, such as
pump control logic, are usually stored locally. This is largely due to the limited
bandwidth typical of communications links between the SCADA central host
computer and the field data interface devices. Such instructions are traditionally
held within the PLCs, which have in the past been physically separate from RTUs.
A PLC is a device used to automate monitoring and control of industrial facilities.
It can be used as a stand-alone or in conjunction with a SCADA or other system.
PLCs connect directly to field data interface devices and incorporate programmed
intelligence in the form of logical procedures that will be executed in the event of
certain field conditions.
PLCs have their origins in the automation industry and therefore are often
used in manufacturing and process plant applications. The need for PLCs to
connect to communication channels was not great in these applications, as they
often were only required to replace traditional relay logic systems or pneumatic
controllers. SCADA systems, on the other hand, have origins in early telemetry
applications, where it was only necessary to know basic information from a remote
source. The RTUs connected to these systems had no need for control
programming because the local control algorithm was held in the relay switching
logic.
As PLCs were used more often to replace relay switching logic control
systems, telemetry was used more and more with PLCs at the remote sites. It
became desirable to influence the program within the PLC through the use of a
remote signal. This is in effect the "Supervisory Control" part of the acronym
SCADA. Where only a simple local control program was required, it became
possible to store this program within the RTU and perform the control within that
device. At the same time, traditional PLCs included communications modules that
would allow PLCs to report the state of the control program to a computer plugged
into the PLC or to a remote computer via a telephone line. PLC and RTU
manufacturers therefore compete for the same market.
As a result of these developments, the line between PLCs and RTUs has
blurred and the terminology is virtually interchangeable. For the sake of simplicity,
the term RTU will be used to refer to a remote field data interface device; however,
such a device could include automation programming that traditionally would have
been classified as a PLC.
Historically, SCADA networks have been dedicated networks; however, with
the increased deployment of office LANs and WANs as a solution for interoffice
computer networking, there exists the possibility to integrate SCADA LANs into
everyday office computer networks.
The foremost advantage of this arrangement is that there is no need to invest
in a separate computer network for SCADA operator terminals. In addition, there
is an easy path to integrating SCADA data with existing office applications, such
as spreadsheets, work management systems, data history databases, Geographic
Information System (GIS) systems, and water distribution modeling systems.
6.4. Operator workstations and software components
Operator workstations are most often computer terminals that are networked
with the SCADA central host computer. The central host computer acts as a server
for the SCADA application, and the operator terminals are clients that request and
send information to the central host computer based on the request and action of
the operators.
An important aspect of every SCADA system is the computer software used
within the system. The most obvious software component is the operator interface
or Man Machine Interface/Human Machine Interface (MMI/HMI) package;
however, software of some form pervades all levels of a SCADA system.
Depending on the size and nature of the SCADA application, software can be a
significant cost item when developing, maintaining, and expanding a SCADA
system. When software is well defined, designed, written, checked, and tested, a
successful SCADA system will likely be produced. Poor performance in any of
these project phases will very easily cause a SCADA project to fail.
Many SCADA systems employ commercial proprietary software upon which
the SCADA system is developed. The proprietary software often is configured for
a specific hardware platform and may not interface with the software or hardware
produced by competing vendors. A wide range of commercial off-the-shelf
(COTS) software products also are available, some of which may suit the required
application. COTS software usually is more flexible, and will interface with
different types of hardware and software. Generally, the focus of proprietary
software is on processes and control functionality, while COTS software
emphasizes compatibility with a variety of equipment and instrumentation. It is
therefore important to ensure that adequate planning is undertaken to select the
software systems appropriate to any new SCADA system.
Software products typically used within a SCADA system are as follows:
1. Central host computer operating system: Software used to control the
central host computer hardware. The software can be based on UNIX or other
popular operating systems.
2. Operator terminal operating system: Software used to control the
operator terminal hardware. The software is usually the same as the central
host computer operating system. This software, along with that for the central host
computer, usually contributes to the networking of the central host and the operator
terminals.
3. Central host computer application: Software that handles the
transmittal and reception of data to and from the RTUs and the central host. The
software also provides the graphical user interface which offers site mimic screens,
alarm pages, trend pages, and control functions.
4. Operator terminal application: Application that enables users to access
information available on the central host computer application. It is usually a
subset of the software used on the central host computers.
5. Communications protocol drivers: Software that is usually based
within the central host and the RTUs, and is required to control the translation and
interpretation of the data between ends of the communications links in the system.
The protocol drivers prepare the data for use either at the field devices or the
central host end of the system.
6. Communications network management software: Software required to
control the communications network and to allow the communications networks
themselves to be monitored for performance and failures.
7. RTU automation software: Software that allows engineering staff to
configure and maintain the application housed within the RTUs (or PLCs). Most
often this includes the local automation application and any data processing tasks
that are performed within the RTU.
The preceding software products provide the building blocks for the
application-specific software, which must be defined, designed, written, tested, and
deployed for each SCADA system.
The Wide Area Networks (WANs) that were implemented to communicate with remote terminal
units (RTUs) were designed with a single purpose in mind–that of communicating
with RTUs in the field and nothing else. In addition, WAN protocols in use today
were largely unknown at the time. The communication protocols in use on
SCADA networks were developed by vendors of RTU equipment and were often
proprietary. In addition, these protocols were generally very “lean”, supporting
virtually no functionality beyond that required for scanning and controlling points
within the remote device. Also, it was generally not feasible to intermingle other
types of data traffic with RTU communications on the network. Connectivity to the
SCADA master station itself was very limited by the system vendor. Connections
to the master typically were done at the bus level via a proprietary adapter or
controller plugged into the Central Processing Unit (CPU) backplane. Redundancy
in these first generation systems was accomplished by the use of two identically
equipped mainframe systems, a primary and a backup, connected at the bus level.
The standby system’s primary function was to monitor the primary and take over
in the event of a detected failure. This type of standby operation meant that little or
no processing was done on the standby system. Figure 6.3 shows a typical first
generation SCADA architecture.
Multiple stations, each with a specific function, were connected to a LAN and
shared information with each other in real-time. These stations were typically of the mini-computer
class, smaller and less expensive than their first generation processors. Some of
these distributed stations served as communications processors, primarily
communicating with field devices such as RTUs. Some served as operator
interfaces, providing the human-machine interface (HMI) for system operators.
Still others served as calculation processors or database servers.
The distribution of individual SCADA system functions across multiple
systems provided more processing power for the system as a whole than would
have been available in a single processor. The networks that connected these
individual systems were generally based on LAN protocols and were not capable
of reaching beyond the limits of the local environment. Some of the LAN protocols
that were used were of a proprietary nature, where the vendor created its own
network protocol or version thereof rather than pulling an existing one off the
shelf. This allowed a vendor to optimize its LAN protocol for real-time traffic, but
it limited (or effectively eliminated) the connection of equipment from other vendors
to the SCADA LAN. Figure 6.4 depicts typical second generation SCADA
architecture.
Distribution of system functionality across network-connected systems served
not only to increase processing power, but also to improve the redundancy and
reliability of the system as a whole. Rather than the simple primary/standby
failover scheme that was utilized in many first generation systems, the distributed
architecture often kept all stations on the LAN in an online state all of the time.
For example, if an HMI station were to fail, another HMI station could be
used to operate the system, without waiting for failover from the primary system to
the secondary. The WANs used to communicate with devices in the field were
largely unchanged by the development of LAN connectivity between local stations
at the SCADA master. These external communications networks were still limited
to RTU protocols and were not available for other types of network traffic. As was
the case with the first generation of systems, the second generation of SCADA
systems was also limited to hardware, software, and peripheral devices that were
provided or at least selected by the vendor.
Figure 6.5 – Third generation SCADA architecture
7.1.1. Controllers
What type of task might a control system have? It might be required to control
a sequence of events or maintain some variable constant or follow some prescribed
change. For example, the control system for an automatic drilling machine (Figure
7.1(a)) might be required to start lowering the drill when the workpiece is in
position, start drilling when the drill reaches the workpiece, stop drilling when the
drill has produced the required depth of hole, retract the drill and then switch off
and wait for the next workpiece to be put in position before repeating the
operation. Another control system (Figure 7.1(b)) might be used to control the
number of items moving along a conveyor belt and direct them into a packing case.
The inputs to such control systems might be from switches being closed or opened,
e.g. the presence of the workpiece might be indicated by it moving against a switch
and closing it, or other sensors such as those used for temperature or flow rates.
The controller might be required to run a motor to move an object to some
position, or to turn a valve, or perhaps a heater, on or off.
Figure 7.1 An example of a control task and some input sensors: (a) an automatic drilling
machine, (b) a packing system
What form might a controller have? For the automatic drilling machine, we
could wire up electrical circuits in which the closing or opening of switches would
result in motors being switched on or valves being actuated. Thus we might have
the closing of a switch activating a relay which, in turn, switches on the current to
a motor and causes the drill to rotate (Figure 7.2). Another switch might be used to
activate a relay and switch on the current to a pneumatic or hydraulic valve which
results in pressure being switched to drive a piston in a cylinder and so results in
the workpiece being pushed into the required position. Such electrical circuits
would have to be specific to the automatic drilling machine. For controlling the
number of items packed into a packing case we could likewise wire up electrical
circuits involving sensors and motors. However, the controller circuits we devised
for these two situations would be different. In the ‘traditional’ form of control
system, the rules governing the control system and when actions are initiated are
determined by the wiring. When the rules used for the control actions are changed,
the wiring has to be changed.
Figure 7.2 A control circuit
A programmable logic controller (PLC) uses a programmable memory to store
instructions and to implement functions such as logic, sequencing, timing and
counting in order to control machines and processes (Figure 7.3). PLCs are designed to be
operated by engineers with perhaps a limited knowledge of computers and
computing languages. They are not designed so that only computer programmers
can set up or change the programs. Thus, the designers of the PLC have pre-
programmed it so that the control program can be entered using a simple, rather
intuitive, form of language. The term logic is used because programming is
primarily concerned with implementing logic and switching operations, e.g. if A or
B occurs switch on C, if A and B occurs switch on D. Input devices, e.g. sensors
such as switches, and output devices in the system being controlled, e.g. motors,
valves, etc., are connected to the PLC. The operator then enters a sequence of
instructions, i.e. a program, into the memory of the PLC. The controller then
monitors the inputs and outputs according to this program and carries out the
control rules for which it has been programmed.
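The behaviour just described can be illustrated with a small Python sketch of the PLC scan cycle, using the example rules from the text (if A or B occurs switch on C, if A and B occur switch on D). The input values are simulated; this is a model of the idea, not real controller firmware.

import time

def read_inputs():
    # Placeholder: in a real PLC these values come from the input interface.
    return {"A": True, "B": False}

def user_program(inp):
    # The programmed rules: if A or B then C; if A and B then D.
    return {"C": inp["A"] or inp["B"],
            "D": inp["A"] and inp["B"]}

def write_outputs(out):
    # Placeholder: in a real PLC this drives the output interface.
    print(out)

# The PLC repeats the scan cycle: read inputs, solve the logic, update outputs.
for _ in range(3):          # three scan cycles for demonstration
    write_outputs(user_program(read_inputs()))
    time.sleep(0.01)        # scan time is typically a few milliseconds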
PLCs have the great advantage that the same basic controller can be used with
a wide range of control systems. To modify a control system and the rules that are
to be used, all that is necessary is for an operator to key in a different set of
instructions. There is no need to rewire. The result is a flexible, cost effective,
system which can be used with control systems which vary quite widely in their
nature and complexity. PLCs are similar to computers but whereas computers are
optimised for calculation and display tasks, PLCs are optimised for control tasks
and the industrial environment. Thus PLCs are:
Rugged and designed to withstand vibrations, temperature, humidity and
noise. Have interfacing for inputs and outputs already inside the controller.
Are easily programmed and have an easily understood programming language
which is primarily concerned with logic and switching operations.
The first PLC was developed in 1969. They are now widely used and extend
from small self-contained units for use with perhaps 20 digital inputs/outputs to
modular systems which can be used for large numbers of inputs/outputs, handle
digital or analogue inputs/outputs, and also carry out proportional-integral-
derivative control modes.
7.2. Hardware
Typically a PLC system has the basic functional components of processor
unit, memory, power supply unit, input/output interface section, communications
interface and the programming device. Figure 7.4 shows the basic arrangement.
The processor unit or central processing unit (CPU) is the unit containing the
microprocessor and this interprets the input signals and carries out the control
actions, according to the program stored in its memory, communicating the
decisions as action signals to the outputs.
The power supply unit is needed to convert the mains A.C. voltage to the low
D.C. voltage (5 V) necessary for the processor and the circuits in the input and
output interface modules.
The programming device is used to enter the required program into the
memory of the processor. The program is developed in the device and then
transferred to the memory unit of the PLC.
The memory unit is where the program is stored that is to be used for the
control actions to be exercised by the microprocessor and data stored from the
input for processing and for the output for outputting.
The input and output sections are where the processor receives information
from external devices and communicates information to external devices. The
inputs might thus be from switches, as illustrated in Figure 7.1(a) with the
automatic drill, or other sensors such as photo-electric cells, as in the counter
mechanism in Figure 7.1(b), temperature sensors, or flow sensors, etc. The outputs
might be to motor starter coils, solenoid valves, etc.
Input and output devices can be classified as giving signals which are
discrete, digital or analogue (Figure 7.5). Devices giving discrete or digital signals
are ones where the signals are either off or on. Thus a switch is a device giving a
discrete signal, either no voltage or a voltage. Digital devices can be considered to
be essentially discrete devices which give a sequence of on−off signals. Analogue
devices give signals whose size is proportional to the size of the variable being
monitored. For example, a temperature sensor may give a voltage proportional to
the temperature.
7.3. Internal architecture
Figure 7.7 shows the basic internal architecture of a PLC. It consists of a
central processing unit (CPU) containing the system microprocessor, memory, and
input/output circuitry. The CPU controls and processes all the operations within
the PLC. It is supplied with a clock with a frequency of typically between 1 and 8
MHz. This frequency determines the operating speed of the PLC and provides the
timing and synchronisation for all elements in the system. The information within
the PLC is carried by means of digital signals. The internal paths along which
digital signals flow are called buses. In the physical sense, a bus is just a number of
conductors along which electrical signals can flow. It might be tracks on a printed
circuit board or wires in a ribbon cable. The CPU uses the data bus for sending
data between the constituent elements, the address bus to send the addresses of
locations for accessing stored data and the control bus for signals relating to
internal control actions. The system bus is used for communications between the
input/output ports and the input/output unit.
In general, the internal structure of the CPU includes:
1. An arithmetic and logic unit (ALU) that is responsible for data manipulation
and for carrying out arithmetic operations of addition and subtraction and logic
operations of AND, OR, NOT and EXCLUSIVE-OR.
2. Memory, termed registers, located within the microprocessor
and used to store information involved in program execution.
3. A control unit which is used to control the timing of operations.
7.3.3. Memory
There are several memory elements in a PLC system:
– System read-only-memory (ROM) to give permanent storage for the
operating system and fixed data used by the CPU.
– Random-access memory (RAM) for the user’s program.
– Random-access memory (RAM) for data. This is where information is
stored on the status of input and output devices and the values of timers and
counters and other internal devices. The data RAM is sometimes referred to as a
data table or register table. Part of this memory, i.e. a block of addresses, will be
set aside for input and output addresses and the states of those inputs and outputs.
Part will be set aside for preset data and part for storing counter values, timer
values, etc.
– Possibly, as a bolt-on extra module, erasable programmable read-only
memory (EPROM), i.e. ROM that can be programmed and the program then
made permanent.
The programs and data in RAM can be changed by the user. All PLCs will
have some amount of RAM to store programs that have been developed by the user
and program data. However, to prevent the loss of programs when the power
supply is switched off, a battery is used in the PLC to maintain the RAM contents
for a period of time. After a program has been developed in RAM it may be loaded
into an EPROM memory chip, often a bolt-on module to the PLC, and so made
permanent. In addition there are temporary buffer stores for the input/output
channels. The storage capacity of a memory unit is determined by the number of
binary words that it can store. Thus, if a memory size is 256 words then it can store
256 × 8 = 2048 bits if 8-bit words are used and 256 × 16 = 4096 bits if 16-bit
words are used. Memory sizes are often specified in terms of the number of storage
locations available, with 1K representing 2^10, i.e. 1024. Manufacturers
supply memory chips with the storage locations grouped in groups of 1, 4 and 8
bits. A 4K × 1 memory has 4 × 1 × 1024 bit locations. A 4K × 8 memory has
4 × 8 × 1024 bit locations. The term byte is used for a word of length 8 bits. Thus
the 4K × 8 memory can store 4096 bytes. With a 16-bit address bus we can have
2^16 different addresses and so, with 8-bit words stored at each address, we can
have 2^16 × 8 storage locations, i.e. a memory of size 2^16 × 8 / 2^10 = 64K × 8,
which might be implemented as four 16K × 8 memory chips.
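The memory-size arithmetic above can be verified with a few lines of Python:

K = 2 ** 10                    # 1K = 1024 storage locations

print(256 * 8)                 # 256 words of 8 bits  -> 2048 bits
print(256 * 16)                # 256 words of 16 bits -> 4096 bits
print(4 * K * 1)               # 4K x 1 memory -> 4096 bit locations
print((4 * K * 8) // 8)        # 4K x 8 memory -> 4096 bytes
addresses = 2 ** 16            # a 16-bit address bus gives 65536 addresses
print(addresses // (16 * K))   # 64K x 8 = four 16K x 8 memory chips -> 4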
Every input/output point has a unique address which can be used by the
CPU. It is like a row of houses along a road, number 10 might be the ‘house’ to be
used for an input from a particular sensor while number ‘45’ might be the ‘house’
to be used for the output to a particular motor. The input/output channels provide
isolation and signal conditioning functions so that sensors and actuators can often
be directly connected to them without the need for other circuitry. Electrical
isolation from the external world is usually by means of optoisolators (the term
optocoupler is also often used). Figure 7.8 shows the principle of an optoisolator.
When a digital pulse passes through the light-emitting diode, a pulse of infrared
radiation is produced. This pulse is detected by the phototransistor and gives rise to
a voltage in that circuit. The gap between the light-emitting diode and the
phototransistor gives electrical isolation but the arrangement still allows for a
digital pulse in one circuit to give rise to a digital pulse in another circuit.
The digital signal that is generally compatible with the microprocessor in the
PLC is 5 V d.c. However, signal conditioning in the input channel, with isolation,
enables a wide range of input signals to be supplied to it. A range of inputs might
be available with a larger PLC, e.g. 5 V, 24 V, 110 V and 240 V digital/discrete,
i.e. on−off, signals (Figure 7.9). A small PLC is likely to have just one form of
input, e.g. 24 V.
The output from the input/output unit will be digital with a level of 5 V.
However, after signal conditioning with relays, transistors or triacs, the output
from the output channel might be a 24 V, 100 mA switching signal, a d.c. voltage
of 110 V, 1 A or perhaps 240 V, 1 A a.c., or 240 V, 2 A a.c., from a triac output
channel (Figure 7.10). With a small PLC, all the outputs might be of one type, e.g.
240 V a.c., 1 A. With modular PLCs, however, a range of outputs can be
accommodated by selection of the modules to be used.
The terms sourcing and sinking are used to describe the way in which d.c. devices
are connected to a PLC. With sourcing, using the conventional current flow
direction as from positive to negative, an input device receives current from
the input module, i.e. the input module is the source of the current (Figure 7.11(a)).
If the current flows from the output module to an output load then the output
module is referred to as sourcing (Figure 7.11(b)). With sinking, using the
conventional current flow direction as from positive to negative, an input device
supplies current to the input module, i.e. the input module is the sink for the
current (Figure 7.12(a)). If the current flows to the output module from an output
load then the output module is referred to as sinking (Figure 7.12(b)).
Smart relays: These have grown more capable over the years, blurring the line
between them and micro PLCs. Smart relays can be programmed with PC-based
software, but many can also be programmed from their front panel display. Ladder
logic or function block is the language of choice, and analog capabilities range
from slim to none.
PLCs: These workhorses run the range from micros with about 32 built-in
input/output (I/O) points to full-featured systems capable of handling thousands of
I/O. PLCs are programmed with PC-based software, and any changes to the
program require a PC. But many parameters can be adjusted from a local operator
interface, which is built in on combination PLC/human machine interface (HMI)
units, an emerging class of controllers combining a PLC with a graphical
interface (Table 1).
Table 1: Controller selection criteria
Characteristic | Relay/timer | Smart relay | PLC | PAC
Maximum I/O | 10 | 20 | Up to 2,000 | Up to 100,000
Footprint | Largest | Smallest | Depends on I/O quantity | Depends on I/O quantity
Local expansion capability | n/a | n/a | Medium | High
Remote expansion capability | n/a | n/a | Medium | High
Programming languages | n/a | Ladder, some function blocks | Ladder and maybe other specialty function blocks | Multiple: ladder, structured text, function block, etc.
Programming software cost | n/a | Free to low | Free to medium | Medium to high
Hardware cost | Lowest | Low | Medium | High
Program memory | n/a | Low | High | Very high
Ease of use | Easiest | Easy | Medium | Difficult
Flexibility | Very low | Low | High | High
Connectivity to other systems | Hard-wired only | One communication port and protocol | Multiple communication ports and protocols | Multiple communication ports and protocols
PACs: Lines are once again blurred, but this time between high-end PLCs and
PACs. But PACs add more capabilities than PLCs, particularly for control of very
complex systems. PACs can handle advanced motion control, incorporate vision
systems, and perform advanced control of analog loops—a set of tasks that might
unduly burden a PLC.
Distributed control systems (DCS) were intentionally left out of this
discussion because most are now PAC-based. While exceptions abound, these
criteria are a good starting point for controller selection (Table 1). Table 2 shows
the current and anticipated future state of these markets.
Table 2: The current and anticipated future state of markets
Over the years, many new technologies have been applied to PLCs, greatly expanding their
capabilities on an almost continuous basis.
PACs are relatively new to the automation market, using the term coined by
the market research firm ARC in 2001. Since then, there has been no specific
agreement as to what differentiates a PAC from a PLC. Some users feel the term
PAC is simply marketing jargon to describe highly advanced PLCs, while others
believe there is a definite distinction between a PLC and a PAC. In any case,
defining exactly what constitutes a PAC isn’t as important as having users
understand the types of applications for which each is best suited.
Modern PLCs offer faster processors, more communication ports, and greatly
increased memory as compared to older models (see Figure 7.11).
On the other hand, PACs provide a more open architecture and modular
design to facilitate communication and interoperability with other devices,
networks, and enterprise systems. They can be easily used for communicating,
monitoring, and control across various networks and devices because they employ
standard protocols and network technologies such as Ethernet, OPC, and SQL.
PACs also offer a single platform that operates in multiple domains such as
motion, discrete, and process control. Moreover, the modular design of a PAC
simplifies system expansion and makes adding and removing sensors and other
devices easy, often eliminating the need to disconnect wiring. Their modular
design makes it easy to add and effectively monitor and control thousands of I/O
points, a task beyond the reach of most PLCs.
Another key differentiator between a PLC and a PAC is the tag-based
programming offered by a PAC. With a PAC, a single tag-name database can be
used for development, with one software package capable of programming
multiple models. Tags, or descriptive names, can be assigned to functions before
being tied to specific I/O or memory addresses. This makes PAC programming highly
flexible, with easy scalability to larger systems.
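The tag-name idea can be sketched in Python as a small mapping from descriptive names to addresses; the tag names and address strings below are invented for illustration and do not follow any particular vendor's convention.

# Hypothetical tag database: descriptive names mapped to I/O or memory
# addresses. The logic below refers only to the tag names, so it survives
# re-wiring or porting to a larger controller model.
tag_database = {
    "Tank1_Level":   "AI:3",    # analog input channel 3
    "Inlet_Valve":   "DO:12",   # discrete output channel 12
    "High_Level_SP": "N7:0",    # stored setpoint location
}

def address_of(tag):
    """Resolve a descriptive tag name to its configured address."""
    return tag_database[tag]

print("reading", address_of("Tank1_Level"))
print("writing", address_of("Inlet_Valve"))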
For simple applications, such as controlling a basic machine, a PLC is a better
choice than a PAC. Likewise, for most applications that consist primarily of
discrete I/O, a PLC is the best choice—unless there are other extraordinary
requirements such as extensive data handling and manipulation (fig. 7.14).
The choice between a PLC and a PAC is also influenced by factors outside of
specific application requirements. These factors include, but
aren’t limited to, past experience with each platform, price, the level of local
support, and anticipated future growth and changes.
Once a decision is made between a PLC or a PAC, users typically have a wide
range of products from which to choose, even if only a single vendor is being
considered. That’s because PLCs and PACs are typically designed in systems of
scale, meaning there is a family of controllers to choose from that range from
lower I/O count to larger system capacity, with correspondingly more features and
functions as I/O counts and prices increase.
PACs typically scale to larger I/O counts and overall system sizes. This often makes them a better choice for large systems
encompassing several areas of a plant.
While advanced PLCs have increased communication and data handling
options, PACs still offer more built-in features such as USB data logging ports, a
web server to view system data and data log files, and an LCD screen for enhanced
user interface and diagnostics.
PACs are designed to be integrated more tightly with SQL and other
databases. They often are still the choice for process control applications because
they deliver other advantages such as standard 16-bit resolution analog for higher
precision measurements.
Modern PLCs and PACs share many of the same features, and either will
work in many applications.
The final selection will typically be determined by dozens of factors for any
given application and company environment, including functional requirements,
future expansion plans, company/vendor relationships, and past experience with
specific automation platforms.
The data handling, storage, processing power and communication
capabilities of some modern PLCs are approximately equivalent to desktop
computers. PLC-like programming combined with remote I/O hardware allows a
general-purpose desktop computer to overlap some PLCs in certain applications.
Under the IEC 61131-3 standard, PLCs can be programmed using standards-
based programming languages. A graphical programming notation called
Sequential Function Charts is available on certain programmable controllers.
PLCs are well-adapted to a range of automation tasks. These are typically
industrial processes in manufacturing where the cost of developing and
maintaining the automation system is high relative to the total cost of the
automation, and where changes to the system would be expected during its
operational life. PLCs contain input and output devices compatible with industrial
pilot devices and controls; little electrical design is required, and the design
problem centers on expressing the desired sequence of operations in ladder logic
(or function chart) notation. PLC applications are typically highly customized
systems so the cost of a packaged PLC is low compared to the cost of a specific
custom-built controller design. On the other hand, in the case of mass-produced
goods, customized control systems are economical due to the lower cost of the
components, which can be optimally chosen instead of a “generic” solution, and
where the non-recurring engineering charges are spread over thousands of sales.
For high volume or very simple fixed automation tasks, different techniques
are used. For example, a consumer dishwasher would be controlled by an
electromechanical cam timer costing only a few dollars in production quantities.
A microcontroller-based design would be appropriate where hundreds or
thousands of units will be produced and so the development cost (design of power
supplies and input/output hardware) can be spread over many sales, and where the
end-user would not need to alter the control. Automotive applications are an
example; millions of units are built each year, and very few end-users alter the
programming of these controllers. However, some specialty vehicles such as transit
busses economically use PLCs instead of custom-designed controls, because the
volumes are low and the development cost would be uneconomic.
Very complex process control, such as used in the chemical industry, may
require algorithms and performance beyond the capability of even high-
performance PLCs. Very high-speed or precision controls may also require
customized solutions; for example, aircraft flight controls.
PLCs may include logic for a single-variable feedback analog control loop, a
“proportional, integral, derivative” or “PID controller.” A PID loop could be used
to control the temperature of a manufacturing process, for example. Historically
PLCs were usually configured with only a few analog control loops; where
processes required hundreds or thousands of loops, a distributed control system
(DCS) would instead be used. However, as PLCs have become more powerful, the
boundary between DCS and PLC applications has become less clear-cut.
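As an illustration of the single-loop PID control mentioned above, here is a minimal discrete PID step in Python, applied to a toy first-order temperature process; the gains and the process model are arbitrary assumptions made for the sketch.

def pid_step(setpoint, measurement, state, kp=2.0, ki=0.5, kd=0.1, dt=0.1):
    """One execution of a discrete PID calculation.
    state carries (integral, previous_error) between calls."""
    integral, last_error = state
    error = setpoint - measurement
    integral += error * dt
    derivative = (error - last_error) / dt
    output = kp * error + ki * integral + kd * derivative
    return output, (integral, error)

# Toy first-order process: temperature drifts toward the applied power.
temperature, state = 20.0, (0.0, 0.0)
for _ in range(500):
    power, state = pid_step(100.0, temperature, state)
    temperature += 0.05 * (power - (temperature - 20.0)) * 0.1
print(round(temperature, 1))   # settles near the 100 degree setpoint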
Hardware price. Shows what a bare-bones hardware system might cost. We try to
find the cheapest CPU in the line and couple it with the cheapest backplane (if the
line uses one), and if I/O isn't included with the CPU we add the cheapest I/O we
can find.
Ethernet. Does it exist? If so, is it standard or optional? Does the manufacturer
view it as an integral part or an afterthought? Ethernet often enables inexpensive,
simple ways to link PLCs together.
USB. Does it exist? If so, is it standard or optional? Does the manufacturer
view it as an integral part or an afterthought? USB is often used to simplify
programming and provide additional useful features.
Analog, thermocouple and motion control. These are to help you decide if the
hardware provides the options needed for your project. The analog and
thermocouple merits are pretty cut and dried. We have taken our first stab at
motion control; it still errs on the side of manufacturers. For instance, if they claim
built-in high-speed counters, we give it to them, even if they are not very
high-speed at all. Just be careful to check the references on this one.
Software price. We tried to get pricing for basic programming software for a
single user. We added the price of programming cables to this for programmable
relays when we saw some manufacturers giving the software away only to charge
crazy prices for the cables. From working with many packages we must say
that price does not correlate well with quality. Our favorite packages were all over
the price scale (a couple were free). Be sure to use the free trials before
committing to a product line: hardware you set up every now and then, but
software you are stuck looking at for many days.
Tags. Are addresses referred to by user-defined names (tags), or are they referred
to by their locations in memory (e.g. i1, i2, o13)? Tags help make programs easier
to understand.
Subroutines. They help break programs into manageable pieces and enable re-
usability of code. To earn this merit, we required that subroutines be allowed to
call other subroutines and that values be passable by value and by reference.
Seamless data transfer between PLCs. When you have PLCs in different
locations and they need to communicate values, is the setup to handle this dead
simple? If so, the product earns this merit.
7.6. REMOTE TERMINAL UNIT
A remote terminal unit (RTU) is a microprocessor-controlled electronic
device that interfaces objects in the physical world to a distributed control system
or SCADA (supervisory control and data acquisition) system by transmitting
telemetry data to a master system, and by using messages from the master
supervisory system to control connected objects. Other terms that may be used for
an RTU are remote telemetry unit or remote telecontrol unit.
7.6.1. Architecture
An RTU monitors the field digital and analog parameters and transmits data
to the Central Monitoring Station. It contains setup software to connect data input
streams to data output streams, define communication protocols, and troubleshoot
installation problems.
An RTU may consist of one complex circuit card consisting of various
sections needed to do a custom-fitted function, or may consist of many circuit
cards, including a CPU or processing card with communications interface(s), and
one or more of the following: (AI) analog input, (DI) digital input, (DO/CO)
digital or control (relay) output, or (AO) analog output card(s).
An RTU typically provides serial communication ports (RS232, RS422 and
RS485) or an Ethernet link. The system is controlled by firmware, and a real-time
clock with full calendar is used for
accurate time stamping of events. A watchdog timer provides a check that the RTU
program is executing regularly. The RTU program regularly resets the watchdog
timer and if this is not done within a certain time-out period the watchdog timer
flags an error condition and can sometimes reset the CPU.
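The watchdog mechanism can be modelled in a few lines of Python; this is a simplified sketch of the idea, not vendor firmware, and the timeout values are arbitrary.

import time

class WatchdogTimer:
    """Flags an error if it is not reset within the timeout period."""
    def __init__(self, timeout_s):
        self.timeout_s = timeout_s
        self.last_reset = time.monotonic()

    def reset(self):
        # A healthy RTU program calls this regularly.
        self.last_reset = time.monotonic()

    def expired(self):
        return time.monotonic() - self.last_reset > self.timeout_s

wdt = WatchdogTimer(timeout_s=0.5)
for _ in range(5):          # normal operation: each cycle finishes in time
    time.sleep(0.1)
    wdt.reset()
time.sleep(0.6)             # simulated hang: no reset within the time-out
print("watchdog expired:", wdt.expired())   # True -> flag error, maybe reset CPU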
Digital inputs monitor the on/off status of field devices, including electrical
breakers, liquid valve positions, alarm conditions, and mechanical positions of devices.
Analog inputs
An analog input signal is generally a voltage or current that varies over a
defined value range, in direct proportion to a physical process measurement. 4-20
milliamp signals are most commonly used to represent physical measurements like
pressure, flow and temperature. The five main components that make up an analog
input module are as follows:
– Input multiplexer: This samples several analog inputs in turn and
switches each to the output in sequence. The output goes to the
analog-to-digital converter.
– Input signal amplifier: This amplifies the low-level voltages to match
the input range of the board’s A/D converter.
– Sample and hold circuit.
– A/D converter: This measures the input analog voltage and outputs a
digital code corresponding to the input voltage.
– Bus interface and board timing system.
Typical analog input modules features include:
– 8, 16, or 32 analog inputs
– Resolution of 8 to 12 bits
– Range of 4-20 mA
– Input resistance typically 240 kΩ to 1 MΩ
– Conversion rates typically 10 microseconds to 30 milliseconds.
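After analog-to-digital conversion, the 4-20 mA value is scaled linearly into engineering units. A minimal Python sketch, assuming a hypothetical transmitter ranged 0-10 bar:

def scale_4_20ma(current_ma, eng_low, eng_high):
    """Map a 4-20 mA loop current linearly onto an engineering range."""
    if not 4.0 <= current_ma <= 20.0:
        raise ValueError("current outside the 4-20 mA live-zero range")
    fraction = (current_ma - 4.0) / 16.0   # 0.0 at 4 mA, 1.0 at 20 mA
    return eng_low + fraction * (eng_high - eng_low)

# A pressure transmitter ranged 0-10 bar reporting 12 mA:
print(scale_4_20ma(12.0, 0.0, 10.0))       # -> 5.0 bar (mid-scale)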
Typical analog output module features include:
– 8, 16 or 32 analog outputs
– Resolution of 8 or 12 bits
– Conversion rates from 10 microseconds to 30 milliseconds
– Output ranges of 4–20 mA or 0 to 10 V
7.6.6. Communications
An RTU may be interfaced to multiple master stations and IEDs (Intelligent
Electronic Devices) over different communication media (usually serial (RS232,
RS485, RS422) or Ethernet). An RTU may support standard protocols (Modbus,
IEC 60870-5-101/103/104, DNP3, IEC 60870-6-ICCP, IEC 61850, etc.) to
interface with any third-party software.
Data transfer may be initiated from either end using various techniques to
ensure synchronization with minimal data traffic. The master may poll its
subordinate unit (master to RTU, or the RTU may poll an IED) for changes of data
on a periodic basis. Analog value changes will usually be reported only on changes
outside a set limit from the last transmitted value. Digital (status) values observe a
similar technique and only transmit groups (bytes) when one included point (bit)
changes. Another method used is where a subordinate unit initiates an update of
data upon a predetermined change in analog or digital data. With either method,
complete data transmission must be used periodically to ensure full
synchronization and eliminate stale data. Most communication protocols support
both methods, programmable by the installer.
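The report-by-exception technique for analog values amounts to a deadband check against the last transmitted value; a minimal Python sketch (the readings and the deadband are illustrative):

def report_by_exception(samples, deadband):
    """Yield only values differing from the last transmitted value
    by more than the deadband; everything else stays silent."""
    last_sent = None
    for value in samples:
        if last_sent is None or abs(value - last_sent) > deadband:
            last_sent = value
            yield value

readings = [50.0, 50.2, 50.3, 52.1, 52.2, 49.0]
print(list(report_by_exception(readings, deadband=1.0)))
# -> [50.0, 52.1, 49.0]: only changes beyond the set limit are transmitted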
Multiple RTUs or multiple IEDs may share a communications line, in a multi-
drop scheme, as units are addressed uniquely and only respond to their own polls
and commands.
IED communications
IED communications transfer data between the RTU and an IED. This can
eliminate the need for many hardware status inputs, analog inputs, and relay
outputs in the RTU. Communications are accomplished by copper or fibre optic
lines. Multiple units may share communication lines.
Master communications
Master communications are usually to a larger control system in a control
room or a data collection system incorporated into a larger system. Data may be
moved using a copper, fibre optic or radio frequency communication system.
Multiple units may share communication lines.
7.6.8. Applications
Remote monitoring of functions and instrumentation for:
1. Oil and gas (offshore platforms, onshore oil wells)
2. Networks of pump stations (waste water collection, or for water supply)
3. Environmental monitoring systems (pollution, air quality, emissions monitoring)
4. Mine sites
5. Air traffic equipment such as navigation aids (DVOR, DME, ILS, GP)
Remote monitoring and control of functions and instrumentation for:
6. Hydro-graphic systems (water supply, reservoirs, sewage systems)
7. Electrical power transmission networks and associated equipment
8. Natural gas networks and associated equipment
9. Outdoor warning sirens
6. Oleumtech Wireless RTU/Modbus Gateway: Wio wireless RTU
products are low-cost remote terminal units that combine the traditional remote
I/O functionality of a standard
Table 4: Manufacturers of Remote Terminal Units
A network technology that has attracted a considerable amount of interest in the
past few years is Ethernet. Another protocol, which fits onto Ethernet extremely
well, is TCP/IP; being derived from the Internet, it is very popular and widely used.
Figure 8.1 OSI model representation: two hosts interconnected via a router
Figure 8.2 Basic structure of an information frame
The RS-232 standard consists of three major parts, which define:
– Electrical signal characteristics
– Mechanical characteristics of the interface
– Functional description of the interchange circuits
Figure 8.4 Half-duplex operational sequence of RS-232
Light travels along the core of an optical fiber, with little loss in power, by a series
of total internal reflections. Figure 8.5 illustrates this process.
The fiber components include:
1. Fiber core
2. Cladding
3. Coating (buffer)
4. Strength members
5. Cable sheath
There are four broad application areas into which fiber optic cables can be
classified: aerial cable, underground cable, sub-aqueous cable and indoor cable.
8.4. Modbus
The interaction between client and server (controller and target device) can be
depicted as follows. The parameters exchanged by the client and server consist of
the Function Code (‘what to do’), the Data Request (‘with which input or output’)
and the Data Response (‘result’).
The Application Data Unit (ADU) structure of the Modbus protocol is shown
in the Figure 8.8
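As a concrete sketch of this exchange, the Python fragment below assembles the ADU for a Modbus RTU 'Read Holding Registers' request (function code 0x03); the CRC-16 routine is the standard Modbus one, and the slave address and register range are arbitrary examples:

import struct

def crc16_modbus(frame: bytes) -> int:
    # standard Modbus CRC-16: init 0xFFFF, reflected polynomial 0xA001
    crc = 0xFFFF
    for byte in frame:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ 0xA001 if crc & 1 else crc >> 1
    return crc

def read_holding_registers_adu(slave: int, start: int, count: int) -> bytes:
    # PDU = function code ('what to do') + start address and register
    # count ('with which input or output'), both big-endian
    pdu = struct.pack('>BHH', 0x03, start, count)
    frame = bytes([slave]) + pdu          # ADU = address + PDU + CRC
    return frame + struct.pack('<H', crc16_modbus(frame))  # CRC low byte first

request = read_holding_registers_adu(17, 0x006B, 2)  # ask slave 17 for 2 registers

The server's Data Response comes back in a frame of the same shape, with the function code echoed and the register values in the data field.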
This results in transmission of data at 1 Mbps (fig. 8.10).
Unlike Modbus, Modbus Plus is a proprietary standard developed to
overcome the ‘single-master’ limitation prevalent in Modbus Serial.
8.6. HART
The HART system (and its associated protocol) was originally developed by Rosemount and is regarded as an open standard, available to all manufacturers. Its main advantage is that it retains the existing 4-20mA instrumentation cabling while simultaneously using the same wires to carry digital information superimposed on the analog signal. HART is a hybrid analog and digital system, as opposed to most field bus systems, which are purely digital. It uses a Frequency Shift Keying (FSK) technique based on the Bell 202 standard. Two individual frequencies of 1200 and 2200 Hz, representing digits '1' and '0' respectively, are used. The average value of the 1200/2200 Hz sine wave superimposed on the 4-20mA signal is zero; hence, the 4-20mA analog information is not affected (fig. 8.11).
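A numeric sketch of this FSK scheme is given below (illustrative only; the ±0.5 mA figure is the commonly quoted HART signal amplitude). Each bit is rendered as a 1200 Hz or 2200 Hz tone riding on the 4-20mA loop current:

import math

BIT_RATE = 1200                 # Bell 202 signalling rate, bits per second
FREQ = {1: 1200, 0: 2200}       # tone frequency for each digit, Hz
SAMPLE_RATE = 48000             # samples per second for the sketch

def fsk_loop_current(bits, loop_ma=12.0, amplitude_ma=0.5):
    """Loop current samples: digital FSK tone on top of the analog value."""
    samples, t, dt = [], 0.0, 1.0 / SAMPLE_RATE
    for bit in bits:
        f = FREQ[bit]
        for _ in range(SAMPLE_RATE // BIT_RATE):
            samples.append(loop_ma + amplitude_ma * math.sin(2 * math.pi * f * t))
            t += dt
    return samples

Averaging the samples over a message recovers the underlying analog current, since the superimposed sine wave averages to zero.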
HART can be used in three ways:
– In conjunction with the 4-20mA current signal in point-to-point mode
– In conjunction with other field devices in multi-drop mode
– In point-to-point mode with only one field device broadcasting in
burst mode
Traditional point-to-point loops use zero for the smart device polling address.
Setting the smart device polling address to a number greater than zero implies a
multi-drop loop. Obviously the 4-20mA concept only applies to a loop with a
single transducer; hence for a multi-drop configuration the smart device sets its
analog output to a constant 4mA and communicates only digitally.
The HART protocol has two formats for digital transmission of data, viz:
– Poll/response mode
– Burst (broadcast) mode
In the poll/response mode, the master polls each of the smart devices on the
highway and requests the relevant information. In burst mode the field device
continuously transmits process data without the need for the master to send request
messages. Although this mode is fairly fast (up to 3.7 times/second), it cannot be
used in multidrop networks. The protocol is implemented with the OSI model
using layers 1, 2 and 7.
8.7. AS-i
Actuator Sensor-interface is an open system network developed by eleven
manufacturers. AS-i is a bit-oriented communication link designed to connect
binary sensors and actuators. Most of these devices do not require multiple bytes to
adequately convey the necessary information about the device status, so the AS-i
communication interface is designed for bit-oriented messages in order to increase
message efficiency for these types of devices. It was not developed to connect
intelligent controllers together since this would be far beyond the limited capability
of such small message streams. Modular components form the central design of
AS-i.
Connection to the network is made with unique connecting modules that
require minimal, or in some cases no tools to provide for rapid, positive device
attachment to the AS-i flat cable. Provision is made in the communications system
to make 'live' connections, permitting the removal or addition of nodes with
minimum network interruption. Connection to higher level networks (e.g.
ProfiBus) is made possible through plug-in PC and PLC cards or serial interface
converter modules.
8.8. DeviceNet
DeviceNet, developed by Allen-Bradley, is a low-level device-oriented network based on CAN (Controller Area Network), developed by Bosch GmbH for the automobile industry. It is designed to interconnect lower-level devices (sensors and actuators) with higher-level devices (controllers).
DeviceNet is classified as a field bus, per specification IEC-62026. The
variable, multi-byte format of the CAN message frame is well suited to this task as
more information can be communicated per message than with bit-type systems.
The DeviceNet specification is an open specification and available through the
ODVA. DeviceNet can support up to 64 nodes, which can be removed individually
under power and without severing the trunk line.
A single, four-conductor cable (round or flat) provides both power and data
communications. It supports a bus (trunk line with drop lines) topology, with branching
allowed on the drops. Reverse wiring protection is built into all nodes, protecting
them against damage in the case of inadvertent wiring errors. The data rates
supported are 125, 250 and 500K baud (i.e. bits per second in this case).
Figure 8.12 illustrates the positioning of DeviceNet and CANBUS within
the OSI model. CANBUS represents the bottom two layers in the lower middle
column, just below DeviceNet Transport.
Figure 8.12 DeviceNet (as well as ControlNet and Ethernet/IP) vs. the OSI model
Unlike most other field buses, DeviceNet does implement layers 3 and 4, which makes it a routable system. There are two other products in the same family, ControlNet and Ethernet/IP. They share the same upper-layer protocols (implemented by CIP, the Control and Information Protocol) and differ only in the lower four layers.
8.9. Profibus
ProfiBus (PROcess FIeld BUS) is a widely accepted international networking
standard, commonly found in process control and in large assembly and material
handling machines. It supports single-cable wiring of multi-input sensor blocks,
pneumatic valves, complex intelligent devices, smaller sub-networks (such as
AS-i), and operator interfaces.
It is an open, vendor independent standard. It adheres to the OSI model,
ensuring that devices from a variety of different vendors can communicate easily
and effectively. It has been standardized under the German National standard as
DIN 19 245 Parts 1 and 2 and, in addition, has also been ratified under the
European national standard EN 50170 Volume 2.
The bus interfacing hardware is implemented on ASIC (Application Specific Integrated Circuit) chips produced by multiple vendors, and is based on RS-485 as well as the European EN 50170 electrical specification. ProfiBus uses 9-pin D-
type connectors (impedance terminated) or 12mm round (M12-style) quick-
disconnect connectors. The number of nodes is limited to 127.
The distance supported is up to 24km (with repeaters and fiber optic
transmission), with speeds varying from 9600bps to 12Mbps. The message size
can be up to 244 bytes of data per node per message (12 bytes of overhead for a
maximum message length of 256 bytes), while the medium access control
mechanisms are polling and token passing.
ProfiBus supports two main types of devices, namely masters and slaves.
Master devices control the bus; when they have the right to access the bus, they may transfer messages without any remote request. These are referred to as active stations.
Slave devices are typically peripheral devices, i.e. transmitters/sensors and actuators. They may only acknowledge received messages or, at the request of a master, transmit messages to that master. These are also referred to as passive stations.
standardized interface between the application software and the actual field
devices.
Other variations included 1Base5, 10BaseFB, 10BaseFP and 10Broad36, but
these versions never became commercially viable.
1000BaseT is based on a different encoding scheme. As with Fast Ethernet,
Gigabit Ethernet supports full duplex and auto-negotiation.
It uses the same frame format as 10 Mbps and 100 Mbps Ethernet systems,
and operates at ten times the clock speed of Fast Ethernet, i.e. at 1Gbps. By
retaining the same frame format as the earlier versions of Ethernet, backward
compatibility is assured. Despite the similar frame format, the system had to
undergo a small change to enable it to function effectively at 1Gbps in CSMA/CD
mode.
The slot time of 64 bytes used with both 10 Mbps and 100 Mbps systems had
to be increased by a factor of 8, to 512 bytes. This is equivalent to 4.096 μs.
Without this increased slot time the collision domain would have been
impracticably small at 25 meters. The irony is that in practice all Gigabit Ethernet
systems are full duplex, and do not need this large slot time.
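The quoted figures are easy to verify with a one-line calculation:

GIGABIT = 1_000_000_000              # bits per second
slot_time = 512 * 8 / GIGABIT        # 512-byte slot = 4.096e-06 s = 4.096 us

The original 64-byte slot would last only 0.512 μs at this speed, which is why it had to be enlarged.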
8.11.3. TCP/IP
TCP/IP is the de facto global standard for the Internet (network) and host-to-host (transport) layer implementation of internetwork applications, owing to the popularity of the Internet. The Internet (known as ARPANet in its early years) was part of a military project commissioned by the Advanced Research Projects Agency (ARPA), later known as the Defense Advanced Research Projects Agency, or DARPA. The communications model used to construct the system is known as the
ARPA model. Whereas the OSI model was developed in Europe by the
International Standards Organization (ISO), the ARPA model (also known as the
DoD model) was developed in the USA by ARPA. Although they were developed
by different bodies and at different points in time, both serve as models for a
communications infrastructure and hence provide ‘abstractions’ of the same
reality.
The remarkable degree of similarity is therefore not surprising. Whereas the
OSI model has 7 layers, the ARPA model has 4 layers. The OSI layers map onto
the ARPA model as follows.
– The OSI session, presentation and application layers are contained in the ARPA process and application layer.
– The OSI transport layer maps onto the ARPA host-to-host layer (sometimes referred to as the service layer).
– The OSI network layer maps onto the ARPA Internet layer.
– The OSI physical and data link layers map onto the ARPA network interface layer.
The relationship between the two models is depicted in Figure 8.13.
TCP/IP, or rather the TCP/IP protocol suite, is not limited to the TCP and IP protocols, but consists of a multitude of interrelated protocols that occupy the upper three layers of the ARPA model. TCP/IP does NOT include the bottom network interface layer, but depends on it for access to the medium.
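As a short practical sketch of this layering, a Python application only has to deal with the upper layers: it opens a TCP (host-to-host) connection to an IP (Internet-layer) address, while the network interface layer underneath is supplied by the operating system. The address and port here are placeholders:

import socket

# 192.0.2.10 is a documentation address; 502 would be, e.g., a Modbus TCP port
with socket.create_connection(('192.0.2.10', 502), timeout=5.0) as sock:
    sock.sendall(b'request bytes')      # application-layer payload
    reply = sock.recv(1024)             # blocks until the host replies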
Figure 8.14 Internet frame
9. OPC TECHNOLOGY
9.1. Introduction to OPC
OLE for Process Control (OPC) is a series of standards and specifications for
industrial telecommunication. An industrial automation industry task force
developed the original standard in 1996 under the name OLE for Process Control
(Object Linking and Embedding for process control). OPC specifies the
communication of real-time plant data between control devices from different
manufacturers.
After the initial release in 1996, the OPC Foundation was created to maintain the standard.[1] As OPC has been adopted beyond the field of process control, the OPC Foundation changed the name to Open Platform Communications in 2011.[2] The change of name reflects the application of OPC technology in building automation, discrete manufacturing, process control and many other fields. OPC has also grown beyond its original OLE (Object Linking and Embedding)
implementation to include other data transportation technologies including
Microsoft's .NET Framework, XML, and even the OPC Foundation's binary-
encoded TCP format.
9.1.2. Purpose
What is needed is a common way for applications to access data from any
data source like a device or a database.
'OPC Server' is used here and in the following sections as a synonym for any server that provides OPC interfaces, e.g. an OPC DataAccess server, an OPC Alarm&Event server or an OPC HistoricalData server.
9.1.3. The Current Client Application Architecture
Many client applications have been developed that require data from a data source and access that data by independently developing 'drivers' for their own packages (fig. 9.1, fig. 9.2).
– Access conflicts: two packages generally cannot access the same device simultaneously, since each contains an independent driver.
Hardware manufacturers attempt to resolve these problems by developing
drivers, but are hindered by differences in client protocols. Today they cannot
develop an efficient driver that can be used by all clients.
OLE for Process Control (OPC) draws a line between hardware providers and
software developers. It provides a mechanism to provide data from a data source
and communicate the data to any client application in a standard way. A vendor
can now develop a reusable, highly optimized server to communicate to the data
source, and maintain the mechanism to access data from the data source/device
efficiently. Providing the server with an OPC interface allows any client to access
their devices.
9.1.5. General
OLE for Process Control (OPC™) is designed to allow client applications
access to plant floor data in a consistent manner. With wide industry acceptance
OPC will provide many benefits:
– Hardware manufacturers only have to make one set of software
components for customers to utilize in their applications.
– Software developers will not have to rewrite drivers because of feature
changes or additions in a new hardware release.
– Customers will have more choices with which to develop world-class integrated manufacturing systems.
With OPC, system integration in a heterogeneous computing environment will become simple. Leveraging OLE/COM, the environment shown in Figure 9.3 becomes possible.
9.2. Scope
A primary goal for OPC is to deliver specifications to the industry as quickly
as possible. With this in mind, the scope of the first document releases is limited to
areas common to all vendors. Additional functionality will be defined in future
releases. Therefore, the first releases focus on:
– Online DataAccess, i.e. the efficient and flexible reading and writing of data between an application and a process control device;
– Alarm and Event Handling, i.e. the mechanisms for OPC clients to be notified of the occurrence of specified events and alarm conditions; and
– Historical Data Access, i.e. the reading, processing and editing of data from a historian engine.
Functionality such as security, batch processing, and historical alarm and event data access belongs to the features addressed in subsequent releases. The architecture of OPC leverages the advantages of the COM interface, which
provides a convenient mechanism to extend the functionality of OPC. Other goals
for the design of OPC were as follows:
– simple to implement
– flexible to accommodate multiple vendor needs
– provide a high level of functionality
– allow for efficient operation
The specifications include the following:
– A set of custom COM interfaces for use by client and server writers.
– References to a set of OLE Automation interfaces to support clients
developed with higher level business applications such as Excel, Visual Basic, etc.
The architecture is intended to utilize the Microsoft distributed OLE
technology (DCOM) to facilitate clients interfacing to remote servers.
Figure 9.4 OPC Client/Server Relationship
The OPC Items represent connections to data sources within the server. An
OPC Item, from the custom interface perspective, is not accessible as an object by
an OPC Client. Therefore, there is no external interface defined for an OPC Item.
All access to OPC Items is via an OPC Group object that “contains” the OPC item,
or simply where the OPC Item is defined. Associated with each item is a Value,
Quality and Time Stamp. The value is in the form of a VARIANT, and the Quality
is similar to that specified by Fieldbus.
Note that the items are not the data sources: they are just connections to them. For example, the tags in a DCS system exist regardless of whether an OPC client is currently accessing them. The OPC Item should be thought of as simply specifying the address of the data, not as the actual physical source of the data that the address references.
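The Value/Quality/Timestamp triple attached to an item can be pictured with a small sketch (hypothetical names; the real access path is the COM-based group interface described above):

from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ItemRead:
    """One read of an OPC item: a value plus its quality and time stamp."""
    value: object         # VARIANT-like value
    quality: str          # e.g. 'GOOD', 'BAD', 'UNCERTAIN' (Fieldbus-style)
    timestamp: datetime

def poll_device(item_id: str) -> float:
    """Stand-in for the server's device-access code."""
    return 42.0

def read_item(item_id: str) -> ItemRead:
    # the item is only the address of the data, not the data source itself
    return ItemRead(poll_device(item_id), 'GOOD', datetime.now(timezone.utc))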
– Enter subscriptions to specified events, so that OPC Clients can
receive notifications of their occurrences. Filters may be used to define a subset of
desired events.
– Access and manipulate conditions implemented by the OPC Server. In
addition to the IOPCEventServer interface, an OPC Event Server may support
optional interfaces for browsing conditions implemented by the server and for
managing public condition groups (defined in the following section).
and design makes it possible to construct an OPC server which allows a client application to access data from many OPC servers, provided by many different OPC vendors and running on different nodes, via a single object (fig. 9.6).
The OPC Specification specifies the COM interfaces (what the interfaces are), not their implementation (how they are implemented). It specifies the behavior that the interfaces are expected to provide to the client applications that use them.
Included are descriptions of architectures and interfaces that seemed most
appropriate for those architectures. Like all COM implementations, the architecture
of OPC is a client-server model where the OPC Server component provides an
interface to the OPC objects and manages them.
There are several unique considerations in implementing an OPC server. The main issue is the frequency of data transfer over non-sharable communication paths to physical devices or other databases. Thus, we expect that an OPC server will be either a local or a remote EXE which includes code responsible for efficient data collection from a physical device or a database.
An OPC client application communicates with an OPC server through the specified custom and automation interfaces. OPC servers must implement the custom interface and may optionally implement the automation interface. In some
cases the OPC Foundation provides a standard automation interface wrapper. This 'wrapper DLL' can be used for any vendor-specific custom server (fig. 9.8).
embedded systems or the IT world. Recently, the new technology has established
itself in areas where OPC was hardly seen before, e.g. device parameterization.
The reason for this breadth of application is that the OPC UA technology offers
extended features when compared to Classic OPC. OPC UA's platform independence and, in particular, its scalability open up many possibilities for new and efficient automation concepts.
against unauthorized access, sabotage, and faults caused by negligent use. OPC UA security is based on global standards developed by the World Wide Web Consortium. It offers various possibilities to identify applications, authenticate users and protect against unauthorized access, as well as to sign messages and encrypt the payload transferred.
– Data security and reliability: The communication standard defines a
robust architecture with reliable communication mechanisms, configurable
timeouts, automatic fault detection, and recovery mechanisms. The communication
link between an OPC UA client and server can be monitored by both the client and
the server. If a connection is temporarily interrupted, the data can be buffered in
the server. In security-critical areas, OPC UA defines an additional redundancy
concept that can be used for devices, servers and clients.
– Platform independence and scalability: Using service-oriented base
technologies ensures that OPC UA is platform independent and opens up many
possibilities for new and cost-effective automation concepts. Embedded field
devices, process control systems, PLCs, gateways, or operator panels are
developed using lean OPC UA server implementations that have been ported
directly to operating systems including embedded Linux, VxWorks, QNX, RTOS,
and many more.
– Simplification by unification: OPC UA defines an integrated address
space and an information model that maps process data, alarms, historical data,
and program invocations. In this way, even complex processes can be fully
described with OPC UA. While classic OPC requires three different OPC servers –
DA, AE and HDA – with different semantics to acquire, for example, the current
value of a temperature sensor, the event of excess temperature, and the historical
average temperature, OPC UA needs only one component. This helps to reduce
configuration and engineering times.
– High performance: OPC UA is based on a very efficient TCP-based binary protocol, which allows for a fast data exchange that will meet
the high-performance requirements of most applications. The actual binary
protocol implementation is available through the OPC Foundation and serves as
the basis for all OPC UA servers and clients.
– New application possibilities: The wide breadth of the OPC UA
technology allows implementing new vertical integration concepts. By cascading
OPC UA components, information can be transported securely and reliably from
the factory floor all the way up to the production planning or ERP system (Fig.
9.10). For this purpose, OPC UA enabled client and server components at the
automation level connect embedded UA servers at the field level with OPC UA
clients integrated in ERP systems at the enterprise level. The individual OPC UA
components can be geographically distributed and separated from each other by
firewalls without problems.
Fig. 9.10 OPC UA allows secure and robust “information permeability” – from the sensor to the
ERP system
The future IEC communication standard OPC UA provides features that offer
new possibilities of embedding an internationally standardized communication
interface in disparate systems ranging from PLCs, process control systems, drives,
gateways and operator panels to MES or ERP systems. The result is savings in
installation, setup, commissioning, maintenance and operation.
Fig. 9.11 OPC UA toolkits comprising platform dependent and platform independent parts
allow implementing OPC UA clients and servers on almost any target platform
10. COMPLEX COMPUTER SYSTEMS
10.1. Defining ‘Complex systems’
A complex system is an automated system that comprises a finite set of interrelated subsystems united by common operation goals. These in turn can be divided into a finite number of smaller subsystems, down to subsystems of the lowest level, i.e. the elements of the complex system, which either cannot be divided further or are not divided further because of some relevant arrangement. Thus, any subsystem of a complex system can be viewed as a complex system consisting of elements (subsystems of a lower level), while being at the same time an element of a higher level.
A complex system is characterized by the following distinctive features:
– advanced architecture
– multipurpose nature
– complicated control algorithms
– high level of automation
– large number of staff and/or users
– lengthy development process and long service life
classification and the architectural concept of such computer systems are dealt with in Sections 10.3 and 10.4, respectively.
– geographic information systems
– exploration surveys
– ocean science
– speech recognition and synthesis
– image recognition
The capabilities of computer tools, as well as the required speed of problem solving, are constantly improving due to the implementation of structural methods. Structural methods here mean complex computer system design based on multiprocessing, distribution and parallelism. Parallelism is used both in the design of individual computer devices (control devices (CPUs), instruction buffers, memory modules, arithmetic-logic units, pipelines, etc.) and in cooperative parallel and distributed data processing by many computers.
Complex computer systems have different configurations, the main ones being the following:
– high-reliability systems
– high-performance computing systems
– multithreaded systems
Complex computer systems built on multiprocessors are viewed as an ideal way to improve the overall reliability of an information computer system. Due to the single-system view, separate nodes and components of the system can seamlessly replace defective elements, providing continuity and failure-free operation even for such complex applications as databases. Disaster tolerance is achieved by spacing the nodes of the complex computer system hundreds of kilometers apart and by providing mechanisms of global data synchronization between these nodes. There are many examples of scientific computations and engineering designs based on parallel processor operation that ensure simultaneous concurrent execution of a great number of operations. Complex computer systems designed for high-performance computing are usually assembled from many computers. Designing such systems is a complex process that requires constant coordination of tasks such as installation, maintenance and the simultaneous operation of a large number of computers. It also concerns the technical requirements for parallel, highly efficient access to the same resource, interprocessor communication between the nodes and coordination of parallel operation.
Multithreaded systems are used to provide a common interface to a set of resources that may arbitrarily increase or decrease in number, a typical example being a group of web servers. It should be noted that the distinctions between these types of complex computer systems are to some extent fuzzy, and quite often a system may possess properties or functions that lie outside the scope of those listed above. Moreover, configuring a large system used as a general-purpose system requires the separation of blocks performing all the functions listed above.
instruction, single data) is a type of parallel computing architecture where many functional units perform different operations on the same data. Such systems may implement pipeline processing, using the pipeline processors of a multiprocessor computer system to increase instruction-processing speed and arithmetic-operation speed.
systems. Systems using MIMD are able to execute an array of subtasks concurrently in order to minimize the execution time of the main task.
This classification of computer system architectures is essential for understanding the special aspects of certain architecture types, but it is not detailed enough to be used in complex system design. It is therefore important to introduce a more detailed classification associated with the various computer architectures and the hardware employed.
Now let us take a closer look at computer system architectures in terms of the basic classification viewed above.
Because of these architectural disadvantages, using the system's resources to the maximum extent requires a lot of effort.
The next step in improving the performance of automated computer systems was the design of multiprocessor computer systems employing the SMP (Symmetric MultiProcessing) architecture. The main characteristic feature of this type of computer is physically shared memory distributed between its processors. Shared memory is used for messaging between processors, with all computing devices having equal access to it and the same addressing for all memory cells. For this reason, the SMP architecture is called symmetric.
The most well-known SMP systems are SMP servers and Intel-based workstations (IBM, HP, Compaq, Dell, ALR, Unisys, DG, Fujitsu, etc.). The whole system operates under a single OS (usually UNIX-like, though on the Intel platform Windows NT/2000/2003 is also supported). The existence of a single OS makes automatic distribution of system resources at various stages of operation possible. This results in high robustness: in case of failure of any separate module, the load is redistributed among the operational units, securing execution of the most important functions of the automated system. The main advantage of an SMP system is simplicity and generality of programming. The SMP architecture does not place restrictions on the programming model used to build an application. A parallel-branches model, in which all processors operate independently, is commonly applied, though models with interprocessor communication are also possible. The use of shared memory increases the speed of information exchange between individual processors and guarantees user access to the total memory capacity of the system. There are rather efficient means of parallelization for SMP systems.
The disadvantage of a system with shared memory is that it scales poorly. The cause of this poor scalability is that at any given time the bus is able to process only one transaction, which creates contention when several processors simultaneously address the same areas of shared physical memory: the computing elements begin to interfere with each other. The point at which such conflicts occur depends on the communication speed and the number of processors; generally, conflicts arise when the number of processors reaches 8-24. Moreover, the system bus has a limited (though rather high) message rate and a limited number of slots. This makes it difficult to improve system performance as the numbers of processors and users increase. A real system can have no more than 32 processors.
For designing scalable systems based on SMP and MPP, hybrid cluster architectures are applied. A cluster is two or more computers (often called nodes) united by a bus or a switch; cluster nodes may be servers, workstations or ordinary PCs. The main characteristic of the hybrid cluster architecture is NUMA (Non-Uniform Memory Access). The hybrid architecture combines the advantages of systems with shared memory and the relatively low price of systems with disjoint memory.
The essence of this architecture is a specific memory organization: memory is physically distributed among different units of the system while remaining logically shared, so that a user sees a single address space. The system is built from homogeneous base modules, each consisting of a small number of processors and a memory unit (Fig. 4.4). The modules are linked by a high-speed switch or a communications network. There is architectural support for the single address space as well as hardware-based access to remote memory, i.e. to the memory of other modules; access to local memory is several times quicker than access to remote memory. Basically, the NUMA architecture is an MPP architecture in which SMP nodes are used as the individual computing units. Memory access and data exchange within an SMP node go through the node's local memory practically instantaneously, while access to the processors of another node is also possible but takes more time and involves more complex addressing.
Further development of the idea of multiprocessing has led to the design of large high-performance multiprocessor systems known as highly parallel computer systems. Depending on their structure, such computer systems are able to process multiple data or instruction flows concurrently. The instruction flow is thought of as a sequence of instructions executed by a computer system, the data flow being a sequence of data handled under instruction flow control.
Highly parallel SIMD-type (single instruction, multiple data) architectures are known as matrix computing systems. They contain a number of relatively simple high-performance processors, linked with each other to form a network (matrix) with processors at its nodes. All processors execute the same command, but on different operands delivered to the processors from shared memory by several data flows.
Highly parallel MISD-type (multiple instruction, single data) computer structures are called pipelined computer systems. These systems contain a chain of series-connected processors, so that the output of one processor is the input of the next. Each processor handles its part of the task, transferring the results to the neighboring processor as input data.
Thus, for example, an addition operation on floating-point numbers can be divided into four stages: exponent comparison, exponent alignment, mantissa addition and postnormalization. In pipelined computer systems all these stages of computation are executed by separate processors forming a pipeline.
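The staged organization can be sketched as follows; this is a toy model in which each function stands for one pipeline processor and numbers are (exponent, mantissa) pairs meaning m · 2^e:

def compare_exponents(a, b):
    # stage 1: order the operands so the first has the larger exponent
    return (a, b) if a[0] >= b[0] else (b, a)

def align_exponents(pair):
    # stage 2: shift the smaller mantissa right to equalize exponents
    (ea, ma), (eb, mb) = pair
    return ea, ma, mb / (2 ** (ea - eb))

def add_mantissas(stage):
    # stage 3: add the aligned mantissas
    e, ma, mb = stage
    return e, ma + mb

def normalize(stage):
    # stage 4: renormalize so that the mantissa is below 1
    e, m = stage
    while abs(m) >= 1.0:
        m, e = m / 2.0, e + 1
    return e, m

# 0.75 * 2**3 + 0.5 * 2**1 = 6.0 + 1.0 -> (3, 0.875), i.e. 7.0
result = normalize(add_mantissas(align_exponents(
    compare_exponents((3, 0.75), (1, 0.5)))))

In a real pipeline each stage works on a different operand pair at the same moment, so a result leaves the pipeline on every cycle once it is full.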
Highly parallel computer systems show better performance, reliability and survivability compared to multiprocessor systems. On the other hand, their obvious drawbacks are more complicated system control, programming complexity and small system capacity.
The first two of the abovementioned disadvantages are overcome by the use of LSI (large-scale integration) circuits and specific programming languages, while the third one results in the fact that most highly parallel computer systems are designed for dedicated applications.
Shared computer systems form the basis of automated control systems,
designed to serve many users working at the same time.
To meet the abovementioned requirements, such systems should have the following features:
– an advanced OS which guarantees concurrent execution of different programs and users' access to standard programs;
– computer language translators that make program preparation and maintenance easier for software specialists;
– hardware that ensures dynamic memory distribution among the programs as well as free program relocation during computation;
– memory protection against interference from other programs;
– a timer that allows the necessary working time to be allocated to users on request, upon the expiry of which the computer system automatically switches to executing other programs;
– both hardware and software for prioritizing tasks simultaneously waiting to be executed.
Multiprocessor and multicomputer automated computer systems can also be
classified according to other characteristic features. Let us consider some of them.
According to their function, computer systems are divided into general-purpose systems and specialized systems. General-purpose systems are designed to solve a wide variety of automation and control problems, whereas specialized ones solve a certain range of tasks concerning, for example, the control of some unique equipment or the solution of some specific problems. For this reason specialized computer systems, as a rule, have both hardware and software specially designed for the particular system.
According to hardware type, complex automated computer systems break down into homogeneous and heterogeneous systems. Homogeneous systems contain a number of similar computer systems (or processors), whereas heterogeneous ones contain computers (or processors) of different types. The main drawback of homogeneous computer systems is the underutilization of separate computers (processors) during operation. In order to improve computer system (processor) performance, heterogeneous computer systems are used.
According to structure type, complex computer systems are divided into fixed-structure and variable-structure systems. The structure of a complex computer system is understood as the configuration of the system and the schemes of functional and control links between its elements. In fixed-structure systems the configuration of functional and control links does not change during operation. A variable structure is a characteristic feature of adaptive systems, i.e. systems whose structure changes during operation according to the analysis of the information being processed. Such systems make it possible to achieve an optimal state in any varying performance environment.
According to the degree of control centralization, automated complex
computer systems are divided into the following three groups: centralized,
decentralized and with combined programmed control.
In centralized complex computer systems all control functions are concentrated in a single element, represented by one of the computer systems, called the CPU. In decentralized complex computer systems each processor or computer system operates autonomously, solving its particular tasks. In systems with combined programmed control, the complex computer system is divided into groups of interacting computer systems (or processors), control within each group being centralized while control between the groups is decentralized.
know in advance how many instructions each particular program will have. Secondly, each program is specific, and the number of instructions can vary greatly from program to program. As such, this characteristic provides only a very general notion of computer performance.
Another method of measuring performance is to determine the number of real operations executed per unit of time, the basic unit of measurement being FLOPS, i.e. the number of floating-point operations per second. This method of measuring is more convenient for a user who is aware of the computational complexity of a program: using this characteristic, the user can obtain a lower-bound estimate of its execution time. Yet peak performance can be achieved only under ideal conditions, i.e. in the absence of conflicts in memory access and with a balanced load on all system units. In real applications the execution of a particular program is affected by the following hardware characteristics: the specific nature of the computer architecture, the instruction set, the composition of functional modules, the execution of input/output statements and the effectiveness of compilers. The most critical factor is the interaction time with the memory device, which is determined by the structure, capacity and architecture of the memory subsystem.
Most modern computers use tiered storage for the most efficient access to memory, its layers being registers and register memory, cache memory, primary (general) RAM, and virtual memory backed by hard disk and tape devices. The hierarchy is organized so that, moving up the levels, the data-access speed increases while the capacity of the level decreases. Computer efficiency in such a hierarchy is achieved by storing frequently used data in the top-level memory, access time to which is minimal. Such memory is rather expensive, so it cannot be large. The memory hierarchy is one of the characteristics of computer architecture that is of great importance for improving performance.
The performance efficiency of a computer system unit is viewed as the degree of involvement of this unit in the total system performance when solving some particular problem, i.e. its work efficiency. Parallelization is justified if it leads to a substantial increase in the average work efficiency of the system; this directly affects the task completion time. At present we are talking about tasks that require the generality of a complex system, dictated by modern application fields, rather than about a special set of tasks.
The real breakthrough in this sphere was the switch to a microprocessor component base, which made the design of multiprocessor computing systems possible.
Designing complex automated systems is the most efficient way of dealing with the inconsistency between the ever-growing demand for reliable high-speed computing tools and the limits of computer systems at the present stage of technological development.
being a robot taking something from a conveyor belt. The objects on the conveyor belt are moving, and there is only a certain interval of time during which the robot can take the required object. If the robot is late, the object will no longer be available even if the motion itself was correct. If the robot is too quick, the object won't be there yet; moreover, in this case the robot can obstruct the movement of objects.
Another example is the traffic control loop of an airplane managed by a computer (an autopilot). The aircraft's sensors must continuously downlink measured flight data to the controller. If the measured data is lost, control quality degrades, possibly causing a plane crash.
It should be noted that in the case of the robot we deal with hard real time: if the robot is behind time, the result is an erroneous operation. The same case could be viewed as a soft real-time mode if the only consequence of lateness were a loss of productivity. Much of what is done in the field of real-time programming in fact functions in soft real time. Properly designed systems usually have a safety/correction level of behavior even for the case when computations have not been finished at the required moment, so that if the computer needs a bit more time, this can somehow be compensated. Sometimes the term 'real-time system' is used to denote an on-line system, though this is typically a mere gimmick. For instance, ticket reservation systems or depot-handling systems are not qualified as real-time systems, as a human operator does not really attach much importance to a delay of several hundred milliseconds. Sometimes the term 'real-time system' is employed to denote a high-performance system, but it is important to note that 'real time' is not a synonym for 'high performance'. Thus, it bears repeating that the term 'real time' does not mean that a system responds to input signals instantly (a delay may amount to seconds or even more); it means that the system guarantees some maximum response delay that makes successful problem solving possible. It is also important to mention that algorithms providing a guaranteed response time have a lower average performance than systems that do not guarantee a certain response time.
Thus, the abovementioned facts lead to the following conclusions:
– the term 'real-time system' can be understood as a system the functional correctness of which is defined not only by the correctness of computations but also by the time needed to obtain the required result. An inability to meet the time requirements is viewed as a system failure. In order to meet the specified requirements for real-time systems, the hardware, software and operating procedures (operation algorithms) should guarantee the set time parameters of the system's reaction. A system need not necessarily have a fast response time, but the response time must be guaranteed and must meet the specified requirements;
– use of the term 'real-time system' as defined above to denote high-performance or interactive-response systems is considered incorrect;
– though the term 'soft real time' is often used, it is not clearly defined. Indeed, the meaning of the term 'real-time system' is interpreted by specialists differently, depending on the area of their professional interests, on whether they are theoretical scientists or practical specialists, and even on their personal experience and social circle. As there is no exact definition of 'soft' real time, let us assume that this category includes all real-time systems that fall outside the category of 'hard real-time' ones;
– almost all industrial automation systems are real-time systems.
– whether a particular system belongs to the category of real-time systems does not depend on its operation speed. For example, if a system is designed for ground-water level control, it works in real-time mode even though it takes measurements only once every half an hour.
Intuition suggests that the higher the speed of the processes in the controlled object, the higher should be the operation speed of the real-time system. In order to evaluate the necessary operation speed in digital control systems dealing with stationary processes, it is common to use Kotelnikov's sampling theorem, from which it follows that the signal sampling frequency should be at least twice the threshold frequency of the signal spectrum [4]. When dealing with wideband transient processes, it is common to use high-performance ADCs (analog-to-digital converters) with fast buffer memory, which record signal realizations at the required speed for later analysis and/or registration by the computer system.
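A quick worked example of this rule, with hypothetical numbers: if the spectrum of the controlled signal is bounded at 10 Hz, the sampling frequency must be at least 20 Hz, i.e. the sample period may not exceed 50 ms:

f_max = 10.0                      # highest frequency in the signal spectrum, Hz
f_sample_min = 2 * f_max          # Kotelnikov bound: 20 Hz
t_sample_max = 1 / f_sample_min   # longest admissible sample period: 0.05 s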
The required processing should be completed by the beginning of the next
transient process, otherwise the information will get lost.
Systems of a like nature are called quasi-real-time systems.
For a number of automation tasks, software systems should function as part
of large automated systems without human input. In such cases real-time systems
are called embedded.
Embedded systems can be defined as software and hardware which represent
components of another, larger system that operates without outside interference on
the part of humans.
The hardware of a real-time system, on which the real-time operating system (RTOS) and the application software run, is commonly referred to as the target platform. Owing to the possible uniqueness of a target platform, especially in embedded systems, program development may be carried out on different equipment or, in some cases, under a different operating system (OS), and target program testing is conducted remotely by means of tooling or emulation of the target OS.
An interface in such systems serves a dual function:
1. controls interaction of application processes with a system;
2. provides continuity of the execution of code (i.e. provides absence of
task switching during the execution of code).
The key advantage of the monolithic architecture is its relatively high performance compared to other types of architecture, though this is achieved mainly by writing a considerable proportion of the code in assembly language.
The drawbacks of a monolithic architecture are the following:
1. System calls requiring a privilege-level switch (from a user task to the kernel) must be implemented as interrupts or as a special type of exception, which substantially increases their execution time.
2. The kernel is non-preemptable. As a result, a high-priority task may fail to receive control while the kernel is busy serving a low-priority one.
3. The difficulty of 'moving' the system to new CPU architectures due to the substantial number of assembly-language inserts.
4. Inflexibility and design complexity: partial changes to the kernel require its total recompilation. The general disadvantage of such an architecture is the poor predictability of its behavior, which is caused by the complex interactions of its modules.
A microkernel architecture is believed to be one of the most efficient RTOS architectures. A compact fast kernel either is resident in random access memory (RAM) or, in embedded systems, is located in read-only memory (ROM). Other supplementary OS modules can be added as the need arises (in particular, they can be replaced and improved in a timely manner).
The main principle of such an architecture is the separation of OS services. The kernel functions as a message dispatcher between front-end user programs and servers, i.e. system services. The modular architecture was developed as an attempt to remove the interface between applications and the kernel, so as to make system modernization easier should there be a need to 'transfer' it onto new OS architectures.
At present a microkernel serves a dual function:
– it controls interactions between different system parts (for example, the job and file managers);
– it provides continuity of code execution (i.e. ensures the absence of task switching during the execution of code).
On the one hand, such an architecture has a number of advantages as far as the requirements to RTOS and embedded systems are concerned. The most important of these are the following:
– higher OS reliability, because each service is in itself a stand-alone application, and therefore it is easier to debug it and to catch errors;
– an edge in scaling, as unnecessary services can be excluded from the system without loss of performance;
– higher fault tolerance, as a 'hung' service can be restarted without a hard reboot.
On the other hand, the microkernel architecture has one key disadvantage: under intensive use of OS functions, operation speed is lower than that of a monolithic-architecture system. This is explained by the fact that supplementary OS functions (those not located in the kernel) are called as processes, and under task concurrency this results in task switching, which may require much more time. Among the well-known RTOS employing the microkernel architecture one should note OS-9 and QNX.
RTOS designed using the object-oriented approach have a more complex architecture. The microkernel in such an OS is moved to the user task level, each task containing the microkernels needed for its proper operation. Each user task contains one or several threads; interaction between tasks and system service calls are performed by messages coming from user tasks via mailboxes. This principle is appropriate for designing RTOS for complicated layered distributed systems.
The actual equality of all system components makes task switching possible at any time. The object-oriented approach guarantees design modularity, system security, simplicity of modernization and code reuse. Unlike the previously viewed systems, not all components of the system itself need to be loaded into RAM. If a microkernel is already loaded for another application, it need not be loaded again; the code and data of the existing microkernel are used. All these techniques reduce the required memory space.
As different applications share the same microkernels, they must work in the same address space. Thus, the system cannot use virtual memory and therefore works faster (delays caused by translation of virtual addresses into physical addresses are excluded).
11.3. Processes and Threads in RTOS
Increasing the scope of real-time systems has resulted in stricter requirements for these systems. At present, a mandatory requirement for an OS intended for solving real-time tasks is support for multitasking. The same is relevant for a general-purpose OS, but as far as multitasking is concerned, real-time systems impose a number of additional requirements. These requirements are defined by the mandatory characteristic of a real-time system, i.e. predictability.
Multitasking means the parallel execution of several operations, though the practical implementation of parallelism runs into the sharing of computer system resources. The main resource, the sharing of which between several tasks is called scheduling, is the processor. That is why truly parallel processing of several tasks is impossible in single-processor systems. Fig. 6.4 shows the realization of multitasking in a single-processor system. The processor performs dispatching by means of task-control blocks, each task-control block containing a special field for the task priority record. There is a fairly large number of different dispatching methods, and the most important ones will be dealt with later.
The problem of resource sharing is relevant to multiprocessor systems too
because several processors have to share a single system bus. That is why groups
of computing complexes united by a common control block are used for designing
real-time systems meant for solving several tasks simultaneously. The ability to use several processors within one computing complex and to provide maximally transparent interaction between several computing complexes over, for instance, a local network is an important characteristic of an RTOS that greatly enlarges its range of applications.
The notion of a task in terms of the OS and software applications can be understood as two different things: processes and threads. A process is a generalized representation of a task, as it denotes an independent program module or an entire executable file together with its address space, register state, program counter, and function and procedure code, whereas a thread is an integral part of a process and denotes a sequence of executable code. Each process contains at least one thread, and the maximum number of threads within one process in most OS is limited only by the total available RAM of the computing complex. Threads of one process share its address space, so they can easily exchange data. Also, the switching time between such threads (i.e. the time a processor needs to switch from executing commands of one thread to executing commands of another thread) turns out to be less than the switching time between processes. Hence, in real-time applications concurrent tasks are grouped to the greatest possible extent as threads executed within one process.
Each thread has an important property on the basis of which the OS decides when the thread may receive processor time. This property is called thread priority and is expressed as an integer value. The number of priorities (or priority levels) is determined by the OS; the lowest value (0) is attributed to the 'idle' thread of the OS, which ensures correct operation of the system when there is nothing else to execute.
A thread may be in one of the following five states: dormant, ready, running, waiting or interrupted (executing an interrupt service routine, ISR) (see fig. 11.1).
cases when all threads that share the processor are of the same priority, i.e. are of the same importance from the operating system's point of view:
1. FIFO (First In, First Out) means that the first thread in the queue is executed first and keeps executing until it is completed or blocked waiting for some resource deallocation or event. After that, control is delegated to the next thread in the queue.
2. Round-robin scheduling means that different program threads take turns using the resources of the computer: each thread is limited to a certain short time period ('time slice'), after which it is suspended to give another thread a turn, and control is delegated to the next thread in the queue. When the time slice of the last thread is over, control is delegated to the first thread in the queue that is in the ready state. Thus, the execution of each thread is divided into a sequence of time slices.
Another group of methods is used to share a processor’s time among threads
of different importance, i.e. priority.
3. In the simplest case, when two threads of different priority are in the ready state, processing time is given to the thread of higher priority. Such a method is referred to as preemptive multitasking. The use of this method involves some complexity: for example, if there is one group of threads of some priority and another group of lower priority, then with round-robin scheduling of each group in a preemptive multitasking system the low-priority threads may not get access to the processor at all.
4. One solution to the abovementioned problem is so-called adaptive scheduling. Essentially this means that the priority of a thread that has not been executed for some period of time is increased by one. Priority re-establishment occurs within one time slice after the thread completes or is blocked. Thus, in the case of round-robin scheduling, a queue ('round robin') of threads of higher priority cannot completely block execution of a lower-priority thread queue.
5. In real-time tasks, dispatching methods must satisfy specific requirements, since the procedure of control delegation should be defined by deadline-driven scheduling. This requirement is satisfied to the fullest extent by preemptive multitasking. The principle of this method is that as soon as a thread of higher priority than that of the active thread passes into the ready state, the active thread is involuntarily preempted (i.e. passes from the active state into the ready state) and control is delegated to the thread of higher priority. A simplified sketch of the round-robin and preemptive policies is given after this list.
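The following Python sketch models these two policies, round-robin within one priority level and preemption between levels (the thread model is deliberately reduced to a name and a number of remaining time slices):

from collections import deque

def schedule(queues, max_slices):
    """queues: {priority: deque of (name, remaining_slices)}.
    The highest non-empty priority queue runs round-robin; a lower
    queue runs only while all higher ones are empty (preemption)."""
    trace = []
    for _ in range(max_slices):
        ready = [p for p in sorted(queues, reverse=True) if queues[p]]
        if not ready:
            break
        queue = queues[ready[0]]             # highest priority wins
        name, left = queue.popleft()
        trace.append(name)                   # run for one time slice
        if left > 1:
            queue.append((name, left - 1))   # back of its own queue
    return trace

tasks = {2: deque([('A', 2), ('B', 2)]), 1: deque([('C', 3)])}
print(schedule(tasks, 10))   # ['A', 'B', 'A', 'B', 'C', 'C', 'C']

Note how thread C, at the lower priority, does not receive a single slice until both higher-priority threads have finished; this is exactly the starvation effect that adaptive scheduling (method 4) is meant to soften.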
In practice, both combinations of the above described methods and various
modifications of them are widely used. In the context of scheduling of several
threads of different priority levels in real-time systems, the most important
problem is to prioritize them in such a way that each thread is executed within its
deadline. If all the threads meet the deadline, the system is said to be schedulable.
For real-time systems used for periodic event processing, there is a mathematical model which makes it possible to calculate whether the system in question is schedulable. The model was developed by C. L. Liu and J. Layland in 1973 [4] and is called Rate Monotonic Analysis (RMA). The usability and efficiency of this mathematical model resulted in the acceptance of RMA as a standard by the world's leading RTOS manufacturers.
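The best-known result of RMA is the Liu-Layland utilization bound: n periodic tasks with worst-case computation times C_i and periods T_i are schedulable under rate-monotonic priorities if the total utilization does not exceed n(2^(1/n) - 1). This sufficient (not necessary) test is easy to implement:

def rma_schedulable(tasks):
    """tasks: list of (C, T) pairs - computation time and period."""
    n = len(tasks)
    utilization = sum(c / t for c, t in tasks)
    return utilization <= n * (2 ** (1 / n) - 1)   # bound -> ln 2 ~ 0.693

# three tasks, e.g. (1 ms, 4 ms), (1 ms, 8 ms), (2 ms, 16 ms):
print(rma_schedulable([(1, 4), (1, 8), (2, 16)]))  # 0.5 <= 0.7798 -> True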
Taking into account everything said above, it is possible to formulate one of the most important requirements for an RTOS used in complicated automation and control systems: the RTOS should guarantee multitasking with support for preemptive priority scheduling.
The simplest way to obtain exclusive access to shared resources is to disable and re-enable interrupts. Such an approach should be taken with care: interrupts should not be disabled for long, so as not to impair interrupt response time. This method is acceptable only if a few variables are copied or changed. It is also the only way to exchange data between a thread and an Interrupt Service Routine (ISR). In any case, interrupts should be disabled for a minimal period of time.
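As a sketch of this technique in a µC/OS-II-style kernel (the same kernel family whose OSSchedUnlock service appears below), a short operation on shared data can be protected as follows. The names shared_counter and increment_counter are illustrative assumptions; the critical-section macros and types are those of µC/OS-II.

#include "ucos_ii.h"               /* µC/OS-II kernel header */

static INT32U shared_counter;      /* illustrative shared variable */

void increment_counter(void)
{
#if OS_CRITICAL_METHOD == 3        /* method 3 stores the CPU status locally */
    OS_CPU_SR cpu_sr = 0;
#endif

    OS_ENTER_CRITICAL();           /* disable interrupts                 */
    shared_counter++;              /* short operation on the shared data */
    OS_EXIT_CRITICAL();            /* re-enable interrupts immediately   */
}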
Test-and-Set
If the kernel is not used, two threads may 'agree' that, before they access the resource, they check some global variable: if it equals zero (0), access is considered allowed, and the first of the threads to gain access sets the value of this variable to one (1). This operation is commonly called Test-And-Set (TAS). It must either be executed atomically by the CPU itself, or interrupts must be disabled for the duration of the operation.
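Where the compiler and hardware provide an atomic test-and-set, the scheme described above can be sketched in standard C11. The lock flag and the busy-wait loop here illustrate the principle; this is not kernel code.

#include <stdatomic.h>

static atomic_flag lock = ATOMIC_FLAG_INIT;   /* clear (0) = access allowed */

void acquire(void)
{
    /* atomic_flag_test_and_set atomically sets the flag to 1 and returns
     * its previous value; `false` means the flag was clear, so the caller
     * now owns the resource.  Otherwise keep waiting. */
    while (atomic_flag_test_and_set(&lock)) {
        /* busy-wait until the owner clears the flag */
    }
}

void release(void)
{
    atomic_flag_clear(&lock);                 /* set the variable back to 0 */
}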
Dispatch Blockage
If a process does not share variables or data structures with ISR subprograms, the dispatcher can be blocked and unblocked, as shown in the listing given below. In this case, two processes can share data without risk of collision. It should be noted that while the dispatcher is blocked, interrupts remain enabled, and if an interrupt occurs while a program is inside its critical section, the ISR starts immediately. When the ISR completes, control returns to the interrupted task, even if the ISR made a task of higher priority ready. Once OSSchedUnlock is called, a search for ready higher-priority tasks is performed, and if one is found, a context switch occurs. Though this method is rather efficient, dispatch blockage should be avoided, as such an approach partially defeats the purpose of using a kernel in the first place.
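The original listing is not reproduced in these notes, so the following is a minimal sketch of the pattern using the µC/OS-II services named above; update_shared_data is a hypothetical placeholder for the critical-section work.

#include "ucos_ii.h"

static void update_shared_data(void)
{
    /* hypothetical work on data shared with another task */
}

void cooperative_task(void *p_arg)
{
    (void)p_arg;
    for (;;) {
        OSSchedLock();          /* block the dispatcher; interrupts stay enabled */
        update_shared_data();   /* no other task can preempt us here             */
        OSSchedUnlock();        /* a context switch may occur right here if a
                                   higher-priority task became ready             */
        OSTimeDly(10);          /* yield the processor for a while */
    }
}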
Semaphores
A semaphore is a programming concept that is frequently used in multithreaded kernels to solve multithreading problems.
Semaphores are used to:
– control access to shared resources;
– signal the occurrence of an event;
– allow two threads to synchronize their activity.
A semaphore is a key that a thread should acquire in order to continue execution. If the semaphore is already in use, the requesting thread suspends its execution and waits until the semaphore is free. In other words, the thread waits for a key; if the key has already been taken by someone else, the requesting thread waits until it is released.
There are two types of semaphores: binary semaphores and counting semaphores. As the name implies, a binary semaphore can take only two values, 0 or 1. A counting semaphore (a variable of integer type) can take values ranging from 0 to 255, 65535 or 4294967295, depending on whether the semaphore is 8, 16 or 32 bits wide, which in turn depends on the kernel implementation. In addition to the semaphore value, the kernel has to store a list of the threads waiting to access the semaphore.
There are three main operations that can be executed on a semaphore: INITIALIZE (or CREATE), WAIT (or PEND) and SIGNAL (or POST). The initial value of a semaphore is set at the moment of its creation. The list of threads waiting for the semaphore is initially empty.
A thread that wants to possess the semaphore executes the WAIT operation. If the semaphore is available, that is, its value is greater than 0, the value is decreased by one and the thread continues its execution. If the value of the semaphore equals 0, the thread executing the WAIT operation is added to the list of threads waiting for the semaphore. Many kernels also allow a time-out to be defined; when it expires, the thread is resumed and receives a return code warning that the time-out has elapsed.
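In µC/OS-II terms, the three operations correspond to OSSemCreate, OSSemPend and OSSemPost. Below is a minimal sketch assuming that kernel; use_shared_resource is a hypothetical placeholder, and the error-code names follow recent µC/OS-II versions.

#include "ucos_ii.h"

static OS_EVENT *res_sem;           /* illustrative semaphore handle */

static void use_shared_resource(void)
{
    /* hypothetical operation on the protected resource */
}

void init_example(void)
{
    res_sem = OSSemCreate(1);       /* INITIALIZE: initial value 1 (resource free) */
}

void worker_task(void *p_arg)
{
    INT8U err;
    (void)p_arg;
    for (;;) {
        OSSemPend(res_sem, 100, &err);   /* WAIT: block up to 100 ticks */
        if (err == OS_ERR_NONE) {
            use_shared_resource();
            (void)OSSemPost(res_sem);    /* SIGNAL: release the semaphore */
        } else {
            /* err == OS_ERR_TIMEOUT: the time-out elapsed before the
             * semaphore became free; handle the situation here. */
        }
    }
}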
Some synchronization mechanisms can be used in any multitasking system, since correct control of the execution of several threads sharing one resource (for example, a device buffer or a shared variable) cannot be guaranteed without them. However, in real-time problems, synchronization objects have to meet specific requirements. This is because synchronization objects can cause serious delays in thread execution: their very purpose is, in fact, to block access to a certain shared resource. One of the most serious problems that may occur when resources can be blocked is priority inversion: a situation in which a high-priority thread is forced to wait for a resource held by a low-priority thread, which in turn may be preempted by threads of medium priority.
REFERENCES
1. Deuel A 1994 The benefits of a manufacturing execution system for
plantwide automation ISA Transactions 113-124
2. McClellan M 2004 The Collaborative Effect Intelligent Enterprise 7(16) 35
3. Fuchs F and Thiel K 2009 Manufacturing Execution Systems: Optimal
Design, Planning, and Deployment (New York: McGraw-Hill)
4. Hadjimichael B 2004 Manufacturing Execution Systems Integration and
Intelligence (Master Thesis McGill University)
5. Cao W Jing S and Wang X 2008 Research on Manufacturing Execution
System for Cement Industry IEEE Conference on Industrial Electronics and
Applications 1614-1618
6. Waldron T A 2011 Strategic Development of a Manufacturing Execution
System (MES) for Cold Chain Management Using Information Product Mapping
(Master Thesis - Massachusetts Institute of Technology)
7. Scholten B and Schneider M 2010 ISA-95 As-Is / To-Be Study MESA
White Paper 23
8. Govindaraju R Lukman K and Chandra D R 2014 Manufacturing Execution
System Design using ISA-95, Advanced Materials Research 980 248-252
9. MESA International 1997 MES explained: A High Level Vision White
Paper 6
10. MESA International 2000 Enterprise-Control System Integration Part
1: Model and Terminology
11. MESA International 2005 Enterprise-Control System Integration Part
3: Activity Models of Manufacturing
12. Pressman R 2010 Software Engineering: A Practitioner’s Approach
7th ed. (New York: McGraw-Hill)
13. Kauppinen M 2005 Introducing Requirements Engineering into Product Development: Towards Systematic User Requirement Definition (Espoo: Helsinki University of Technology)
14. Manufacturing Approach, International Journal for Scientific Research & Development 2(4) 337-340
16. Kim D P 2003 Theory of Automatic Control: Linear Systems (Moscow: FIZMATLIT)
17. Laudon J and Laudon K 2005 Management Information Systems trans. from English by D R Trutnev (St. Petersburg: Piter)
18. O'Leary D 2004 ERP Systems: Modern Enterprise Resource Planning and Management. Selection, Implementation and Operation trans. from English by Yu I Vodyanov (Moscow: Vershina)
19. Andreev E B, Kutsevich I V and Kutsevich N A 2015 MES Systems: An Inside View (Moscow: RTSoft)
20. Andreev E B, Kutsevich N A and Sinenko O V 2004 SCADA Systems: An Inside View (Moscow: RTSoft)
21. Lenshin V N and Kuminov V V Manufacturing Execution Systems (MES): The Way to an Efficient Enterprise URL: http://asutp.ru/?p=600359 (accessed 12.01.2016)
22. Frolov E B and Zagidullin R R MES Systems As They Are, or the Evolution of Production Planning Systems (Part I) URL: http://www.fobos-mes.ru/stati/mes-sistemyi-kak-oni-est-ili-evolyutsiya-sistem-planirovaniya-proizvodstva.-chast-i.html (accessed 12.01.2016)
23. Frolov E B and Zagidullin R R MES Systems As They Are, or the Evolution of Production Planning Systems (Part II) URL: http://www.fobos-mes.ru/stati/mes-sistemyi-kak-oni-est-ili-evolyutsiya-sistem-planirovaniya-proizvodstva.-chast-ii.html (accessed 12.01.2016)
24. Soldatov S 2016 Integration of SCADA Systems and Enterprise Management Systems Sovremennye Tekhnologii Avtomatizatsii (Contemporary Technologies in Automation) 1 90-95
25. Frolov E B and Zagidullin R R Operational Scheduling and Dispatching in MES Systems (Part I) URL: http://www.fobos-mes.ru/stati/operativno-kalendarnoe-planirovanie-i-dispetchirovanie-v-mes-sistemah.-chast-i.html (accessed 12.01.2016)
26. Lileev P Typical SAP Integration Models: ERP and MES. Modern Approaches to ERP and MES Integration at Metallurgical Enterprises (Part
CONTENTS
1. AUTOMATED CONTROL SYSTEMS. INTRODUCTION AND
DEFINITIONS.............................................................................................................. 3
1.1. Defining ‘computer technology’ ................................................................. 3
1.2. Defining ‘automated systems’ ..................................................................... 3
1.2.1. Automated systems ........................................................................... 3
1.2.2. Processes occurring in automated systems ........................................ 4
1.3. Types of control systems ............................................................................. 6
1.4. Types of support for automated subsystems ............................................... 7
2. CLASSIFYING AUTOMATED SYSTEMS ................................................... 10
2.1. Product lifecycle ........................................................................................ 10
2.2. Complex systems for industrial automation ............................................. 13
2.3. The structure of complex automated control systems ............................... 14
2.4. The principles of constructing complex automation systems.
Characteristic features of man-machine systems. ................................................ 17
3. OLAP TECHNOLOGY..................................................................................... 19
3.1. What is OLAP? ......................................................................................... 19
3.2. Why do we need OLAP? ........................................................................... 20
3.2.1. Increasing data storage..................................................................... 21
3.2.2. Data versus Information................................................................... 21
3.2.3. Data layout ....................................................................................... 22
3.3. OLAP fundamentals .................................................................................. 22
3.3.1. What is a cube? ................................................................................ 23
3.3.2. Multidimensionality ......................................................................... 25
3.3.3. "Slicing & dicing" ............................................................................ 28
3.3.4. Nested dimensions ........................................................................... 28
3.3.5. Hierarchies & groupings .................................................................. 29
4. ENTERPRISE RESOURCE PLANNING ...................................................... 30
4.1. Basic Concepts and Definitions ................................................................ 31
4.2. Benefits and Importance............................................................................ 32
4.3. Value of ERP ............................................................................................. 35
4.3.1. IT value of ERP systems .................................................................. 36
4.3.2. Business value of ERP systems ....................................................... 36
4.3.3. Business process integration ............................................................ 38
4.3.4. Importance of strategic alignment of ERP with business goals ...... 42
4.4. ERP System Use in Organizations ............................................................ 43
4.5. Future impacts to industry and organizations ........................................... 44
5. MANUFACTURING EXECUTION SYSTEMS (MES)................................ 45
5.1. Manufacturing Execution Systems Implementation ................................. 46
5.2. Model Development .................................................................................. 47
5.2.1. Specific functional model. ............................................................... 49
5.2.2. Configure, Build and Test................................................................ 50
5.3. Methodology ............................................................................................. 51
6. SUPERVISORY CONTROL AND DATA ACQUISITION (SCADA) ........ 53
6.1. Field Data Interface Devices ..................................................................... 55
6.2. Communications Network......................................................................... 56
6.3. Central Host Computer .............................................................................. 57
6.4. Operator workstations and software components ..................................... 58
6.5. SCADA Architectures ............................................................................... 59
6.5.1. Monolithic SCADA Systems ........................................................... 59
6.5.2. Distributed SCADA Systems .......................................................... 60
6.5.3. Networked SCADA Systems ........................................................... 62
7. GENERAL SCADA COMPONENTS ............................................................. 63
7.1. PLC BASICS ............................................................................................. 63
7.1.1. Controllers........................................................................................ 63
7.1.2. Microprocessor controlled system ................................................... 65
7.1.3. The programmable logic controller ................................................. 65
7.2. Hardware ................................................................................................... 67
7.3. Internal architecture ................................................................................... 69
7.3.1. The CPU........................................................................................... 69
7.3.2. The buses ......................................................................................... 70
7.3.3. Memory ............................................................................................ 70
7.3.4. Input/output unit .............................................................................. 71
7.3.5. Sourcing and sinking ....................................................................... 73
7.4. Controller selection criteria ....................................................................... 74
7.5. PLC vs. PAC ............................................................................................. 76
7.5.1. Determining users’ needs................................................................. 77
7.5.2. Functional differences...................................................................... 80
7.5.3. PLC & PAC model comparison ...................................................... 81
7.6. REMOTE TERMINAL UNIT .................................................................. 86
7.6.1. Architecture...................................................................................... 86
7.6.2. Central Processing Unit (CPU) ........................................................ 86
7.6.3. Power supply.................................................................................... 87
7.6.4. Digital (control) outputs................................................................... 88
7.6.5. Software and logic control ............................................................... 89
7.6.6. Communications .............................................................................. 89
7.6.7. Comparison with other control systems .......................................... 90
7.6.8. Applications ..................................................................................... 91
7.6.9. RTU manufacturers.......................................................................... 91
8. INDUSTRIAL DATA COMMUNICATIONS ................................................ 92
8.1. Open Systems Interconnection (OSI) model............................................. 93
8.2. RS-232 interface standard ......................................................................... 94
8.2.1. Half-duplex operation of RS-232 .................................................... 94
8.3. Fiber Optics ............................................................................................... 95
8.3.1. Applications for fiber optic cables ................................................... 96
8.3.2. Fiber optic cable components .......................................................... 96
8.4. Modbus ...................................................................................................... 97
8.4.1. Modbus protocol .............................................................................. 97
8.4.2. Modbus Plus .................................................................................... 98
8.5. Data Highway Plus /DH485 ...................................................................... 99
8.6. HART ........................................................................................................ 99
8.7. AS-i.......................................................................................................... 100
8.8. DeviceNet ................................................................................................ 101
8.9. Profibus.................................................................................................... 102
8.10. Foundation Fieldbus ............................................................................. 103
8.11. Industrial Ethernet .............................................. 104
8.11.1. 100 Mbps Ethernet ....................................................................... 105
8.11.2. Gigabit Ethernet ........................................................................... 105
8.11.3. TCP/IP.......................................................................................... 106
9. OPC TECHNOLOGY ..................................................................................... 108
9.1. Introduction in OPC ................................................................................ 108
9.1.1. OPC Background ........................................................................... 109
9.1.2. Purpose........................................................................................... 109
9.1.3. The Current Client Application Architecture ................................ 110
9.1.4. The Custom Application Architecture ........................................... 111
9.1.5. General ........................................................................................... 111
9.2. Scope ....................................................................................................... 112
9.3. OPC Fundamentals .................................................................................. 113
9.3.1. OPC Objects and Interfaces ........................................................... 113
9.3.2. OPC DataAccess Overview ........................................................... 114
9.3.3. OPC Alarm and Event Handling Overview .................................. 115
9.3.4. OPC Historical Data Access Overview ......................................... 116
9.3.5. Where OPC Fits ............................................................................. 116
9.3.6. General OPC Architecture and Components ................................. 117
9.3.7. Local vs. Remote Servers .............................................................. 118
9.4. New Automation Concepts with OPC Unified Architecture .................. 118
9.4.1. Standardized communication......................................................... 119
9.4.2. Developing OPC UA based concepts ............................................ 121
10. COMPLEX COMPUTER SYSTEMS ........................................................... 123
10.1. Defining ‘Complex systems’................................................................ 123
10.2. General design concepts of complex computer systems...................... 124
10.3. Computer system classification ........................................................... 126
10.4. Main architectures of complex computer systems ............................... 128
10.5. Computer System Classification based on different characteristics
(properties) ......................................................................................................... 131
10.6. Other important characteristics of automated computer systems ........ 133
11. REAL-TIME SYSTEMS ................................................................................. 135
11.1. Real-time mode: concept, definition and terminology ........................ 135
11.2. Architectures of real-time operating systems (RTOS)......................... 138
11.3. Processes and Threads in RTOS .......................................................... 141
11.4. Thread Scheduling ............................................................................... 142
11.5. Synchronization mechanisms ............................................................... 144
REFERENCES ......................................................................................................... 147
Anastasiia D. Stotckaia
Alexander V. Nikoza