
MINISTRY OF EDUCATION AND SCIENCE

OF THE RUSSIAN FEDERATION

ST. PETERSBURG STATE ELECTROTECHNICAL UNIVERSITY


«LETI» NAMED AFTER V. I. ULYANOV (LENIN)

Anastasiia D. Stotckaia, Alexander V. Nikoza

COMPUTER-BASED TECHNOLOGIES OF
CONTROL IN TECHNICAL SYSTEMS.
Lecture notes

Educational material for the discipline


«Computer-based technologies of control in technical systems»

St. Petersburg
2017
УДК 517.935 (07)
ББК З 973.23 - 018.2я7 + З 986 я7
S86

Anastasiia D. Stotckaia, Alexander V. Nikoza


S86 Computer-based technologies of control in technical systems. Lecture notes:
educational material. St. Petersburg, 2017. 153 pages.

The lecture notes "Computer-based Technologies of Control in Technical
Systems" are intended to provide a complete picture of the modern information
technologies and software used in control systems built on information and
digital systems, primarily in industrial processes. The material is presented on
the basis of universal principles applicable to the control of any complex
system. Questions concerning the control of technical systems and various
industrial automation systems are discussed in general terms. The functional,
organizational, informational, software and hardware aspects of computer-aided
control processes are covered in detail. Important issues related to the
development of up-to-date industrial systems, namely Intranet and Internet
technologies, are considered. The following principles of SCADA system
construction are discussed: the implementation of human-computer interaction,
the hierarchical principle of system construction, the composition of hardware
and software platforms, and methods of software interaction.

УДК 517.935 (07)


ББК З 973.23 - 018.2я7 + З 986 я7

Approved
by the publishing Council of the University
as educational material

© St. Petersburg State Electrotechnical University «LETI», 2017

1. AUTOMATED CONTROL SYSTEMS. INTRODUCTION AND
DEFINITIONS
1.1. Defining ‘computer technology’
In modern automation and control systems, the main instrument for
processing information, carrying out calculations and determining setting and
controlling actions is the computer. The integration of computers into the areas of
science, production and control is marked by the addition of ‘computer’ as an
adjective to every respective term. ‘Computer technology’ further includes
communications technology, which is responsible for the flow of information
within computer information networks. Modern literature uses several terms to
refer to technologies for information processing and control. The most common of
them is ‘information technology’.
Note: The terms ‘computer technology’ and ‘information technology’ differ
in the sense that information can be processed without the help of a computer.
However, technological advancement, especially in such areas as automation of
production, product design and documentation, has made the computer a useful
and popular tool. Thus, in the context of this discipline, the terms shall be used
interchangeably as synonyms.
Computer technology is a process involving a number of means and
methods for data collection, processing and transmission with the aim of gaining
information on the state of a product, process or phenomenon.
Since this course is dedicated to computer technologies in automation and
control, and automation tasks are solved with the use of automation systems, we
may proceed to discussing and defining this term.

1.2. Defining ‘automated systems’

1.2.1. Automated systems


A system is a complex of elements which interact with each other, forming a
certain united whole.
A system’s architecture is the group of features relevant to the user.
An element of a system is an indivisible elementary part of the system. A
group of interconnected elements which has a certain functional purpose is
defined as a subsystem.

System organization is the internal order of interaction between the elements
of a system, defined by, among other things, a restriction on element variety within
the system.
The structure of a system is the content, the order and interaction principles
which define its main features. If the separate elements of a system have internal
connections, such a structure is referred to as hierarchic.
Note: The representation of a system in the form of elements and subsystems
depends on the level and the granularity of detail. This is especially true for more
complex systems. For instance, the top level of detail in a production control
system involves the following subsystems: economic, logistical, production and
power. At the same time, each of these subsystems can be viewed as a self-
contained system. On the lower levels of detail, the security alarm system and the
waste reclamation system can also be viewed as self-contained.
An automated system (AS) is the combination of personnel, technology,
software, mathematical methods and organizational complexes which help to
rationally manage a complex object or process according to the objective.
The AS consists of:
– the main part, which includes information support, technical support and
mathematical support;
– the functional part, which includes interconnected programmes automating
the control functions.
In general automated systems are defined by the following features:
1) Building an AS requires a systemic approach
2) Any AS can be analysed, built and controlled based on system control
theory
3) An AS has to include a principle for further development and expansion
(extensibility and scalability)
4) The output product of an AS is information, on which decision making is
based
5) An automated system should be considered as a human-machine system for
information processing and control.

1.2.2. Processes occurring in automated systems


The processes behind a functioning automated system of any kind can
be viewed as having the following stages:
1 Input of information from external or internal sources
2 Processing of input information and representing it in a convenient
(required) form
3 Storage of information in the form of databases, information arrays and
files
4 Outputting information to customers or transferring it to another system;
5 Altering the input data according to the relevant law
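As a rough illustration of these five stages, the sketch below models them as a small Python routine; the class name and the proportional control law used here are purely hypothetical and are not part of the lecture material:

# A minimal sketch (hypothetical names) of the five information-processing
# stages of an automated system: input, processing, storage, output and
# altering data according to a control law.

from typing import Callable, List

class InformationProcess:
    def __init__(self, control_law: Callable[[float], float]):
        self.storage: List[float] = []      # stage 3: storage (database/array/file)
        self.control_law = control_law      # stage 5: the relevant control law

    def run(self, raw_value: str) -> float:
        value = float(raw_value)            # stage 1: input from an external source
        processed = round(value, 2)         # stage 2: processing into a convenient form
        self.storage.append(processed)      # stage 3: storage
        print(f"output to customer: {processed}")  # stage 4: output / transfer
        return self.control_law(processed)  # stage 5: altering data per the control law

# usage: an assumed proportional control law u = -k*e, for illustration only
process = InformationProcess(control_law=lambda e: -0.5 * e)
print(process.run("3.1415"))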
There are two types of internal processes in automated systems: information
processes and processes of creating and supporting automated systems.
Information processes include:
– formalized processes whose implementation does not alter the specific data
processing algorithm (searching, registering, storing, data transmission,
document printing, simulator study, execution unit control algorithms);
– unformalized procedures which lead to the creation of new unique information
when the source information processing algorithm is unknown (forming a
number of alternatives from which one is chosen);
– poorly formalized procedures where the data processing algorithm may change
and is not clearly defined (planning tasks, efficiency evaluation, etc.).
Processes of creating and supporting automated systems include:
– developing and setting up a system for solving a certain type of tasks,
administration (supporting access, services and user rights) and processing
requests;
– supporting the integrity and safety of information;
– periodical revision of information;
– automation of data indexing, etc.

Automated systems can be an effective means for solving the following tasks:
1 Achieving more rational ways of solving management tasks through the
integration of mathematical methods;
2 Automation of manual labour
3 Increasing the reliability of the data on which decision making is based
4 Improving the structure of data flows (including document circulation)
5 Cutting production expenses (including information)
Note: It should be mentioned that the development and integration of
automated systems, especially at the initial stages, is a highly expensive process.
This is due to the necessary purchase of calculating machines and software, taking
on new staff and providing re-training.

1.3. Types of control systems


Control systems fall into two categories: automatic control systems and
automated control systems.
In automatic CS, the control of an object or system is carried out without
direct human involvement. These are closed-loop systems. The main functions of
automatic CS are automatic monitoring and measurement, automatic alarms, automatic
protection, automatic starting and stopping of engines and controllers, automatic
maintenance of set operating modes in working equipment, and automatic regulation.
In automated CS, the control loop involves human activity in taking the more
important decisions and bearing responsibility for them. An automated CS is
usually a human-machine system using economic and mathematical methods,
computer technologies, means of communication and new principles of
organisation for finding and implementing solutions for the efficient control of an
object (or system).
Automated CS can be classified according to their functional and structural
features, application sphere, the nature of the data involved etc.
According to their sphere of application, automated CS are distinguished as
follows:
– Enterprise management
– Production control
– Administration systems (in HR)
– Finance and accountancy systems
– Marketing systems
– Research systems
– Automated design systems
– Task-based systems
Enterprise management systems are aimed at automating the functions of the
administrative personnel. These include both industrial information control
systems and non-industrial ones: in hotels, banks, trading companies, etc. These systems
have strategic and tactical functions serving a wide spectrum of administrative and
industrial tasks.
Production systems are subdivided into:
– Automated production management systems;
– Automated process control systems
– Automated control systems for technical resources
Automated process control systems (APCS) are aimed at automating
production personnel functions. These systems control and use data which
determines the state of technological equipment and provide the necessary mode
for technological processes. They are often referred to as industrial automation
systems. The SCADA system (Supervisory Control and Data Acquisition) is
incorporated into the APCS. Direct software control of technological equipment is
carried out through the CNC system (Computer Numerical Control) based on
controllers (specialized industrial computers) built into the equipment.
Automated design systems are aimed at automating the functions of design
engineers, constructors, architects and designers in the creation of new equipment
or technology. The system’s main functions are engineering calculations, creating
graphics (drawings, schemes and plans) and project documentation, modelling the
objects to be designed.
Integrated (corporate) automated control systems are used to automate the
main functions of an enterprise, encompassing the whole operation cycle from
project design to distribution or even waste management. Creating these systems
can be a complicated task as it requires a systematic approach with consideration
of the main objectives, like gaining profit and control over the market. Such an
approach can lead to significant changes in the structure of the enterprise, which
makes it a difficult decision for many enterprise managers.

1.4. Types of support for automated subsystems


The general structure of an automated system can be viewed as a group of
subsystems irrespective of the sphere of application. In this case, we speak of the
structural classification feature and the subsystems are referred to as enabling
systems. This way, the structure of any complex system, including complex
automated control systems, can be represented as a group of enabling subsystems.
Information support is the combination of a unified system for
information classification and coding, unified documentation systems, schemes of
information flows circulating within the object of automation (enterprise, industry,
etc.), as well as methodology for building databases. The goal of the information
support subsystem is to promptly form and output accurate information for
taking management decisions at various hierarchic levels.

Fig. 1.1. Support subsystems of an automated control system: the Information
Support, Technical Support, Mathematical and Software Support, and
Administrative and Legal Support subsystems

Technical support is a complex of technical resources supplying the
operation of an automated system, including the documentation relating to these
resources and technological processes.
The technical maintenance complex includes:
– Computers (any platforms)
– Devices for acquisition, processing, storage and output of data
– Devices for passing/receiving data and communication lines
– Office appliances and other supporting devices
– Operational materials and consumables
There are two most widely used forms of technical maintenance organization
(forms of using technical resources) – the centralized and partially or entirely
decentralized forms.
The more promising of these approaches is the partially decentralized form of
organization based on distributed networks consisting of PCs and industrial
computers for database storage, which are common in any functional subsystem.
The automation of complex objects or processes located within a limited
space often requires the use of a decentralized structure.
Mathematical support and software is a complex of mathematical
methods, models, algorithms and programs designed to serve the objectives of an
automated system and also to provide the stable functioning of the technical
complex.
Mathematical support includes:
– Means for modeling systems and control processes
– Universal algorithms for controlling processes and equipment
– Methods of mathematical systems theory, circuit design, mathematical
statistics, queuing theory, mathematical programming, etc.

Software includes system-wide and special programmes as well as technical
documentation.
System-wide software refers to user-oriented programme complexes aimed at
solving universal tasks in information processing and management. These are used
to broaden the functional capacity of computers, to provide control and manage
data processing.
Special software refers to a group of programmes developed during the
creation of a specific automated system. This includes packets of applied
programmes, which implement the developed models of varying levels of
adequacy representing the functioning of a real object.
The technical documentation for the software includes a description of the tasks,
their algorithmisation, the economic and mathematical task models and test examples.
Fig. 1.2. Software in an automated system: general (system-wide) and special software

Organisational support is a complex of methods and means regulating the
interaction of personnel with the technical equipment and with each other in the process
of developing and using automated control systems. Organisational support is
created based on the results of the pre-design study of a business or manufacturing
centre and has the following functions:
1 Analysing the existing control system used for the object where the
automated control system (ACS) will be employed; determining tasks for
automation;
2 Preparing tasks for computer solution, including technical statements for
designing ACS and economic justification
3 Developing management solutions on the content and structure of the
organisation and developing task-solving methods, with the aim of
raising efficiency.

Legal support is a set of legal provisions which regulate the information
management systems’ creation, legal status and functioning, as well as how the
information is acquired, converted and used. The main objective of legal support is
to ensure lawfulness. It is based on laws, orders, government decrees, instructions
and other regulatory documents issued by various public authorities.
Legal support consists of a) a general part, regulating the functioning of any
ACS as an information management system, and b) a local part, regulating the
activity of a specific subsystem.
Legal support includes:
– System status
– Rights and obligations of the developer (or supplier) and the
customer
– Rights, obligations and responsibility of the personnel
– The legal status of particular types of control processes and the
order of creating and using information

2. CLASSIFYING AUTOMATED SYSTEMS


2.1. Product lifecycle
As the practice of designing, developing and operating various automated
systems as well as developing and supporting software is closely connected with
the term ‘product lifecycle’, we shall briefly define this term.
The main stages of a product’s life cycle are shown in fig. 2.1. These include
the design stage, preproduction engineering, manufacturing, sales & distribution,
operation and finally disposal (the stages can also include marketing, the purchase
of materials and accessories, services, packaging and storage, assembling and
installation). We shall discuss the main product lifecycle stages using general
machine engineering products as an example.
The design stage consists of a number of procedures including developing a
solution, geometrical models and drafts, calculation, process modeling,
optimization etc.
During the preproduction engineering stage the route and operational
technologies for manufacturing components are developed and programmed into
machines with numerical control. The assembly and installation technologies as
well as control and testing technologies are also developed at this stage.

Fig. 2.1. – The main stages of a product’s lifecycle.

The manufacturing stage involves time planning and operational planning,
purchasing materials and accessories with incoming inspection, machining and
other necessary types of processing, controlling results of the processing
procedure, assembly, testing and final control.
The following stages involve:
– Conservation, packaging and transportation
– Installation on customer premises, operation, servicing and maintenance
– Disposal
All PL stages have specific objectives. The most demanding are the
production and operation requirements for complex technology, such as products
of the shipbuilding, aircraft and automobile industries. Manufacturing such
complex technology is impossible without the broad implementation of
computer-based automated systems.
The specific tasks occurring at each stage of PL account for a wide variety of
automated systems used in the process. Figure 2.2. shows the main PL stages in
relation to various AS types.

Fig. 2.2. The main PL stages (design, preproduction engineering, manufacturing,
distribution, operation, disposal) in relation to various AS types (CAD, CAM,
CAE, PDM, SCM, ERP, CPC, MES, SCADA, CNC, CRM, IETM)

Design automation is carried out with the help of CAD. In machine
engineering CAD is subdivided into functional, construction and technological
design. Functional CAD systems are referred to as CAE (Computer Aided
Engineering).
Technological processes are designed with the help of CAM (Computer
Aided Manufacturing).
To solve the problems of joint operation of different CAD components,
CAE/CAD/CAM coordination, and management of design data and the design
process, PDM (Product Data Management) systems have been developed. These systems are either
integrated into one of the CAD modules or are autonomous and can work
alongside other CAD systems.
Most product lifecycle stages, from finding suppliers of raw materials to
product distribution, require a system for managing supplies - SCM (Supply Chain
Management). Managing supply chains is about sustaining the flow of material
with minimal expenses. For instance, if the production cycle time is less than the
waiting time of the customer, it is possible to employ the customizing strategy.
Otherwise the company has to employ the strategy of production and storage.
In recent years, most companies producing hardware and software for
automated systems have focused on creating systems for E-commerce.

Furthermore, it is possible to coordinate the operation of several partner
enterprises using Internet technologies in the integrated information environment
called CPC (Collaborative Product Commerce).
At the stage of product distribution, it is essential to carry out customer and
supplier management and market analysis and to determine the demand for the
product. These functions are performed by the CRM (Customer Relationship
Management) system.
Personnel training is supported by IETMs (Interactive Electronic
Technical Manuals). At the operation stage, these help conduct diagnostic
operations, search for malfunctioning components, order replacement parts and
perform other operations.
Data control within the universal information environment is performed by
the PLM system (Product Lifecycle Management) at all levels of the cycle. One
important feature of this system is that it supports the interaction of various
automated systems at various enterprises; in other words, PLM technology
(including CPC) is the basis for integrating the information environment in which
all the automated systems at several enterprises are functioning.

2.2. Complex systems for industrial automation


Control in industry, as in any other complex system, has a hierarchic
structure. The general structure of enterprise control has several hierarchic levels
(represented as a pyramid in fig. 2.3). Control automation at different levels is
carried out with the help of ACS.
Information support of production is provided by industrial control systems.
These include MRP (Manufacturing Requirement Planning) and ERP (Enterprise
Resource Planning). More developed ERP systems perform business-related
functions connected with production planning, purchasing supplies, distribution,
market analysis, finance and HR management, storage management, strategic
capital planning, etc. MRP systems are oriented generally at business-related
functions directly connected with manufacturing. Industrial automation also
includes MES (Manufacturing Execution Systems), which are used for solving
operational industrial control tasks.

Fig. 2.3. General structure of enterprise control

Automated Process Control Systems (APCS) include the SCADA
(Supervisory Control and Data Acquisition) which performs the functions of
collecting and processing data on the state of equipment and technological
processes, also helping develop software for embedded equipment. Direct programme
control of equipment is carried out by means of the CNC (Computer Numerical
Control) system which is based on controllers (specialized computers, also called
industrial computers) which are built into the equipment. CNC systems are also
referred to as embedded computer systems. The lowest level is designed for organizing
communication between the technological object and the managing devices
(computers) using remote terminal units (RTUs).

2.3. The structure of complex automated control systems


Human decisions in various fields such as placing orders, technical re-
equipment, supplied materials, research methods etc. lead to what we call events.
The term event just like the term controlling action is used here in a very broad
sense. This can mean supplying materials, withdrawing a product from the
catalogue, rearranging equipment, carrying out repairs, etc. Thus, control is
understood as a number of actions that bring the system into the desired target
state. These controlling actions influence the events and are at certain times
manifested through other events, closing the cycle by doing so. In complex
automation and control systems one also deals with what is called the information
system. We shall limit our discussion of this system to one of its components – the
information cycle. An event generates information which is then used with
intermediate processing or without it; the application triggers the controlling
action.
This process is present in all control systems. An information system is an
information environment that makes it possible to determine when, where and in
what circumstances the event occurred.
Using the information, a number of controlling actions are issued by the
automated information system. Such a system is created artificially by humans.
An automated information system generally performs the following operations:
1 Collection, initial processing and verification of information
2 Conversion of information, i.e. recoding and rerecording when the
information presentation method or the carrier is incompatible with the
usage unit
3 Transmission of information to the storage unit and storage
4 Secondary processing (when the information received cannot be used
directly, i.e. when it cannot trigger the required controlling action in its
present state)
5 Output of information to the user (information presentation)
6 Providing computer support for decision-making
7 Providing information to be used by the decision maker (a person) for the
purpose of solving control tasks.
It should be noted that different types of models have very distinct and
important roles at every stage of the cycle. Practice shows that effective
implementation of automated information systems is only possible with the use of
adequate models of different types (mathematical models of technological
processes, knowledge models, data models, etc.)
Note: there is a difference between computer systems and automated systems.
Computers equipped with specialized software serve as a technical base for
automated information systems. An automated information system also includes
the personnel that interacts with the computers.
Due to the complexity of most modern technology, managing such equipment
without human involvement is economically imprudent or in some cases simply
impossible even with a high level of computerization. For this reason, human
involvement in modern automated information systems is vital, especially when
important decisions are made.

Information systems are divided into factual systems and document
databases.
Factual automated systems register facts – specific values of data on physical
objects of the real world. The information in factual systems is structured so as to
provide a definite answer to a question like “What amount of goods of type Y has
enterprise X produced during the last working shift?” or “What is the state of the
managed process, based on parameter X or parameter Y?”.
Document databases serve a different type of tasks that do not require a
definite solution or answer. These systems deal with unstructured text
documents (articles, books, laws, etc.) and have a formalized data retrieval system.
The document database simply finds upon the user’s request a list of documents
which in some way satisfy the requirements of the user. For instance, it can
provide a full list of articles containing the phrase “automation of printed-circuit
wiring”.
The activities of any industrial enterprise can be viewed as having two major
aspects: a) the production process itself and b) the enterprise’s financial and
economic activity. Information systems for financial and economic activities have
their own specific requirements which we shall not discuss in great detail,
considering them in the general context of control tasks. The production process at
a large enterprise involves a great number of technological cycles. Moreover, the
cycles involve the use of various kinds of material (both raw and intermediate) and
require control at every stage. Any failures within one technological cycle can have
serious financial consequences or lead to unpleasant accidents. This means that
control over production has to be provided constantly and in real time. This
accounts for the high requirements placed on automated information systems in terms of
efficiency, quality and safety. Naturally, quality and safety are also required in
financial and economic activities as the volume of incoming and outgoing financial
flows, as well as money circulating within the enterprise, is quite substantial.
Any serious enterprise is in effect a conglomeration of several, to a certain
extent independent, production facilities. Depending on the size of the enterprise
and the area in which it is specialized, the number of production facilities will
vary. The independence of these production facilities does not undermine their
coordinated performance and interrelation of technological cycles. For this reason
it is essential to have a number of independent automated subsystems which
interact closely with one another.

2.4. The principles of constructing complex automation systems.
Characteristic features of man-machine systems.
The hierarchy of a complex automation system is defined by a necessity of
structured control in complex systems with the aim of acquiring a finite number of
possible solutions, from which the best decision is chosen. This decision is then
realized in the system in the form of decentralized and coordinated controlling
actions with various levels of responsibility. Representing the source control
system as a hierarchic system is done through the functional decomposition of the
system.
It should be noted that this hierarchic nature of the system accounts for
certain specific features. Firstly, the system is represented as a set of subordinated
subsystems of various hierarchic levels. Secondly, subsystems of a higher level use
aggregate coordinates in decision-making, which are functions of lower
subsystems’ output coordinates, and form directive controlling actions for these
subsystems. An important feature of a hierarchic control system is the inaccessibility
of the full state vector of the lower level subsystems to the higher level subsystem.
It is important that the higher level formulates a control objective in aggregate
coordinate terms and chooses an aggregate control action to achieve the objective.
Solving the task at the higher level does not define the state of the system as it is
formulated in aggregated coordinate terms.
In order to define the state vector of the source system, the lower level is
used. The control objective at the lower level is formulated in source variables
terms, but the control action itself remains the same as defined on the higher level
in aggregated coordinate terms. This means that the decisions of the subsystem on
the higher level are necessarily implemented at the lower level. Consequently, the
hierarchic architecture of the control system always narrows down the possible
controlling actions due both to the aggregated coordinates and to structural
limitations.
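The following minimal Python sketch illustrates this two-level arrangement under invented numbers: the higher level sees only an aggregate coordinate (here simply the mean of the lower-level outputs) and issues a directive in aggregate terms, which the lower level then maps back onto the source variables. The aggregation and distribution rules are assumptions made for illustration only.

# A minimal sketch (hypothetical values) of two-level hierarchic control.

lower_level_outputs = {"unit_1": 12.0, "unit_2": 9.5, "unit_3": 10.5}  # full state, hidden from the top

# Higher level: works only with the aggregate coordinate.
aggregate = sum(lower_level_outputs.values()) / len(lower_level_outputs)
target_aggregate = 10.0
directive = target_aggregate - aggregate        # directive controlling action in aggregate terms

# Lower level: translates the aggregate directive into source-variable terms.
corrections = {unit: directive for unit in lower_level_outputs}
print(aggregate, directive, corrections)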
Distribution as a feature of the complex system provides coordination
between the control system topology and the principles of organizational and
technological control in physically and functionally distributed control objects and
eliminates excess information circulating within a system during real-time parallel
and asynchronous processing. At the same time, supporting the access of users to
prepared and formatted information becomes more rational and makes it possible
to accomplish various forms of redundancy in the system, with the aim of
providing a high level of security.

Distributed systems are generally ergatic: the control process is carried out
jointly by a human operator (control personnel, crew, team, etc.) and the technical
means, which vary in their principles and functional application. Human participation in
the control process accounts for a large number of special features in the system
and requires the solution of technical ergonomics tasks (at the system design stage)
in order to create comfortable conditions for the human operator, who has to make
quick decisions in a situation where information overloads and psychological and
physiological strain are inevitable. An ergatic system should be able to constantly
monitor the physiological state of the human operator and his or her ability to solve
the necessary functional tasks. If the human is temporarily overloaded and
consequently incapable of solving all the tasks to the fullest extent, the ergatic
system passes part of the tasks on to automated devices with the aim of
coordinating behavioral, technological, organizational and economical aspects of
control. Redistribution of functional tasks in the control process is caused by the
necessity of retaining control over the system despite information, psychological
and physiological overloads endured by the human operator and can lead to a drop
in control quality.
The task of identifying the physiological state of the human and his or her
ability to solve functional tasks is classified in the ergatic system as weakly
structured. Currently, such tasks are solved with the help of expert systems.
Human participation in the control process requires solving one more
important task – maintaining an adequate personnel qualification level. Indeed,
excess control automation leads to a reduction in the workload of the human
operator resulting in a decline in his or her professional skills, which can lead to
emergency situations and accidents. The task of maintaining these necessary skills
is solved in ergatic systems by the introduction of professional tests and problems
imitating pre-emergency or emergency situations into the system. The control
system analyses and documents the professional performance of the operator.
Based on the test results and analysis, every participant receives new tests and
problems to solve, which are tailored to the specific needs of every operator.
This class of complex systems functions as a rule with incomplete and
unreliable information on system coordinates and parameters and indeterminate
evaluations and indicators. It is therefore necessary to develop control systems in
the form of an intelligent system which combines the operator’s intellect and the
expert system’s AI. Combining intellects within one system leads to the necessity of
solving yet another problem: deciding which of the two gets priority in each
specific situation – the human operator or the expert system. If the operator’s
qualifications are higher, the expert system issues recommendations which are
taken into account by the operator before making a decision. The control system in
this case functions as a decision-making system. Otherwise, the decision is made
by the expert system. Since the operators
working in control systems are highly qualified professionals, these systems are
designed and implemented as decision-making systems.
These systems are generally classified as continuous discrete systems, i.e.
systems whose state vector can change in a “leap” at discrete moments in time. An
instant change of the state vector of the system can be triggered by a discrete event
or when certain conditions are fulfilled with the interaction of continuous
coordinates. A change in the state vector depending on a certain parameter which
determines the system’s characteristics is called a process. In the present class of
systems – dynamic continuous discrete systems – this determining parameter is
time. Thus, the control process (P) is defined as a time oriented finite or infinite
sequence of discrete events, separated by uninterrupted time periods (t). The
process is always associated with some object entering the system and establishes a
link with this object as a means of functional decomposition of the system.
Formally, the process is described by the following sequence:
P = {t0, X, T, t},
where t is time (the process parameter); t0 is the process activation time; X(t) is the
state vector of the process; T = {ti} is the process protocol, i.e. the set of time
moments at which the events triggering the changes in functional tasks, and
consequently the discrete changes in the state vector, occur. The process parameters (t0,
X, T) depend on the state of the system S(t). For this reason, the control process in
a distributed system is a nonstationary indeterminate object and is classified as an
asynchronous real-time parallel vector process.
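As a hedged illustration, the process description P = {t0, X, T, t} can be mirrored by a small data structure; the field names and the example state vector below are assumptions made for the sketch, not notation taken from the text.

# A minimal sketch of the process P = {t0, X, T, t} as a Python data structure.

from dataclasses import dataclass, field
from typing import Callable, List, Sequence

@dataclass
class Process:
    t0: float                                    # process activation time
    state: Callable[[float], Sequence[float]]    # X(t), the state vector of the process
    protocol: List[float] = field(default_factory=list)  # T = {ti}, times of discrete events

    def register_event(self, ti: float) -> None:
        """Record a discrete event that changes the functional tasks."""
        self.protocol.append(ti)

# usage: a process activated at t0 = 0.0 with a simple two-coordinate state vector
p = Process(t0=0.0, state=lambda t: (t, 2.0 * t))
p.register_event(1.5)
print(p.state(1.5), p.protocol)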

3. OLAP TECHNOLOGY
3.1. What is OLAP?
OLAP means many different things to different people, but the definitions
usually involve the terms "cubes", "multidimensional", "slicing & dicing" and
"speedy-response". OLAP is all of these things and more, but it is also a misused &
misunderstood term, in part because it covers such a broad range of subjects.
We will discuss the above terms in later sections; to begin with, we explain
the definition & origin of OLAP. OLAP is an acronym, standing for "On-Line
Analytical Processing". This, in itself, does not provide a very accurate description
of OLAP, but it does distinguish it from OLTP or "On-Line Transactional
Processing".
The term OLTP covers, as its name suggests, applications that work with
transactional or "atomic" data, the individual records contained within a database.
OLTP applications usually just retrieve groups of records and present them to the
end-user, for example, the list of computer software sold at a particular store
during one day. These applications typically use relational databases, with a fact or
data table containing individual transactions linked to meta tables that store data
about customers & product details.
OLAP applications present the end user with information rather than just data.
They make it easy for users to identify patterns or trends in the data very quickly,
without the need for them to search through mountains of "raw" data.
Typically this analysis is driven by the need to answer business questions
such as "How are our sales doing this month in North America?". From these
foundations, OLAP applications move into areas such as forecasting and data
mining, allowing users to answer questions such as "What are our predicted costs
for next year?" and "Show me our most successful salesman".
OLAP applications differ from OLTP applications in the way that they store
data, the way that they analyze data and the way that they present data to the end-
user. It is these fundamental differences (described in the following sections) that
allow OLAP applications to answer more sophisticated business questions.

3.2. Why do we need OLAP?


When first investigating OLAP, it is easy to question the need for it. If an end
user requires high-level information about their company, then that information
can always be derived from the underlying transactional data, hence we can
achieve every requirement with an OLTP application. Were this true, OLAP would
not have become the important topic that it is today. OLAP exists & continues to
expand in usage because there are limitations with the OLTP approach. The limits
of OLTP applications are seen in three areas.

3.2.1. Increasing data storage
The trend towards companies storing more & more data about their business
shows no sign of stopping. Retrieving many thousands of records for immediate
analysis is a time and resource consuming process, particularly when many users
are using an application at the same time. Database engines that can quickly
retrieve a few thousand records for half-a-dozen users struggle when forced to
return the results of large queries to a thousand concurrent users.
Caching frequently requested data in temporary tables & data stores can
relieve some of the symptoms, but only goes part of the way to solving the
problem, particularly if each user requires a slightly different set of data.
In a modern data warehouse where the required data might be spread across
multiple tables, the complexity of the query may also cause time delays & require
more system resources which means more money must be spent on database
servers in order to keep up with user demands.

3.2.2. Data versus Information


Business users need both data and information. Users who make business
decisions based on events that are happening need the information contained
within their company's data. A stock controller in a superstore might want the full
list of all goods sold in order to check up on stock levels, but the manager might
only want to know the amount of fruit & frozen goods being sold. Even more
useful would be the trend of frozen good sales over the last three months.
In order to answer the question "How many frozen goods did we sell today?",
an OLTP application must retrieve all of the frozen good sales for the day and then
count them, presenting only the summarized information to the end-user. To make
a comparison over three months, this procedure must be repeated for multiple days.
Multiply the problem by several hundred stores, so that the managing director can
see how the whole company is performing and it is easy to see that the problem
requires considerable amounts of processing power to provide answers within the
few seconds that a business user would be prepared to wait.
Database engines were not primarily designed to retrieve groups of records
and then sum them together mathematically and they tend not to perform well
when asked to do so. An OLTP application would always be able to provide the
answers, but not in the typical few-seconds response times demanded by users.
Caching results doesn't help here either, because in order to be effective, every
possible aggregation must be cached, or the benefit won't always be realized.
Caching on this scale would require enormous sets of temporary tables and
enormous amounts of disk space to store them.
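A minimal sketch of this contrast, using invented sales records, is shown below: the OLTP path scans and sums the atomic rows on demand, while the OLAP path reads a value that has already been consolidated into a cube cell.

# OLTP-style on-demand aggregation vs. an OLAP-style lookup of a precomputed
# aggregate (all records invented for illustration).

sales = [  # transactional ("atomic") records: (product, store, volume)
    ("Frozen goods", "Uptown", 4),
    ("Frozen goods", "Midtown", 7),
    ("Fruit", "Uptown", 12),
    ("Frozen goods", "Uptown", 3),
]

# OLTP style: retrieve every matching record, then aggregate on demand.
frozen_today = sum(v for product, _, v in sales if product == "Frozen goods")

# OLAP style: the aggregate has already been consolidated into a cube cell,
# so answering the question is a single lookup.
cube = {("Frozen goods", "All stores"): 14, ("Fruit", "All stores"): 12}
frozen_from_cube = cube[("Frozen goods", "All stores")]

print(frozen_today, frozen_from_cube)   # both give 14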

3.2.3. Data layout


The relational database model was designed for transactional processing and
is not always the best way to store data when attempting to answer business
questions such as "Sales of computers by region" or "Volume of credit-card
transactions by month". These types of queries require vast amounts of data to be
retrieved & aggregated on-demand, something that will require time & system
resources to achieve. More significantly, related queries such as "Product sales
broken down by region" and "Regions broken down by product sales" require
separate queries to be performed on the same data set.
The answer to the limitations of OLTP is not to spend more & more money
on bigger & faster databases, but to use a different approach altogether to the
problem and that approach is OLAP. OLAP applications store data in a different
way from the traditional relational model, allowing them to work with data sets
designed to serve greater numbers of users in parallel. Unlike databases, OLAP
data stores are designed to work with aggregated data, allowing them to quickly
answer high-level questions about a company's data whilst still allowing users to
access the original transactional data when required.

3.3. OLAP fundamentals


As discussed, OLAP applications are used to solve everyday business
questions such as "How many cars did we sell in Europe last month?" or "Are our
North American stores throwing away more damaged goods year-on-year?" To
answer these questions, large amounts of transactional or base data must be
retrieved and then summed together. More subtly, they require a different approach
to storing & retrieving the data.
Although different OLAP tools use different underlying technologies, they all
attempt to present data using the same high-level concept of the multidimensional
cube. Cubes are easy to understand, but there are fundamental differences between
cubes and databases that can make them appear more complicated than they really
are. This section sets out what cubes are, how they differ from databases and why
they provide OLAP applications with more power than the relational database.
Storing data in cubes introduces other new terms & concepts and these are all
explained in this section.

3.3.1. What is a cube?
The cube is the conceptual design for the data store at the center of all OLAP
applications. Although the underlying data might be stored using a number of
different methods, the cube is the logical design by which the data is referenced.
The easiest way to explain a cube is to compare storing data in a cube with
storing it in a database table.

Fig 3.1. A relational table containing sales records.

Figure 3.1. shows a set of sales records from three electrical stores displayed
in a transactional database table. There are two field columns "Products" and
"Store" that contain textual information about each data record and a third value
column "Volume". This type of table layout is often called a "fact table". The
columns in a table define the data stored in the table. The rows of textual
information and numeric values are simply instances of data; each row is a single
data point. A larger data set would appear as a table with a greater number of rows.

Fig. 3.2. The data from figure 3.1 as a two-dimensional cube.

Figure 3.2. shows the same data now arranged in a "cube". The term "cube" is
used somewhat loosely, as this is in fact a two-dimensional layout, often referred
to as "a spreadsheet view" as it resembles a typical spreadsheet.
The axes of the cube contain the identifiers from the field columns in the
database table. Each axis in a cube is referred to as a "dimension". In this cube, the
horizontal dimension contains the product names and is referred to as the "Products
dimension". The vertical dimension contains the store names and is referred to as
the "Store dimension".

In the database table, a single row represents a single data point. In the cube,
it is the intersection between fields that defines a data point. In this cube, the cell at
the intersection of Fuses and Midtown contains the number of fuses sold at the
midtown store (in this case, 31 boxes). There is no need to mention "Volume" as
the whole cube contains volume data.
This co-ordinate driven concept of finding data is the reason why we can’t
just ignore one of the dimensions in a cube. For example, the question "How many
bulbs did we sell?" has no direct meaning with this cube unless it is qualified by
asking for data from a particular store.
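A minimal sketch of this coordinate-driven lookup is given below; only the Fuses/Midtown value of 31 comes from the text, the remaining numbers are invented for illustration.

# The two-dimensional cube as a coordinate-driven store: each cell is
# addressed by one field from every dimension.

volume_cube = {
    ("Fuses", "Uptown"): 25, ("Fuses", "Midtown"): 31, ("Fuses", "Downtown"): 22,
    ("Bulbs", "Uptown"): 40, ("Bulbs", "Midtown"): 37, ("Bulbs", "Downtown"): 50,
}

# A data point is the intersection of one field per dimension.
print(volume_cube[("Fuses", "Midtown")])        # 31 boxes of fuses sold at Midtown

# "How many bulbs did we sell?" is only meaningful once a Store field is given,
# or the Store dimension is explicitly aggregated:
print(sum(v for (product, _), v in volume_cube.items() if product == "Bulbs"))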
The term "field" is used to refer to individual members of a dimension, so for
example, Uptown is a field in the Store dimension. Notice that the two dimensions
contain apparently unrelated fields. Dimensions are usually comprised of the same
class of objects, in this example all of the products are in one dimension and all of
the stores are in another. Attempting to mix fields between the two dimensions
would not work: it would not make sense, it would not be possible to create
a unique cell for each data point, and any attempt to display the data would fail.
Note that we have avoided using the terms row & column dimension.
Although a cube appears to have rows & columns just like a table, they are very
different from the rows & columns in a database. In a database, row & column
refer to specific components of the data store; in a cube, they simply describe the
way the cube is presenting the data. For example, the cube in figure 3.2 can also
be displayed as in figure 3.3., with the dimensions reversed.

Fig. 3.3. The two-dimensional cube reoriented.

Both figures 3.2 and 3.3 are valid layouts; the important point is that the first
diagram shows "Products by Store" and the second shows "Stores by Product".
This is one of the advantages of the cube as a data storage object; data can be
quickly rearranged to answer multiple business questions without the need to
perform any new calculations. A second advantage is that the data could be sorted
either vertically or horizontally, allowing the data to be sorted by store or product
regardless of the cube’s orientation.
From this simple two-dimensional cube, we can now explain some further
concepts.

3.3.2. Multidimensionality
In the previous section, we looked at a simple two-dimensional cube.
Although useful, this cube is only slightly more sophisticated than a standard
database table. The capabilities of a cube become more apparent when we extend
the design into more dimensions. Multidimensionality is perhaps the most "feared"
element of cube design as it is sometimes difficult to envisage. It is best explained
by beginning with a three-dimensional example.
Staying with the data set used in the previous section, we now bring in more
data, in the form of revenue & cost figures. Figures 3.4 and 3.5 show the different
ways that the new data could be stored in a table.

Fig. 3.4. A "degenerate" table layout

Fig. 3.5. The "canonical" table layout

As can be seen, the degenerate layout results in a wider table with fewer rows
while the canonical model results in a narrower table with more rows. Neither
layout is particularly easy to read when viewed directly.
The simplest OLAP layout is to create three separate two-dimensional cubes
for each part of the data, one for the revenue figures, one for costs and one for
volumes. While useful, this layout misses out on the advantages gained by
combining the data into a three-dimensional cube. The three-dimensional cube is
built very simply by laying the three separate two-dimensional "sheets" (the
Volume, Cost & Revenue figures) on top of each other.
As can be seen from figure 3.6, the three-dimensional layout becomes
apparent as soon as the three layers are placed on top of each other. The third
dimension, "Measures" is visible as the third axis of the cube, with each sheet
corresponding to the relevant field (Volume, Cost or Revenue).

Fig. 3.6. The three-dimensional cube.

The actual data points are located by using a co-ordinate method as before. In
this example, each cell is a value for the revenue, cost or volume of a particular
product sold in a particular store.
As before, the data can be reoriented & rearranged, but this time, more
sophisticated data rearrangements can be made. For example, the view from the
right-hand face of the cube in figure 3.6 shows the revenue, cost & volume figures
for all products sold in the Downtown store. The view from the topmost face
shows the revenue, cost & volume figures for bulbs across all three stores.
This ability to view different faces of a cube allows business questions such
as "Best performing product in all stores" to be answered quickly by altering the
layout of the data rather than performing any new calculations, thus resulting in a
considerable performance improvement over the traditional relational database
table method.
Four dimensions and beyond
Although the word "cube" refers to a
three-dimensional object, there is no reason why an OLAP cube should be
restricted to three dimensions. Many OLAP applications use cube designs
containing up to ten dimensions, but attempting to visualize a multidimensional
cube can be very difficult. The first step is to understand why creating a cube with
more than three dimensions is possible and what advantage it brings.
As we saw in the previous section, creating a three-dimensional cube was
fairly straightforward, particularly as we had a data set that lent itself to a
three-dimensional layout. Now imagine that we have several three-dimensional
cubes, each one containing the same product, store & measures dimensions as
before, but with each one holding data for a different day’s trading. How do we
combine them? We could just add all of the matching numbers together to get a
single three-dimensional cube, but then we could no longer refer to data for a
particular month. We could extend one of the dimensions, for example the
measures dimension could have the fields "Monday’s costs" and "Tuesday’s
costs", but this would not be an easy design to work with and would miss out on
the advantages of a multidimensional layout.
The answer is simple, we create a fourth dimension, in this case the
dimension "Days" and add it to the cube. Although we can’t easily draw such a
cube, it is easy to prove the integrity of the design. As stated before, each data
point is stored in a single cell that can be referred to uniquely. In our four-
dimensional design, we can still point to a specific value, for example the value for
revenue from bulbs sold Uptown on Monday. This is a four dimensional reference
as it requires a field from four dimensions to locate it:
1. The Revenue field from the Measures dimension.
2. The Bulbs field from the Product dimension.
3. The Uptown field from the Store dimension.
4. The Monday field from the Days dimension.
Without actually having to draw or visualize the whole cube, it is quite easy
to retrieve and work with a four-dimensional data set simply by thinking about the
specific data cells being requested.
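The sketch below expresses such a four-dimensional reference directly as a lookup keyed by one field from each dimension; the numeric values are invented.

# A cell of the four-dimensional cube is addressed by one field from each of
# the four dimensions (values invented for illustration).

cube = {
    # (measure, product, store, day): value
    ("Revenue", "Bulbs", "Uptown", "Monday"): 120.0,
    ("Cost",    "Bulbs", "Uptown", "Monday"): 80.0,
    ("Volume",  "Bulbs", "Uptown", "Monday"): 40,
}

# Revenue from bulbs sold Uptown on Monday: one field from each dimension.
print(cube[("Revenue", "Bulbs", "Uptown", "Monday")])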
The issue of visualizing the data set leads onto the second step in picturing the
cube. Although the cube might have four (or more) dimensions, most applications
only present a two-dimensional view of their data. In order to view only two
dimensions, the other dimensions must be "reduced". This is a process similar to
the concept of filtering when creating an SQL query.
Having designed a four-dimensional cube, a user might only want to see the
original two-dimensional layout from figure 3.2, Products by Store. In order to
display this view, we have to do something to the remaining dimensions Measures
& Days. It makes no sense just to discard them as they are used to locate the data.
Instead, we pick a single measure & day field, allowing us to present a single two-
dimensional view of the cube.

Fig. 3.7. Two-dimensional view of a four-dimensional structure.

We have to pick a field from the remaining dimensions because we need to
know from which Measures field and which day to retrieve the Product & Store
information. The dimensions that don’t appear in the view are often referred to as
"section" dimensions, as the required view is "sectioned" on a specific field from
these dimensions.
Although it is difficult to visualize at this point, it is the dimensions and the
fields in those dimensions that define cubes, not the data stored in the cube. A table
is often described by the number of columns & rows that it has, while the number
of dimensions and the number of fields in each dimension define a cube.

3.3.3. "Slicing & dicing"


This is a phrase used to describe one part of the process used to retrieve &
view data stored in an OLAP cube. Because of the size of most real-world OLAP
cubes and their complexity, the process of locating the right set of data is often
referred to under the heading "Navigation".
As data can only effectively be displayed in a two-dimensional format, the
multi-dimensional cube must be restricted into flat "slices" of data. When picking a
particular orientation of data in a cube, the user is literally "slicing & dicing" the
data in order to view a simple flat layout.
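In code, slicing amounts to fixing the section dimensions to single fields and keeping the rest, as in the minimal sketch below (cube contents invented for illustration).

# "Slicing": fixing the section dimensions (Measures and Days) to single
# fields leaves a flat two-dimensional Products-by-Store view.

cube = {
    ("Volume", "Bulbs", "Uptown",  "Monday"): 40,
    ("Volume", "Fuses", "Uptown",  "Monday"): 25,
    ("Volume", "Bulbs", "Midtown", "Monday"): 37,
    ("Revenue", "Bulbs", "Uptown", "Monday"): 120.0,
}

measure, day = "Volume", "Monday"     # the slice: one field per section dimension
slice_2d = {
    (product, store): value
    for (m, product, store, d), value in cube.items()
    if m == measure and d == day
}
print(slice_2d)   # {('Bulbs', 'Uptown'): 40, ('Fuses', 'Uptown'): 25, ('Bulbs', 'Midtown'): 37}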

3.3.4. Nested dimensions


Although data can only be viewed in a two-dimensional or flat layout, it
doesn’t mean that only two dimensions can be fully displayed at one time. It is
perfectly possible to display more than one dimension on each axis.
For example, the user might want to see revenue & cost figures for all
products sold in each store on Monday. Rather than displaying revenue & cost
separately, the Measures dimension can be "nested" inside the Store dimension,
displaying both revenue & cost data simultaneously and allowing direct
comparison to be made between them. This layout can be seen in figure 3.8.

Fig. 3.8. Two-dimensional view with nested dimensions.
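A minimal sketch of this nesting, with invented figures, is to place (store, measure) pairs on one axis so that revenue and cost sit side by side for each store:

# Nesting the Measures dimension inside the Store dimension on one axis.

cube = {
    ("Revenue", "Bulbs", "Uptown"): 120.0, ("Cost", "Bulbs", "Uptown"): 80.0,
    ("Revenue", "Bulbs", "Midtown"): 95.0, ("Cost", "Bulbs", "Midtown"): 60.0,
}

# Nested column headers: (store, measure) pairs on the horizontal axis.
columns = [("Uptown", "Revenue"), ("Uptown", "Cost"),
           ("Midtown", "Revenue"), ("Midtown", "Cost")]
row = {col: cube[(col[1], "Bulbs", col[0])] for col in columns}
print(row)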

3.3.5. ROLAP, MOLAP & HOLAP


The terms ROLAP, MOLAP and HOLAP refer to different ways of physically
storing the data held within an OLAP cube. Each method still attempts to present data as a cube, but
uses different underlying technology to achieve the results.
ROLAP
Stands for "Relational OLAP". This term describes OLAP applications that
store all of the cube data, both base and high-level in relational tables. The
application hides the presence of the tables by presenting the data in a cube layout.
Vendors who only have a relational database engine available to them have to use
this method of data storage.
The multidimensional views are generated by combining base & aggregate
data tables together with complicated (often multi-pass) SQL statements, often
resulting in poor reporting performance combined with the difficulty of
maintaining tables of aggregate data in order to improve reporting response times.
MOLAP
Stands for "Multidimensional OLAP". This term describes OLAP
applications that store all of the cube data, both base and high-level in proprietary
multidimensional data files. The application copies the base data from the
underlying table into a multidimensional data format (usually a binary data file)
and then evaluates the consolidated values.
The multidimensional data views are automatically present in this method and
performance is often very quick, particularly if the cubes are small enough to fit
into RAM. More typically, the data is stored in large disk files.
The biggest drawback with this method is the duplication of base data that
occurs when it is copied into the cube, requiring extra disk space and processing
time.
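The MOLAP idea can be sketched with the numpy library as follows; the dimensions, sizes and values below are hypothetical. The base data is copied into a dense array and the consolidated (high-level) values are computed up front.

import numpy as np

stores = ["Store A", "Store B"]
products = ["Soap", "Bread"]
days = ["Monday", "Tuesday"]

# One cell per (store, product, day) combination, as in a binary cube file.
cube = np.zeros((len(stores), len(products), len(days)))
cube[0, 0, 0] = 120.0   # Store A, Soap, Monday
cube[1, 1, 0] = 45.0    # Store B, Bread, Monday

# Consolidated (high-level) values are evaluated up front, e.g. totals per store.
totals_per_store = cube.sum(axis=(1, 2))
print(dict(zip(stores, totals_per_store)))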

HOLAP
Stands for "Hybrid OLAP". This term describes OLAP applications that store
high-level data in proprietary multidimensional data files, but leave the underlying
base data in the original data tables.
This method has the big advantage of not requiring duplication of the base
data, resulting in time & disk space savings.
The cube drives the multidimensional views, so the application requires a
robust link between the multidimensional data file and the relational table that
stores the base data beneath it.

4. ENTERPRISE RESOURCE PLANNING


ERP systems (a general model is shown in Fig. 4.1) have revolutionized
businesses around the globe. Processes are leaner and more efficient, costs are
minimized, positive customer service is more prevalent, and compliance with
government requirements is better supported. Companies have saved significant amounts of money,
sometimes even in the millions, when their operations are run by an ERP system.
The ERP system not only affects the company itself, but also the supply chain
including external entities, both customers and suppliers. Throughout this chapter,
you will see the importance and impact that ERP systems make on industry and
organizations.

Fig. 4.1 – ERP general model

4.1. Basic Concepts and Definitions
There are several key terms that can help to understand the importance and
impact of ERP systems within industries and organizations. This is not a
comprehensive list of terms; however, it will provide a foundation.
– Business intelligence is a computer-based technique that helps with decision making by analyzing data.
– Business process is a logically related activity or group of activities that takes input, processes it to increase value, and provides output (Harrington, 1991).
– Business process integration is the assimilation of business processes into a central system.
– Cloud computing is having a third party host the software and systems a business needs, delivered as a service over the Internet.
– Data redundancy occurs when the same data is stored in multiple separate locations.
– Data repository is a location in which data is stored.
– Information system refers to the interaction between information technology, business processes, and data for decision making.
– Information technology, in the broadest sense, refers to both the hardware and software used to store, retrieve, and manipulate information using computer systems and applications.
– Key performance indicators (KPIs) provide baseline metrics that companies use to measure how well the system and processes are performing.
– Legacy system: when a new system is identified as a replacement, the older system being replaced is referred to as the legacy system.
– Lifecycle refers to the structure through which software applications such as ERP evolve and are integrated within business processes.
ERP systems bring corporate business processes and data access
together in an integrated way that significantly changes how a company does business.
An ERP system implementation is an enormous capital expenditure that consumes many
corporate resources and carries a high level of risk and uncertainty. ERP
systems are an obvious choice for companies operating with disparate legacy
systems that do not communicate well with each other. These systems provide
significant inter-related information, greater information visibility, and accuracy on
a common database. Within an ERP system is a standardized way to perform
the majority of business processes using industry best practices. ERP systems are
so widely diffused that they are commonly described as the de facto standard for
replacement of legacy systems in medium and large sized organizations. If today’s
company CIOs were asked about the importance and impact of ERP systems on
industries and organizations, more likely than not, they would say it is impossible
to work without an ERP system.

4.2. Benefits and Importance
There are many benefits to having an ERP system within the organization.
Information is readily available for the proper users, all data is kept in a central
repository, data redundancy is minimized, and there is a greater understanding of
the overall business picture. If a company does not have an ERP system and
employs separate standalone systems for functional areas of a business, the
company will not be running at its full potential.
Data may be compromised because it is stored in multiple locations. How
would a user know which information is most current? When data is changed, is
there a guarantee that it will be updated in all storage locations? Are processes
taking longer to start and finish than necessary?
When a customer calls to inquire about an order, the customer may be
bounced around to numerous departments within the company because the
customer service representative does not have all the answers at his or her
fingertips. Figure 4.2 shows an illustration of this type of scenario produced by
Hammer and Company.
In this illustration the cycle comes full circle, back to the original
starting point. How much easier would it have been for the customer if the
customer service representative had the answers to every question that the
customer asked? One of the most significant features of an ERP system is that all
of the information kept by a company, including within functional areas, is
retained in one central data repository, or in other words, the information is saved
in a single database.

Fig. 4.2 – An illustration of this type of scenario

By having the information in one location with authority levels for access in
place, a customer service representative would have been able to answer all the
questions posed by the customer instead of having to transfer the customer from
department to department.
All of the information is shifted from functional areas to the front-line, or in
other words, to the person the customer will first contact when communicating
with the company. As the above illustration shows, having the correct
employees (in this case the customer service representative) equipped with the
correct information is crucial to delivering exceptional customer service, and in turn
to serving the customer in the most valuable way.
The central repository of information will allow authorized users to access the
same information in one location using an ERP system. This feature allows for one
version of information to be used. With the central data repository comes the
decline of data redundancy. The data is kept in one location where all authorized
users have access. Data redundancy occurs when the same data is placed in two or
more separate systems (Shelly, Cashman, & Rosenblatt, 2005). For example,
referring back to our earlier illustration, the customer needed to change the ship-to
address. If the company maintained separate functional area systems, the
customer’s ship-to address would have had to be updated in all the places it was
stored. Potential for human error becomes a factor at this point. The employee
could miss a location where the customer’s ship-to address needed to be changed,
or the employee could have mistyped the correct information in any one of the
change points. Having one central place for the information to be stored reduces
the likelihood of human error and not using the correct information for future
transactions. Ranganathan and Brown (2006) suggest that the use of a centralized
data repository in an ERP system will result “in an integrated database for multiple
functions and business units, providing management with direct access to real-time
information at the business process, business division, and enterprise levels” (p.
146). An ERP system allows users and the company to formulate a better
understanding of the overall business picture. With access to multiple functional
areas in one system, and the ability to generate any report necessary, the benefits of
an ERP system are endless. Management and executives can formulate better
business decisions because of all the data readily available within the system.
Business performance can improve since the ERP system integrates business
processes that traverse multiple business functions, divisions, and geographical
locations (Ranganathan et al., 2006). Another benefit of ERP systems is their
ability to manage potential growth within the company and future e-commerce and
e-supply chain investments. IT costs can be significantly reduced when
implementing an ERP system (Fuß, Gmeiner, Schiereck, & Strahringer, 2007). For
the banking industry, merging banks can shorten post-merger integration time by
12 to 18 months, with a cost savings of potentially $60 to $80 million. Also, ERP
systems can assist banks with the continuous industry-specific pressures, such as
governmental regulations and globalization, faced by the banking industry. ERP
systems can help a global bank run smoothly and adhere to compliance. The
construction industry faces its own challenges when implementing ERP (Chung,
Skibniewski, Lucas & Kwak, 2008). Its processes are less standardized
than those of manufacturing. For example, each construction project has a
unique owner, project team, and specifications. When an ERP system is
implemented successfully in the construction industry, Chung et al. (2008) report
benefits of improved efficiency and evident waste elimination. Fuß et al. (2007)
have researched multiple articles and developed a list of anticipated benefits of
ERP systems. The list includes the following benefits:
– Improved security and availability
– Increase in organizational flexibility
– Cost reduction
– Fast amortization of investment
– More efficient business processes
– Higher quality of business processes
– Improved integrability
– Reduced complexity and better harmonization of IT infrastructure
– Better information transparency and quality
– Better and faster compliance with legal requirements and frameworks
Bagranoff and Brewer (2003) wrote a case study based on a real company’s
ERP implementation. The authors use a fictitious company name, PMB
Investments, Inc., to protect the confidentiality of the real company. The
company’s Amscot division, located outside of Little Rock, Arkansas, was in
charge of printing, assembling, and distributing all printed materials for internal
and external customers interested in the company’s financial services and
investments. The Amscot office was created as a result of anticipated growth.
Amscot began with a hand-me-down legacy system named OSCAR, which came
from the closing of two other plants to form the new Amscot plant. Unfortunately,
OSCAR could not handle the increased volume of transactions. The ability to
deliver to Amscot’s customers was compromised. A second system was connected
to OSCAR named KIM to help relieve the stress of the growth. “However, once
every few weeks the interface between KIM and OSCAR would go down between
12 to 18 hours resulting in customer orders literally disappearing into cyberspace
somewhere between KIM and OSCAR” (p. 86). Occasionally employees would
perform a manual count of warehouse inventory because they did not trust reports
produced by OSCAR, resulting in inventory being managed in multiple locations.
Amscot pursued the acquisition of an enterprise resource planning system to
handle the circumstances the company was facing (Bagranoff et al., 2003). Amscot
felt the long-run benefits to having an ERP system would be the consolidation of
financials, human resources, manufacturing, and distribution applications in one
central database system. Additionally, Amscot believed data redundancy and
integrity problems regarding the multiple information systems would be
eliminated. Decisions would be made more efficiently and effectively because of
real-time information generated from the ERP system. Fulfillment and delivery
would start automatically on receiving a customer order with the new system.
Having the entire supply chain coordinated would reduce printed material
inventories, minimize unnecessary shipping expenses, and streamline the receiving
of goods cycle time. The new system would allow Amscot to perform and operate
at peak efficiency. The estimated savings from the ERP system implementation was $30
million annually, which came largely from reduced inventory obsolescence.

4.3. Value of ERP Systems


Getting the most out of IT is not a one-shot effort, but rather a
continuous and evolving process. Included is not just the IT investment, but also
how a company approaches improvement opportunities in support of its business
strategy and objectives, business processes, and value assessments. KPIs are a tool
that can be used to measure ERP system and process performance. Once an
organization has defined its operational and strategic goals, progress can be
measured. A KPI is a quantifiable measurement that reflects the critical
success factors of an organization. KPIs are established prior to the ERP
implementation and will differ depending on the organization. For example, a KPI
could be defined to measure a) percentage of payable invoices that do not match a
purchase order, b) accuracy of purchase orders that are received without defect,
complete, and on time, or c) elapsed time for order approval.
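For illustration only, here is a small Python sketch of KPI (a) above, the percentage of payable invoices that do not match a purchase order; the data structures and tolerance are assumptions, not part of any specific ERP product.

def invoice_mismatch_kpi(invoices, purchase_orders, tolerance=0.01):
    # invoices: list of (invoice_id, po_number, amount)
    # purchase_orders: dict mapping po_number -> ordered amount
    if not invoices:
        return 0.0
    mismatched = 0
    for _invoice_id, po_number, amount in invoices:
        ordered = purchase_orders.get(po_number)
        if ordered is None or abs(ordered - amount) > tolerance:
            mismatched += 1
    return 100.0 * mismatched / len(invoices)

purchase_orders = {"PO-1": 500.0, "PO-2": 120.0}
invoices = [("INV-1", "PO-1", 500.0), ("INV-2", "PO-2", 130.0), ("INV-3", "PO-9", 75.0)]
print(invoice_mismatch_kpi(invoices, purchase_orders))  # 2 of 3 invoices do not match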

4.3.1. IT value of ERP systems
When examining the value of ERP systems, investing in technology is only
half of what is needed to realize its benefit. According to SAP Executive Agenda,
“investment in IT without analogous improvements in the management practices
around IT will lead only to a slight increase in productivity”. It is suggested that
companies that invest in IT while enhancing management practices and
governances have experienced sustainable results in increased value and improved
productivity, in some instances as much as a 20% boost (Dorgan & Dowdy, 2004).
Research has demonstrated a circular cycle where one IT success gives rise to yet
another IT success more favorable than the first (sometimes referred to as the
“virtuous cycle”). The cycle typically gets started with an investment in core ERP
systems software generating the landscape to facilitate a homogeneous integrated
platform. Once the core ERP software demonstrates sound operational
performance, investments to extend and add value to processes such as customer
relationship management (CRM), supply chain management (SCM), and business
analytics components are examined.

4.3.2. Business value of ERP systems


Not only is IT value prevalent in ERP systems, but there is sound business
value as well. For example, an ERP human capital management (HCM) system can
help align a company's business strategy. It provides integrated processes and
reporting, and helps manage the workforce to place the right people in the right jobs,
develop and reward top performers, retain key talent for the long term, and
increase efficiency and operating performance throughout the entire organization.
An HCM ERP provides substantial benefit to a company while delivering a
blueprint for transforming a company’s human resource operations. These types of
ERP systems make it possible to rapidly experience return on investment through
reduced operation costs and increased efficiency. The HCM ERP system connects
employees and management to deliver business processes and automate common
administrative tasks, while leveraging industry best practices. Another important
business functional area where ERP systems provide significant alignment for a
company's fiscal accountability is financial operations. The financial ERP system
assists a company with the control, accounting standards, financial reporting, and
compliance to improve performance and confidence in this area of operations.
Financial ERP systems can typically provide module applications that let
customers tailor solutions to their specific business needs in operations. Companies
use the Financial ERP to enable flexibility with financial and managerial reporting
across their organizational structures. This provides a real-time view of the
business to quickly read, evaluate, and respond to changing business conditions
with accurate, reconciled, and timely financial data. For a company’s financial
supply chain, potential value can be gained for improved cash flow, transparent
and real-time business intelligence, and reduced inventory levels, leading to shorter
cash-to-cash cycle times, and increased inventory turns across the network that can
lower overall costs. Companies can potentially make significant gains to reduce
overall finance costs, enabling greater collaboration with customers or suppliers,
and streamlining operations to reduce costs and resource demands (adapted from
SAP, Inc.). Companies can take advantage of an ERP financial system’s ability to
provide dynamic budgeting, forecasting, and planning to reduce overall financial
costs. Financial ERPs offer companies the ability to streamline accounting,
consolidation, process scheduling, workflow, and collaboration. By integrating
budget, cost, and performance, companies can capitalize on opportunities to
reallocate money to programs with proven impact; realigning resources where they
are most useful to maximize value to the organization. Treasury services in an ERP
system can help a company make smarter decisions by having the capability to
proactively monitor and adjust currency and interest rate exposure across the entire
enterprise while complying with internal risk policies. Additionally, visibility to
real-time data enables a company to make informed investing and borrowing
decisions on a timelier basis. Other treasury operations can be automated to
simplify dealing with administration for debt, investments, foreign exchange,
equities, and derivatives while performing straight-through processing to enforce
security and limit controls (adapted from SAP, Inc.). Oftentimes, companies
operate shared services with their subsidiary operations or centralized organization
functions. ERP systems provide shared services capabilities that can reduce a
company's costs by automating, centralizing, and standardizing global
transactional processes. In addition, ERP systems provide the ability to centralize
liquidity and act as an in-house bank to subsidiaries, administer inter-company
loans, and optimize excess funds across the enterprise. Different areas of the
company receive business value from the implementation of ERP systems. For
inbound logistics, ERP systems provide improved communication and integration
with suppliers, enhanced raw material management, and value-added management
of accounts payable (Davenport, Harris, & Cantrell, 2002). The system creates
transparency across a company’s entire purchasing process, including better
tracking of raw materials, improved inventory management, lot size planning
integration, and matching process documentation (Matolcsy, Booth, & Wieder,
2005). Accounts payable have automation tools to process vendor payments more
quickly by way of ERP systems. Marketing, sales, and distribution functional areas
gain value from ERP systems because promotion and advertising activities are
integrated with item inventory levels and production schedules. These areas benefit
because there is a better idea of what can be promised to the customer.

4.3.3. Business process integration


Companies realize the business value of ERP systems with the ability to
obtain business process integration. Business process integration allows processes
within a company to be incorporated together in one centralized system. The value
of encompassing process integration permits companies to gain efficiencies in
overall and individual processes. Additionally, potential process improvements
may become visible. SAP University Alliances and The Rushmore Group, LLC
developed a diagram of how business process integration works. In this example, a
customer would like to place a sales order for a product. To start the sales process,
a pre-sales activity such as a newspaper advertisement, television commercial, or
word-of-mouth has prompted the need or desire to purchase a product or products.
The customer will then place a sales order. Next, the company will check the
availability of the item or items requested. If the item is in stock, the materials
management segment of the company will pull the item from the plant or
warehouse, and prepare the item for delivery to the customer. If the item is not in
stock, this will prompt the materials management segment of the company to begin
the procurement process with a vendor to restock the item. Once the item (good)
has been received from the vendor, then the plant or warehouse will prepare the
item or items for delivery to the customer. At the issuing of the items to the
customer and the item receipt point from the vendor, the financial accounting
segment of the business is integrated into the overall process, with accounts
receivable and accounts payable due. Another process could have been included
into this scenario had the company been a manufacturing company. At the
availability check point, in place of purchasing the item, the item may have been
produced. The procurement process may have played a role in the production
process as well, had a raw material or component part not been available to
complete the production of the item. In this illustrated example, all three processes
of sales, procurement, and accounting are integrated to complete the overall
process of the cash-to-cash cycle. This is a prime case of business process
integration.
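The integrated flow just described can be condensed into a short, purely illustrative Python sketch; the function and field names are invented and do not correspond to any particular ERP system.

def process_sales_order(item, quantity, stock, postings):
    # stock: dict item -> units on hand; postings: list of accounting entries.
    if stock.get(item, 0) < quantity:
        # Availability check failed: materials management procures from a vendor.
        stock[item] = stock.get(item, 0) + quantity
        postings.append(("accounts payable", item, quantity))    # vendor invoice due
    # The plant or warehouse issues the goods to the customer.
    stock[item] -= quantity
    postings.append(("accounts receivable", item, quantity))     # customer invoice due
    return stock, postings

print(process_sales_order("pump", 5, {"pump": 2}, []))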
To achieve business process integration, it may be necessary to perform
business process re-engineering (BPR). BPR is an integral part of an ERP
implementation and represents a fundamental rethinking of the company’s current
way of doing business. BPR is defined by Hammer and Champy (1993) as “the
fundamental rethinking and radical redesign of business processes to achieve
dramatic improvements in critical, contemporary measures of performance, such as
cost, quality, service and speed” (p. 32). The essential features and benefits of a
bundled ERP packaged software application are already developed based on
industry best practices. For companies to take full advantage of the many benefits
offered by an ERP system, business process reengineering is required to address
the gaps in business practices, leveraging the functionality of the new ERP
packaged application. Most company business processes are procedurally similar
but industry uniqueness, distinct practices, and size play a significant role in the
gaps that a company must re-engineer for an ERP system implementation.
Research has found that successful ERP projects result when companies are
involved in BPR and BPR is included in the ERP selection (Tsai, Chen, Hwang, &
Hsu, 2010; Muscatello, Small, & Chen, 2003). Companies that adapt organization
processes to increase information flow across business organizations achieve
greater success with IT investments than if they had launched the ERP software
alone. By changing business processes to align with the new ERP system, a
company can dramatically change the value derived from the technology and scale
operations profitably. The ERP system usually consists of several functional
modules that are deployed and integrated generally by business process (fig. 4.3).
The ERP implementation creates cross-module integration, data standardization,
and industry best practices, which are all combined into a timeline involving a
large number of resources.
The business process "as-is" state and information flows between various
business operations are examined to determine the scope of the implementation. The "as-is"
process model is developed by examining the layers of the “as-is” process, and
focuses on the most important or major areas of concern (Ridgman, 1996). Often
processes evolve to solve an immediate customer issue, operational problem, or
some other concern that addresses the way a company conducts its business
(Okrent & Vokurka, 2004). An understanding of why a process is performed in a
particular way helps to identify the non-value added work for simplification of the
process and improved task workflow.

Fig. 4.3 – Several functional modules that are deployed and integrated generally by business
process

An example of an "as-is" process would be how to pay a vendor invoice. A
company typically issues a purchase order for goods or services to a vendor. A
copy of the purchase order is sent to the accounts payable department and the
vendor. Once the items or services are completed, the vendor submits an invoice
electronically (email or EDI), or possibly by postal mail, to the company for
payment. The accounts payable department matches the purchase order against the
invoice and the receiving document (if items were received). If they match,
the accounts payable department issues payment. The “to-be” design and mapping
of legacy business processes are developed according to the company’s business
model. The “to-be” design will generally include company operating business
rules, data conversion, reporting, and organizational hierarchy requirements. Zhang
(2000) suggests the first thing that must be done is to evaluate what processes are
critical to the business. Several iterations and discussions take place between
stakeholders, users, and the implementation team, to ensure that all business
processes strengthen the process integration. Generally, the process examines the
“to-be” model as the ideal workflow without constraint, along with considerations
for future growth and IT investments. Consider, for example, the vendor payment
"to-be" process. The purchase order is entered into the ERP system common database. A
copy of the purchase order is electronically sent to both the vendor and the
company accounts payable department. When goods are received or services are
performed, a confirmation transaction takes place to alert of completion. Matching
is done and a check is prepared and automatically sent to the vendor in the ERP
system. The automated process enables accuracy of information, and eliminates
redundancy of data and potential delay of payment. Due to the characteristic nature
of ERP system cross-module integration features, the more modules selected for
implementation, the greater the integration benefits. However, with the increased
benefits come increased complexity and the need for care to correctly map a
company's business processes to the ERP system processes and keep risk to a minimum.
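As a hedged illustration of the automated matching step in the "to-be" vendor payment process described earlier, the sketch below compares the purchase order, the goods receipt and the invoice before payment is released; the field names and tolerance are assumptions.

def three_way_match(purchase_order, goods_receipt, invoice, tolerance=0.01):
    # Each argument is a dict with hypothetical 'po_number', 'quantity', 'amount' keys.
    same_po = (purchase_order["po_number"] == goods_receipt["po_number"]
               == invoice["po_number"])
    quantities_match = purchase_order["quantity"] == goods_receipt["quantity"]
    amounts_match = abs(purchase_order["amount"] - invoice["amount"]) <= tolerance
    return same_po and quantities_match and amounts_match

po = {"po_number": "PO-1", "quantity": 10, "amount": 250.0}
gr = {"po_number": "PO-1", "quantity": 10, "amount": 250.0}
inv = {"po_number": "PO-1", "quantity": 10, "amount": 250.0}
if three_way_match(po, gr, inv):
    print("Match found: payment is released to the vendor automatically")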
Implementing the processes incorrectly can lead to poor integration between
modules in the system, leading to significant operational deficiency. Additionally,
there exists considerable risk in changing multiple processes at a time
(Subramoniam, Tounsi, & Krishnankutty, 2009). The risk is certain to increase if a
fallback plan is non-existent. An industry best practice of streamlining and
simplifying business processes ahead of time may mitigate the risk. Prior research
has concluded that the higher a company’s process complexity, the higher the
radicalness of its ERP implementation to enable fundamental and radical change in
the company’s operational performance (Karimi, Somers, & Bhattacherjee, 2007).
However, many common business process challenges may be ameliorated if
addressed appropriately. Listed below (fig. 4.4) are a few typical ERP business
process challenges faced in business process integration, together with suggested
resolutions.

Fig. 4.4 – A few typical ERP business process challenges

Previous research has indicated that an ERP system meets only 80% of the
company’s functional requirements (Subramoniam et al., 2009). A gap exists
between company requirements and the proposed ERP solution. What is practiced
by most companies is listed below, based on a survey by Forester Research
(Lamonica, 1998; O'Leary, 2000).

There are many enterprise application integration (EAI) tools, structured
methodologies, and systematic procedures available to facilitate business process
integration. Companies typically approach business process integration based on
their organizational needs and constraints (Subramoniam et al., 2009). Competitive
pressure and system compatibility in business processes significantly explain the
success of ERP systems (Elbertsen & Van Reekum, 2008). Organizations like
Owens Corning (Bancroft , Seip, & Sprengel 1998; Romei, 1996), the State of
Kentucky (Henry, 1998), Eastman Kodak (Stevens, 1997), and NEC Technologies
(Bancroft et al., 1998) have all effectively integrated business process into the
implementation of their ERP system. Owens Corning began its business process
integration efforts by establishing a global supply-chain perspective that would fit
all its business unit improvements (Bancroft et al., 1998; Romei, 1996; Anita,
1996). Design teams worked in parallel to address integration issues across process
boundaries. A standard business process integration methodology using benchmark
data to design the process integration was used. In another example, the State of
Kentucky’s (Henry, 1998) enterprise ERP solution included financial, budget, and
procurement functionality. Their business processes required radical changes in
order to use a technical tool to change business processes, streamline government
administrative procedures, and cut costs.

4.3.4. Importance of strategic alignment of ERP with business goals


ERP systems are strongly characterized as operational information technology
(IT) systems, enabling management to have sufficient data for analysis and
decision making purposes (Mehrjerdi, 2010). This greatly contributes to a
company’s capability to align with its core business strategies and competences
(Chan & Huff, 1993). Alignment involves “applying IT in an appropriate and
timely way and in harmony with business strategies, goals, and needs” (Luftman &
Brier, 1999, p. 109). These types of enterprise information systems provide a
holistic integration, functional operation, and real-time processes in a single
common database. The mechanisms used to attain alignment can vary by business
strategy and industry. Interestingly, Chan et al. (1993) acknowledged a difference
between industry companies and academic institutions when examining
organization size on IT alignment. What would uniquely differentiate an academic
organization from an industry company in terms of alignment? Industry companies and
academic institutions operate within substantially different institutional
environments. While academic institutions have similar, if not the same highly
skilled leaders, organizational structures, processes, and size as industry companies,
academic institutions may not require the same level of requirements for their
alignment.

4.4. ERP System Use in Organizations


ERP systems are widely used in many Fortune 500 companies. Here are
several examples of ERP systems in real-world scenarios demonstrating business
value. These companies span a breadth of industry and ERP business needs. Aegis
Logistics, one of the leading US oil and gas logistics service providers, has completed
Project Bluewater, where they rolled out a major ERP implementation (Aegis
Logistics goes live with SAP ERP, 2010). The project is considered to be the
single most important IT initiative in the company’s history. Aegis experienced
several inefficiencies with backend operations that used old disparate systems
without any integration, lacked automation across key business processes, and did
not have a consolidated view of all operations. Over the years, their legacy systems
led to issues such as inconsistent workflows, unavailability of timely and accurate
data, duplication of work, and other operational challenges. Just a little over two
years after the implementation, Aegis realized the value and benefit of its ERP
solution. The ERP system brought discipline to their business processes,
eliminated duplication of work, and captured all crucial operational data to
facilitate a seamless information exchange.
Software Paradigms International (SPI) is a large Atlanta, Georgia based
company whose business leverages onshore and offshore business models to deliver
quality IT and Business Process Outsourcing (BPO) solutions (Faster consolidation
of financials and accounts, 2010). The company offers BPO services in medical
and billing, legal coding, accounting finance BPO, data entry and validation, and
image processing. Their main need for a system was to help consolidate financials
and improve customer service across lines of business. SPI was operating with two
distinct accounting systems, one for US operations, and the other for India
operations. Their project job costing process most often led to a lot of inconsistent
data being generated for tracking of employee actual time on projects, which led to
inaccuracy in estimating the price of project work and subsequent Profit & Loss
statements. A huge issue for SPI was to properly handle multiple currencies since
their operations were global. SPI chose an ERP solution that was not an exact
match to all of their requirements; however, the solution had the capability to get
the desired results. Leveraging BI tools and expert consulting services along with
the needed modules, SPI went live with an ERP implementation. SPI successfully
completed two years of ERP operational use without any disruption since
implementing in 2008. Now SPI can transact and process payments or receipts in
any currency. The company has a better view of its financials and expense data
than in the past. The ERP system has provided SPI with the ability to better
manage their customers and increase profits.

4.5. Future impacts to industry and organizations


ERP systems continue to have a significant impact on industry and organizations. So
many innovations have been developed and implemented just in the last five to ten
years. More focus has been placed on supply chain management and customer
relationship management. Many ERP vendors have incorporated these modules
into their systems to help better serve customers. Vendors realize the need for the
companies they serve to continue to be scalable, flexible, and have the ability to
compete in their respective industries. One future impact on the horizon is the
incorporation of cloud computing. Cloud computing is going to allow companies
to free up resources, because the company will have a third party hosting the
system and software needed to do business over the Internet. ERP systems could
be included in this opportunity. More companies will be served with this new
capability. The company will not be required to manage the hardware and software
used. Companies will be allowed to pay as they use the service, instead of making
a capital investment (Ford, 2010). Cloud computing will also make an impact on
rapidly changing flexible areas of the company. Collaboration and communication
including e-mail and file sharing will be positively affected. Transactions and
workflows outside of the company, sourcing, procurement, trade finance, and
supply chains, are suited for cloud computing. “This type of flexible technology
opens the door to a new way of conducting agile business without being limited by
technology infrastructure.” (Ford, 2010, p. 58) Business intelligence (BI) is another
hot topic making an impact on future industry and organizations. BI is the ability
to analyze data for decision making purposes using computer-based techniques.
ERP systems have a built in BI component to help the data mining process. BI is
also offered as SaaS, or software-as-a-service. It is expected that the SaaS BI
market will triple in size, compounding at an annual growth rate of 22.4
percent through 2013 (Kanaracus, 2010). SaaS BI can help front office workers
work more efficiently. With the BI component of an ERP system, the public sector has
found this feature to be important in critical areas such as public safety, border
management, and tax collection (Effective information management is key to BI
success, 2010). The impact of BI on the company’s bottom line is so significant
that employers are requesting more and more that graduates have BI experience.

5. MANUFACTURING EXECUTION SYSTEMS (MES)


A manufacturing execution system (MES) is an information systems (IS) application that
bridges the gap between IS at the top level, namely enterprise resource planning
(ERP), and IS at the lower levels, namely the automation systems. MES provides a
medium for optimizing the manufacturing process as a whole on a real-time basis.
With the support of MES, a company can be provided with updated and complete
information that can help the manufacturing department to maintain the quality of
its products in a shorter time and with lower cost. Considering the potential
benefits provided by MES, MES implementation is one of the strategies that can be
utilized by manufacturing companies in their effort to increase competitiveness in
the face of globalization. By using MES in combination with
ERP and other automation systems, a manufacturing company
is expected to be highly competitive. In implementing MES, functional
integration (making all the components of the manufacturing system work
well together) is the most difficult challenge. Functional boundaries for each
system component must be specified before a manufacturing company can
integrate its processes. When specifying the system requirement in MES
implementation, a company needs to use a reference model as a standard to follow.
Without using a reference model, manufacturing companies need to spend more
efforts to determine their requirements. Currently, there has been an industry
standard that specifies the sub-systems of a manufacturing execution systems and
defines the boundaries between ERP systems, MES, and other automation systems.
The standard is known as ISA-95. ISA-95 defines the terminology and models that
can be used in defining the requirements of a MES application for a specific
company and designing the integration of the company’s ERP system at a business
level with the production automation systems at a lower level. Although the
advantages of using MES have been stated in some studies, not much
research has been done on how to implement MES effectively. From a literature study
it is found that Hadjimichael, Cao, and Waldron have studied the design of MES.
These studies discuss the development of MES and propose methods for
developing MES. The methods proposed are very similar to common IS
development methodologies. The studies do not specifically address the unique
challenges of MES implementation while discussing the MES design process. Nor do the
studies address the use of reference models (standards) in the MES development
methodology. A study by Scholten & Schneider has proposed to use ISA-95 as a
guide in defining the requirement of MES. Another study by Govindaraju et al.
developed a methodology for MES design utilizing ISA-95. This study is focused
on how ISA-95 can be utilized for determining MES requirement specification
addressing different parts of ISA-95 standards in executing different steps of MES
design process. The purpose of the study reported here is to develop a
methodology for a MES implementation project, covering the system design and
implementation (construction) stages, which is an extension of the earlier study by
Govindaraju et al. [8].

5.1. Manufacturing Execution Systems Implementation


According to MESA International, "Manufacturing Execution Systems (MES)
is a dynamic information system application that drives execution of
manufacturing operations, and by using current and accurate data, MES guides,
triggers and reports plant activities as events. The MES set of functions manages
production operations from point of order release into manufacturing to point of
product delivery into finished goods. MES provides critical information about
production activities to other production related systems across the organization
and supply chain via bi-directional communication. In a nutshell, MES is defined
as the layer that integrates business systems with the plants control systems and is
commonly referred as integration from the shop floor to the top floor" [9].
The goals to be achieved by the implementation of MES include, among others:
1) Optimization of the entire supply chain through better workflow control and better, real-time documentation of process steps
2) Improved data quality for assessing processes and products
3) Visibility and transparency throughout the entire production process: only deviations are to be analyzed, and a detailed examination of the normal flow of operations is no longer required
4) Reduction of storage costs for work in progress (WIP) material due to reduced lead time
5) Reduction of administrative work for maintaining manufacturing documents
6) Reduction in the number of lost batches
7) Reduction of operating costs due to a high level of integration and the prevention of isolated solutions
8) Better decision making through easy access to current data and information for all critical business cases
In order to minimize the risks in the implementation process, guidelines for MES design and
implementation such as ISA-95 are needed [10], to help manufacturing
companies achieve the expected benefits mentioned above.

5.2. Model Development


The proposed methodology for the MES implementation process developed in
this study comprises five steps:
Initial assessment,
Design,
Configure/Build and test,
Deployment and
Operation.
The developed methodology is presented in Figure 5.1.

Fig. 5.1. Proposed MES implementation methodology

The stages of the MES implementation process proposed in this study are
discussed below:
• Initial assessment. There are two activities performed at this step:
– Determine implementation scope. The system hierarchy model of ISA-95 indicates that
there are 5 levels of systems in a manufacturing process. This model can be used as a
guide to determine the boundary of each system level [7]. As can be seen in Figure
1, MES (level 3) interacts with the ERP system and the automation systems. One
of the important concerns is the distribution of functions to the various systems (levels)
which support the individual tasks in the best possible way, so that the
tasks can be solved without major problems. Besides, the equipment hierarchy
model in ISA-95, which shows the hierarchy of the physical assets of the
enterprises engaged in manufacturing activities, can be used to determine the
physical boundary of the MES system [7]. Figure 5.2 shows the hierarchical model
of equipment.
– Analyze MES functional requirements. Information on
manufacturing operations management (MOM) contained in the document ISA-95
part 3 can be used as a guide to analyze the system functional requirements [7].
The MOM model contains a description of the functional aspects of MES. The diagram
that can be used to analyze the functional requirements of the system is the use case
diagram [12].
• MES design. There are two activities performed at this step:
– Generic design. Generic design is divided into two parts: a generic
functional model and generic sequence diagrams.
Generic functional model. The activity models of manufacturing operations
management in ANSI/ISA-95 part 1 (Models and terminology, 2000) help to
identify the main manufacturing operations management related
activities. They also help to identify the information flowing through
the activities of the company. A boundary is represented to
differentiate between activities at level 3 and activities at level 4.

Fig. 5.2. Equipment hierarchy model

Only a few activities are carried out at both levels. IDEF0 is chosen to model
the functional requirements of the system. The level of detail of the modeling is
determined by the development team. A generic IDEF0 functional model is
defined, covering all level 3 activities and their communications with some of the
level 4 activities. With ISA-95, the functional model is developed in such a way
that it separates the business processes from the manufacturing processes. This way,
it allows changes in the production processes (level 3) to take place without requiring
unnecessary changes at the business level (level 4).
Generic sequence diagrams. Information about the order in which different activities are carried out in
the manufacturing process provides a behavioral perspective on the execution of the
activities. In this stage, UML sequence diagrams are used to show which message
transfers take place and how communication evolves among the different actors
involved in carrying out each activity [12]. The generic sequence diagrams defined in
this step describe all information exchange between levels 3 and 4 of the company,
taking into account the activities and objects previously identified in the generic
IDEF0 diagrams. A detailed model description illustrating the standard data flow
between the functions of production plants is given in ISA-95 (see Figure 4).
The dotted lines define the interface between levels 3 and 4. The arrows show the
flow of data between the levels.
– Specific design. Specific design is divided into two
parts: a specific functional model and specific sequence diagrams.

5.2.1. Specific functional model.


The specific functional model is an adaptation of the generic IDEF0 models,
developed using company-specific requirements. The first step is to define the
company-specific IDEF0 model, taking into consideration the generic IDEF0 model of
ANSI/ISA-95 developed earlier. Before making the "To-Be" company-specific
IDEF0 model, it is proposed to form a multidisciplinary team to first develop the current
(As-Is) functional model (IDEF0) of the company. Using this model and taking
into account the desired final state that is expected to be reached with this integration
project, the specific IDEF0 (functional) model (To-Be) is defined.
Specific sequence diagram.
The second step is to adapt the generic UML sequence diagrams to the
specific company’s situation. The integration team defines the current sequence
model (As-Is) taking into consideration the As-Is IDEF0 model and the collected
information about the flow of current information exchange. Using these sequence
diagrams and taking into consideration the specific IDEF0 (To-Be) model as a
reference, specific UML sequence diagrams (To-Be) are modeled in order to
clearly define the information exchanges that are desired to occur within the
enterprise.
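A small sketch of the kind of level 3 / level 4 exchange that such sequence diagrams capture is given below; the message and field names are illustrative and only loosely follow ISA-95 terminology.

from dataclasses import dataclass

@dataclass
class ProductionSchedule:        # sent from level 4 (ERP) to level 3 (MES)
    order_id: str
    product: str
    quantity: int

@dataclass
class ProductionPerformance:     # returned from level 3 (MES) to level 4 (ERP)
    order_id: str
    produced: int
    scrapped: int

def mes_execute(schedule: ProductionSchedule) -> ProductionPerformance:
    # In a real MES this step would drive dispatching, tracking and data collection.
    return ProductionPerformance(schedule.order_id, produced=schedule.quantity, scrapped=0)

print(mes_execute(ProductionSchedule("ORD-42", "steel coil", 100)))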

Fig. 5.3. Enterprise control integration

5.2.2. Configure, Build and Test.


The goal to be achieved at this step is to configure, build and test the module
components according to the approved design specifications (which were
developed based on MES requirements). In general, the MES application is developed,
data migration is performed, and system testing is executed in this step. System
testing comprises unit tests, integration tests and performance tests.
Deployment.
In this step, final preparation for system transition is executed. Trainings are
delivered, cut-over planning is developed, and troubleshooting activities are
performed, before the new system is put into operation.
Operation.
The new system is put into operation, and the post-project support is provided
to help users work with the new system. Besides, a final system quality audit
needs to be done at this step.

5.3. Methodology
In order to check the appropriateness of the methodology developed in this
study, an empirical investigation was done at a steel manufacturing company. The
investigation was done through in-depth interviews with the MES project manager
and a series of discussions with a number of key MES project members. From the
investigation, a number of findings were collected, explaining how the execution
of the steps and their sequence in the proposed methodology are considered
appropriate and recommended for a smooth MES implementation process.
Besides, findings related to important risks or problems to be anticipated in each
step of the methodology were also collected. Based on the findings,
recommendations for improvements in the developed MES implementation
methodology were generated.

5.4. Empirical Investigation at a Steel Manufacturing Implementation Case

Empirical investigations were done at a steel manufacturing
company in Indonesia (SteelCo). The company is currently in the process of
finishing its MES implementation project, which was started in the year 2012. The
scope of MES implementation project covers Production Operations Management,
Quality Management, and Inventory Management. As mentioned in the project
documentation, by implementing MES the company aims to support the
improvement of supply chain performance, through the use of a more integrated
solution with real-time information, to enable realistic business decisions. At the
moment the investigation was done, the implementation process had entered the
deployment phase. Two important issues in the initial assessment stage are scoping
and defining the user requirements. The case company's experience shows that it is
very important that management is able to properly define the scope and extent of
the changes brought by the MES implementation before formally planning the
implementation project. The case also shows that in defining system requirements,
requirement elicitation is an important challenge. Requirement elicitation is the
activity of discovering and gathering relevant information from users, customers, and
other stakeholders who have direct or indirect influence on the performance of the
system [13]. An effective method is needed to support the company in finding the
right information from the right stakeholders (actors) involved. In the case study, a
series of workshops were executed for requirement elicitation purposes. Different
topics were discussed within big groups. The discussion topics were divided based
on modules to be developed. For each module, workshops were executed to firstly
discuss the old systems and the problems, followed by the basic concept of best
practices provided by software vendor. The workshops were executed for all the
modules. The workshops were successfully executed, but the results seemed
not that satisfying. The key users from the case company were not able to see
clearly what gaps were to be filled by the MES implementation and what
functionalities the systems should provide in order to get a comprehensive solution
for the company’s problems.
The MES design stage is divided into two stages: basic design and detailed design.
The activities performed at the basic design stage are the documentation of the system
design using descriptions and flow chart diagrams. The activities carried out in the
detailed design stage are the drafting of detailed MES system and interface requirements
using UML diagrams. Design is generally done by using the ISA-95 standard.
The design activities on this project have a slightly different grouping of
activities than the phases of the proposed methodology. However, in general, the
activities carried out in the design phase are in line with the activities in the
MES design methodology developed. One important thing that needs to be
underlined related to the design of the system is the importance of clearly defining
the mapping between the system features (functionalities) and user groups (actors).
Developing use case diagrams in this case becomes an important part of the system
design, in addition to the manufacturing process activity mapping using IDEF and
the sequence of events mapping using sequence diagrams. The development of the
use case diagram is necessary to determine a division of tasks and actors, which is
needed to ensure that no conflict arises from different users (subsystems)
performing the same functions, and also to assure that all the functions are assigned to
certain user groups. From interviews and discussions during the investigation, it
was found that to smooth the implementation process, it is important to add one
more step after the MES application is built and tested, before the implementation
process moves to the final deployment step. The additional step is needed to create a
pilot case (pilot deployment) and do a comprehensive review of the pilot
deployment, before entering full system deployment. A proper pilot system
covering end to end processes needs to be developed, in order to ensure success of
the overall MES deployment. Pilot deployment determines how well current
requirements fit into an MES and validates the integration strategy (to level 4
system as well as level 2 system), before overall deployment takes place. In order
to make sure that pilot deployment takes place in a proper way, different actors
need to be involved. They are: the project leader, ERP experts (because of the
integration with ERP), automation experts, QA/QC, integration experts (XI
experts, etc.) and shop floor automation/SFA experts (because of the integration
with SFA systems). For SAP implementing companies such as SteelCo, ERP
experts to be involved are the experts for MM/PP/PI, QM and APO modules. With
the addition of the pilot deployment step, change management needs to be executed at
the later step (the deployment step), since change management should take into
account the results of the comprehensive review of the pilot deployment. Final data
migration needs to be finalized at the (final) deployment step, after all the
important logic of system integration has been tested through the pilot deployment.
Thus, the (final) deployment step will include cut-over preparation and test, final
data migration, final change management, and trainings.

6. SUPERVISORY CONTROL AND DATA ACQUISITION (SCADA)


SCADA is an acronym for Supervisory Control and Data Acquisition.
SCADA systems are used to monitor and control a plant or equipment in industries
such as telecommunications, water and waste control, energy, oil and gas refining
and transportation. These systems encompass the transfer of data between a
SCADA central host computer and a number of Remote Terminal Units (RTUs)
and/or Programmable Logic Controllers (PLCs), and the central host and the
operator terminals. A SCADA system gathers information (such as where a leak on
a pipeline has occurred), transfers the information back to a central site, then alerts
the home station that a leak has occurred, carrying out necessary analysis and
control, such as determining if the leak is critical, and displaying the information in
a logical and organized fashion. These systems can be relatively simple, such as
one that monitors environmental conditions of a small office building, or very
complex, such as a system that monitors all the activity in a nuclear power plant or
the activity of a municipal water system. Traditionally, SCADA systems have
made use of the Public Switched Network (PSN) for monitoring purposes. Today
many systems are monitored using the infrastructure of the corporate Local Area
Network (LAN)/Wide Area Network (WAN). Wireless technologies are now being
widely deployed for purposes of monitoring.
SCADA systems consist of:
– One or more field data interface devices, usually RTUs, or PLCs,
which interface to field sensing devices and local control switchboxes and valve
actuators
– A communications system used to transfer data between field data
interface devices and control units and the computers in the SCADA central host.
The system can be radio, telephone, cable, satellite, etc., or any combination of
these.
– A central host computer server or servers (sometimes called a SCADA
Center, master station, or Master Terminal Unit (MTU))
– A collection of standard and/or custom software [sometimes called
Human Machine Interface (HMI) software or Man Machine Interface (MMI)
software] systems used to provide the SCADA central host and operator terminal
application, support the communications system, and monitor and control remotely
located field data interface devices
Figure 6.1 shows a very basic SCADA system, while Figure 6.2 shows a
typical SCADA system. Each of the above system components will be discussed in
detail in the next sections.

Fig. 6.1: Current SCADA Communications Media

Fig. 6.2: Typical SCADA System

6.1. Field Data Interface Devices
Field data interface devices form the "eyes and ears" of a SCADA system.
Devices such as reservoir level meters, water flow meters, valve position
transmitters, temperature transmitters, power consumption meters, and pressure
meters all provide information that can tell an experienced operator how well a
water distribution system is performing. In addition, equipment such as electric
valve actuators, motor control switchboards, and electronic chemical dosing
facilities can be used to form the "hands" of the SCADA system and assist in
automating the process of distributing water.
However, before any automation or remote monitoring can be achieved, the
information that is passed to and from the field data interface devices must be
converted to a form that is compatible with the language of the SCADA system.
To achieve this, some form of electronic field data interface is required. RTUs,
also known as Remote Telemetry Units, provide this interface. They are primarily
used to convert electronic signals received from field interface devices into the
language (known as the communication protocol) used to transmit the data over a
communication channel.
The instructions for the automation of field data interface devices, such as
pump control logic, are usually stored locally. This is largely due to the limited
bandwidth typical of communications links between the SCADA central host
computer and the field data interface devices. Such instructions are traditionally
held within the PLCs, which have in the past been physically separate from RTUs.
A PLC is a device used to automate monitoring and control of industrial facilities.
It can be used as a stand-alone or in conjunction with a SCADA or other system.
PLCs connect directly to field data interface devices and incorporate programmed
intelligence in the form of logical procedures that will be executed in the event of
certain field conditions.
PLCs have their origins in the automation industry and therefore are often
used in manufacturing and process plant applications. The need for PLCs to
connect to communication channels was not great in these applications, as they
often were only required to replace traditional relay logic systems or pneumatic
controllers. SCADA systems, on the other hand, have origins in early telemetry
applications, where it was only necessary to know basic information from a remote
source. The RTUs connected to these systems had no need for control
programming because the local control algorithm was held in the relay switching
logic.

As PLCs were used more often to replace relay switching logic control
systems, telemetry was used more and more with PLCs at the remote sites. It
became desirable to influence the program within the PLC through the use of a
remote signal. This is in effect the "Supervisory Control" part of the acronym
SCADA. Where only a simple local control program was required, it became
possible to store this program within the RTU and perform the control within that
device. At the same time, traditional PLCs included communications modules that
would allow PLCs to report the state of the control program to a computer plugged
into the PLC or to a remote computer via a telephone line. PLC and RTU
manufacturers therefore compete for the same market.
As a result of these developments, the line between PLCs and RTUs has
blurred and the terminology is virtually interchangeable. For the sake of simplicity,
the term RTU will be used to refer to a remote field data interface device; however,
such a device could include automation programming that traditionally would have
been classified as a PLC.

6.2. Communications Network


The communications network is intended to provide the means by which data
can be transferred between the central host computer servers and the field-based
RTUs. The Communication Network refers to the equipment needed to transfer
data to and from different sites. The medium used can either be cable, telephone or
radio.
The use of cable is usually implemented in a factory. This is not practical for
systems covering large geographical areas because of the high cost of the cables,
conduits and the extensive labor in installing them. The use of telephone lines (i.e.,
leased or dial-up) is a more economical solution for systems with large coverage.
The leased line is used for systems requiring on-line connection with the remote
stations. This is expensive since one telephone line will be needed per site. Dial-up
lines can be used on systems requiring updates at regular intervals (e.g., hourly
updates). Here ordinary telephone lines can be used. The host can dial a particular
number of a remote site to get the readings and send commands.
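As a rough sketch of this dial-up polling idea, the C fragment below cycles through a list of remote site numbers once per update interval. The functions dial_site(), read_values() and hang_up(), the telephone numbers and the hourly period are hypothetical placeholders for the modem and protocol layers, not real library calls.

/* Sketch of scheduled dial-up polling of remote sites.
   dial_site(), read_values() and hang_up() are hypothetical placeholders. */
#include <stdio.h>
#include <unistd.h>                 /* sleep(), POSIX */

#define N_SITES       3
#define POLL_PERIOD_S 3600          /* e.g. hourly updates */

static int  dial_site(const char *number) { printf("dialling %s\n", number); return 0; }
static int  read_values(int site)         { printf("reading site %d\n", site); return 0; }
static void hang_up(void)                 { printf("hanging up\n"); }

int main(void) {
    const char *numbers[N_SITES] = { "555-0101", "555-0102", "555-0103" };

    for (;;) {                                 /* the host runs this forever    */
        for (int i = 0; i < N_SITES; i++) {
            if (dial_site(numbers[i]) == 0) {  /* connect over an ordinary line */
                read_values(i);                /* fetch readings, send commands */
                hang_up();
            }
        }
        sleep(POLL_PERIOD_S);                  /* wait until the next cycle     */
    }
}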
Remote sites are usually not accessible by telephone lines. The use of radio
offers an economical solution. Radio modems are used to connect the remote sites
to the host. An on-line operation can also be implemented on the radio system. For
locations where a direct radio link cannot be established, a radio repeater is used to
link these sites.

Historically, SCADA networks have been dedicated networks; however, with
the increased deployment of office LANs and WANs as a solution for interoffice
computer networking, there exists the possibility to integrate SCADA LANs into
everyday office computer networks.
The foremost advantage of this arrangement is that there is no need to invest
in a separate computer network for SCADA operator terminals. In addition, there
is an easy path to integrating SCADA data with existing office applications, such
as spreadsheets, work management systems, data history databases, Geographic
Information System (GIS) systems, and water distribution modeling systems.

6.3. Central Host Computer


The central host computer or master station is most often a single computer or
a network of computer servers that provide a man-machine operator interface to
the SCADA system. The computers process the information received from and
sent to the RTU sites and present it to human operators in a form that the operators
can work with. Operator terminals are connected to the central host computer by a
LAN/WAN so that the viewing screens and associated data can be displayed for
the operators. Recent SCADA systems are able to offer high resolution computer
graphics to display a graphical user interface or mimic screen of the site or water
supply network in question. Historically, SCADA vendors offered proprietary
hardware, operating systems, and software that was largely incompatible with
other vendors' SCADA systems. Expanding the system required a further contract
with the original SCADA vendor. Host computer platforms characteristically
employed UNIX-based architecture, and the host computer network was physically
removed from any office-computing domain.
However, with the increased use of the personal computer, computer
networking has become commonplace in the office and as a result, SCADA
systems are now available that can network with office-based personal computers.
Indeed, many of today's SCADA systems can reside on computer servers that are
identical to those servers and computers used for traditional office applications.
This has opened a range of possibilities for the linking of SCADA systems to
office-based applications such as GIS systems, hydraulic modeling software,
drawing management systems, work scheduling systems, and information
databases.

6.4. Operator workstations and software components
Operator workstations are most often computer terminals that are networked
with the SCADA central host computer. The central host computer acts as a server
for the SCADA application, and the operator terminals are clients that request and
send information to the central host computer based on the request and action of
the operators.
An important aspect of every SCADA system is the computer software used
within the system. The most obvious software component is the operator interface
or Man Machine Interface/Human Machine Interface (MMI/HMI) package;
however, software of some form pervades all levels of a SCADA system.
Depending on the size and nature of the SCADA application, software can be a
significant cost item when developing, maintaining, and expanding a SCADA
system. When software is well defined, designed, written, checked, and tested, a
successful SCADA system will likely be produced. Poor performance in any of
these project phases will very easily cause a SCADA project to fail.
Many SCADA systems employ commercial proprietary software upon which
the SCADA system is developed. The proprietary software often is configured for
a specific hardware platform and may not interface with the software or hardware
produced by competing vendors. A wide range of commercial off-the-shelf
(COTS) software products also are available, some of which may suit the required
application. COTS software usually is more flexible, and will interface with
different types of hardware and software. Generally, the focus of proprietary
software is on processes and control functionality, while COTS software
emphasizes compatibility with a variety of equipment and instrumentation. It is
therefore important to ensure that adequate planning is undertaken to select the
software systems appropriate to any new SCADA system.
Software products typically used within a SCADA system are as follows:
1. Central host computer operating system: Software used to control the
central host computer hardware. The software can be based on UNIX or other
popular operating systems.
2. Operator terminal operating system: Software used to control the
operator terminal hardware. The software is usually the same as the central
host computer operating system. This software, along with that for the central host
computer, usually contributes to the networking of the central host and the operator
terminals.

3. Central host computer application: Software that handles the
transmittal and reception of data to and from the RTUs and the central host. The
software also provides the graphical user interface which offers site mimic screens,
alarm pages, trend pages, and control functions.
4. Operator terminal application: Application that enables users to access
information available on the central host computer application. It is usually a
subset of the software used on the central host computers.
5. Communications protocol drivers: Software that is usually based
within the central host and the RTUs, and is required to control the translation and
interpretation of the data between ends of the communications links in the system.
The protocol drivers prepare the data for use either at the field devices or the
central host end of the system (a minimal framing sketch is given at the end of this subsection).
6. Communications network management software: Software required to
control the communications network and to allow the communications networks
themselves to be monitored for performance and failures.
7. RTU automation software: Software that allows engineering staff to
configure and maintain the application housed within the RTUs (or PLCs). Most
often this includes the local automation application and any data processing tasks
that are performed within the RTU.
The preceding software products provide the building blocks for the
application-specific software, which must be defined, designed, written, tested, and
deployed for each SCADA system.
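To make the role of the protocol drivers in item 5 more concrete, the C sketch below frames a single point reading for transmission and parses it again at the other end of the link. The frame layout (station id, point id, 16-bit value, one-byte checksum) is an invented example for this lecture, not a real SCADA protocol such as Modbus or DNP3.

/* Illustrative framing/parsing of one reading; the frame layout is invented. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

typedef struct {
    uint8_t  station;    /* which RTU sent the value       */
    uint8_t  point;      /* point address within the RTU   */
    uint16_t value;      /* raw 16-bit reading             */
    uint8_t  checksum;   /* simple sum of the other bytes  */
} frame_t;

static uint8_t checksum(const uint8_t *b, size_t n) {
    uint8_t s = 0;
    while (n--) s += *b++;
    return s;
}

/* "Driver" on the RTU side: build the byte stream for the link. */
static void encode(const frame_t *f, uint8_t out[5]) {
    out[0] = f->station;
    out[1] = f->point;
    out[2] = (uint8_t)(f->value >> 8);
    out[3] = (uint8_t)(f->value & 0xFF);
    out[4] = checksum(out, 4);
}

/* "Driver" on the central host side: validate and unpack. */
static int decode(const uint8_t in[5], frame_t *f) {
    if (checksum(in, 4) != in[4]) return -1;    /* corrupted on the link */
    f->station  = in[0];
    f->point    = in[1];
    f->value    = (uint16_t)((in[2] << 8) | in[3]);
    f->checksum = in[4];
    return 0;
}

int main(void) {
    uint8_t wire[5];
    frame_t tx = { .station = 7, .point = 2, .value = 1234 }, rx;
    encode(&tx, wire);
    if (decode(wire, &rx) == 0)
        printf("station %u point %u value %u\n", rx.station, rx.point, rx.value);
    return 0;
}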

6.5. SCADA Architectures


SCADA systems have evolved in parallel with the growth and sophistication
of modern computing technology. The following sections will provide a
description of the following three generations of SCADA systems:
First Generation – Monolithic
Second Generation – Distributed
Third Generation – Networked

6.5.1. Monolithic SCADA Systems


When SCADA systems were first developed, the concept of computing in
general centered on “mainframe” systems. Networks were generally non-existent,
and each centralized system stood alone. As a result, SCADA systems were
standalone systems with virtually no connectivity to other systems. The Wide Area
Networks (WANs) that were implemented to communicate with remote terminal
units (RTUs) were designed with a single purpose in mind–that of communicating
with RTUs in the field and nothing else. In addition, WAN protocols in use today
were largely unknown at the time. The communication protocols in use on
SCADA networks were developed by vendors of RTU equipment and were often
proprietary. In addition, these protocols were generally very “lean”, supporting
virtually no functionality beyond that required for scanning and controlling points
within the remote device. Also, it was generally not feasible to intermingle other
types of data traffic with RTU communications on the network. Connectivity to the
SCADA master station itself was very limited by the system vendor. Connections
to the master typically were done at the bus level via a proprietary adapter or
controller plugged into the Central Processing Unit (CPU) backplane. Redundancy
in these first generation systems was accomplished by the use of two identically
equipped mainframe systems, a primary and a backup, connected at the bus level.
The standby system’s primary function was to monitor the primary and take over
in the event of a detected failure. This type of standby operation meant that little or
no processing was done on the standby system. Figure 6.3 shows a typical first
generation SCADA architecture.

Figure 6.3. First Generation SCADA Architecture

6.5.2. Distributed SCADA Systems


The next generation of SCADA systems took advantage of developments and
improvement in system miniaturization and Local Area Networking (LAN)
technology to distribute the processing across multiple systems. Multiple stations,
each with a specific function, were connected to a LAN and shared information
with each other in real-time. These stations were typically of the mini-computer
class, smaller and less expensive than their first generation processors. Some of
these distributed stations served as communications processors, primarily
communicating with field devices such as RTUs. Some served as operator
interfaces, providing the human-machine interface (HMI) for system operators.
Still others served as calculation processors or database servers.
The distribution of individual SCADA system functions across multiple
systems provided more processing power for the system as a whole than would
have been available in a single processor. The networks that connected these
individual systems were generally based on LAN protocols and were not capable
of reaching beyond the limits of the local environment. Some of the LAN protocols
that were used were of a proprietary nature, where the vendor created its own
network protocol or version thereof rather than pulling an existing one off the
shelf. This allowed a vendor to optimize its LAN protocol for real-time traffic, but
it limited (or effectively eliminated) the connection of networks from other vendors
to the SCADA LAN. Figure 6.4 depicts typical second generation SCADA
architecture.
Distribution of system functionality across network-connected systems served
not only to increase processing power, but also to improve the redundancy and
reliability of the system as a whole. Rather than the simple primary/standby
failover scheme that was utilized in many first generation systems, the distributed
architecture often kept all stations on the LAN in an online state all of the time.

Figure 6.4: Second Generation SCADA Architecture

For example, if an HMI station were to fail, another HMI station could be
used to operate the system, without waiting for failover from the primary system to
the secondary. The WANs used to communicate with devices in the field were
largely unchanged by the development of LAN connectivity between local stations
at the SCADA master. These external communications networks were still limited
to RTU protocols and were not available for other types of network traffic. As was
the case with the first generation of systems, the second generation of SCADA
systems was also limited to hardware, software, and peripheral devices that were
provided or at least selected by the vendor.

6.5.3. Networked SCADA Systems


The current generation of SCADA master station architecture is closely
related to that of the second generation, with the primary difference being that of
an open system architecture rather than a vendor controlled, proprietary
environment. There are still multiple networked systems, sharing master station
functions. There are still RTUs utilizing protocols that are vendor-proprietary. The
major improvement in the third generation is that of opening the system
architecture, utilizing open standards and protocols and making it possible to
distribute SCADA functionality across a WAN and not just a LAN. Open
standards eliminate a number of the limitations of previous generations of SCADA
systems. The utilization of off-the-shelf systems makes it easier for the user to
connect third party peripheral devices (such as monitors, printers, disk drives, tape
drives, etc.) to the system and/or the network. As they have moved to “open” or
“off-the-shelf” systems, SCADA vendors have gradually gotten out of the
hardware development business. These vendors have looked to system vendors
such as Compaq, Hewlett-Packard, and Sun Microsystems for their expertise in
developing the basic computer platforms and operating system software. This
allows SCADA vendors to concentrate their development in an area where they
can add specific value to the system–that of SCADA master station software. The
major improvement in third generation SCADA systems comes from the use of
WAN protocols such as the Internet Protocol (IP) for communication between the
master station and communications equipment. This allows the portion of the
master station that is responsible for communications with the field devices to be
separated from the master station “proper” across a WAN. Vendors are now
producing RTUs that can communicate with the master station using an Ethernet
connection. Figure 6.5 represents a networked SCADA system.

Figure 6.5: Third Generation SCADA System

Another advantage brought about by the distribution of SCADA functionality over a WAN is that of disaster survivability. The distribution of SCADA
processing across a LAN in second-generation systems improves reliability, but in
the event of a total loss of the facility housing the SCADA master, the entire
system could be lost as well. By distributing the processing across physically
separate locations, it becomes possible to build a SCADA system that can survive
a total loss of any one location. For some organizations that see SCADA as a
super-critical function, this is a real benefit.

7. GENERAL SCADA COMPONENTS


7.1. PLC BASICS

7.1.1. Controllers
What type of task might a control system have? It might be required to control
a sequence of events or maintain some variable constant or follow some prescribed
change. For example, the control system for an automatic drilling machine (Figure
7.1(a)) might be required to start lowering the drill when the workpiece is in
position, start drilling when the drill reaches the workpiece, stop drilling when the
drill has produced the required depth of hole, retract the drill and then switch off
and wait for the next workpiece to be put in position before repeating the
operation. Another control system (Figure 7.1(b)) might be used to control the
number of items moving along a conveyor belt and direct them into a packing case.

The inputs to such control systems might be from switches being closed or opened,
e.g. the presence of the workpiece might be indicated by it moving against a switch
and closing it, or other sensors such as those used for temperature or flow rates.
The controller might be required to run a motor to move an object to some
position, or to turn a valve, or perhaps a heater, on or off.

Figure 7.1 An example of a control task and some input sensors: (a) an automatic drilling
machine, (b) a packing system

What form might a controller have? For the automatic drilling machine, we
could wire up electrical circuits in which the closing or opening of switches would
result in motors being switched on or valves being actuated. Thus we might have
the closing of a switch activating a relay which, in turn, switches on the current to
a motor and causes the drill to rotate (Figure 7.2). Another switch might be used to
activate a relay and switch on the current to a pneumatic or hydraulic valve which
results in pressure being switched to drive a piston in a cylinder and so results in
the workpiece being pushed into the required position. Such electrical circuits
would have to be specific to the automatic drilling machine. For controlling the
number of items packed into a packing case we could likewise wire up electrical
circuits involving sensors and motors. However, the controller circuits we devised
for these two situations would be different. In the ‘traditional’ form of control
system, the rules governing the control system and when actions are initiated are
determined by the wiring. When the rules used for the control actions are changed,
the wiring has to be changed.

Figure 7.2 A control circuit

7.1.2. Microprocessor controlled system


Instead of hardwiring each control circuit for each control situation we can
use the same basic system for all situations if we use a microprocessor-based
system and write a program to instruct the microprocessor how to react to each
input signal from, say, switches and give the required outputs to, say, motors and
valves. Thus we might have a program of the form:
If switch A closes
Output to motor circuit
If switch B closes
Output to valve circuit
By changing the instructions in the program we can use the same
microprocessor system to control a wide variety of situations. As an illustration,
the modern domestic washing machine uses a microprocessor system. Inputs to it
arise from the dials used to select the required wash cycle, a switch to determine
that the machine door is closed, a temperature sensor to determine the temperature
of the water and a switch to detect the level of the water. On the basis of these
inputs the microprocessor is programmed to give outputs which switch on the
drum motor and control its speed, open or close cold and hot water valves, switch
on the drain pump, control the water heater and control the door lock so that the
machine cannot be opened until the washing cycle is completed.
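The fragment above is only pseudocode. A minimal C sketch of the same idea, a program that repeatedly reads the inputs and drives the outputs according to a set of rules, is given below; the functions read_switch() and set_output() are assumed stand-ins for whatever input/output hardware access a real board would provide.

/* Minimal sketch of a microprocessor-based controller: poll the inputs,
   apply the rules, drive the outputs. Hardware access is simulated. */
#include <stdbool.h>
#include <stdio.h>

enum { SWITCH_A, SWITCH_B };          /* inputs  */
enum { MOTOR, VALVE };                /* outputs */

/* Placeholders for real port reads/writes (assumptions for the sketch). */
static bool read_switch(int which)            { return which == SWITCH_A; }
static void set_output(int which, bool state) { printf("output %d -> %d\n", which, state); }

int main(void) {
    for (int scan = 0; scan < 3; scan++) {      /* a real controller loops forever */
        bool a = read_switch(SWITCH_A);
        bool b = read_switch(SWITCH_B);

        /* "If switch A closes, output to motor circuit" */
        set_output(MOTOR, a);
        /* "If switch B closes, output to valve circuit" */
        set_output(VALVE, b);
    }
    return 0;
}

By changing only the rules in the middle of the loop, the same skeleton serves a drilling machine, a packing line or a washing machine.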

7.1.3. The programmable logic controller


A programmable logic controller (PLC) is a special form of microprocessor-based controller that uses a programmable memory to store instructions and to implement functions such as logic, sequencing, timing, counting and arithmetic in order to control machines and processes (Figure 7.3). PLCs are designed to be operated by engineers with perhaps a limited knowledge of computers and computing languages. They are not designed so that only computer programmers
can set up or change the programs. Thus, the designers of the PLC have pre-
programmed it so that the control program can be entered using a simple, rather
intuitive, form of language. The term logic is used because programming is
primarily concerned with implementing logic and switching operations, e.g. if A or
B occurs switch on C, if A and B occurs switch on D. Input devices, e.g. sensors
such as switches, and output devices in the system being controlled, e.g. motors,
valves, etc., are connected to the PLC. The operator then enters a sequence of
instructions, i.e. a program, into the memory of the PLC. The controller then
monitors the inputs and outputs according to this program and carries out the
control rules for which it has been programmed.
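A hedged illustration of the switching rules mentioned above ("if A or B occurs switch on C, if A and B occur switch on D") is given below in C rather than in a PLC language; on a real PLC the same logic would be entered as two ladder rungs or as IEC 61131-3 structured text.

/* The two example rules from the text, evaluated once per scan cycle. */
#include <stdbool.h>
#include <stdio.h>

int main(void) {
    /* Input image: states read from the field at the start of the scan. */
    bool A = true, B = false;

    /* Rule 1: if A or B occurs, switch on C. */
    bool C = A || B;
    /* Rule 2: if A and B occur, switch on D. */
    bool D = A && B;

    /* Output image: written back to the field at the end of the scan. */
    printf("C = %d, D = %d\n", C, D);
    return 0;
}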

Figure 7.3 A programmable logic controller

PLCs have the great advantage that the same basic controller can be used with
a wide range of control systems. To modify a control system and the rules that are
to be used, all that is necessary is for an operator to key in a different set of
instructions. There is no need to rewire. The result is a flexible, cost effective,
system which can be used with control systems which vary quite widely in their
nature and complexity. PLCs are similar to computers but whereas computers are
optimised for calculation and display tasks, PLCs are optimised for control tasks
and the industrial environment. Thus PLCs:
– are rugged and designed to withstand vibrations, temperature, humidity and noise;
– have interfacing for inputs and outputs already inside the controller;
– are easily programmed and have an easily understood programming language which is primarily concerned with logic and switching operations.
The first PLC was developed in 1969. They are now widely used and extend
from small self-contained units for use with perhaps 20 digital inputs/outputs to
modular systems which can be used for large numbers of inputs/outputs, handle
digital or analogue inputs/outputs, and also carry out proportional-integral-
derivative control modes.

7.2. Hardware
Typically a PLC system has the basic functional components of processor
unit, memory, power supply unit, input/output interface section, communications
interface and the programming device. Figure 7.4 shows the basic arrangement.

Figure 7.4 The PLC system

The processor unit or central processing unit (CPU) is the unit containing the
microprocessor and this interprets the input signals and carries out the control
actions, according to the program stored in its memory, communicating the
decisions as action signals to the outputs.
The power supply unit is needed to convert the mains A.C. voltage to the low
D.C. voltage (5 V) necessary for the processor and the circuits in the input and
output interface modules.
The programming device is used to enter the required program into the
memory of the processor. The program is developed in the device and then
transferred to the memory unit of the PLC.
The memory unit is where the program is stored that is to be used for the
control actions to be exercised by the microprocessor and data stored from the
input for processing and for the output for outputting.
The input and output sections are where the processor receives information
from external devices and communicates information to external devices. The
inputs might thus be from switches, as illustrated in Figure 7.1(a) with the
automatic drill, or other sensors such as photo-electric cells, as in the counter
mechanism in Figure 7.1(b), temperature sensors, or flow sensors, etc. The outputs
might be to motor starter coils, solenoid valves, etc.
Input and output devices can be classified as giving signals which are
discrete, digital or analogue (Figure 7.5). Devices giving discrete or digital signals
are ones where the signals are either off or on. Thus a switch is a device giving a
discrete signal, either no voltage or a voltage. Digital devices can be considered to
be essentially discrete devices which give a sequence of on−off signals. Analogue
devices give signals whose size is proportional to the size of the variable being
monitored. For example, a temperature sensor may give a voltage proportional to
the temperature.

Figure 7.5 Signals: (a) discrete, (b) digital, (c) analogue

The communications interface is used to receive and transmit data on communication networks from or to other remote PLCs (Figure 7.6). It is
concerned with such actions as device verification, data acquisition,
synchronisation between user applications and connection management.

Figure 7.6 Basic communications model

7.3. Internal architecture
Figure 7.7 shows the basic internal architecture of a PLC. It consists of a
central processing unit (CPU) containing the system microprocessor, memory, and
input/output circuitry. The CPU controls and processes all the operations within
the PLC. It is supplied with a clock with a frequency of typically between 1 and 8
MHz. This frequency determines the operating speed of the PLC and provides the
timing and synchronisation for all elements in the system. The information within
the PLC is carried by means of digital signals. The internal paths along which
digital signals flow are called buses. In the physical sense, a bus is just a number of
conductors along which electrical signals can flow. It might be tracks on a printed
circuit board or wires in a ribbon cable. The CPU uses the data bus for sending
data between the constituent elements, the address bus to send the addresses of
locations for accessing stored data and the control bus for signals relating to
internal control actions. The system bus is used for communications between the
input/output ports and the input/output unit.

Figure 7.7 Architecture of a PLC

7.3.1. The CPU


The internal structure of the CPU depends on the microprocessor concerned.
In general they have:
1. An arithmetic and logic unit (ALU) which is responsible for
data manipulation and carrying out arithmetic operations of
addition and subtraction and logic operations of AND, OR,
NOT and EXCLUSIVE-OR.
2. Memory, termed registers, located within the microprocessor
and used to store information involved in program execution.
3. A control unit which is used to control the timing of operations.

7.3.2. The buses


The buses are the paths used for communication within the PLC. The
information is transmitted in binary form, i.e. as a group of bits with a bit being a
binary digit of 1 or 0, i.e. on/off states. The term word is used for the group of bits
constituting some information. Thus an 8-bit word might be the binary number
00100110. Each of the bits is communicated simultaneously along its own parallel
wire. The system has four buses:
– The data bus carries the data used in the processing carried out by the CPU.
A microprocessor termed 8-bit has an internal data bus which can handle
8-bit numbers. It can thus perform operations between 8-bit numbers and deliver
results as 8-bit values.
– The address bus is used to carry the addresses of memory locations. So that
each word can be located in the memory, every memory location is given a unique
address. Just like houses in a town are each given a distinct address so that they
can be located, so each word location is given an address so that data stored at a
particular location can be accessed by the CPU either to read data located there or
put, i.e. write, data there. It is the address bus which carries the information
indicating which address is to be accessed. If the address bus consists of 8 lines,
the number of 8-bit words, and hence the number of distinct addresses, is 2^8 = 256. With 16 address lines, 2^16 = 65 536 addresses are possible.
– The control bus carries the signals used by the CPU for control, e.g. to
inform memory devices whether they are to receive data from an input or output
data and to carry timing signals used to synchronise actions.
– The system bus is used for communications between the input/output ports
and the input/output unit.

7.3.3. Memory
There are several memory elements in a PLC system:
– System read-only-memory (ROM) to give permanent storage for the
operating system and fixed data used by the CPU.

– Random-access memory (RAM) for the user’s program.
– Random-access memory (RAM) for data. This is where information is
stored on the status of input and output devices and the values of timers and
counters and other internal devices. The data RAM is sometimes referred to as a
data table or register table. Part of this memory, i.e. a block of addresses, will be
set aside for input and output addresses and the states of those inputs and outputs.
Part will be set aside for preset data and part for storing counter values, timer
values, etc.
– Possibly, as a bolt-on extra module, erasable and programmable read-
only-memory (EPROM) for ROMs that can be programmed and then the program
made permanent.
The programs and data in RAM can be changed by the user. All PLCs will
have some amount of RAM to store programs that have been developed by the user
and program data. However, to prevent the loss of programs when the power
supply is switched off, a battery is used in the PLC to maintain the RAM contents
for a period of time. After a program has been developed in RAM it may be loaded
into an EPROM memory chip, often a bolt-on module to the PLC, and so made
permanent. In addition there are temporary buffer stores for the input/output
channels. The storage capacity of a memory unit is determined by the number of
binary words that it can store. Thus, if a memory size is 256 words then it can store
256 × 8 = 2048 bits if 8-bit words are used and 256 × 16 = 4096 bits if 16-bit
words are used. Memory sizes are often specified in terms of the number of storage
locations available, with 1K representing the number 2^10, i.e. 1024. Manufacturers supply memory chips with the storage locations grouped in groups of 1, 4 and 8 bits. A 4K × 1 memory has 4 × 1 × 1024 bit locations, and a 4K × 8 memory has 4 × 8 × 1024 bit locations. The term byte is used for a word of length 8 bits. Thus the 4K × 8 memory can store 4096 bytes. With a 16-bit address bus we can have 2^16 different addresses and so, with 8-bit words stored at each address, we can have 2^16 × 8 storage locations, i.e. a memory of size 2^16 × 8 / 2^10 = 64K × 8, which might be implemented as four 16K × 8 bit memory chips.
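The address-space and memory-size arithmetic above can be checked with a few lines of C; the values printed match the worked figures in the text (256 addresses from 8 address lines, 65 536 from 16, 2048 bits in a 256-word × 8-bit memory, and 64K words from a 16-bit address bus).

/* Reproduces the address-bus and memory-size arithmetic from the text. */
#include <stdio.h>

int main(void) {
    /* Number of distinct addresses given by n address lines is 2^n. */
    printf("8 address lines : %lu addresses\n", 1UL << 8);    /* 256    */
    printf("16 address lines: %lu addresses\n", 1UL << 16);   /* 65 536 */

    /* A 256-word memory stores 256 x 8 bits with 8-bit words, 256 x 16 with 16-bit. */
    printf("256 x 8  = %d bits\n", 256 * 8);                   /* 2048 */
    printf("256 x 16 = %d bits\n", 256 * 16);                  /* 4096 */

    /* 1K = 2^10 = 1024, so a 16-bit address bus with 8-bit words gives
       2^16 x 8 bits = 64K x 8, e.g. four 16K x 8 memory chips. */
    printf("2^16 words = %luK\n", (1UL << 16) / (1UL << 10));  /* 64 */
    return 0;
}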

7.3.4. Input/output unit


The input/output unit provides the interface between the system and the
outside world, allowing for connections to be made through input/output channels
to input devices such as sensors and output devices such as motors and solenoids.
It is also through the input/output unit that programs are entered from a program
panel. Every input/output point has a unique address which can be used by the
CPU. It is like a row of houses along a road, number 10 might be the ‘house’ to be
used for an input from a particular sensor while number ‘45’ might be the ‘house’
to be used for the output to a particular motor. The input/output channels provide
isolation and signal conditioning functions so that sensors and actuators can often
be directly connected to them without the need for other circuitry. Electrical
isolation from the external world is usually by means of optoisolators (the term
optocoupler is also often used). Figure 7.8 shows the principle of an optoisolator.
When a digital pulse passes through the light-emitting diode, a pulse of infrared
radiation is produced. This pulse is detected by the phototransistor and gives rise to
a voltage in that circuit. The gap between the light-emitting diode and the
phototransistor gives electrical isolation but the arrangement still allows for a
digital pulse in one circuit to give rise to a digital pulse in another circuit.

Figure 7.8 Optoisolator

The digital signal that is generally compatible with the microprocessor in the
PLC is 5 V d.c. However, signal conditioning in the input channel, with isolation,
enables a wide range of input signals to be supplied to it. A range of inputs might
be available with a larger PLC, e.g. 5 V, 24 V, 110 V and 240 V digital/discrete,
i.e. on−off, signals (Figure 7.9). A small PLC is likely to have just one form of
input, e.g. 24 V.

Figure 7.9 Input levels

The output from the input/output unit will be digital with a level of 5 V.
However, after signal conditioning with relays, transistors or triacs, the output
from the output channel might be a 24 V, 100 mA switching signal, a d.c. voltage
of 110 V, 1 A or perhaps 240 V, 1 A a.c., or 240 V, 2 A a.c., from a triac output
channel (Figure 7.10). With a small PLC, all the outputs might be of one type, e.g.
240 V a.c., 1 A. With modular PLCs, however, a range of outputs can be
accommodated by selection of the modules to be used.

Figure 7.10 Output levels

Outputs are specified as being of relay type, transistor type or triac type:
– With the relay type, the signal from the PLC output is used to operate
a relay and is able to switch currents of the order of a few amperes in an external
circuit. The relay not only allows small currents to switch much larger currents but
also isolates the PLC from the external circuit. Relays are, however, relatively slow
to operate. Relay outputs are suitable for a.c. and d.c. switching. They can
withstand high surge currents and voltage transients.
– The transistor type of output uses a transistor to switch current
through the external circuit. This gives a considerably faster switching action. It is,
however, strictly for d.c. switching and is destroyed by overcurrent and high
reverse voltage. As a protection, either a fuse or built-in electronic protection is
used. Optoisolators are used to provide isolation.
– Triac outputs, with optoisolators for isolation, can be used to control
external loads which are connected to the a.c. power supply. It is strictly for a.c.
operation and is very easily destroyed by overcurrent. Fuses are virtually always
included to protect such outputs.

7.3.5. Sourcing and sinking


The terms sourcing and sinking are used to describe the way in which d.c.
devices are connected to a PLC. With sourcing, using the conventional current
flow direction as from positive to negative, an input device receives current from
the input module, i.e. the input module is the source of the current (Figure 7.11(a)).
If the current flows from the output module to an output load then the output
module is referred to as sourcing (Figure 7.11(b)). With sinking, using the
conventional current flow direction as from positive to negative, an input device
supplies current to the input module, i.e. the input module is the sink for the
current (Figure 7.12(a)). If the current flows to the output module from an output
load then the output module is referred to as sinking (Figure 7.12(b)).

Figure 7.11 Sourcing

Figure 7.12 Sinking

7.4. Controller selection criteria


Basic choices and selection criteria follow to help in specifying a controller
for an industrial application. Eleven criteria in the table can help when selecting a
controller type from among relays, timers, analog instruments, smart relays,
programmable logic controllers (PLCs), and programmable automation controllers
(PACs).
Relays, timers, and analog instruments: Despite the proliferation of
inexpensive smart relays and micro PLCs, many thousands of control systems are
built each year using these basic components. The big advantage of these systems
is simplicity, as even the dullest technician can understand and troubleshoot them.
No programming is required; thus there is no need for software licenses or PCs.
When the number of relays and timers exceeds four, it's best to upgrade to a smart
relay.

Smart relays: These have grown more capable over the years, blurring the line
between them and micro PLCs. Smart relays can be programmed with PC-based
software, but many can also be programmed from their front panel display. Ladder
logic or function block is the language of choice, and analog capabilities range
from slim to none.
PLCs: These workhorses run the range from micros with about 32 built-in
input/output (I/O) points to full-featured systems capable of handling thousands of
I/O. PLCs are programmed with PC-based software, and any changes to the
program require a PC. But, many parameters can be adjusted with a local operator
interface, which is built-in with a combination of PLC and human machine
interface (HMI) units, an emerging class of controllers combining a PLC with a
graphical interface (Table 1).
Table 1: Controller selection criteria

Characteristic                | Relay/timer     | Smart relay                         | PLC                                              | PAC
Maximum I/O                   | 10              | 20                                  | Up to 2,000                                      | Up to 100,000
Footprint                     | Largest         | Smallest                            | Depends on I/O quantity                          | Depends on I/O quantity
Local expansion capability    | n/a             | n/a                                 | Medium                                           | High
Remote expansion capability   | n/a             | n/a                                 | Medium                                           | High
Programming languages         | n/a             | Ladder, some function blocks        | Ladder and maybe other specialty function blocks | Multiple: ladder, structured text, function block, etc.
Programming software cost     | n/a             | Free to low                         | Free to medium                                   | Medium to high
Hardware cost                 | Lowest          | Low                                 | Medium                                           | High
Program memory                | n/a             | Low                                 | High                                             | Very high
Ease of use                   | Easiest         | Easy                                | Medium                                           | Difficult
Flexibility                   | Very low        | Low                                 | High                                             | High
Connectivity to other systems | Hard-wired only | One communication port and protocol | Multiple communication ports and protocols       | Multiple communication ports and protocols
PACs: Lines are once again blurred, but this time between high-end PLCs and
PACs. But PACs add more capabilities than PLCs, particularly for control of very
complex systems. PACs can handle advanced motion control, incorporate vision
systems, and perform advanced control of analog loops—a set of tasks that might
unduly burden a PLC.
Distributed control systems (DCS) were intentionally left out of this
discussion because most are now PAC-based. While exceptions abound, these
criteria are a good starting point for controller selection (Table 1). Table 2 shows
the current and anticipated future state of these markets.
Table 2: The current and anticipated future state of markets

7.5. PLC vs. PAC


While PLCs (programmable logic controllers) have been around for more
than 40 years, recent advances have greatly increased their capabilities, blurring
the line between a PLC and PAC (programmable automation controller). What
differences remain between these two categories? Is there a performance gap
between PLCs and PACs that users should keep in mind when choosing the best
solution for a particular application?
A brief bit of history can put the discussion in context. PLCs were created in
the late 1960s to replace relay-based systems. Conceptually they were similar and
used ladder logic that mimicked the appearance of wiring diagrams engineers used
to represent physical relays and timers, and the connections among them. Early
PLCs required dedicated proprietary terminals for programming, had very limited
memory, and lacked remote I/O.
By the 1980s, PC-based software was introduced for programming PLCs,
which had become faster and had added more features as years passed. Since then,
many new technologies have been applied to PLCs, greatly expanding their
capabilities on an almost continuous basis.
PACs are relatively new to the automation market, the term having been coined by the market research firm ARC in 2001. Since then, there has been no specific
agreement as to what differentiates a PAC from a PLC. Some users feel the term
PAC is simply marketing jargon to describe highly advanced PLCs, while others
believe there is a definite distinction between a PLC and a PAC. In any case,
defining exactly what constitutes a PAC isn’t as important as having users
understand the types of applications for which each is best suited.

7.5.1. Determining users’ needs


Most suppliers carry a wide range of PLCs and PACs, which can make it
difficult to choose the right product for a particular application.
Typically PLCs have been best suited for machine control, both simple and
high speed. Common characteristics of these PLCs are simple program execution
scans, limited memory, and a focus on discrete I/O with on/off control (fig. 7.13).
On the other hand, a PAC is geared more toward complex automation system
architectures composed of a number of PC-based software applications, including
HMI (human machine interface) functions, asset management, historian, advanced
process control (APC), and others. A PAC is also generally a better fit for
applications with extensive process control requirements, as PACs are better able
to handle analog I/O and related control functions. A PAC tends to provide greater
flexibility in programming, larger memory capacity, better interoperability, and
more features and functions in general.
As a result of having an architecture based on ladder logic and a focus on
discrete on-off control, expanding a PLC beyond its original capabilities – such as
adding extensive analog control capabilities – has often proved difficult. In older or
lower-end PLCs, separate hardware cards usually had to be added and programmed
to accomplish functions outside the PLC’s core focus. These functions included,
but weren’t limited to, networking multiple components, extensive process control,
and sophisticated data manipulation.
To answer the demand for more PLC functionality, manufacturers have added
features and capabilities. For example, older PLCs could only accommodate a
relatively small number of PID loops, typically about 16, while new PLCs can
handle thousands of such loops. Newer PLCs often feature multiple communication ports and greatly increased memory as compared to older models.

Figure 7.13 PLC’s application

On the other hand, PACs provide a more open architecture and modular
design to facilitate communication and interoperability with other devices,
networks, and enterprise systems. They can be easily used for communicating,
monitoring, and control across various networks and devices because they employ
standard protocols and network technologies such as Ethernet, OPC, and SQL.
PACs also offer a single platform that operates in multiple domains such as
motion, discrete, and process control. Moreover, the modular design of a PAC
simplifies system expansion and makes adding and removing sensors and other
devices easy, often eliminating the need to disconnect wiring. Their modular
design makes it easy to add and effectively monitor and control thousands of I/O
points, a task beyond the reach of most PLCs.
Another key differentiator between a PLC and a PAC is the tag-based
programming offered by a PAC. With a PAC, a single tag-name database can be
used for development, with one software package capable of programming
multiple models. Tags, or descriptive names, can be assigned to functions before
tying to specific I/O or memory addresses. This makes PAC programming highly
flexible, with easy scalability to larger systems.
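The difference between fixed memory addressing and tag-based programming can be suggested with a small C fragment: the first style hard-codes I/O locations in the logic, while the second refers to named tags that are bound to addresses in a single table. This is only an analogy written in C for the lecture, not vendor PAC software, and the tag and address names are invented.

/* Fixed addressing vs. tag naming, illustrated in plain C (an analogy only,
   not vendor PAC software). */
#include <stdio.h>
#include <string.h>

/* PLC style: logic written directly against fixed I/O addresses. */
#define I_0_1 0      /* digital input  0.1 */
#define O_2_3 1      /* digital output 2.3 */

/* PAC style: descriptive tags bound to physical addresses in one table, so the
   logic survives re-wiring or later expansion unchanged. */
typedef struct { const char *tag; int address; } tag_binding_t;

static const tag_binding_t tag_table[] = {
    { "TankHighLevel", 0 },   /* can be re-mapped without touching the logic */
    { "InletPump",     1 },
};

static int address_of(const char *tag) {
    for (size_t i = 0; i < sizeof tag_table / sizeof tag_table[0]; i++)
        if (strcmp(tag_table[i].tag, tag) == 0)
            return tag_table[i].address;
    return -1;                /* unknown tag */
}

int main(void) {
    printf("fixed  : input %d drives output %d\n", I_0_1, O_2_3);
    printf("tagged : TankHighLevel (addr %d) drives InletPump (addr %d)\n",
           address_of("TankHighLevel"), address_of("InletPump"));
    return 0;
}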
For simple applications, such as controlling a basic machine, a PLC is a better
choice than a PAC. Likewise, for most applications that consist primarily of
discrete I/O, a PLC is the best choice—unless there are other extraordinary
requirements such as extensive data handling and manipulation (fig. 7.14).

Figure 7.14 PAC’s application

If the application includes monitoring and control of a large number of analog I/O points, then a PAC is generally the better solution. This is also the case when
the application encompasses an entire plant or factory floor, a situation that
typically calls for distributed I/O in large numbers, along with extensive loop
control – functions better suited to a PAC than to a PLC.
The confusion arises when an application lies somewhere between simple and
complex, and in these circumstances a high-end PLC or a low-end PAC platform
will work. Ultimately, a choice between the two will be defined strictly by other
factors outside of specific application requirements. These factors include, but
aren’t limited to, past experience with each platform, price, the level of local
support, and anticipated future growth and changes.
Once a decision is made between a PLC or a PAC, users typically have a wide
range of products from which to choose, even if only a single vendor is being
considered. That’s because PLCs and PACs are typically designed in systems of
scale, meaning there is a family of controllers to choose from that range from
lower I/O count to larger system capacity, with correspondingly more features and
functions as I/O counts and prices increase.

7.5.2. Functional differences


The demarcation line between PLCs and PACs has become less clear, but
there are still some applications that clearly favor a PAC, due to its greater range of
features, functions, and capabilities (Table 3).
Table 3: PAC advantages over PLCs

Here are a few observations:


From a programming perspective, a PLC typically has a fixed memory map
and addressing. In contrast, a PAC allows tag naming, letting users define data
types as they program. This provides more flexibility, especially when expanding
the system.
While many high-level PLCs have excellent execution speeds, PACs typically
offer much greater I/O capacity and user memory size for larger projects and larger
overall system sizes. This often makes them a better choice for large systems
encompassing several areas of a plant.
While advanced PLCs have increased communication and data handling
options, PACs still offer more built-in features such as USB data logging ports, a
web server to view system data and data log files, and an LCD screen for enhanced
user interface and diagnostics.
PACs are designed to be integrated more tightly with SQL and other
databases. They often are still the choice for process control applications because
they deliver other advantages such as standard 16-bit resolution analog for higher
precision measurements.
Modern PLCs and PACs share many of the same features, and either will
work in many applications.
The final selection will typically be determined by dozens of factors for any
given application and company environment, including functional requirements,
future expansion plans, company/vendor relationships, and past experience with
specific automation platforms.

7.5.3. PLC & PAC model comparison


The main difference from other computers is that PLCs are armored for severe conditions (dust, moisture, heat, cold, etc.) and have the facility for extensive
input/output (I/O) arrangements. These connect the PLC to sensors and actuators.
PLCs read limit switches, analog process variables (such as temperature and
pressure), and the positions of complex positioning systems. Some even use
machine vision. On the actuator side, PLCs operate electric motors, pneumatic or
hydraulic cylinders, magnetic relays or solenoids, or analog outputs. The
input/output arrangements may be built into a simple PLC, or the PLC may have
external I/O modules attached to a computer network that plugs into the PLC.
PLCs were invented as replacements for automated systems that would use hundreds or thousands of relays and cam timers. Early PLCs expressed all decision making logic in simple ladder logic, which appeared similar to electrical schematic diagrams. The electricians were quite able to trace
out circuit problems with schematic diagrams using ladder logic. This program
notation was chosen to reduce training demands for the existing technicians. Other
early PLCs used a form of instruction list programming, based on a stack-based
logic solver.
The functionality of the PLC has evolved over the years to include sequential
relay control, motion control, process control, distributed control systems and
networking. The data handling, storage, processing power and communication
capabilities of some modern PLCs are approximately equivalent to desktop
computers. PLC-like programming combined with remote I/O hardware allows a general-purpose desktop computer to overlap some PLCs in certain applications.
Under the IEC 61131-3 standard, PLCs can be programmed using standards-
based programming languages. A graphical programming notation called
Sequential Function Charts is available on certain programmable controllers.
PLCs are well-adapted to a range of automation tasks. These are typically
industrial processes in manufacturing where the cost of developing and
maintaining the automation system is high relative to the total cost of the
automation, and where changes to the system would be expected during its
operational life. PLCs contain input and output devices compatible with industrial
pilot devices and controls; little electrical design is required, and the design
problem centers on expressing the desired sequence of operations in ladder logic
(or function chart) notation. PLC applications are typically highly customized
systems so the cost of a packaged PLC is low compared to the cost of a specific
custom-built controller design. On the other hand, in the case of mass-produced
goods, customized control systems are economic due to the lower cost of the
components, which can be optimally chosen instead of a “generic” solution, and
where the non-recurring engineering charges are spread over thousands of sales.
For high volume or very simple fixed automation tasks, different techniques
are used. For example, a consumer dishwasher would be controlled by an
electromechanical cam timer costing only a few dollars in production quantities.
A microcontroller-based design would be appropriate where hundreds or
thousands of units will be produced and so the development cost (design of power
supplies and input/output hardware) can be spread over many sales, and where the
end-user would not need to alter the control. Automotive applications are an
example; millions of units are built each year, and very few end-users alter the
programming of these controllers. However, some specialty vehicles such as transit
buses economically use PLCs instead of custom-designed controls, because the
volumes are low and the development cost would be uneconomic.
Very complex process control, such as used in the chemical industry, may
require algorithms and performance beyond the capability of even high-
performance PLCs. Very high-speed or precision controls may also require
customized solutions; for example, aircraft flight controls.

PLCs may include logic for a single-variable feedback analog control loop, a
“proportional, integral, derivative” or “PID controller.” A PID loop could be used
to control the temperature of a manufacturing process, for example. Historically
PLCs were usually configured with only a few analog control loops; where
processes required hundreds or thousands of loops, a distributed control system
(DCS) would instead be used. However, as PLCs have become more powerful, the
boundary between DCS and PLC applications has become less clear-cut.
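A minimal discrete PID calculation of the kind such an analog loop would run once per scan is sketched below; the gains, the 1 s sample time and the toy "process" model are arbitrary example values chosen for the illustration.

/* Textbook discrete PID calculation, as a PLC analog loop might run it each scan.
   Gains, sample time and the toy "process" are arbitrary example values. */
#include <stdio.h>

typedef struct {
    double kp, ki, kd;     /* proportional, integral and derivative gains */
    double integral;       /* running sum of the error                    */
    double prev_error;     /* error from the previous scan                */
} pid_ctrl_t;

static double pid_step(pid_ctrl_t *c, double setpoint, double measured, double dt) {
    double error = setpoint - measured;
    c->integral += error * dt;
    double derivative = (error - c->prev_error) / dt;
    c->prev_error = error;
    return c->kp * error + c->ki * c->integral + c->kd * derivative;
}

int main(void) {
    pid_ctrl_t loop = { .kp = 2.0, .ki = 0.5, .kd = 0.1 };   /* example gains */
    double temperature = 20.0, setpoint = 80.0, dt = 1.0;    /* 1 s scan time */

    for (int scan = 0; scan < 5; scan++) {
        double output = pid_step(&loop, setpoint, temperature, dt);
        temperature += 0.05 * output;   /* crude stand-in for the heated process */
        printf("scan %d: output %6.2f, temperature %6.2f\n", scan, output, temperature);
    }
    return 0;
}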
When comparing specific PLC and PAC model lines, the following criteria are typically considered:
– Hardware price: the cost of a bare-bones system, i.e. the cheapest CPU in the line coupled with the cheapest backplane (if the line uses one) and, where I/O is not included with the CPU, the cheapest I/O module available.
– Ethernet: does it exist, and is it standard or optional? Is it treated by the manufacturer as an integral part of the controller or as an afterthought? Ethernet often enables inexpensive, simple ways to link PLCs together.
– USB: does it exist, and is it standard or optional? Is it treated by the manufacturer as an integral part or as an afterthought? USB is often used to simplify programming and to provide additional useful features.
– Analog, thermocouple and motion control: these criteria help to decide whether the hardware provides the options needed for a project. The analog and thermocouple capabilities are straightforward to assess; motion control claims (e.g. "built-in high-speed counters") vary between manufacturers and should be checked against the references.
– Software price: the cost of basic programming software for a single user, including the price of programming cables where a manufacturer gives the software away but charges heavily for the cables. Price does not correlate well with quality, so free trials should be used before committing to a product line: hardware is set up only occasionally, whereas the software is worked with for many days.
– Tag-based addressing: are addresses referred to by user-defined names (tags), or by their locations in memory (e.g. i1, i2, o13)? Tags help make programs easier to understand.
– Subroutines: they help break programs into manageable pieces and enable re-usability of code; ideally, subroutines should be able to call other subroutines, and values should be passable both by value and by reference.
– Seamless data transfer between PLCs: when PLCs in different locations need to exchange values, is the setup required to handle this straightforward?

7.6. REMOTE TERMINAL UNIT
A remote terminal unit (RTU) is a microprocessor-controlled electronic
device that interfaces objects in the physical world to a distributed control system
or SCADA (supervisory control and data acquisition) system by transmitting
telemetry data to a master system, and by using messages from the master
supervisory system to control connected objects. Other terms that may be used for an RTU are remote telemetry unit or remote telecontrol unit.

7.6.1. Architecture
An RTU monitors the field digital and analog parameters and transmits data
to the Central Monitoring Station. It contains setup software to connect data input
streams to data output streams, define communication protocols, and troubleshoot
installation problems.
An RTU may consist of one complex circuit card consisting of various
sections needed to do a custom fitted function or may consist of many circuit cards
including CPU or processing with communications interface(s), and one or more of
the following: (AI) analog input, (DI) digital input, (DO/CO) digital or control
(relay) output, or (AO) analog output card(s).

7.6.2. Central Processing Unit (CPU)


Current RTU designs utilize a 16-bit or 32-bit microprocessor with a total memory capacity of 256 Kbytes, expandable to 4 Mbytes. An RTU also has two or three communication ports (RS-232, RS-422 and RS-485) or an Ethernet link. The system is controlled by firmware, and a real-time clock with a full calendar is used for accurate time stamping of events. A watchdog timer provides a check that the RTU program is executing regularly. The RTU program regularly resets the watchdog timer; if this is not done within a certain time-out period, the watchdog timer flags an error condition and can sometimes reset the CPU.
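The watchdog idea can be sketched in software as follows; this is only an illustrative model (the timeout value and class name are invented for the example), since in a real RTU the watchdog is implemented in firmware or hardware.

    import time

    # Software model of a watchdog timer (illustrative sketch only).
    class Watchdog:
        def __init__(self, timeout_s=1.0):
            self.timeout_s = timeout_s
            self.last_kick = time.monotonic()

        def kick(self):
            # The RTU program calls this regularly to reset the timer.
            self.last_kick = time.monotonic()

        def expired(self):
            # True if the program failed to reset the timer in time;
            # a real watchdog would flag an error or reset the CPU here.
            return time.monotonic() - self.last_kick > self.timeout_s

    wd = Watchdog(timeout_s=1.0)
    wd.kick()              # normal program cycle
    print(wd.expired())    # False while the program keeps running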

Figure. 7.15 RTU Hardware Structure

7.6.3. Power supply


RTUs need a continuous power supply to function, but there are situations
where RTUs are located at quite a distance from an electric power supply. In these
cases, RTUs are equipped with an alternate power source and battery backup facilities in case of power losses. Solar panels are commonly used to power low-powered RTUs, due to the general availability of sunlight. Thermoelectric generators can also be used to supply power to RTUs where gas is readily available, as in pipelines.
Digital or status inputs
Most RTUs incorporate an input section or input status cards to acquire two
state real world information. This is usually accomplished by using an isolated
voltage or current source to sense the position of a remote contact (open or closed)
at the RTU site. This contact position may represent many different devices,

including electrical breakers, liquid valve positions, alarm conditions, and
mechanical positions of devices.
Analog inputs
An analog input signal is generally a voltage or current that varies over a
defined value range, in direct proportion to a physical process measurement. 4-20
milliamp signals are most commonly used to represent physical measurements like
pressure, flow and temperature. The five main components that make up an analog input module are as follows:
– Input multiplexer: this samples several analog inputs in turn and switches each to the output in sequence. The output goes to the analog-to-digital converter.
– Input signal amplifier: this amplifies the low-level voltages to match the input range of the board’s A/D converter.
– Sample-and-hold circuit: this holds each sampled voltage steady while it is being converted.
– A/D converter: this measures the input analog voltage and outputs a digital code corresponding to the input voltage.
– Bus interface and board timing system.
Typical analog input module features include:
– 8, 16, or 32 analog inputs
– Resolution of 8 to 12 bits
– Range of 4–20 mA
– Input resistance typically 240 kΩ to 1 MΩ
– Conversion rates typically 10 microseconds to 30 milliseconds.
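For example, converting a 4-20 mA reading into an engineering value after A/D conversion is a simple linear scaling; the sketch below assumes a hypothetical 0-10 bar transmitter range and an invented function name.

    # Convert a 4-20 mA loop current into an engineering value (sketch).
    def scale_4_20ma(current_ma, low_eng=0.0, high_eng=10.0):
        """Linear scaling: 4 mA -> low_eng, 20 mA -> high_eng (e.g. bar)."""
        if not 4.0 <= current_ma <= 20.0:
            raise ValueError("signal outside 4-20 mA range (possible wire break)")
        return low_eng + (current_ma - 4.0) * (high_eng - low_eng) / 16.0

    print(scale_4_20ma(12.0))   # 12 mA is mid-range -> 5.0 bar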

7.6.4. Digital (control) outputs


RTUs may use a digital output (or "DO") board to drive high-current-capacity relays that switch power on and off to devices in the field. The DO board switches voltage to the coil in the relay, which closes the high-current contacts and completes the power circuit to the device.
RTU outputs may also drive a sensitive logic input on an electronic PLC or on another electronic device using a sensitive 5 V input.
Analog outputs
The function of analog output modules is to convert a digital value supplied by the CPU into an analog value by means of a digital-to-analog converter. This analog representation can be used for variable control of actuators.
Analog output module features are as follows:
– 8, 16 or 32 analog outputs
– Resolution of 8 or 12 bits
– Conversion rate from 10 microseconds to 30 milliseconds
– Output ranges of 4–20 mA or 0 to 10 volts

7.6.5. Software and logic control


Modern RTUs are usually capable of executing simple programs
autonomously without involving the host computers of the DCS or SCADA system
to simplify deployment and to provide redundancy for safety reasons. An RTU in a
modern water management system will typically have code to modify its behavior
when physical override switches on the RTU are toggled during maintenance by
maintenance personnel. This is done for safety reasons; a miscommunication
between the system operators and the maintenance personnel could cause system
operators to mistakenly enable power to a water pump when it is being replaced,
for example.

7.6.6. Communications
An RTU may be interfaced to multiple master stations and IEDs (Intelligent Electronic Devices) over different communication media (usually serial (RS-232, RS-485, RS-422) or Ethernet). An RTU may support standard protocols (Modbus, IEC 60870-5-101/103/104, DNP3, IEC 60870-6-ICCP, IEC 61850, etc.) to interface with any third-party software.
Data transfer may be initiated from either end using various techniques to ensure synchronization with minimal data traffic. The master may poll its subordinate unit (master to RTU, or the RTU polls an IED) for changes of data on a periodic basis. Analog value changes will usually be reported only when they move outside a set limit from the last transmitted value. Digital (status) values follow a similar technique and transmit groups (bytes) only when one included point (bit) changes. Another method is for the subordinate unit to initiate an update of data upon a predetermined change in analog or digital data. With either method, a complete data transmission must still be performed periodically to ensure full synchronization and eliminate stale data. Most communication protocols support both methods, programmable by the installer.
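The reporting deadband described above can be sketched as follows; the 2-unit limit and the function name are illustrative assumptions only.

    # Report-by-exception with a deadband (illustrative sketch).
    def should_report(new_value, last_reported, deadband=2.0):
        """Report only if the value moved more than 'deadband' units
        away from the last value actually transmitted to the master."""
        return abs(new_value - last_reported) > deadband

    last_reported = 50.0
    for sample in (50.5, 51.9, 53.1):          # incoming analog samples
        if should_report(sample, last_reported):
            last_reported = sample             # transmit and remember it
            print("report", sample)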
Multiple RTUs or multiple IEDs may share a communications line, in a multi-
drop scheme, as units are addressed uniquely and only respond to their own polls
and commands.

IED communications
IED communications transfer data between the RTU and an IED. This can
eliminate the need for many hardware status inputs, analog inputs, and relay
outputs in the RTU. Communications are accomplished over copper or fiber optic lines. Multiple units may share communication lines.
Master communications
Master communications are usually to a larger control system in a control
room or a data collection system incorporated into a larger system. Data may be
moved using a copper, fiber optic or radio frequency communication system.
Multiple units may share communication lines.

7.6.7. Comparison with other control systems


RTUs differ from programmable logic controllers (PLCs) in that RTUs are
more suitable for wide geographical telemetry, often using wireless
communications, while PLCs are more suitable for local area control (plants,
production lines, etc.) where the system utilizes physical media for control. The
IEC 61131 programming tool is more popular for use with PLCs, while RTUs
often use proprietary programming tools.
RTUs, PLCs and DCS are increasingly beginning to overlap in
responsibilities, and many vendors sell RTUs with PLC-like features and vice
versa. The industry has standardized on the IEC 61131-3 functional block language
for creating programs to run on RTUs and PLCs, although nearly all vendors also
offer proprietary alternatives and associated development environments.
In addition, some vendors now supply RTUs with comprehensive
functionality pre-defined, sometimes with PLC extensions and/or interfaces for
configuration.
Some suppliers of RTUs have created simple graphical user interfaces (GUIs) to enable customers to configure their RTUs easily. Dataloggers are sometimes used in similar applications.
A programmable automation controller (PAC) is a compact controller that
combines the features and capabilities of a PC-based control system with that of a
typical PLC. PACs are deployed in SCADA systems to provide RTU and PLC
functions. In many electrical substation SCADA applications, "distributed RTUs"
use information processors or station computers to communicate with digital
protective relays, PACs, and other devices for I/O, and communicate with the
SCADA master in lieu of a traditional RTU.

7.6.8. Applications
Remote monitoring of functions and instrumentation for:
1. Oil and gas (offshore platforms, onshore oil wells)
2. Networks of pump stations (waste water collection, or for water supply)
3. Environmental monitoring systems (pollution, air quality, emissions monitoring)
4. Mine sites
5. Air traffic equipment such as navigation aids (DVOR, DME, ILS, GP)
Remote monitoring and control of functions and instrumentation for:
1. Hydrographic systems (water supply, reservoirs, sewage systems)
2. Electrical power transmission networks and associated equipment
3. Natural gas networks and associated equipment
4. Outdoor warning sirens

7.6.9. RTU manufacturers


There are various manufacturers of remote terminal units for various functions and industries. A list of some RTU manufacturers and their products is presented in Table 4:
1. Vmonitor iX-S8 Wireless RTU: an intelligent remote terminal unit with wireless technology and low power consumption, providing a reliable and cost-effective means of remotely monitoring and automating applications in oil and gas fields.
2. ControlWave® Micro Hybrid RTU/PLC: a highly programmable controller that combines the capabilities of a programmable logic controller (PLC) and a remote terminal unit (RTU) in a single hybrid controller.
3. Zetron Model 1732 RTU: a cost-effective solution for applications that need to connect widely distributed remote sites to a central control program using radio, telephone and wire line communications media.
4. Brodersen RTU32: the Brodersen RTU32 RTU, PLC and controller series, based on a 32-bit platform, provides an RTU/PLC with power and leading-edge functionality.
5. Siemens Vicos RTU: a telecontrol system based on the standard SIMATIC S7 programmable logic controller.

6. Oleumtech Wireless RTU/Modbus Gateway: Wio wireless RTU
products are low cost remote terminal units that combine traditional remote IO
functionality of a standard
Table 4: Manufacturers of Remote Terminal Unit

8. INDUSTRIAL DATA COMMUNICATIONS


Data communication involves the transfer of information from one point to
another. Many communication systems handle analog data; examples are telephone
systems, radio and television. Modern instrumentation is almost wholly concerned
with the transfer of digital data. Any communications system requires a transmitter
to send information, a receiver to accept it, and a link between the two. Types of
link include copper wire, optical fiber, radio and microwave. Digital data is
sometimes transferred using a system that is primarily designed for analog
communication. A modem, for example, works by using a digital data stream to
modulate an analog signal that is sent over a telephone line. Another modem
demodulates the signal to reproduce the original digital data at the receiving end.
The word 'modem' is derived from modulator and demodulator. There must be
mutual agreement on how data is to be encoded, i.e. the receiver must be able to
understand what the transmitter is sending. The set of rules by which devices communicate is known as a protocol. The standard that has created an enormous amount of interest in the past few years is Ethernet. Another protocol, which fits onto Ethernet extremely well, is TCP/IP; being derived from the Internet, it is very popular and widely used.

8.1. Open Systems Interconnection (OSI) model


The OSI model, developed by the International Organization for
Standardization, has gained widespread industry support. The OSI model reduces
every design and communication problem into a number of layers as shown in
Figure 8.1. A physical interface standard such as RS-232 would fit into layer 1, while the other layers relate to the protocol software.

Figure 8.1 OSI model representation: two hosts interconnected via a router

The OSI model is useful in providing a universal framework for all communication systems. However, it does not define the actual protocol to be used at each layer. It is anticipated that groups of manufacturers in different areas of industry will collaborate to define software and hardware standards appropriate to their particular industry. Those seeking an overall framework for their specific communications requirements have enthusiastically embraced this OSI model and used it as a basis for their industry-specific standards.
As previously mentioned, the OSI model provides a framework within which a specific protocol may be defined. A protocol, in turn, defines a frame format that might be made up of various fields as follows (fig. 8.2).

Figure 8. 2 Basic structure of an information frame

8.2. RS-232 interface standard


The RS-232 interface standard (officially called TIA-232) defines the
electrical and mechanical details of the interface between Data Terminal
Equipment (DTE) and Data Communications Equipment (DCE), which employ
serial binary data interchange. The current version of the standard refers to DCE as Data Circuit-terminating Equipment. Figure 8.3 illustrates the signal flows across
a simple serial data communications link.

Figure 8. 3 A typical serial data communications link

The RS-232 standard consists of three major parts, which define:
– Electrical signal characteristics
– Mechanical characteristics of the interface
– Functional description of the interchange circuits
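In software, a DTE application typically opens the serial port with the agreed electrical settings before exchanging bytes. The sketch below assumes the third-party pySerial package; the port name, framing parameters and the request string are hypothetical.

    import serial  # third-party pySerial package (assumed installed)

    # Open an RS-232 port: 9600 baud, 8 data bits, no parity, 1 stop bit (8N1).
    port = serial.Serial("/dev/ttyUSB0", baudrate=9600,
                         bytesize=serial.EIGHTBITS,
                         parity=serial.PARITY_NONE,
                         stopbits=serial.STOPBITS_ONE,
                         timeout=1.0)
    port.write(b"*IDN?\r\n")      # send a request to the attached device
    reply = port.read(64)         # read up to 64 bytes (or until timeout)
    print(reply)
    port.close()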

8.2.1. Half-duplex operation of RS-232


The following description of one particular mode of operation of the RS-232
interface is based on half-duplex data interchange. The description encompasses
the more generally used full-duplex operation.
Figure 8. 4 shows the operation with the initiating user terminal, DTE, and its
associated modem, DCE, on the left of the diagram and the remote computer and
its modem on the right. Full-duplex operation requires that transmission and
reception must be able to occur simultaneously. In this case, there is no RTS/CTS
interaction at either end. The RTS and CTS lines are left ON with a carrier to the
remote computer.

Figure 8. 4 Half- duplex operational sequence of RS-232

8.3. Fiber Optics


Fiber optic communication uses light signals guided through a fiber core.
Fiber optic cables act as waveguides for light, with all the energy guided through
the central core of the cable. The light is guided due to the presence of a lower
refractive index cladding around the central core. Little of the energy in the signal
is able to escape into the cladding and no energy can enter the core from any
external sources. Therefore the transmissions are not subject to any
electromagnetic interference. The core and the cladding will trap the light ray in
the core, provided the light ray enters the core at an angle greater than the ‘critical
angle’. The light ray will then travel through the core of the fiber, with minimal loss in power, by a series of total internal reflections. Figure 8.5 illustrates this process.

Figure 8. 5 Light ray traveling through an optical fiber

8.3.1. Applications for fiber optic cables


Fiber optic cables offer the following advantages over other types of transmission media:
– Light signals are impervious to interference from EMI or electrical crosstalk
– Light signals do not interfere with other signals
– Optical fibers have a much wider, flatter bandwidth than coaxial cables, and equalization of the signals is not required
– The fiber has a much lower attenuation, so signals can be transmitted much further than with coaxial or twisted pair cable before amplification is necessary
– Optical fiber cables do not conduct electricity and so eliminate problems of ground loops, lightning damage and electrical shock
– Fiber optic cables are generally much thinner and lighter than copper cables
– Fiber optic cables have greater data security than copper cables

8.3.2. Fiber optic cable components


The major components of a fiber optic cable are the core, cladding, coating
(buffer), as shown in Figure 8. 6. Some types of fiber optic cable even include a
conductive copper wire that can be used to provide power to a repeater.

Figure 8. 6 Fiber optic cable components

The fiber components include:
1. Fiber core
2. Cladding
3. Coating (buffer)
4. Strength members
5. Cable sheath
There are four broad application areas into which fiber optic cables can be
classified: aerial cable, underground cable, sub-aqueous cable and indoor cable.

8.4. Modbus

8.4.1. Modbus protocol


The Modbus Messaging protocol is an Application layer (OSI layer 7) protocol that provides client/server communication between devices connected to different types of buses or networks, as shown in Figure 8.7. The Modbus Messaging protocol is only a protocol and does not imply any specific hardware implementation. Note also that the Modbus Messaging protocol used with Modbus Serial is the same one used with Modbus Plus and Modbus TCP.
Modbus messaging is based on a client/server model and employs the
following messages:
1. Modbus requests, i.e. the messages sent on the network by the
clients to initiate transactions. These serve as indications of the
requested services on the server side
2. Modbus responses, i.e. the response messages sent by the
servers. These serve as confirmations on the client side

Figure 8.7 Modbus transaction

The interaction between client and server (controller and target device) can be depicted as follows. The parameters exchanged by the client and server consist of the Function Code (‘what to do’), the Data Request (‘with which input or output’) and the Data Response (‘result’).
The Application Data Unit (ADU) structure of the Modbus protocol is shown
in the Figure 8.8

Figure 8.8 Modbus serial ADU format

Modbus functions can be divided into four groups or ‘Conformance Classes’.


The Function Codes are normally expressed in decimal; the hexadecimal
equivalents are shown in brackets.
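To make the ADU structure concrete, the sketch below assembles a Modbus RTU ‘Read Holding Registers’ (function code 03) request by hand: slave address, PDU, and the CRC-16 appended low byte first. The slave address and register values are arbitrary example numbers.

    import struct

    def crc16_modbus(frame: bytes) -> int:
        """CRC-16 as used by Modbus RTU (polynomial 0xA001, initial value 0xFFFF)."""
        crc = 0xFFFF
        for byte in frame:
            crc ^= byte
            for _ in range(8):
                crc = (crc >> 1) ^ 0xA001 if crc & 1 else crc >> 1
        return crc

    def read_holding_registers(slave=1, start=0, count=2) -> bytes:
        # PDU: function code 0x03 + start address + register count (big-endian).
        pdu = struct.pack(">BHH", 0x03, start, count)
        adu = bytes([slave]) + pdu
        crc = crc16_modbus(adu)
        return adu + struct.pack("<H", crc)   # CRC appended low byte first

    print(read_holding_registers(slave=1, start=0, count=2).hex())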

8.4.2. Modbus Plus


Whereas Modbus (or, to be more exact, the Modbus Messaging protocol) is just a protocol, Modbus Plus is a complete system with a predefined medium and Physical layer (OSI layer 1) implementation. It is a LAN system for industrial
control applications, allowing networked devices to exchange messages for the
control and monitoring of processes at remote locations in the industrial plant.
Modbus Plus uses a token-passing medium access control mechanism, which
results in deterministic operation, albeit not necessarily fast under all conditions.
The Modbus Plus layer 7 messaging protocol is essentially the same as that
used for Modbus Serial and Modbus/TCP. The Physical layer is implemented with
RS485 and functions over shielded twisted pair cable.
The Data Link layer (layer 2) protocol is based on the ISO/IEC 3309:1991 HDLC (High-level Data Link Control) multi-drop protocol, which uses a token-passing medium access control mechanism and transmits data in a synchronous fashion, as opposed to the asynchronous transmission of Modbus Serial. This results in transmission of data at 1 Mbps (fig. 8.10).

Figure 8.10 Modbus Plus protocol stack
Unlike Modbus, Modbus Plus is a proprietary standard developed to
overcome the ‘single-master’ limitation prevalent in Modbus Serial.

8.5. Data Highway Plus /DH485


There are three main configurations used in Allen Bradley data communications.
Data Highway: this is a Local Area Network (LAN) that allows peer-to-peer communications amongst up to 64 nodes. It uses a half-duplex (polled) protocol and rotation of link mastership, and operates at 57.6 kbaud.
Data Highway Plus: this is similar to the Data Highway network, although it is designed for fewer nodes and operates at a data rate of 57.6 kbaud. It has peer-to-peer communications with a token-passing scheme to rotate link mastership among the nodes.
Note that both systems implement peer-to-peer communications through a modified token-passing system called the ‘floating master’. This is a fairly efficient mechanism, as each node has an opportunity to become a master, at which time it can immediately transmit without risking contention on the bus. Both systems use a differential signaling system similar to RS-485.
The Allen Bradley Data Highway Plus implements three layers of the OSI model, viz.:
– Physical layer hardware
– Data Link layer protocol
– Application layer protocol
Data Highway-485: This is used by the SLC range of Allen Bradley
controllers and is based on RS-485.

8.6. HART
The HART system (and its associated protocol) was originally developed by
Rosemount and is regarded as an open standard, available to all manufacturers. Its
main advantage is that it enables the retention of the existing 4-20mA
instrumentation cabling whilst using, simultaneously, the same wires to carry
digital information superimposed on the analog signal. HART is a hybrid analog
and digital system, as opposed to most field bus systems, that are purely digital. It
uses a Frequency Shift Keying (FSK) technique based on the Bell 202 standard.
Two individual frequencies of 1200 and 2200 Hz, representing digits ‘1’ and ‘0’ respectively, are used. The average value of the 1200/2200 Hz sine wave superimposed on the 4-20 mA signal is zero; hence, the 4-20 mA analog information is not affected (fig. 8.11).
HART can be used in three ways:
– In conjunction with the 4-20mA current signal in point-to-point mode
– In conjunction with other field devices in multi-drop mode
– In point-to-point mode with only one field device broadcasting in
burst mode

Figure 8. 11 Frequency allocation of HART signaling system

Traditional point-to-point loops use zero for the smart device polling address.
Setting the smart device polling address to a number greater than zero implies a
multi-drop loop. Obviously the 4-20mA concept only applies to a loop with a
single transducer; hence for a multi-drop configuration the smart device sets its
analog output to a constant 4mA and communicates only digitally.
The HART protocol has two formats for digital transmission of data, viz:
– Poll/response mode
– Burst (broadcast) mode
In the poll/response mode, the master polls each of the smart devices on the
highway and requests the relevant information. In burst mode the field device
continuously transmits process data without the need for the master to send request
messages. Although this mode is fairly fast (up to 3.7 times/second), it cannot be
used in multidrop networks. The protocol is implemented with the OSI model
using layers 1, 2 and 7.

8.7. AS-i
Actuator Sensor-interface is an open system network developed by eleven
manufacturers. AS-i is a bit-oriented communication link designed to connect

binary sensors and actuators. Most of these devices do not require multiple bytes to
adequately convey the necessary information about the device status, so the AS-i
communication interface is designed for bit-oriented messages in order to increase
message efficiency for these types of devices. It was not developed to connect
intelligent controllers together since this would be far beyond the limited capability
of such small message streams. Modular components form the central design of
AS-i.
Connection to the network is made with unique connecting modules that
require minimal, or in some cases no tools to provide for rapid, positive device
attachment to the AS-i flat cable. Provision is made in the communications system
to make 'live' connections, permitting the removal or addition of nodes with
minimum network interruption. Connection to higher level networks (e.g.
ProfiBus) is made possible through plug-in PC and PLC cards or serial interface
converter modules.

8.8. DeviceNet
DeviceNet, developed by Allen Bradley, is a low-level device oriented
network based on CAN (Controller Area Network) developed by Bosch (GmbH)
for the automobile industry. It is designed to interconnect lower level devices
(sensors and actuators) with higher level devices (controllers).
DeviceNet is classified as a field bus, per specification IEC-62026. The
variable, multi-byte format of the CAN message frame is well suited to this task as
more information can be communicated per message than with bit-type systems.
The DeviceNet specification is an open specification and available through the
ODVA. DeviceNet can support up to 64 nodes, which can be removed individually
under power and without severing the trunk line.
A single, four-conductor cable (round or flat) provides both power and data
communications. It supports a bus (trunk line drop line) topology, with branching
allowed on the drops. Reverse wiring protection is built into all nodes, protecting
them against damage in the case of inadvertent wiring errors. The data rates
supported are 125, 250 and 500K baud (i.e. bits per second in this case).
Figure 8.12 illustrates the positioning of DeviceNet and CANBUS within
the OSI model. CANBUS represents the bottom two layers in the lower middle
column, just below DeviceNet Transport.

Figure 8. 12 Devicenet (as well as ControlNet and Ethernet/IP) vs. the OSI model

Unlike most other field buses, DeviceNet does implement layers 3 and 4,
which makes it a routable system. There are two other products in the same family: ControlNet and Ethernet/IP. They share the same upper-layer protocols (implemented by CIP, the Control and Information Protocol) and differ only in the lower four layers.

8.9. Profibus
ProfiBus (PROcess FIeld BUS) is a widely accepted international networking
standard, commonly found in process control and in large assembly and material
handling machines. It supports single-cable wiring of multi-input sensor blocks,
pneumatic valves, complex intelligent devices, smaller sub-networks (such as
ASi), and operator interfaces.
It is an open, vendor independent standard. It adheres to the OSI model,
ensuring that devices from a variety of different vendors can communicate easily
and effectively. It has been standardized under the German National standard as
DIN 19 245 Parts 1 and 2 and, in addition, has also been ratified under the
European national standard EN 50170 Volume 2.
The bus interfacing hardware is implemented on ASIC (Application Specific Integrated Circuit) chips produced by multiple vendors, and is based on RS-485 as well as the European EN 50170 electrical specification. ProfiBus uses 9-pin D-
type connectors (impedance terminated) or 12mm round (M12-style) quick-
disconnect connectors. The number of nodes is limited to 127.

The distance supported is up to 24km (with repeaters and fiber optic
transmission), with speeds varying from 9600bps to 12Mbps. The message size
can be up to 244 bytes of data per node per message (12 bytes of overhead for a
maximum message length of 256 bytes), while the medium access control
mechanisms are polling and token passing.
ProfiBus supports two main types of devices, namely masters and slaves.
Master devices control the bus; when they have the right to access the bus, they may transfer messages without any remote request. These are referred to as active stations.
Slave devices are typically peripheral devices, i.e. transmitters/sensors and actuators. They may only acknowledge received messages or, at the request of a master, transmit messages to that master. These are also referred to as passive stations.

8.10. Foundation Fieldbus


Foundation Fieldbus allows end-user benefits such as:
– Reduced wiring
– Communications of multiple process variables from a single
instrument
– Advanced diagnostics
– Interoperability between devices of different manufacturers
– Enhanced field level control
– Reduced start-up time
– Simpler integration.
The concept behind Foundation Fieldbus is to preserve the desirable features
of the present 4-20mA standard while taking advantage of the new digital
technologies. This provides the features noted above because of:
– Reduced wiring due to the multi-drop capability
– Flexibility of supplier choices due to interoperability
– Reduced control room equipment due to distribution of control
functions to the device level
– Increased data integrity and reliability due to the application of digital
communications.
Foundation Fieldbus implements four OSI layers. Three of them correspond
to OSI layers 1, 2 and 7. The fourth is the so-called ‘user layer’ that sits on top of layer 7 and is often said to represent OSI ‘layer 8’. The user layer provides a standardized interface between the application software and the actual field devices.

8.11. Industrial Ethernet


Early Ethernet systems (of the 10 Mbps variety) use the CSMA/CD access
method. This gives a system that operates with little delay if lightly loaded, but
becomes very slow if heavily loaded. Ethernet network interface cards are
relatively cheap and produced in vast quantities. Ethernet has, in fact, become the
most widely used networking standard. However, CSMA/CD is a probabilistic medium access mechanism: there is no guarantee of message transfer, and messages cannot be prioritized.
Modern Ethernet systems are a far cry from the original design. From
100BaseT onwards they are capable of full duplex (sending and receiving at the
same time via switches, without collisions) and the Ethernet frame can be modified
to make provision for prioritization and virtual LANs. Early Ethernet was not
entirely suitable for control functions as it was primarily developed for office-type
environments.
Ethernet technology has, however, made rapid advances over the past few
years. It has gained such widespread acceptance in Industry that it is becoming the
de facto field bus technology for OSI layers 1 and 2. An indication of this trend is
the inclusion of Ethernet as the level 1 and 2 infrastructure for Modbus/TCP
(Schneider), Ethernet/IP (Rockwell Automation and ODVA), ProfiNet (Profibus)
and Foundation Fieldbus HSE.
10 Mbps Ethernet
The IEEE 802.3 standard (also known as ISO 8802.3) defines a range of
media types that can be used for a network based on this standard such as coaxial
cable, twisted pair cable and fiber optic cable.
It supports various cable media and transmission rates at 10 Mbps, such as:
– 10Base2 : thin wire coaxial cable (RG-58), 10 Mbps baseband
operation, bus topology
– 10Base5 : thick wire coaxial cable (RG-8), 10 Mbps baseband
operation, bus topology
– 10BaseT : UTP cable (Cat3), 10 Mbps baseband operation, star
topology
– 10BaseFL : optical fiber, 10 Mbps baseband operation, point-to-point topology

Other variations included 1Base5, 10BaseFB, 10BaseFP and 10Broad36, but
these versions never became commercially viable.

8.11.1. 100 Mbps Ethernet


100BaseT is the shorthand identifier for 100 Mbps Ethernet systems, viz.
100BaseTX (copper) and 100BaseFX (fiber). 100BaseT4 was designed to operate
at 100 Mbps over 4 pairs of Cat3 cable, but this option never gained widespread
acceptance. Yet another version, 100BaseT2, was supposed to operate over just 2
pairs of Cat3 cable but was never implemented by any vendor. One of the
limitations of hub-based (CSMA/CD) 100BaseT systems is the size of the collision
domain, which is only 250 meters or 5.12 microseconds.
This is the maximum size of a network segment in which collisions can be
detected, being one tenth of the maximum size of a 10 Mbps network. This
effectively limits the distance between a workstation and hub to 100 m, the same
as for 10BaseT. As a result, networks larger than 200 meters must be logically
interconnected by store-and-forward devices such as bridges, routers or switches.
This is not a bad thing, since it segregates the traffic within each collision
domain, reducing the number of collisions on the network. The use of bridges and
routers for traffic segregation, in this manner, is often done on industrial Ethernet
networks. Of course, the use of switches instead of hubs allows the construction of
very large networks because of the full duplex operation. The format of the frame
has been left unchanged. The only difference is that it is transmitted 10 times faster
than in 10 Mbps Ethernet, hence its length (in time) is 10 times less.

8.11.2. Gigabit Ethernet


1000BaseX is the shorthand identifier for the Gigabit Ethernet system based
on the 8B/10B block encoding scheme adapted from the fiber channel networking
standard, developed by ANSI.
1000BaseX includes 1000BaseSX, 1000BaseLX and 1000BaseCX.
1000BaseSX is the short wavelength fiber version
1000BaseLX is the long wavelength fiber version
1000BaseCX is a short copper cable version, based on the fiber channel
standard.
1000BaseT, on the other hand, is a 1000 Mbps version capable of operating
over Cat5 (or better, such as Cat5e) UTP, and has largely replaced 1000BaseCX.

1000BaseT is based on a different encoding scheme. As with Fast Ethernet,
Gigabit Ethernet supports full duplex and auto-negotiation.
It uses the same frame format as 10 Mbps and 100 Mbps Ethernet systems,
and operates at ten times the clock speed of Fast Ethernet, i.e. at 1Gbps. By
retaining the same frame format as the earlier versions of Ethernet, backward
compatibility is assured. Despite the similar frame format, the system had to
undergo a small change to enable it to function effectively at 1Gbps in CSMA/CD
mode.
The slot time of 64 bytes used with both 10 Mbps and 100 Mbps systems had
to be increased by a factor of 8, to 512 bytes. This is equivalent to 4.096 μs.
Without this increased slot time the collision domain would have been
impracticably small at 25 meters. The irony is that in practice all Gigabit Ethernet
systems are full duplex, and do not need this large slot time.
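The 4.096 μs figure follows directly from the enlarged slot size and the 1 Gbps bit rate, as the short calculation below shows.

    # Gigabit Ethernet half-duplex slot time: 512 bytes at 10**9 bit/s.
    slot_bits = 512 * 8            # 4096 bits
    print(slot_bits / 1e9)         # 4.096e-06 s, i.e. 4.096 microseconds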

8.11.3. TCP/IP
TCP/IP is the de facto global standard for the Internet (network) and host–to–host (transport) layer implementation of internetwork applications because of the popularity of the Internet. The Internet (known as ARPANet in its early years) was part of a military project commissioned by the Advanced Research Projects Agency (ARPA), later known as the Defense Advanced Research Projects Agency or DARPA. The communications model used to construct the system is known as the
ARPA model. Whereas the OSI model was developed in Europe by the
International Standards Organization (ISO), the ARPA model (also known as the
DoD model) was developed in the USA by ARPA. Although they were developed
by different bodies and at different points in time, both serve as models for a
communications infrastructure and hence provide ‘abstractions’ of the same
reality.
The remarkable degree of similarity is therefore not surprising. Whereas the
OSI model has 7 layers, the ARPA model has 4 layers. The OSI layers map onto
the ARPA model as follows.
– The OSI session, presentation and applications layers are contained in
the ARPA process and application layer.
– The OSI transport layer maps onto the ARPA host–to–host layer (sometimes referred to as the service layer).
– The OSI network layer maps onto the ARPA Internet layer.

– The OSI physical and data link layers map onto the ARPA network
interface layer.
The relationship between the two models is depicted in Figure 8.13.
TCP/IP, or rather the TCP/IP protocol suite is not limited to the TCP and IP
protocols, but consists of a multitude of interrelated protocols that occupy the
upper three layers of the ARPA model. TCP/IP does NOT include the bottom
network interface layer, but depends on it for access to the medium.

Figure 8. 13 OSI vs. ARPA models

As depicted in Figure 8.14, an Internet transmission frame originating on a specific host (computer) would contain the local network (for example, Ethernet)
header and trailer applicable to that host. As the message proceeds along the
Internet, this header and trailer could be replaced depending on the type of network
on which the packet finds itself - be that X.25, frame relay or ATM. The IP
datagram itself would remain untouched, unless it has to be fragmented and
reassembled along the way.
The Internet layer: this layer is primarily responsible for the routing of packets from one host to another.
The host–to–host layer: this layer is primarily responsible for data integrity between the sender host and receiver host, regardless of the path or distance used to convey the message.

Figure 8. 14 Internet frame

The process/application layer: this layer provides the user or application programs with interfaces to the TCP/IP stack.
Internet layer protocols (packet transport): Protocols like internet protocol
(IP), the internet control message protocol (ICMP) and the address resolution
protocol (ARP) are responsible for the delivery of packets (datagrams) between
hosts.
Routing: Unlike the host–to–host layer protocols (for example, TCP), which
control end–to–end communications, IP is rather ‘shortsighted.’ Any given IP
node (host or router) is only concerned with routing (switching) the datagram to
the next node, where the process is repeated.
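As a concrete illustration of the host-to-host layer carrying an industrial protocol, the sketch below sends a single Modbus TCP ‘Read Holding Registers’ request over an ordinary TCP socket; the IP address, unit identifier and register numbers are hypothetical, and a real client would parse the response and handle errors.

    import socket
    import struct

    # Modbus TCP request: MBAP header + PDU, carried over an ordinary TCP socket.
    def read_holding_registers_tcp(host, unit=1, start=0, count=2, port=502):
        # MBAP: transaction id, protocol id (0), remaining length, unit id.
        pdu = struct.pack(">BHH", 0x03, start, count)
        mbap = struct.pack(">HHHB", 1, 0, len(pdu) + 1, unit)
        with socket.create_connection((host, port), timeout=2.0) as sock:
            sock.sendall(mbap + pdu)
            return sock.recv(256)     # raw response (MBAP header + PDU)

    # Hypothetical PLC address; prints the raw reply bytes.
    print(read_holding_registers_tcp("192.168.0.10").hex())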

9. OPC TECHNOLOGY
9.1. Introduction in OPC
OLE for Process Control (OPC) is a series of standards and specifications for
industrial telecommunication. An industrial automation industry task force
developed the original standard in 1996 under the name OLE for Process Control
(Object Linking and Embedding for process control). OPC specifies the
communication of real-time plant data between control devices from different
manufacturers.
After the initial release in 1996, the OPC Foundation was created to maintain
the standard.[1] As OPC has been adopted beyond the field of process control, the
OPC Foundation changed the name to Open Platform Communications in 2011.[2]
The change in name reflects the applications of OPC technology for applications in
building automation, discrete manufacturing, process control and many others.
OPC has also grown beyond its original OLE (Object Linking and Embedding)

108
implementation to include other data transportation technologies including
Microsoft's .NET Framework, XML, and even the OPC Foundation's binary-
encoded TCP format.

9.1.1. OPC Background


A standard mechanism for communicating to numerous data sources, either devices on the factory floor or a database in a control room, is the motivation for OPC. The information architecture for the process industry shown in Figure 9.1 involves the following levels:
Field Management. With the advent of “smart” field devices, a wealth of information can be provided concerning field devices that was not previously available. This information provides data on the health of a device, its configuration parameters, materials of construction, etc. All this information must be presented to the user, and to any applications using it, in a consistent manner.
Process Management. The installation of Distributed Control Systems (DCS) and SCADA systems to monitor and control manufacturing processes makes data available electronically which had previously been gathered manually.
Business Management. Benefits can be gained by installing the control systems; this is accomplished by integrating the information collected from the process into the business systems managing the financial aspects of the manufacturing process. Providing this information in a consistent manner to client applications minimizes the effort required to achieve this integration.
To do these things effectively, manufacturers need to access data from the plant floor and integrate it into their existing business systems. Manufacturers must be able to utilize off-the-shelf tools (SCADA packages, databases, spreadsheets, etc.) to assemble a system to meet their needs. The key is an open and effective communication architecture concentrating on data access, and not on the types of data.

9.1.2. Purpose
What is needed is a common way for applications to access data from any
data source like a device or a database.
The term ‘OPC Server’ is used in the following sections as a synonym for any server that provides OPC interfaces, e.g., an OPC DataAccess Server, OPC Alarm&Event Server or OPC HistoricalData Server.

9.1.3. The Current Client Application Architecture
Many client applications have been developed that require data from a data source and access that data by independently developing “drivers” for their own packages (fig. 9.1, fig. 9.2).

Figure 9.1 Process Control Information Architecture

Figure 9.2. Applications Working with Many OPC Servers

This leads to the following problems:
– Much duplication of effort: everyone must write a driver for a particular vendor’s hardware.
– Inconsistencies between vendors’ drivers: hardware features are not supported by all driver developers.
– Support for hardware feature changes: a change in the hardware’s capabilities may break some drivers.
– Access conflicts: two packages generally cannot access the same device simultaneously, since they each contain independent drivers.
Hardware manufacturers attempt to resolve these problems by developing drivers, but are hindered by differences in client protocols. Today they cannot develop an efficient driver that can be used by all clients.
OLE for Process Control (OPC) draws a line between hardware providers and
software developers. It provides a mechanism to provide data from a data source
and communicate the data to any client application in a standard way. A vendor
can now develop a reusable, highly optimized server to communicate to the data
source, and maintain the mechanism to access data from the data source/device
efficiently. Providing the server with an OPC interface allows any client to access
their devices.

9.1.4. The Custom Application Architecture


A growing number of custom applications are being developed in
environments like Visual Basic (VB), Delphi, Power Builder, etc. OPC must take
this trend into account. Microsoft understands this trend and designed OLE/COM
to allow components (written in C and C++ by experts in a specific domain) to be
utilized by a custom program (written in VB or Delphi for an entirely different
domain). Developers will write software components in C and C++ to encapsulate
the intricacies of accessing data from a device, so that business application
developers can write code in VB that requests and utilizes plant floor data.
The intent of all specifications is to facilitate the development of OPC Servers
in C and C++, and to facilitate development of OPC client applications in the
language of choice.
The architecture and design of the interfaces are intended to support
development of OPC servers in other languages as well.

9.1.5. General
OLE for Process Control (OPC™) is designed to allow client applications
access to plant floor data in a consistent manner. With wide industry acceptance
OPC will provide many benefits:
– Hardware manufacturers only have to make one set of software
components for customers to utilize in their applications.
– Software developers will not have to rewrite drivers because of feature
changes or additions in a new hardware release.

– Customers will have more choices with which to develop World Class
integrated manufacturing systems.
With OPC, system integration in a heterogeneous computing environment
will become simple. Leveraging OLE/COM, the environment shown in Figure 9.3 becomes possible.

Figure 9-3. Heterogeneous Computing Environment

9.2. Scope
A primary goal for OPC is to deliver specifications to the industry as quickly
as possible. With this in mind, the scope of the first document releases is limited to
areas common to all vendors. Additional functionality will be defined in future
releases. Therefore, the first releases focus on
– Online DataAccess, i.e., the efficient reading and writing of data
between an application and a process control device flexibly and efficiently;
– Alarm and Event Handling, i.e., the mechanisms for OPC Clients to
be notified of the occurrence of specified events and alarm conditions, and
– Historical Data Access, i.e., the reading, processing and editing of
data of a historian engine.
Functionality such as security, batch, and historical alarm and event data access belongs to the features which are addressed in subsequent releases. The
architecture of OPC leverages the advantages of the COM interface, which

provides a convenient mechanism to extend the functionality of OPC. Other goals
for the design of OPC were as follows:
– simple to implement
– flexible to accommodate multiple vendor needs
– provide a high level of functionality
– allow for efficient operation
The specifications include the following:
– A set of custom COM interfaces for use by client and server writers.
– References to a set of OLE Automation interfaces to support clients
developed with higher level business applications such as Excel, Visual Basic, etc.
The architecture is intended to utilize the Microsoft distributed OLE
technology (DCOM) to facilitate clients interfacing to remote servers.

9.3. OPC Fundamentals


OPC is based on Microsoft’s OLE/COM technology.

9.3.1. OPC Objects and Interfaces


This specification describes the OPC COM Objects and their interfaces
implemented by OPC Servers. An OPC Client (fig. 9.3) can connect to OPC
Servers provided by one or more vendors.

Figure 9.3 OPC Client

OPC Servers (fig. 9.4) may be provided by different vendors. Vendor-supplied code determines the devices and data to which each server has access, the data names, and the details about how the server physically accesses that data.

Figure 9.4 OPC Client/Server Relationship

9.3.2. OPC DataAccess Overview


At a high level, an OPC DataAccess Server is comprised of several objects:
the server, the group, and the item. The OPC server object maintains information
about the server and serves as a container for OPC group objects. The OPC group
object maintains information about itself and provides the mechanism for
containing and logically organizing OPC items.
The OPC Groups (fig. 9.5) provide a way for clients to organize data. For
example, the group might represent items in a particular operator display or report.
Data can be read and written. Exception based connections can also be created
between the client and the items in the group and can be enabled and disabled as
needed. An OPC client can configure the rate that an OPC server should provide
the data changes to the OPC client. There are two types of groups, public and local
(or ‘private’). Public is for sharing across multiple clients, local is local to a client.
Refer to the section on public groups for the intent, purpose, and functionality and
for further details. There are also specific optional interfaces for the public groups.
Within each Group the client can define one or more OPC Items.

Figure 9.5 - Group/Item Relationship

The OPC Items represent connections to data sources within the server. An
OPC Item, from the custom interface perspective, is not accessible as an object by
an OPC Client. Therefore, there is no external interface defined for an OPC Item.
All access to OPC Items is via an OPC Group object that “contains” the OPC item,
or simply where the OPC Item is defined. Associated with each item is a Value,
Quality and Time Stamp. The value is in the form of a VARIANT, and the Quality
is similar to that specified by Fieldbus.
Note that the items are not the data sources - they are just connections to
them. For example, the tags in a DCS system exist regardless of whether an OPC
client is currently accessing them. The OPC Item should be thought of as simply
specifying the address of the data, not as the actual physical source of the data that
the address references.
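The server/group/item hierarchy described above can be pictured with the simple data-model sketch below; it merely mirrors the structure in this section (each item carrying a value, quality and timestamp) and does not represent the actual OPC COM interfaces.

    import time
    from dataclasses import dataclass, field

    @dataclass
    class OPCItem:
        # An item is only a connection (address) to a data source, not the source itself.
        item_id: str
        value: object = None          # VARIANT-like value
        quality: str = "BAD"          # quality flag, similar to that used by fieldbus
        timestamp: float = 0.0

    @dataclass
    class OPCGroup:
        name: str
        update_rate_ms: int = 1000    # rate at which the server reports data changes
        items: dict = field(default_factory=dict)

        def add_item(self, item_id):
            self.items[item_id] = OPCItem(item_id)
            return self.items[item_id]

    @dataclass
    class OPCServer:
        # The server object is a container for group objects.
        groups: dict = field(default_factory=dict)

        def add_group(self, name, update_rate_ms=1000):
            self.groups[name] = OPCGroup(name, update_rate_ms)
            return self.groups[name]

    server = OPCServer()
    group = server.add_group("OperatorDisplay1", update_rate_ms=500)
    item = group.add_item("FC101.PV")
    item.value, item.quality, item.timestamp = 42.7, "GOOD", time.time()
    print(item)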

9.3.3. OPC Alarm and Event Handling Overview


These interfaces provide the mechanisms for OPC Clients to be notified of the
occurrence of specified events and alarm conditions. They also provide services
which allow OPC Clients to determine the events and conditions supported by an
OPC Server, and to obtain their current status. We make use of entities commonly
referred to in the process control industry as alarms and events. In informal
conversation, the terms alarm and event are often used interchangeably and their
meanings are not distinct. Within OPC, an alarm is an abnormal condition and is
thus a special case of a condition. A condition is a named state of the OPC Event
Server, or of one of its contained objects, which is of interest to its OPC Clients.
For example, the tag FC101 may have the following conditions associated with it:
HighAlarm, HighHighAlarm, Normal, LowAlarm, and LowLowAlarm. On the
other hand, an event is a detectable occurrence which is of significance to the OPC
Server, the device it represents, and its OPC Clients. An event may or may not be
associated with a condition. For example, the transitions into HighAlarm and
Normal conditions are events which are associated with conditions. However,
operator actions, system configuration changes, and system errors are examples of
events which are not related to specific conditions. OPC Clients may subscribe to
be notified of the occurrence of specified events.
The IOPCEventServer interface provides methods enabling the OPC Client
to:
– Determine the types of events which the OPC Server supports.

– Enter subscriptions to specified events, so that OPC Clients can
receive notifications of their occurrences. Filters may be used to define a subset of
desired events.
– Access and manipulate conditions implemented by the OPC Server.
In addition to the IOPCEventServer interface, an OPC Event Server may support optional interfaces for browsing conditions implemented by the server and for managing public condition groups (defined in the following section).

9.3.4. OPC Historical Data Access Overview


Historical engines today produce an added source of information that must be
distributed to users and software clients that are interested in this information.
Currently most historical systems use their own proprietary interfaces for
dissemination of data. There is no capability to augment or use existing historical
solutions with other capabilities in a plug-n-play environment.
This requires the developer to recreate the same infrastructure for their
products as all other vendors have had to develop independently with no
interoperability with any other systems. In keeping with the desire to integrate data
at all levels of a business, historical information can be considered to be another
type of data. There are several types of Historian servers.
Some key types supported by this specification are:
– Simple trend data servers. These servers provide little else than simple raw data storage. (The data would typically be of the types available from an OPC Data Access server, usually provided in the form of a tuple [Time, Value & Quality].)
– Complex data compression and analysis servers. These servers
provide data compression as well as raw data storage. They are capable of
providing summary data or data analysis functions, such as average values,
minimums and maximums etc. They can support data updates and history of the
updates. They can support storage of annotations along with the actual historical
data storage.

9.3.5. Where OPC Fits


Although OPC is primarily designed for accessing data from a networked
server, OPC interfaces can be used in many places within an application. At the
lowest level they can get raw data from the physical devices into a SCADA or
DCS, or from the SCADA or DCS system into the application. The architecture and design make it possible to construct an OPC Server which allows a client application to access data from many OPC Servers provided by many different OPC vendors running on different nodes via a single object (fig. 9.6).

Figure 9.6 - OPC Client/Server Relationship

9.3.6. General OPC Architecture and Components


OPC specifications always contain two sets of interfaces; Custom Interfaces
and Automation interfaces. This is shown in Figure 9.7.

Figure 9.7 - The OPC Interfaces

The OPC Specification specifies COM interfaces (what the interfaces are),
not the implementation (not the how of the implementation) of those interfaces. It
specifies the behavior that the interfaces are expected to provide to the client
applications that use them.
Included are descriptions of architectures and interfaces that seemed most
appropriate for those architectures. Like all COM implementations, the architecture
of OPC is a client-server model where the OPC Server component provides an
interface to the OPC objects and manages them.
There are several unique considerations in implementing an OPC Server. The
main issue is the frequency of data transfer over non-sharable communications
paths to physical devices or other data bases. Thus, we expect that OPC Servers
will either be a local or remote EXE which includes code that is responsible for
efficient data collection from a physical device or a data base.
An OPC client application communicates to an OPC server through the
specified custom and automation interfaces. OPC servers must implement the
custom interface, and optionally may implement the automation interface. In some cases the OPC Foundation provides a standard automation interface wrapper. This “wrapper DLL” can be used for any vendor-specific custom server (fig. 9.8).

Figure 9.8 - Typical OPC Architecture

9.3.7. Local vs. Remote Servers


It is expected that OPC Server vendors will take one of two approaches to
networking:
1. They can indicate that the client should always connect to a local
server which makes use of an existing proprietary network scheme. This approach
will commonly be used by vendors who are adding OPC capability to an existing
distributed product.
2. They can indicate the client should connect to the desired server on
the target node and make use of DCOM™ to provide networking. For this reason
all of the RPC_E_* error codes should also be considered as possible returns from
the functions below.

9.4. New Automation Concepts with OPC Unified Architecture


OPC technology has evolved into a de-facto standard for interoperable data
exchange between multi-vendor software applications that is applicable in a wide
range of industries spanning from manufacturing and process industries to building
automation and many others. Today more than 20,000 OPC products from over
3,500 different manufacturers are in use all over the world. OPC facilitates the data
transfer between widely distributed parts of a plant. The OPC interfaces bridge the
divide between heterogeneous automation worlds.
With the introduction of the platform-independent OPC Unified Architecture
(OPC UA), OPC technology has started to conquer new application areas, such as

embedded systems or the IT world. Recently, the new technology has established
itself in areas where OPC was hardly seen before, e.g. device parameterization.
The reason for this breadth of application is that the OPC UA technology offers
extended features when compared to Classic OPC. OPC UA's platform
independence and, in particular, its scalability opens up many possibilities for new
and efficient automation concepts.

9.4.1. Standardized communication


OPC UA facilitates the secure, reliable, and manufacturer-independent
transport of raw data and pre-processed information from the factory floor to the
production planning or ERP system. The desired information is provided to any
authorized application and person, anytime and anywhere. This functionality is
independent of the manufacturer that created the applications, the programming
language they were developed in, and the operating system they run on. OPC UA
is no longer DCOM based, but uses a service-oriented architecture (SOA). This
makes OPC UA easy to port. Today, OPC UA builds a bridge between the
enterprise level with Unix systems and embedded automation components running
different versions of Windows and non-Windows operating systems (Fig. 9.9).

Fig. 9.9 OPC UA allows multiplatform communication between applications
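A minimal client that reads one value from a UA server over the opc.tcp binary protocol might look as follows; this sketch assumes the third-party python-opcua (freeopcua) package, and the endpoint URL and node identifier are hypothetical placeholders.

    from opcua import Client   # third-party python-opcua package (assumed installed)

    # Connect to a UA server over the opc.tcp binary protocol and read one node.
    client = Client("opc.tcp://192.168.0.20:4840")    # hypothetical endpoint
    try:
        client.connect()
        node = client.get_node("ns=2;s=Temperature")  # hypothetical node id
        print("value:", node.get_value())
    finally:
        client.disconnect()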

OPC UA provides a set of features that allows the implementation of new concepts for secure and robust plant automation. They include:
– Standardized communication over the Internet and across firewalls:
OPC UA uses a dedicated UA Binary protocol for data exchange that is based on
TCP; Web services and HTTP are also supported. Only one port needs to be
opened in a firewall – that's enough. Integrated security mechanisms guarantee
secure Internet communication.
– Protection against unauthorized data access: The sophisticated
security concept provided by OPC UA technology ensures efficient protection

against unauthorized access, sabotage, and faults caused by negligent use. OPC
UA security is based on global standards developed by the World Wide Web
consortium. It offers various possibilities to identify applications, authenticate
users and protect against unauthorized access, as well as sign messages and encrypt
the payload transferred.
– Data security and reliability: The communication standard defines a
robust architecture with reliable communication mechanisms, configurable
timeouts, automatic fault detection, and recovery mechanisms. The communication link between an OPC UA client and server can be monitored by both the client and
the server. If a connection is temporarily interrupted, the data can be buffered in
the server. In security-critical areas, OPC UA defines an additional redundancy
concept that can be used for devices, servers and clients.
– Platform independence and scalability: Using service-oriented base
technologies ensures that OPC UA is platform independent and opens up many
possibilities for new and cost-effective automation concepts. Embedded field
devices, process control systems, PLCs, gateways, or operator panels are
developed using lean OPC UA server implementations that have been ported directly to operating systems such as embedded Linux, VxWorks, QNX, other real-time operating systems, and many more.
– Simplification by unification: OPC UA defines an integrated address
space and an information model that maps process data, alarms, historical data,
and program invocations. In this way, even complex processes can be fully
described with OPC UA. While classic OPC requires three different OPC servers –
DA, AE and HDA – with different semantics to acquire, for example, the current
value of a temperature sensor, the event of excess temperature, and the historical
average temperature, OPC UA needs only one component. This helps to reduce
configuration and engineering times.
– High performance: OPC UA's UA Binary protocol runs over TCP. This very efficient protocol allows for fast data exchange that meets the high-performance requirements of most applications. The binary protocol implementation is available through the OPC Foundation and serves as the basis for OPC UA servers and clients.
– New application possibilities: The broad scope of OPC UA technology allows new vertical integration concepts to be implemented. By cascading OPC UA components, information can be transported securely and reliably from the factory floor all the way up to the production planning or ERP system (Fig. 9.10). For this purpose, OPC UA enabled client and server components at the automation level connect embedded UA servers at the field level with OPC UA clients integrated in ERP systems at the enterprise level. The individual OPC UA components can be geographically distributed and separated from each other by firewalls without problems.

Fig 9.10 OPC UA allows secure and robust “information permeability” – from the sensor to the
ERP system

9.4.2. Developing OPC UA based concepts


Many automation component vendors already offer OPC UA clients or
servers for their PLCs, process control systems, field devices, or operator panels.
An efficient and cost-effective way to build OPC UA components is to rely on
conformant OPC toolkits, such as the OPC Toolbox UA (Fig. 9.11) from Softing
[1]. This software allows the quick and easy development of OPC UA clients and
servers for Windows, Linux, VxWorks and other embedded operating systems.
OPC UA specific functionality for creating, browsing, and managing an OPC UA
address space; for creating, deleting, reading, and writing OPC UA objects; for
event handling, method calls, and much more has been fully implemented in the
compact OPC Toolbox UA.
With the toolbox, developers can save several months of time and reduce the
time-to-market for their products.

121
The future IEC communication standard OPC UA (IEC 62541) provides features that offer
new possibilities of embedding an internationally standardized communication
interface in disparate systems ranging from PLCs, process control systems, drives,
gateways and operator panels to MES or ERP systems. The result is savings in
installation, setup, commissioning, maintenance and operation.

Fig. 9.11 OPC UA toolkits comprising platform dependent and platform independent parts
allow implementing OPC UA clients and servers on almost any target platform

Communication structures can be simplified and vertical integration concepts consistently implemented. Toolkits like the OPC Toolbox UA allow the cost-effective and efficient implementation of OPC UA components under Windows and in embedded systems.
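
To make the development workflow more concrete, the fragment below is a minimal sketch of an OPC UA client written in C with the open-source open62541 library (used here purely for illustration instead of the commercial toolkit mentioned above; the API shown corresponds to its 1.x releases). The endpoint URL is a placeholder, and the node being read is the server's own current time, so the sketch can be adapted to any real address space by substituting another node identifier.

#include <open62541/client.h>
#include <open62541/client_config_default.h>
#include <open62541/client_highlevel.h>
#include <stdio.h>

int main(void) {
    UA_Client *client = UA_Client_new();
    UA_ClientConfig_setDefault(UA_Client_getConfig(client));

    /* Hypothetical endpoint; replace with the address of a real OPC UA server. */
    UA_StatusCode rc = UA_Client_connect(client, "opc.tcp://localhost:4840");
    if (rc != UA_STATUSCODE_GOOD) {
        UA_Client_delete(client);
        return 1;
    }

    /* Read a well-known node: the server's current time. */
    UA_Variant value;
    UA_Variant_init(&value);
    rc = UA_Client_readValueAttribute(
        client,
        UA_NODEID_NUMERIC(0, UA_NS0ID_SERVER_SERVERSTATUS_CURRENTTIME),
        &value);
    if (rc == UA_STATUSCODE_GOOD &&
        UA_Variant_hasScalarType(&value, &UA_TYPES[UA_TYPES_DATETIME])) {
        UA_DateTime now = *(UA_DateTime *)value.data;
        printf("server time (raw 100 ns ticks): %lld\n", (long long)now);
    }

    UA_Variant_clear(&value);
    UA_Client_disconnect(client);
    UA_Client_delete(client);
    return 0;
}

Reading a process variable from a real controller would differ only in the node identifier passed to UA_Client_readValueAttribute.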

10. COMPLEX COMPUTER SYSTEMS
10.1. Defining ‘Complex systems’
A complex system is an automated system that comprises a finite set of interrelated subsystems united by common operation goals. These in turn can be divided into a finite number of smaller subsystems, down to subsystems of the lowest level, i.e. the elements of the complex system, which either cannot be decomposed further or are not decomposed because of some agreed convention. Thus, any subsystem of a complex system can itself be viewed as a complex system consisting of elements (lower-level subsystems), while at the same time being an element of a higher-level system.
A complex system is characterized by the following distinctive features:
– advanced architecture
– multipurpose nature
– complicated control algorithms
– high level of automation
– large number of staff and/or users
– lengthy development process and long service life

The concept of a complex system is used in computer-based systems engineering, systems analysis, operations research and, more generally, in various fields of science, engineering and production.
A complex automated control system with an advanced computer architecture should be viewed as an integral whole.
Integrity is a fundamental property of a complex system that is manifested in the unity of the system's operational purpose and in the conceptual irreducibility of the system's properties to the sum of the properties of its elements. The integrity of a complex system, its systemic properties and its goal-directed behavior are provided by a tailored system organization, understood as the matching of the system's operational (performance), logical and structural specifications to the physical processes taking place in the system as well as to the accompanying organizational measures.
On the basis of this reasoning, it can be said that a good number of modern industrial automation systems, especially those of large-scale industries, and automatic control systems of complex objects can be designated as complex systems (for details see s. 10.2). Complex systems also include large computer systems designed to solve complex computational tasks. The classification and the architectural concepts of such computer systems are dealt with in ss. 10.3 and 10.4, respectively.

10.2. General design concepts of complex computer systems


At present we are witnessing a constant extension of the scope of complex computer systems, which are covering new fields in different branches of science, technology and production. Advances in automatic control systems for complex technical objects and modern industries, particularly those using state-of-the-art technologies, require continual improvement of both hardware and software. This is dictated by the tasks solved with real-time control systems, online problem solving in computer networks, and simulation modeling of complex processes (for example, in nanotechnology, high-energy physics, systems science, meteorology, genetics, medicine and other fields). There is also a broad class of tasks associated with short-term planning, control and operations research that must overcome “the curse of dimensionality”. Tasks such as these demand so much computational power that the design of complex computer systems and supercomputers becomes ever more relevant. Along with the widening scope of application made possible by advances in complex computer system design, there is a constant increase in the complexity and number of tasks in the fields that traditionally employ highly efficient multiprogramming computing tools.
At present there are some fundamental and applied tasks that can be effectively solved only with supercomputer power. This set of tasks is denoted as «Grand challenges» and includes the following:
– forecasting of weather, climate and global changes in the atmosphere
– materials science
– design of semiconductor devices
– superconductivity
– structural biology
– genetics
– quantum chromodynamics
– transportation problems
– hydro- and gas dynamics
– astronomy
– development of pharmaceuticals
– controlled nuclear fusion
– combustion systems
– efficient geographic information systems
– exploration surveying
– ocean science
– speech recognition and synthesis
– image recognition
The capabilities of computer tools, as well as the attainable speed of problem solving, are constantly improving due to the use of structural methods. Structural methods are understood as designing complex computer systems on the basis of multiprocessing, distribution and parallelization. Parallelism is exploited both within individual computer devices (control units, instruction buffers, memory modules, arithmetic logic units, pipelines, etc.) and in cooperative parallel and distributed data processing by many computers.
Complex computer systems have different configurations, the main ones being the following:
– high-reliability systems
– high-performance computing systems
– multithreaded systems
Multiprocessor-based complex computer systems are viewed as an ideal design for improving the overall reliability of an information computer system. Thanks to the single-system view, individual nodes and components of the system can seamlessly replace defective elements, providing continuous, failure-free operation even for such demanding applications as databases. Disaster-tolerant solutions are achieved by spacing the nodes of the complex computer system hundreds of kilometers apart and by providing mechanisms for global data synchronization between these nodes. There are many examples of scientific computations and engineering designs based on parallel processor operation that ensure the simultaneous concurrent execution of a great number of operations. Complex computer systems designed for high-performance computing are usually assembled from many computers. Designing such systems is a complex process that requires constant coordination of tasks such as installation, maintenance and the simultaneous operation of a large number of computers. It also involves technical requirements for parallel and highly efficient access to shared resources, interprocessor communication between the nodes and coordination of parallel operation.
Multithreaded systems are used to provide a common interface to a set of resources that may arbitrarily grow or shrink in number, the typical example being a group of web servers. It should be noted that the distinctions between these types of complex computer systems are to some extent fuzzy, and quite often a system possesses properties or functions outside the scope of those listed above. Moreover, configuring a large general-purpose system requires separate blocks performing each of the functions listed above.

10.3. Computer system classification


The concept of complex computer system architecture is a rather broad one, as architecture can be understood in several ways:
– as the parallel data processing method employed in a system;
– as the storage organization;
– as the inter-processor communication topology;
– as the method by which a system performs arithmetic operations.
The first attempts to classify the whole variety of architectures were made as early as the late 1960s, and the problem still remains a live issue today.
Let us consider the architecture classification proposed by M. Flynn in 1966 (Flynn). The classification is based on the notion of a stream, i.e. a sequence of elements, instructions or data handled by a processor. At present this classification is considered standard and describes four classes of architecture:
1. SISD - Single Instruction Single Data
2. MISD - Multiple Instruction Single Data
3. SIMD - Single Instruction Multiple Data
4. MIMD - Multiple Instruction Multiple Data
Here is a brief description of computer system structures according to this classification (Fig. 10.1).
SISD (single instruction stream, single data stream) is a type of computer
architecture in which a single uni-core processor executes a single instruction
stream to operate on data stored in a single memory. This type of system is represented by ordinary serial computers. At the moment, nearly all high-performance computer systems have more than one central processing unit (CPU), but each CPU executes its own sequence of instructions, which turns such systems into complexes (networks) operating in different data spaces. In the case of vector systems, the vector stream should be viewed as a stream of single indivisible vectors. Most workstations from Compaq, Hewlett-Packard and Sun Microsystems are examples of SISD architecture computers.
MISD (multiple instruction, single data) is a type of parallel computing architecture where many functional units perform different operations on the same data. Such systems may accomplish pipeline processing, using the pipeline processors of a multiprocessor computer system to increase instruction processing speed and arithmetic operation speed.

Fig. 10.1 Flynn's taxonomy

SIMD (single instruction, multiple data) is a class of parallel computers in Flynn's taxonomy. This class describes computers with multiple processing elements that perform the same operation on multiple data points simultaneously. The number of processors may range from 1024 to 16384, but although the computations proceed simultaneously, only a single instruction is carried out at a given moment. Examples of SIMD machines are systems such as the CPP DAP, Gamma II and Quadrics Apemille.
Another subclass of SIMD systems is vector computers. These computers process whole arrays of data in the same way that scalar machines process individual elements. This is possible because of the use of specially designed vector CPUs. Vector processors handle data in vector mode virtually in parallel, which makes them several times faster than the same processing in scalar mode. One example of this type of computer is the Hitachi S3600.
MIMD (multiple instruction, multiple data) systems have a number of processors that function asynchronously and independently. At any time, different processors may be executing different instructions on different pieces of data. This class is represented by most parallel multiprocessor computer systems. MIMD systems are able to concurrently execute an array of subtasks in order to minimize the execution time of the main task.
This classification of computer system architectures is essential for understanding the special aspects of particular architecture types, but it is not detailed enough to be used in complex system design. Thus, it is important to introduce a more detailed classification that takes into account specific computer architectures and the hardware employed.
Now let us take a closer look at the main architectures of complex computer systems in terms of this basic classification.

10.4. Main architectures of complex computer systems


Historically, the first complex automated computer systems were designed as single-processor multiprogramming SISD systems; high efficiency was achieved by time-sharing the main system units among concurrently executed programs.
Further improvement in automated system performance became possible thanks to the multiprocessing of programs, i.e. the division of programs into separate blocks and the concurrent processing of these blocks by several processing units forming an automated computer system.
Multiprocessing not only improves performance but also minimizes the execution time of several programs that are divided into parts and distributed among different processing units.
This type of computer system architecture is known as MPP (Massively Parallel Processing), the core of such automated computer systems being a multiple-computer complex, i.e. a multi-computer system. A multi-computer complex consists of computers based on the classical SISD architecture that are able to exchange data. The main advantage of a system with physically separate memory is good scalability: each processor works with its own local memory, so there is no need for cycle-by-cycle synchronization of the processors. Virtually all present-day performance records are attributed to computers of this type of architecture comprising several thousand processors (ASCI Red, ASCI Blue Pacific).
There are also some drawbacks:
– a special programming technique is required to manage messaging between processors (see the sketch below);
– each processor can directly use only its limited local memory.

Because of these architectural disadvantages, using the system's resources to the maximum extent requires considerable programming effort.
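
To illustrate the special message-passing programming style that MPP systems require (the first drawback above), the following minimal sketch uses MPI, the de facto standard message-passing interface; the partial-sum computation is a hypothetical workload chosen only to show explicit communication between processors.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each node computes a partial sum over its own slice of the index space,
       using only its local memory. */
    long long local = 0;
    for (long long i = rank; i < 1000000; i += size)
        local += i;

    /* Explicit message passing: partial results are combined on rank 0. */
    long long total = 0;
    MPI_Reduce(&local, &total, 1, MPI_LONG_LONG, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("total = %lld\n", total);

    MPI_Finalize();
    return 0;
}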
The next step in improving the performance of automated computer systems was the design of multiprocessor computer systems employing the SMP (Symmetric MultiProcessing) architecture. The main characteristic feature of this type of computer is physically shared memory accessible to all of its processors.
Shared memory is used for communication between processors, with all computing devices having equal access to it and using the same addressing for all memory cells. For this reason, the SMP architecture is called symmetric.
The most well-known SMP systems are SMP servers and Intel-based workstations (IBM, HP, Compaq, Dell, ALR, Unisys, DG, Fujitsu, etc.). The whole system operates under a single OS (usually UNIX-like, though on the Intel platform Windows NT/2000/2003 is also supported). The existence of a single OS makes automatic distribution of system resources at various stages of operation possible. This results in high robustness: in case of failure of individual modules, the load is redistributed among the operational units, securing execution of the most important functions of the automated system. The main advantage of an SMP system is simplicity and generality of programming. The SMP architecture does not place restrictions on the programming model used to build an application. The parallel-branches model, in which all processors operate independently, is most commonly used, though models with interprocessor communication can also be employed. The use of shared memory increases the speed of information exchange between individual processors and gives a user access to the total memory capacity of the system. There are rather efficient means of parallelizing programs for SMP systems, as the sketch below illustrates.
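
One widely used means of parallelizing programs for shared-memory SMP machines is OpenMP. The sketch below is a minimal, hypothetical example: all threads work on the same shared arrays, and no explicit message passing is needed, in contrast to the MPP example given earlier.

#include <omp.h>
#include <stdio.h>

#define N 1000000

int main(void) {
    static double a[N], b[N];
    double sum = 0.0;

    /* All threads see the same shared arrays; the loop iterations are
       distributed among the processors of the SMP machine. */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < N; i++) {
        a[i] = 0.5 * i;
        b[i] = 2.0 * i;
        sum += a[i] * b[i];
    }

    printf("dot product = %f (max threads: %d)\n", sum, omp_get_max_threads());
    return 0;
}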
The disadvantage of a shared-memory system is that it scales poorly. The cause of this poor scalability is that at any specific time the bus can process only one transaction, so conflicts have to be resolved when several processors simultaneously address the same areas of shared physical memory: the computing elements begin to interfere with each other. The point at which such conflicts appear depends on the communication speed and the number of processors; generally, conflicts arise when the number of processors reaches 8-24. Moreover, the system bus has a limited (though rather high) transfer rate and a limited number of slots. This makes it difficult to improve system performance as the number of processors and users increases. A real system can have no more than about 32 processors.

To design scalable systems, hybrid cluster architectures combining SMP and MPP are applied. A cluster consists of two or more computers (often called nodes) united by a bus or a switch, the cluster nodes being servers, workstations or ordinary PCs. The main characteristic of the hybrid cluster architecture is NUMA (Non-Uniform Memory Access). The hybrid architecture combines the advantages of shared-memory systems with the relatively low price of distributed-memory systems.
The essence of this architecture is a specific memory organization: memory is physically distributed among different units of the system while remaining logically shared, so that a user sees a single address space. The system is built from homogeneous base modules consisting of a small number of processors and a memory unit (Fig. 4.4). The modules are connected by a high-speed switch or a communications network. There is architectural support for the single address space as well as hardware-based access to remote memory, i.e. the memory of other modules, although access to local memory is several times faster than access to remote memory. Basically, the NUMA architecture is an MPP architecture in which SMP nodes are used as the separate computing units. Memory access and data exchange within an SMP node go through the node's fast local memory, while access to the memory of another node is also possible but takes more time and involves more complex addressing.
Further development of the idea of multiprocessing has led to the design of large high-performance multiprocessor systems known as highly parallel computer systems. Depending on their structure, these computer systems are able to concurrently process multiple data or instruction streams. The instruction stream is understood as the sequence of instructions executed by the computer system, and the data stream as the sequence of data handled under the control of the instruction stream.
Highly parallel SIMD-type (single instruction, multiple data) architectures are known as matrix (array) computing systems. They contain a number of relatively simple high-performance processors linked with each other to create a network (matrix) with processors at its nodes. All the processors execute the same command, but on different operands delivered to the processors from shared memory by several data streams.
Highly parallel MISD-type (multiple instruction, single data) computer structures are called pipelined computer systems. These systems contain a chain of series-connected processors, so that the output of one processor is the input of the next. Each processor handles its respective part of a task, transferring the results to the neighboring processor as input data.
Thus, for example, an addition operation on floating-point numbers can be divided into four stages: exponent comparison, exponent alignment, mantissa addition and post-normalization. In pipelined computer systems, each of these stages is executed by a separate processor, the processors together forming a pipeline.
Highly parallel computer systems show better performance, reliability and survivability than multiprocessor systems. On the other hand, their obvious drawbacks are more complicated system control, programming complexity and limited versatility.
The first two of the abovementioned disadvantages are overcome by the use of LSI (large-scale integration) circuits and dedicated programming languages, while the third means that most highly parallel computer systems are designed for specialized applications.

10.5. Computer system classification based on different characteristics (properties)
Complex automated systems require a great number of operator stations to allow many users to interact with a computer on their programs simultaneously. Quite often such operator stations are located far away from the computer system and are linked to it by various communication channels, with display terminals usually used as the end devices. Such a connection between a computer system and many remotely located stations is called multistational.
Multistational operation, which provides parallel access to a computer system by numerous users, requires proper service organization. As a rule, such service is accomplished in time-sharing mode.
An automated time-sharing computer system sequentially and repeatedly polls all user terminals, registers the information from the requesting terminals and serves them in the same sequence. Each user is assigned a defined and strictly limited time quantum (a fraction of a second at most). With these cyclically repeating time quanta, ideally the time interval between successive activations of the same program should not exceed normal human reaction time. In this case a user working with his program will not even notice the discreteness of its execution and will have the impression of individual communication with the computer system.

Shared computer systems form the basis of automated control systems,
designed to serve many users working at the same time.
To meet the abovementioned requirements such systems should have the
following features:
– an advanced OS which guarantees concurrent execution of different programs and users' access to standard programs;
– programming-language translators that make program preparation and maintenance easier for software specialists;
– hardware that ensures dynamic memory distribution among the programs as well as free relocation of programs during computation;
– memory protection against interference from other programs;
– a timer that allocates each user, in accordance with their request, the necessary working time, upon the expiry of which the computer system automatically switches to executing other programs;
– both hardware and software for prioritizing tasks simultaneously waiting to be executed.
Multiprocessor and multicomputer automated computer systems can also be
classified according to other characteristic features. Let us consider some of them.
According to their function, computer systems are divided into general-purpose systems and specialized systems. General-purpose systems are designed to solve a wide variety of automation and control problems, whereas specialized ones solve a particular scope of tasks concerning, for example, the control of some unique equipment or the solution of specific problems. For this reason specialized computer systems, as a rule, have both hardware and software designed specifically for the particular system.
According to hardware type, complex automated computer systems break down into homogeneous and heterogeneous systems. Homogeneous systems contain a number of identical computers (or processors), whereas heterogeneous ones contain computers (or processors) of different types. The main drawback of homogeneous computer systems is underutilization of individual computers (processors) during operation. To improve the utilization of individual computers (processors), heterogeneous computer systems are used.
According to structure type, complex computer systems are divided into fixed-structure and variable-structure systems. The structure of a complex computer system is understood as the configuration of the system and the scheme of functional and control links between its elements. In fixed-structure systems, the configuration of functional and control links does not change during operation. A variable structure is a characteristic feature of adaptive systems, i.e. systems whose structure changes during operation according to the analysis of the information being processed. Such systems can achieve an optimal state in a changing operating environment.
According to the degree of control centralization, automated complex computer systems are divided into three groups: centralized, decentralized, and systems with combined programmed control.
In centralized complex computer systems, all control functions are concentrated in a single element represented by one of the computers, called the central processing unit. In decentralized complex computer systems, each processor or computer operates autonomously, solving its particular tasks. In systems with combined programmed control, the complex computer system is divided into groups of interacting computers (or processors); control within each group is centralized, while control between the groups is decentralized.

10.6. Other important characteristics of automated computer systems


One of the key characteristics of an automated complex computer system is its performance, i.e. the number of operations per unit time. One should distinguish between peak performance and real application performance. Peak performance is equal to the product of the peak performance of one processor and the number of processors in the system, it being understood that all computing units work at maximum efficiency.
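As a simple numerical illustration (the figures are hypothetical and not taken from any particular machine): a system with 8 processors, each running at 2×10^9 cycles per second and able to complete 4 floating-point operations per cycle, has a peak performance of 8 × 2×10^9 × 4 = 64×10^9 FLOPS, i.e. 64 GFLOPS; a real application will typically achieve only a fraction of this value.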
The peak performance of a computer can be calculated precisely and is a basic characteristic for comparing HPC (high-performance computing) systems. The higher the peak performance, the quicker (in theory) a user can solve their task. Peak performance is, however, a theoretical value that is generally unachievable for a real application. The real performance achieved for an application depends on how the software model built for that application interacts with the architectural characteristics of the specific computer on which it is run.
There are two methods of evaluating peak performance. One of them is based on the number of instructions executed per unit time, the usual unit of measurement being MIPS (Million Instructions Per Second). Performance expressed in MIPS tells how quickly the computer can execute its instructions. In practice, however, it is not possible to know in advance how many instructions a particular program will contain; moreover, every program is different, and the number of instructions can vary greatly from program to program. As a result, this characteristic gives only a very general idea of computer performance.
Another method of measuring performance is to count the number of real operations executed per unit time, the unit of measurement being FLOPS (floating-point operations per second). This method is more useful, because a user who knows the computational complexity of a program can use this characteristic to obtain a lower-bound estimate of its execution time. Yet peak performance can be achieved only under ideal conditions, i.e. in the absence of memory-access conflicts and with a balanced load on all system units. In real applications the execution of a program is affected by the following characteristics: the specific nature of the computer architecture, the instruction set, the composition of functional modules, the implementation of input/output operations and the effectiveness of compilers. The most critical factor is the interaction time with the memory subsystem, which is determined by its structure, capacity and architecture.
Most modern computers use tiered (hierarchical) storage as the most efficient way of accessing memory; its layers are registers and register memory, cache memory, primary (main) RAM, virtual memory, hard disks and tape devices. The hierarchy is organized so that the higher the level, the higher the access speed and the smaller the capacity. Computer efficiency in such a hierarchy is achieved by storing frequently used data in the top-level memory, whose access time is minimal. Such memory is rather expensive, so it cannot be large. The memory hierarchy is one of the architectural characteristics that are of great importance for improving performance.
The performance efficiency of a computer system unit is viewed as the degree of involvement of that unit in the total system performance when solving a particular problem, i.e. its work efficiency. Parallelization is justified if it leads to a substantial increase in the average work efficiency of the system, which directly affects the task execution time. At present, the emphasis is on tasks that require the versatility of complex systems, dictated by modern application fields, rather than on a special set of tasks.
The real breakthrough in this area was the switch to a microprocessor component base, which made the design of multiprocessor computing systems possible.
Designing complex automated systems is the most efficient way of resolving the inconsistency between the ever-growing demand for reliable high-speed computing tools and the limits of computer systems at the present stage of technological development.

11. REAL-TIME SYSTEMS


11.1. Real-time mode: concept, definition and terminology
At present, publications on very different subjects mention requirements for, and support of, ‘real-time processing’. Thus we must ask ourselves: what is the essence of the notion of ‘real-time processing’ for computer systems? According to the computer-systems dictionary, a real-time system is any system in which the timing of the signals is essential. This generally stems from the fact that an input signal corresponds to certain changes in a physical process and the output signal must be connected with these changes.
The time delay between the reception of an input signal (or its change) and the output signal (the response to this change) must be short enough to provide the required response time. The response time is a characteristic defined by the system: for example, in guided missile control the response time should amount to milliseconds, while in vessel traffic or train operation control it is measured in days. Systems are usually considered real-time if their response time amounts to milliseconds; dialogue systems are those whose response time amounts to several seconds, while the response time of batch processing systems is measured in hours and days.
Examples of real-time systems are computer-controlled systems for physical processes, industrial process control systems, automated control systems and test systems. A dictionary on IT gives us
the following definition: real time processing is a mode of data processing which
guarantees interaction between a computer-based system and external processes at
a rate similar to the actual rate of these processes.
The canonical definition of real time given by Donald Gillies is the following: "A real-time system is one in which the correctness of the computations not only depends upon the logical correctness of the computation but also upon the time at which the result is produced. If the timing constraints of the system are not met, system failure is said to have occurred." A classical example is a robot picking objects from a conveyor belt. The objects on the conveyor belt are moving, and the robot has only a certain interval of time in which to take the required object. If the robot is late, the object will no longer be available, even if the robot's movement was correct. If the robot is too quick, the object will not be there yet; moreover, in this case the robot can block the movement of the objects.
Another example is the motion control loop of an airplane managed by a computer (an autopilot). The aircraft's sensors must continuously supply measured flight data to the control computer. If the measured data is lost, the control quality degrades, possibly causing a plane crash.
It should be noted that in the case of the robot we deal with hard real time: if the robot misses its deadline, the result is an erroneous operation. The same case could be viewed as soft real time if the only consequence of missing the deadline were a loss of productivity. Much of what is done in the field of real-time programming in fact operates in soft real time. Properly designed systems usually have a safety or correction level in their behavior for the case when computations have not finished by the required moment, so that if the computer needs a bit more time, this can somehow be compensated. Sometimes the term real-time system is used to denote an on-line system, though this is typically a mere marketing gimmick. For instance, ticket reservation systems or depot-handling systems do not qualify as real-time systems, as a human operator does not really attach much importance to a delay of several hundred milliseconds. Sometimes the term real-time system is employed to denote a ‘high-performance system’. It is important to note that ‘real time’ is not a synonym for ‘high performance’. Thus, it bears repeating that ‘real time’ does not mean that a system responds to input signals instantly (the delay may amount to seconds or even more), but means that the system guarantees some maximum response delay that makes successful problem solving possible. It is also important to mention that algorithms providing a guaranteed response time typically have a lower average performance than those that do not guarantee a certain response time.
Thus, the abovementioned facts lead to the following conclusions:
– the term ‘real-time system’ should be understood as a system whose functional correctness is defined not only by the correctness of its computations but also by the time needed to obtain the required result. An inability to meet the time requirements is viewed as a system failure. In order to meet the specified requirements for real-time systems, the hardware, software and operating procedures (operation algorithms) should guarantee the set time parameters of the system's reaction. A system does not necessarily need a fast response time, but the response time must be guaranteed and must meet the specified requirements;
– using the term ‘real-time system’ to denote high-performance or interactive systems is considered incorrect;
– though the term ‘soft real time’ is used often, it is not clearly defined. Indeed, the meaning of the term ‘real-time system’ is interpreted differently by specialists depending on the area of their professional interests, on whether they are theoreticians or practitioners, and even on their personal experience and professional circle. As there is no exact definition of ‘soft’ real time, let us assume that this category includes all real-time systems that fall outside the category of ‘hard real-time’ systems;
– almost all industrial automation systems are real-time systems;
– whether a particular system belongs to the category of real-time systems does not depend on its operation speed. For example, if a system is designed for ground-water level control, it works in real-time mode even though it takes measurements only once every half an hour.
Intuition suggests that the higher the speed of the processes in the controlled object, the higher the operating speed of the real-time system should be. To evaluate the necessary operating speed of digital control systems dealing with stationary processes, it is common to use Kotelnikov's sampling theorem, from which it follows that the signal sampling frequency should be at least twice the highest (cut-off) frequency of the signal spectrum [4]. When dealing with wideband transient processes, it is common to use high-performance ADCs (analog-to-digital converters) with fast buffer memory, which record signal realizations at the required speed for later analysis and/or registration by the computer system.
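As a simple worked example (the figure is hypothetical): if the measured signal contains no significant spectral components above 100 Hz, the sampling frequency must be at least 2 × 100 Hz = 200 Hz, i.e. the sampling period must not exceed 5 ms; in practice a margin above this theoretical minimum is usually applied.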
The required processing should be completed by the beginning of the next transient process; otherwise the information will be lost.
Systems of this kind are called quasi-real-time systems.
For a number of automation tasks, software systems have to function as part of large automated systems without human input. In such cases real-time systems are called embedded.
Embedded systems can be defined as software and hardware that form components of another, larger system and operate without human intervention.

The hardware of a real-time system, on which the real-time operating system (RTOS) and the application software run, is commonly referred to as the target platform. Because the target platform, especially in embedded systems, may be unique, program development can be carried out on different equipment, or in some cases under a different operating system (OS), and target program testing is conducted remotely by means of tooling or by emulating the operation of the target OS.

11.2. Architectures of real-time operating systems (RTOS)


Like any other operating system, an RTOS in automation and control systems has the following functions:
– guaranteeing conflict-free interaction of sets of parallel tasks with hardware;
– conflict-free sharing of computer system resources (memory, disks and so on);
– guaranteeing secure data transmission between processes in protected address spaces;
– provision of standard data access mechanisms;
– provision of standard telecommunications and network support;
– support of system and network timers;
– creation of highly secure computing environments.
These functions should be performed by an RTOS within a guaranteed and definite time.
There are different RTOS architectures:
– monolithic architecture;
– microkernel architecture;
– object-oriented architecture.
It is important to note that any OS is meant to separate hardware from the executed tasks, providing standard secure methods of access to it and guaranteeing task interactions.
The monolithic OS design principle is that the OS is developed as a set of interacting modules that provide application programs with interfaces through which they can interact and access hardware.
There are two main levels in a monolithic OS:
1. the application level, consisting of the running application processes;
2. the system level, consisting of the monolithic OS kernel, which, in its turn, is made up of the following parts: the application programming interface (API), the system kernel proper, and the interface between the kernel and hardware (hardware device drivers).

The interface in such systems serves a dual function:
1. it controls the interaction of application processes with the system;
2. it provides continuity of code execution (i.e. the absence of task switching during the execution of code).
The key advantage of the monolithic architecture is its relatively high performance compared to other types of architecture, though this is achieved mainly by writing a considerable proportion of the code in assembly language.
The drawbacks of a monolithic architecture are the following:
1. System calls that require privilege-level switching (from a user task to the kernel) must be implemented as interrupts or as a special type of exception, which substantially increases their execution time.
2. The kernel is non-preemptible. As a result, a high-priority task may fail to get control because of a low-priority one.
3. Porting the system to new CPU architectures is complex because of the substantial amount of assembly-language code.
4. Inflexibility and design complexity: partial changes to the kernel require its complete recompilation. The general disadvantage of this architecture is the poor predictability of its behavior, caused by the complex interactions of its modules.
The microkernel architecture is believed to be one of the most efficient RTOS architectures. A compact, fast kernel is either resident in random access memory (RAM) or located in read-only memory (ROM) in the case of embedded systems. Other supplementary OS modules can be added as the need arises (in particular, they can be replaced and improved over time).
The main principle of such an architecture is the separation of OS services. The kernel functions as a message dispatcher between front-end user programs and servers, i.e. system services. The modular architecture was developed as an attempt to remove the tight interface between applications and the kernel, so as to make system modernization easier when the system has to be ‘transferred’ onto new architectures.
At present a microkernel serves a dual function:
– it controls interactions between different system parts (for example, job and file managers);
– it provides continuity of code execution (i.e. the absence of task switching during the execution of code).

On the one hand, such an architecture has a number of advantages with respect to the requirements of RTOS and embedded systems. The most important of them are the following:
– higher OS reliability, because each service is in itself a stand-alone application and is therefore easier to debug and to catch errors in;
– better scalability, as unnecessary services can be excluded from the system without loss of performance;
– higher fault tolerance, as a ‘hung’ service can be restarted without a hard reboot.
On the other hand, the modular architecture has one key disadvantage: with intensive use of OS functions, the operation speed is lower than that of a monolithic-architecture system. This is explained by the fact that supplementary OS functions (those not located in the kernel) are called as processes, and with concurrent tasks this results in task switching, which may require much more time. Among well-known RTOSs employing the microkernel architecture, one should note OS9 and QNX.
RTOSs designed using the object-oriented approach have a more complex architecture.
In such an OS, the microkernel is moved to the user-task level, each task containing a certain number of microkernels needed for its proper operation. Each user task contains one or several threads; interaction between tasks and system service calls are performed through messages sent from user tasks via mailboxes. This principle is appropriate for designing RTOSs for complicated layered distributed systems.
The effective equality of all system components makes task switching possible at any time. The object-oriented approach guarantees design modularity, system security, ease of modernization and code reuse. Unlike in the previously discussed systems, not all components of the system itself need to be brought into RAM. If a microkernel has already been loaded for another application, it is not loaded again; the code and data of the existing microkernel are used instead.
All these techniques reduce the required memory space.
As different applications share the same microkernels, they have to work in the same address space. Thus, the system cannot use virtual memory and therefore works faster (since delays caused by the translation of virtual addresses into physical addresses are excluded).

11.3. Processes and Threads in RTOS
The widening scope of real-time systems has resulted in stricter requirements on these systems. At present, a mandatory requirement for an OS intended for real-time tasks is the ability to multitask. The same is true of a general-purpose OS, but as far as multitasking is concerned, real-time systems impose a number of additional requirements. These requirements are defined by the mandatory characteristic of a real-time system, i.e. predictability.
Multitasking means the parallel execution of several operations, but its practical implementation runs into the problem of sharing computer system resources. The main resource, the sharing of which between several tasks is called scheduling, is the processor. That is why truly parallel processing of several tasks is impossible in single-processor systems. Fig. 6.4 shows the realization of multitasking in a single-processor system. The processor performs dispatching by means of task-control blocks, each task-control block containing a special field for recording the task priority. There is a fairly large number of different dispatching methods, and the most important ones will be dealt with later.
The problem of resource sharing is relevant to multiprocessor systems too, because several processors have to share a single system bus. That is why groups of computing complexes united by a common control block are used to design real-time systems meant to solve several tasks simultaneously. The ability to use several processors within one computing complex and to provide maximally transparent interaction between several computing complexes over, for instance, a local network, is an important characteristic of an RTOS that greatly broadens its application opportunities.
The notion of a task, in terms of the OS and software applications, can be understood as two different things: processes and threads. A process is a generalized representation of a task, as it denotes an independent program module or an entire executable file together with its address space, register state, program counter, and function and procedure code, whereas a thread is an integral part of a process and represents a sequence of executable code. Each process contains at least one thread, and in most OSs the maximum number of threads within one process is limited only by the total available RAM of the computing complex. The threads of one process share its address space, which is why they can easily exchange data. Also, the switching time between such threads (i.e. the time the processor needs to switch from executing the commands of one thread to executing the commands of another) turns out to be less than the switching time between processes. Hence, in real-time applications concurrent tasks are grouped to the greatest possible extent as threads executed within one process.
Each thread has an important property on the basis of which the OS decides when the thread may consume processor time. This property is called the thread priority and is expressed as an integer value. The number of priorities (priority levels) is determined by the OS; the lowest value (0) is assigned to the ‘idle’ thread of the OS, which ensures correct operation of the system when there is nothing else to execute.
A thread may be in one of the following five states: dormant, ready, running, waiting or interrupted (executing an interrupt service routine, ISR) (see Fig. 11.1).

Figure 11.1 – Thread states

The dormant state corresponds to a thread that resides in memory but is not available to the multitasking kernel. A thread is in the ready state when it can be executed but has a lower priority than the thread currently being executed. A thread is in the running state when it controls the processor. A thread is in the waiting state when it waits for some event to occur (for example, completion of an I/O operation, release of a shared resource, arrival of a timing pulse, expiration of a time interval, etc.). Finally, a thread is in the interrupted state when its execution is suspended while the processor is busy servicing an interrupt. Figure 11.1 also shows the thread-switching transitions that may occur in the system.

11.4. Thread Scheduling


Dispatch methods, i.e. methods of granting different threads access to the processor, may in general be divided into two groups. The first group covers the cases when all threads that share the processor have the same priority, i.e. are of the same importance from the operating system's point of view:
1. FIFO (First In, First Out) means that the first thread in the queue is executed first and runs until it completes or blocks waiting for some resource to be released or for an event. After that, control is delegated to the next thread in the queue.
2. Round-robin scheduling lets threads take turns using the processor by limiting each thread to a certain short time period (a "time slice") and then suspending it to give another thread a turn. After that, control is delegated to the next thread in the queue. When the time slice of the last thread is over, control is delegated to the first thread in the queue that is in the ready state. Thus, the execution of each thread is divided into a sequence of time slices.
Another group of methods is used to share the processor's time among threads of different importance, i.e. priority.
3. In the simplest case, when two threads of different priority are in the ready state, processor time is given to the thread of higher priority. Such a method is referred to as preemptive multitasking. The use of this method involves some complexity. For example, if there is one group of threads of some priority and another group of lower priority, then with round-robin scheduling of each group in a preemptive multitasking system the low-priority threads may not get access to the processor at all.
4. One solution to the abovementioned problem is so-called adaptive scheduling. Essentially, the priority of a thread that has not been executed for some period of time is increased by one. The priority is restored within one time slice after the thread completes or becomes blocked. Thus, with round-robin scheduling, a queue (or «round robin») of higher-priority threads cannot completely block the execution of a lower-priority thread queue.
5. In real-time tasks, dispatch methods must meet specific requirements: the procedure for delegating control should be driven by deadlines. This requirement is satisfied to the fullest extent by preemptive multitasking (a configuration sketch is given below). The principle of this method is that as soon as a thread of higher priority than the active one passes into the ready state, the active thread is involuntarily preempted (i.e. passes from the running state into the ready state) and control is delegated to the higher-priority thread.
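
As a hedged illustration of how preemptive priority scheduling is requested from a POSIX-compliant RTOS, the sketch below creates a thread under the fixed-priority SCHED_FIFO policy; the priority value and the empty task body are hypothetical, and on most systems the call requires appropriate privileges.

#include <pthread.h>
#include <sched.h>
#include <stdio.h>

void *control_task(void *arg) {
    (void)arg;
    /* periodic real-time control work would go here */
    return NULL;
}

int main(void) {
    pthread_t t;
    pthread_attr_t attr;
    struct sched_param param;

    pthread_attr_init(&attr);
    /* Request the fixed-priority, preemptive FIFO policy instead of the
       default time-sharing policy. */
    pthread_attr_setschedpolicy(&attr, SCHED_FIFO);
    param.sched_priority = 50;                 /* hypothetical priority level */
    pthread_attr_setschedparam(&attr, &param);
    /* Without this call the attributes above are ignored and the thread
       inherits the creator's scheduling settings. */
    pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);

    int rc = pthread_create(&t, &attr, control_task, NULL);
    if (rc != 0)
        fprintf(stderr, "pthread_create failed (often requires privileges): %d\n", rc);
    else
        pthread_join(t, NULL);

    pthread_attr_destroy(&attr);
    return 0;
}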

In practice, combinations of the methods described above and various modifications of them are widely used. In the context of scheduling several
threads of different priority levels in real-time systems, the most important
problem is to prioritize them in such a way that each thread is executed within its
deadline. If all the threads meet the deadline, the system is said to be schedulable.
For real-time systems used for periodic event processing, there is a mathematical model which makes it possible to calculate whether the system in question is schedulable. The model was developed by C. L. Liu and J. Layland in 1973 [4] and is called Rate Monotonic Analysis (RMA). The usability and efficiency of this mathematical model led to the acceptance of RMA as a standard by the world's leading RTOS manufacturers.
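
A minimal sketch of the best-known RMA result, the Liu and Layland utilization bound, is given below. The task periods and execution times are hypothetical, and a complete analysis would also have to account for blocking, jitter and context-switch overhead.

#include <math.h>
#include <stdio.h>

/* Liu-Layland sufficient schedulability test for rate-monotonic scheduling:
   the task set is schedulable if U = sum(Ci/Ti) <= n * (2^(1/n) - 1). */
int rma_schedulable(const double C[], const double T[], int n) {
    double U = 0.0;
    for (int i = 0; i < n; i++)
        U += C[i] / T[i];
    double bound = n * (pow(2.0, 1.0 / n) - 1.0);
    printf("U = %.3f, bound = %.3f\n", U, bound);
    return U <= bound;
}

int main(void) {
    /* Hypothetical periodic tasks: execution times (ms) and periods (ms). */
    double C[] = {1.0, 2.0, 3.0};
    double T[] = {10.0, 20.0, 40.0};
    printf("schedulable: %s\n", rma_schedulable(C, T, 3) ? "yes" : "no");
    return 0;
}

For these hypothetical figures U = 0.275, which is below the three-task bound of about 0.780, so the set passes the test.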
Taking into account everything said above, it is possible to formulate one of the most important requirements for an RTOS used in complex automation and control systems: the RTOS should provide multitasking with support for preemptive priority scheduling.

11.5. Synchronization mechanisms


Besides CPU time, different threads may have other resources they have to share, for example, variables in memory, device buffers, etc. To avoid damage caused by the concurrent modification of the same data by different threads, special variables called synchronization objects are used. Such objects include mutexes (mutual exclusion objects), semaphores, events and so on.
Mutual exclusion
The simplest way for different threads to interact is through a shared data structure. The task of synchronization is simplified if all threads are located in a single address space: threads can then refer to global variables, indexes, buffers, linked lists, bounded buffers, etc. Although shared data simplify the exchange of information, it is necessary to give a task exclusive access to the data in order to avoid collisions and data corruption.
The most common methods of obtaining exclusive access to shared resources are the following:
– disabling and enabling interrupts;
– test-and-set operations;
– blocking the dispatcher;
– using semaphores.
Disabling and enabling interrupts

The simplest way to obtain exclusive access to shared resources is to disable and then re-enable interrupts. Such an approach must be used with care: interrupts should not stay disabled for long, so as not to impair the interrupt response time. The method is acceptable when only a few variables are copied or changed, and it is also the only way to exchange data between a thread and an Interrupt Service Routine (ISR). In any case, interrupts should be disabled for a minimal period of time.
Test-and-set
If no kernel is used, two threads may ‘agree’ that before accessing the resource each of them checks a certain global variable: if it equals zero (0), access is considered allowed, and the first thread to gain access sets the value of this variable to one (1). This operation is commonly called Test-And-Set (TAS). The operation must either be executed atomically by the CPU itself, or interrupts must be disabled for the duration of the operation.
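
As a hedged illustration, the sketch below builds a simple spin lock on top of the standard C11 atomic test-and-set primitive; the function names are hypothetical.

#include <stdatomic.h>

/* A hypothetical spin lock built on the hardware test-and-set primitive. */
static atomic_flag lock_flag = ATOMIC_FLAG_INIT;

void acquire(void) {
    /* atomic_flag_test_and_set returns the previous value:
       false means the flag was clear and this thread now owns the lock. */
    while (atomic_flag_test_and_set(&lock_flag)) {
        /* busy-wait (spin) until the owner releases the lock */
    }
}

void release(void) {
    atomic_flag_clear(&lock_flag);
}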
Dispatch Blockage
If a process does not share variables or data structures with ISR sub-
programs, the dispatcher can be blocked and unblocked, as shown in the listing given
below. In this case two processes can share data without risk of collision. It should
be noted that while the dispatcher is blocked, interrupts remain enabled, and if an
interrupt occurs when a program is inside its critical section, the ISR will
start immediately. When the ISR is over, control returns to the interrupted task
even if the ISR made a task of higher priority ‘ready’. Once OSSchedUnlock is
requested, the kernel searches for higher-priority tasks, and if any are found, a
context switch takes place. Though this method is rather efficient, dispatch blockage
should be avoided, since such an approach partially defeats the purpose of using a
kernel in the first place.
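The listing referred to above can be sketched as follows. A uC/OS-style kernel is assumed, since the text mentions OSSchedUnlock; the shared_buffer structure and the update function are hypothetical, and the data must not also be touched by an ISR.

/* Hypothetical structure shared between two tasks (not with an ISR). */
static struct { int data; int count; } shared_buffer;

void update_shared_buffer(int value)
{
    OSSchedLock();                  /* dispatcher blocked, interrupts stay on  */
    shared_buffer.data = value;     /* critical section                        */
    shared_buffer.count++;
    OSSchedUnlock();                /* rescheduling point: if an ISR made a    */
                                    /* higher-priority task ready, it runs now */
}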
Semaphores
A semaphore is a programming concept that is frequently used in most
multithreaded kernels to solve multi-threading problems.
Semaphores are used to:
– control access to shared resources;
– signal the occurrence of an event;
– allow two threads to synchronize their activity.
A semaphore is a key that a thread should obtain in order to continue execution.
If the semaphore is already in use, the requesting thread suspends its execution and
waits until the semaphore becomes free. In other words, the thread waits for a key,
and if the key has already been taken by someone else, the requesting thread waits
until it is free.

There are two types of semaphores: binary semaphores and counting
semaphores. As the name implies, a binary semaphore can take only two values -
0 or 1. A counting semaphore (a variable of integer type) can take values
ranging from 0 to 255, 65535 or 4294967295, depending on the width of the
semaphore - 8, 16 or 32 bits, respectively; this depends on the kernel
implementation. In addition to the semaphore value, the kernel should also store a
list of threads waiting to access it.
There are three main operations that can be executed on semaphores:
INITIALIZE (or CREATE), WAIT (or PEND) and SIGNAL (or POST). The
initial value of a semaphore is set at the stage of its creation. The list of threads
waiting for the semaphore is initially empty.
A thread that wants to acquire the semaphore executes the WAIT operation. If
the semaphore is accessible, that is, its value is greater than 0, the value is
decreased by one and the thread continues its execution. If the value of the semaphore
equals 0, the thread executing the WAIT operation is added to the list of threads
waiting for the semaphore. Many kernels also allow a time-out to be defined, at
the end of which the waiting thread is made ready again, with a return code sent to
it warning that the time-out has elapsed.
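The three operations can be illustrated with POSIX semaphores, used here only as a convenient, widely available API; an RTOS kernel exposes analogous calls under its own names. The ‘printer’ resource and the two-second time-out are invented for the example.

#include <semaphore.h>
#include <stdio.h>
#include <time.h>

static sem_t printer;                         /* guards one shared printer   */

void init_printer(void)
{
    sem_init(&printer, 0, 1);                 /* INITIALIZE: initial count 1 */
}

void print_report(void)
{
    struct timespec deadline;
    clock_gettime(CLOCK_REALTIME, &deadline); /* absolute time-out required  */
    deadline.tv_sec += 2;                     /* wait at most two seconds    */

    if (sem_timedwait(&printer, &deadline) == 0) {   /* WAIT (PEND)          */
        /* ... exclusive use of the printer ... */
        sem_post(&printer);                   /* SIGNAL (POST): release key  */
    } else {
        printf("time-out elapsed, printer still busy\n");
    }
}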
Some synchronization mechanisms can be used in any multitasking system,
since without them correct control of several threads competing for one resource
(for example, a device buffer or a shared variable) cannot be guaranteed.
However, in real-time applications, synchronization objects should meet some
specific requirements. The reason is that synchronization objects can cause serious
delays in thread execution, because their very purpose is to block access to a
certain shared resource. One of the most serious problems that may occur as a
result of such blocking is priority inversion.

REFERENCES
1. Deuel A. The benefits of a manufacturing execution system for plantwide
automation. ISA Transactions, 1994, pp. 113-124.
2. McClellan M. The Collaborative Effect. Intelligent Enterprises, 2004, 7(16), p. 35.
3. Fuchs F., Thiel K. Manufacturing Execution Systems: Optimal Design,
Planning, and Deployment. New York: McGraw-Hill, 2009.
4. Hadjimichael B. Manufacturing Execution Systems Integration and
Intelligence. Master's thesis, McGill University, 2004.
5. Cao W., Jing S., Wang X. Research on Manufacturing Execution System for
Cement Industry. IEEE Conference on Industrial Electronics and Applications,
2008, pp. 1614-1618.
6. Waldron T. A. Strategic Development of a Manufacturing Execution System
(MES) for Cold Chain Management Using Information Product Mapping.
Master's thesis, Massachusetts Institute of Technology, 2011.
7. Scholten B., Schneider M. ISA-95 As-Is / To-Be Study. MESA White Paper 23, 2010.
8. Govindaraju R., Lukman K., Chandra D. R. Manufacturing Execution System
Design using ISA-95. Advanced Materials Research, 2014, vol. 980, pp. 248-252.
9. MESA International. MES Explained: A High Level Vision. White Paper 6, 1997.
10. MESA International. Enterprise-Control System Integration. Part 1:
Model and Terminology. 2000.
11. MESA International. Enterprise-Control System Integration. Part 3:
Activity Models of Manufacturing. 2005.
12. Pressman R. Software Engineering: A Practitioner's Approach. 7th ed.
New York: McGraw-Hill, 2010.
13. Kauppinen M. Introducing Requirements Engineering into Product
Development: Towards Systematic User Requirement Definition. Espoo:
Helsinki University of Technology, 2005.
14. Manufacturing Approach. International Journal for Scientific
Research & Development, 2(4), pp. 337-340.
15. Gvozdeva T. V., Ballod B. A. Design of Information Systems: textbook
[in Russian]. Rostov-on-Don: Feniks, 2009. 508 p.
16. Kim D. P. Theory of Automatic Control: Linear Systems [in Russian].
Moscow: Fizmatlit, 2003. 288 p.
17. Laudon J., Laudon K. Management Information Systems [in Russian,
transl. by D. R. Trutnev]. St. Petersburg: Piter, 2005. 910 p.
18. O'Leary D. ERP Systems: Modern Enterprise Resource Planning and
Management. Selection, Implementation and Operation [in Russian, transl. by
Yu. I. Vodyanov]. Moscow: Vershina, 2004. 272 p.
19. Andreev E. B., Kutsevich I. V., Kutsevich N. A. MES Systems: An Inside
View [in Russian]. Moscow: RTSoft, 2015. 240 p.
20. Andreev E. B., Kutsevich N. A., Sinenko O. V. SCADA Systems: An Inside
View [in Russian]. Moscow: RTSoft, 2004. 176 p.
21. Lenshin V. N., Kuminov V. V. Manufacturing Execution Systems (MES) -
the Path to an Efficient Enterprise [in Russian]. URL:
http://asutp.ru/?p=600359 (accessed 12.01.2016).
22. Frolov E. B., Zagidullin R. R. MES Systems As They Are, or the Evolution
of Production Planning Systems (Part I) [in Russian]. URL:
http://www.fobos-mes.ru/stati/mes-sistemyi-kak-oni-est-ili-evolyutsiya-sistem-
planirovaniya-proizvodstva.-chast-i.html (accessed 12.01.2016).
23. Frolov E. B., Zagidullin R. R. MES Systems As They Are, or the Evolution
of Production Planning Systems (Part II) [in Russian]. URL:
http://www.fobos-mes.ru/stati/mes-sistemyi-kak-oni-est-ili-evolyutsiya-sistem-
planirovaniya-proizvodstva.-chast-ii.html (accessed 12.01.2016).
24. Soldatov S. Integration of SCADA Systems and Enterprise Management
Systems [in Russian]. Sovremennye Tekhnologii Avtomatizatsii, 2016, no. 1,
pp. 90-95.
25. Frolov E. B., Zagidullin R. R. Operational Scheduling and Dispatching in
MES Systems (Part I) [in Russian]. URL:
http://www.fobos-mes.ru/stati/operativno-kalendarnoe-planirovanie-i-
dispetchirovanie-v-mes-sistemah.-chast-i.html (accessed 12.01.2016).
26. Lileev P. Typical SAP Integration Models: ERP and MES. Modern
Approaches to ERP and MES Integration at Metallurgical Enterprises (Part

CONTENTS
1. AUTOMATED CONTROL SYSTEMS. INTRODUCTION AND
DEFINITIONS.............................................................................................................. 3
1.1. Defining ‘computer technology’ ................................................................. 3
1.2. Defining ‘automated systems’ ..................................................................... 3
1.2.1. Automated systems’ ........................................................................... 3
1.2.2. Processes occurring in automated systems ........................................ 4
1.3. Types of control systems ............................................................................. 6
1.4. Types of support for automated subsystems ............................................... 7
2. CLASSIFYING AUTOMATED SYSTEMS ................................................... 10
2.1. Product lifecycle ........................................................................................ 10
2.2. Complex systems for industrial automation ............................................. 13
2.3. The structure of complex automated control systems ............................... 14
2.4. The principles of constructing complex automation systems.
Characteristic features of man-machine systems. ................................................ 17
3. OLAP TECHNOLOGY..................................................................................... 19
3.1. What is OLAP? ......................................................................................... 19
3.2. Why do we need OLAP? ........................................................................... 20
3.2.1. Increasing data storage..................................................................... 21
3.2.2. Data versus Information................................................................... 21
3.2.3. Data layout ....................................................................................... 22
3.3. OLAP fundamentals .................................................................................. 22
3.3.1. What is a cube? ................................................................................ 23
3.3.2. Multidimensionality ......................................................................... 25
3.3.3. "Slicing & dicing" ............................................................................ 28
3.3.4. Nested dimensions ........................................................................... 28
3.3.5. Hierarchies & groupings .................................................................. 29
4. ENTERPRISE RESOURCE PLANNING ...................................................... 30
4.1. Basic Concepts and Definitions ................................................................ 31
4.2. Benefits and Importance............................................................................ 32
4.3. Value of ERP ............................................................................................. 35
4.3.1. IT value of ERP systems .................................................................. 36
4.3.2. Business value of ERP systems ....................................................... 36
4.3.3. Business process integration ............................................................ 38
4.3.4. Importance of strategic alignment of ERP with business goals ...... 42
4.4. ERP System Use in Organizations ............................................................ 43
4.5. Future impacts to industry and organizations ........................................... 44
5. MANUFACTURING EXECUTION SYSTEMS (MES)................................ 45

5.1. Manufacturing Execution Systems Implementation ................................. 46
5.2. Model Development .................................................................................. 47
5.2.1. Specific functional model. ............................................................... 49
5.2.2. Configure, Build and Test................................................................ 50
5.3. Methodology ............................................................................................. 51
6. SUPERVISORY CONTROL AND DATA ACQUISITION (SCADA) ........ 53
6.1. Field Data Interface Devices ..................................................................... 55
6.2. Communications Network......................................................................... 56
6.3. Central Host Computer .............................................................................. 57
6.4. Operator workstations and software components ..................................... 58
6.5. SCADA Architectures ............................................................................... 59
6.5.1. Monolithic SCADA Systems ........................................................... 59
6.5.2. Distributed SCADA Systems .......................................................... 60
6.5.3. Networked SCADA Systems ........................................................... 62
7. GENERAL SCADA COMPONENTS ............................................................. 63
7.1. PLC BASICS ............................................................................................. 63
7.1.1. Controllers........................................................................................ 63
7.1.2. Microprocessor controlled system ................................................... 65
7.1.3. The programmable logic controller ................................................. 65
7.2. Hardware ................................................................................................... 67
7.3. Internal architecture ................................................................................... 69
7.3.1. The CPU........................................................................................... 69
7.3.2. The buses ......................................................................................... 70
7.3.3. Memory ............................................................................................ 70
7.3.4. Input/output unit .............................................................................. 71
7.3.5. Sourcing and sinking ....................................................................... 73
7.4. Controller selection criteria ....................................................................... 74
7.5. PLC vs. PAC ............................................................................................. 76
7.5.1. Determining users’ needs................................................................. 77
7.5.2. Functional differences...................................................................... 80
7.5.3. PLC & PAC model comparison ...................................................... 81
7.6. REMOTE TERMINAL UNIT .................................................................. 86
7.6.1. Architecture...................................................................................... 86
7.6.2. Central Processing Unit (CPU) ........................................................ 86
7.6.3. Power supply.................................................................................... 87
7.6.4. Digital (control) outputs................................................................... 88
7.6.5. Software and logic control ............................................................... 89
7.6.6. Communications .............................................................................. 89
7.6.7. Comparison with other control systems .......................................... 90

7.6.8. Applications ..................................................................................... 91
7.6.9. RTU manufacturers.......................................................................... 91
8. INDUSTRIAL DATA COMMUNICATIONS ................................................ 92
8.1. Open Systems Interconnection (OSI) model............................................. 93
8.2. RS-232 interface standard ......................................................................... 94
8.2.1. Half-duplex operation of RS-232 .................................................... 94
8.3. Fiber Optics ............................................................................................... 95
8.3.1. Applications for fiber optic cables ................................................... 96
8.3.2. Fiber optic cable components .......................................................... 96
8.4. Modbus ...................................................................................................... 97
8.4.1. Modbus protocol .............................................................................. 97
8.4.2. Modbus Plus .................................................................................... 98
8.5. Data Highway Plus /DH485 ...................................................................... 99
8.6. HART ........................................................................................................ 99
8.7. AS-i.......................................................................................................... 100
8.8. DeviceNet ................................................................................................ 101
8.9. Profibus.................................................................................................... 102
8.10. Foundation Fieldbus ............................................................................. 103
8.11. Industrial Ethernet .............................................................. 104
8.11.1. 100 Mbps Ethernet ....................................................................... 105
8.11.2. Gigabit Ethernet ........................................................................... 105
8.11.3. TCP/IP.......................................................................................... 106
9. OPC TECHNOLOGY ..................................................................................... 108
9.1. Introduction in OPC ................................................................................ 108
9.1.1. OPC Background ........................................................................... 109
9.1.2. Purpose........................................................................................... 109
9.1.3. The Current Client Application Architecture ................................ 110
9.1.4. The Custom Application Architecture ........................................... 111
9.1.5. General ........................................................................................... 111
9.2. Scope ....................................................................................................... 112
9.3. OPC Fundamentals .................................................................................. 113
9.3.1. OPC Objects and Interfaces ........................................................... 113
9.3.2. OPC DataAccess Overview ........................................................... 114
9.3.3. OPC Alarm and Event Handling Overview .................................. 115
9.3.4. OPC Historical Data Access Overview ......................................... 116
9.3.5. Where OPC Fits ............................................................................. 116
9.3.6. General OPC Architecture and Components ................................. 117
9.3.7. Local vs. Remote Servers .............................................................. 118

9.4. New Automation Concepts with OPC Unified Architecture .................. 118
9.4.1. Standardized communication......................................................... 119
9.4.2. Developing OPC UA based concepts ............................................ 121
10. COMPLEX COMPUTER SYSTEMS ........................................................... 123
10.1. Defining ‘Complex systems’................................................................ 123
10.2. General design concepts of complex computer systems...................... 124
10.3. Computer system classification ........................................................... 126
10.4. Main architectures of complex computer systems ............................... 128
10.5. Computer System Classification based on different characteristics
(properties) ......................................................................................................... 131
10.6. Other important characteristics of automated computer systems ........ 133
11. REAL-TIME SYSTEMS ................................................................................. 135
11.1. Real-time mode: concept, definition and terminology ........................ 135
11.2. Architectures of real-time operating systems (RTOS)......................... 138
11.3. Processes and Threads in RTOS .......................................................... 141
11.4. Thread Scheduling ............................................................................... 142
11.5. Synchronization mechanisms ............................................................... 144
REFERENCES ......................................................................................................... 147

Anastasiia D. Stotckaia
Alexander V. Nikoza

Computer-based technologies of control in technical systems.


Lecture notes

Educational material for the discipline


«Computer-based technologies of control in technical systems»

