
EXPERIMENTAL COPY

FACULTY OF ENGINEERING TECHNOLOGY
DEPARTMENT OF MECHANICAL ENGINEERING
OPEN UNIVERSITY OF SRI LANKA

BACHELOR OF TECHNOLOGY HONOURS IN ENGINEERING
DMX7304 – FACTORY AUTOMATION

BOOK 02

Unit 5: Industrial communication
  Session 15  Overview of industrial communication systems ............ 1
  Session 16  Communication for distributed control systems ........... 27
  Session 17  Communication protocols for building automation ......... 48

Unit 6: Industrial controllers and PLC Programming
  Session 18  Overview of Programmable controllers and PLCs ........... 97
  Session 19  PLC hardware selection and Programming .................. 118
  Session 20  Industrial controllers - selection of hardware .......... 142

Unit 7: Distributed Control System (DCS)
  Session 21  Overview of DCS ......................................... 172
  Session 22  DCS integration with PLC and computers .................. 187
  Session 23  Features and advantages of DCS .......................... 200

Unit 8: Modelling and simulation for plant automation
  Session 24  Overview of modelling and simulation .................... 230
  Session 25  Building mathematical model of a plant .................. 255

Published by the
Open University of Sri Lanka

Session 15
Overview of industrial communication
systems
Content
15.1 Basic Information ............................................................................ 2
15.1.1 History ...................................................................................... 2
15.1.2 Classification ............................................................................ 2
15.1.3 Requirements in Industrial Automation Networks ................... 3
15.2 Virtual Automation Networks.......................................................... 4
15.2.1 Definition, Characterization, Architectures .............................. 4
15.2.2 Domains .................................................................................... 5
15.2.3 Interfaces, Network Transitions, Transmission Technologies .. 5
15.3 Wired Industrial Communications ................................................... 6
15.3.1 Sensor/Actuator Networks ........................................................ 7
15.3.2 Fieldbus Systems ...................................................................... 9
15.3.3 Controller Networks ............................................................... 12
15.4 Wireless Industrial Communications ............................................. 18
15.4.1 Wireless Local Area Networks (WLAN) ............................... 19
15.4.2 Wireless Sensor/Actuator Networks ....................................... 19
15.5 Wide Area Communications .......................................................... 21
15.6 Emerging Trends ............................................................................ 25


15.1 Basic Information

15.1.1 History

Digital communication is now well established in distributed computer control systems, both in discrete manufacturing as well as in the process
control industries. Proprietary communication systems within SCADA
(supervisory control and data acquisition) systems have been supplemented
and partially displaced by Fieldbus and sensor bus systems. The introduction
of Fieldbus systems has been associated with a change of paradigm to deploy
distributed industrial automation systems, emphasizing device autonomy and
decentralized decision making and control loops. Nowadays, (wired)
Fieldbus systems are standardized and are the most important communication
systems used in commercial control installations. At the same time, Ethernet
won the battle as the most commonly used communication technology within
the office domain, resulting in low component prices caused by mass
production. This has led to an increasing interest in adapting Ethernet for industrial applications, and several approaches have been proposed. Ethernet-based solutions now dominate this merging of office and automation technologies.
In parallel to advances on Ethernet-based industrial protocols, the use of
wireless technologies in the industrial domain has also been increasingly
researched. Following the trend to merge automation and office networks,
heterogeneous networks (virtual automation networks (VAN)), consisting of
local and wide area networks, as well as wired and wireless communication
systems, are becoming important [15.1].
15.1.2 Classification

Industrial communication systems can be classified according to the following capabilities:
 Real-time behavior: Within the automation domain, real-time
requirements are of utmost importance and are focused on the
response time behavior of data packets. Three real-time classes can be
identified based on the required temporal behavior:
– Class 1: soft real-time. Scalable cycle time, used in factory
floor and process automation in cases where no severe
problems occur when deadlines are not met.
– Class 2: hard real-time. Typical cycle times from 1 to 10 ms,
used for time-critical closed loop control.
– Class 3: isochronous real-time, cycle times from 250 μs to 1
ms, with tight restrictions on jitter (usually less than 1 μs), used
for motion control applications.


Additionally, there is a non-real-time class, i.e., systems without real-time requirements; these are not considered here. Within industrial automation this covers the exchange of engineering data, maintenance data, etc.
 Distribution: The most important achievements of industrial communication systems are local area communication systems, consisting of sensor/actuator networks, Fieldbus systems, and Ethernet-based local area networks (LAN). Of increasing
importance is the use of wide area networks (WAN)
(telecommunication networks, Internet, etc.). Thus, it should be
advantageous to consider WANs as part of an industrial
communication system (Sect. 15.2), mostly within the upper layers of
an enterprise hierarchy.
 Homogeneity: There are homogeneous parts (e.g. standardized
Fieldbus systems) within an industrial communication system. But in
real applications the use of heterogeneous networks is more common,
especially when using WANs and when connected with services of
network providers.
 Installation types: While most of the installed enterprise networks
are currently wired, the number of wireless installations is increasing
and this trend will continue.
15.1.3 Requirements in Industrial Automation Networks

The main requirements are:


 Real-time behavior: Diagnosis, maintenance, commissioning, and
slow mobile applications are examples of non-real-time applications.
Process automation and data acquisition usually present soft real-time
requirements. Examples of hard real-time applications are closed-loop
control applications, such as in fast mobile applications and machine
tools. Motion control is an example of an isochronous hard real-time
application.
 Functional safety: Protection against hazards caused by incorrect functioning, including communication via heterogeneous networks. There are several safety integrity levels (SIL) [15.2]. This includes the influence of noisy environments and the degree of reliability.
 Security: This means a common security concept for distributed automation using a heterogeneous network with different security integrity levels (such a concept does not yet exist).
 Location awareness: The desired context awareness leads to the usage
of location-based communication services and context-sensitive
applications.


15.2 Virtual Automation Networks

15.2.1 Definition, Characterization, Architectures

Future scenarios of distributed automation lead to desired mechanisms for geographically distributed automation functions for various reasons:
 Centralized supervision and control of (many) decentralized (small)
technological plants
 Remote control, commissioning, parameterization, and maintenance
of distributed automation systems
 Inclusion of remote experts or external machine-readable knowledge
for plant operation and maintenance (for example, asset management,
condition monitoring, etc.).
This means that heterogeneous networks, consisting of (partially
homogeneous) local and wide areas, as well as wired and wireless
communication systems, will play an increasing role. Figure 15.1 depicts the
communication environment of a complex automation scenario. Following a
unique design concept, regarding the objects to be transmitted between
geographically distributed communication end points, the heterogeneous
network becomes a virtual automation network (VAN) [15.3, 4]. VAN
characteristics are defined for domains, where the expression domain is
widely used to address areas and devices with common properties/behavior,
common network technology, or common application purposes.

Figure 15.1 Different VAN domains related to different automation applications


15.2.2 Domains

Within the overall automation and communication environment, a VAN domain covers all devices that are grouped together on a logical or virtual
basis to represent a complex application such as an industrial application.
Therefore, the encompassed networks may be heterogeneous, and devices can
be geographically distributed over a physical environment, which shall be
covered by the overall application. But all devices that have to exchange
information within the scope of the application (equal to a VAN domain) must
be VAN aware or VAN enabled devices. Otherwise, they are VAN
independent and are not a member of a VAN domain. Figure 15.1 depicts
VAN domain examples representing three different distributed applications.
Devices related to a VAN domain may reside in a homogeneous network
domain (e.g. the industrial domain shown in Fig. 15.1). But, depending on
the application, additional VAN relevant devices may only be reached by
crossing other network types (e.g., wide area network type communication)
or they need to use proxy technology to be represented in the VAN domain
view of a complex application.

Figure 15.2 Network transitions (local area networks (LAN), wireless LAN (WL), wired/wireless (W/WL), real-time Ethernet (RTE), metropolitan area network (MAN), wide area network (WAN))

15.2.3 Interfaces, Network Transitions, Transmission Technologies

A VAN network consists of several different communication paths and network transitions. Figure 15.2 depicts the required transitions in heterogeneous networks.
Depending on the network and communication technology of the single path
there will be differences in the addressing concept of the connected network
segments. Also, the communication paths have different communication line
properties and capabilities. Therefore, for the path of two connected devices
within a VAN domain the following views are possible:


 The logical view: describing the properties/capabilities of the whole communication path
 The physical view: describing the detailed properties/capabilities of
the passed technology-dependent communication paths
 The behavioral view: describing the different cyclic/acyclic temporal
behavior of the passed segments.
There are different opportunities to achieve a communication path between
network segments/devices (or their combinations). These are: Ethernet line
(with/without transparent communication devices), wireless path,
telecommunication network (1:1), public networks (n:m, provider-oriented),
VPN tunnel, gateway (without application data mapping), proxy (with
application data mapping), VAN access point, and IP mapping. All networks,
which cannot be connected via an IP-based communication stack, must be
connected using a proxy. For connecting non-nested/cascaded VAN subdomains via public networks, the VAN access point solution should be preferred.

15.3 Wired Industrial Communications

Wired digital communication has been an important driving force of computer control systems for the last 30 years. To allow the access to data in
various layers of an enterprise information system by different users, there is
a need to merge different digital communication systems within the plant,
control, and device levels of an enterprise network. On these different levels,
there are distinct requirements dictated by the nature and type of information
being exchanged. Network physical size, number of supported devices,
network bandwidth, response time, sampling frequency, and payload size are
some of the performance characteristics used to classify and group specific
network technologies. Real-time requirements depend on the type of
messages to be exchanged: deadlines for end-to-end data transmission,
maximum allowed jitter for audio and video stream transmission, etc.
Additionally, available resources at the various network levels may vary
significantly. At the device level, there are extremely limited resources
(hardware, communications), but at the plant level powerful computers allow
comfortable software and memory consumption.
Due to the different requirements described above, there are different types
of industrial communication systems as part of a hierarchical automation
system within an enterprise:
 Sensor/actuator networks: at the field (sensor/actuator) level


 Fieldbus systems: at the field level, collecting/distributing process data from/to sensors/actuators; the communication medium between field devices and controllers/PLCs/management consoles
 Controller networks: at the controller level, transmitting data
between powerful field devices and controllers as well as between
controllers
 Wide area networks: at the enterprise level, connecting networked
segments of an enterprise automation system.
Vendors of industrial communication systems offer a set of fitting solutions
for these levels of the automation/communication hierarchy.
15.3.1 Sensor/Actuator Networks

At this level, several well established and widely adopted protocols are
available:
 HART (HART Communication Foundation): highway addressable remote transducer, coupling analog process devices with engineering
tools [15.5]
 ASi (ASi Club): actuator sensor interface, coupling binary sensors in
factory automation with control devices [15.6].
Additionally, CAN-based solutions (CAN in Automation (CiA)) are used in widespread application fields, coupling decentralized devices with centralized devices based on the physical and MAC layers of the controller area network [15.7]. Recently, IO-Link has been specified for bi-directional digital
transmission of parameters between simple sensor/actuator devices in factory
automation [15.8, 9].
HART
HART Communication [15.5] is a protocol specification, which performs a
bi-directional digital transmission of parameters (used for configuration and
parameterization of intelligent field instruments by a host system) over analog
transmission lines. The host system may be a distributed control system
(DCS), a programmable logic controller (PLC), an asset management system,
a safety system, or a handheld device. HART technology is easy to use and
very reliable. The HART protocol uses the Bell 202 Frequency Shift Keying
(FSK) standard to superimpose digital communication signals at a low level
on top of the 4–20 mA analog signal. The HART protocol communicates at
1200 bps without interrupting the 4–20 mA signal and allows a host
application (master) to get two or more digital updates per second from a field
device. As the digital FSK signal is phase continuous, there is no interference
with the 4–20 mA signal. The HART protocol permits all digital
communication with field devices in either point-to-point or multidrop
network configurations. HART provides for up to two masters (primary and secondary). As depicted in Fig. 15.3, this allows secondary masters (such as handheld communicators) to be used without interfering with communications to/from the primary master (i.e. control/monitoring system).

Figure 15.3 A HART system with two masters (http://www.hartcomm.org)
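Because the HART digital signal is superimposed on the conventional 4–20 mA loop, the analog current still carries the primary variable. The short Python sketch below shows the usual linear scaling of loop current to a process value; the function name and the example range are illustrative only and are not part of the HART specification.

def current_to_pv(current_ma: float, pv_low: float, pv_high: float) -> float:
    """Linearly scale a 4-20 mA loop current to the configured PV range."""
    if not 4.0 <= current_ma <= 20.0:
        raise ValueError("loop current outside the 4-20 mA live-zero range")
    return pv_low + (current_ma - 4.0) / 16.0 * (pv_high - pv_low)

# Example: a transmitter ranged 0..250 degC reporting 12 mA -> 125 degC.
print(current_to_pv(12.0, 0.0, 250.0))

# In parallel, the 1200 bps FSK channel lets a host (master) poll roughly
# two digital updates per second from the same device, as noted in the text.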

ASi (IEC 62026-2)


ASi [15.6] is a network of actuators and sensors (optical, inductive,
capacitive) with binary input/output signals. An unshielded twisted pair cable
for data and power (max. 2 A; max. 100 m) enables the connection of 31
slaves (max. 124 binary signals of sensors and/or actuators). This enables a
modular design using any network topology (i. e. bus, star, tree). Each slave
can receive any available address and be connected to the cable at any
location.
AS-Interface uses the APM method (alternating pulse modulation) for data
transfer. The medium access is controlled by a master–slave principle with
cyclic polling of all nodes. ASi masters are embedded (ASi) communication
controllers of PLCs or PCs, as well as gateways to other Fieldbus systems.
To connect legacy sensors and actuators to the transmission line, various
coupling modules are used. AS-Interface messages can be classified as
follows:
 Single transactions: maximum of 4 bit information transmitted from
master to slave (output information) and from slave to master (input
information)
 Combined transactions: more than 4 bits of coherent information are
transmitted, composed of a series of master calls and slave replies in
a defined context.
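Since the ASi master polls all slaves cyclically and each single transaction carries at most 4 bits in each direction, a rough cycle-time estimate is simply the per-transaction time multiplied by the number of configured slaves. The sketch below is a back-of-the-envelope illustration; the per-transaction duration is an assumed figure, not a value from the ASi specification.

def asi_cycle_estimate(num_slaves: int, transaction_time_us: float = 150.0) -> float:
    """Rough ASi scan-cycle estimate in milliseconds.

    Each slave exchanges up to 4 output and 4 input bits per master call;
    transaction_time_us is an assumed average duration per master/slave
    transaction (illustrative only).
    """
    if not 1 <= num_slaves <= 31:
        raise ValueError("a standard ASi segment supports up to 31 slaves")
    return num_slaves * transaction_time_us / 1000.0

# Example: a fully populated segment of 31 slaves
print(f"{asi_cycle_estimate(31):.1f} ms")   # roughly 5 ms with the assumed timing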


15.3.2 Fieldbus Systems

Nowadays, Fieldbus systems are standardized (though unfortunately not unified) and widely used in industrial automation. The IEC 61158 and 61784
standards [15.11, 12] contain ten different Fieldbus concepts. Seven of these
concepts have their own complete protocol suite: PROFIBUS (Siemens,
PROFIBUS International); Interbus (Phoenix Contact, Interbus Club);
Foundation Fieldbus H1 (Emerson, Fieldbus Foundation); SwiftNet (B.
Crowder); P-Net (Process Data); and WorldFIP (Schneider, WorldFIP).
Three of them are based on Ethernet functionality: high speed Ethernet (HSE)
(Emerson, Fieldbus Foundation); Ethernet/IP (Rockwell, ODVA);
PROFINET/CBA (Siemens, PROFIBUS International). The worldwide leading positions within the automation domain, in terms of the number of installed Fieldbus nodes, are held by PROFIBUS and Interbus, followed by DeviceNet (Rockwell, ODVA), which has not been part of the IEC 61158 standard. For that reason, the basic concepts of PROFIBUS and DeviceNet will be explained very briefly. Readers interested in a more comprehensive description are referred to the related web sites.

Figure 15.4 PROFIBUS medium access control

PROFIBUS
PROFIBUS is a universal fieldbus for plantwide use across all sectors of the
manufacturing and process industries based on the IEC 61158 and IEC 61784
standards. Different transmission technologies are supported [15.10]:
 RS 485: Type of medium attachment unit (MAU) corresponding to [15.13]. Suited mainly for factory automation; for technical details see [15.10, 13]. Number of stations: 32 (master stations, slave stations, or repeaters); data rates: 9.6/19.2/45.45/93.75/187.5/500/1500/3000/6000/12000 kbit/s.
 Manchester bus powered (MBP): Type of MAU suited for process automation: line, tree, and star topology with two-wire transmission; 31.25 kBd (preferred), high-speed variants without bus powering and intrinsic safety; synchronous transmission (Manchester encoding); optional: bus-powered devices (10 mA per device; low-power option); optional: intrinsic safety (Ex-i) via additional constraints according to the FISCO model. Intrinsic safety means a type of protection in which a portion of the electrical system contains only intrinsically safe equipment (apparatus, circuits, and wiring) that is incapable of causing ignition in the surrounding atmosphere. No single device or wiring is intrinsically safe by itself (except for battery-operated self-contained apparatus such as portable pagers, transceivers, gas detectors, etc., which are specifically designed as intrinsically safe self-contained devices); it is intrinsically safe only when employed in a properly designed intrinsically safe system. There are couplers/link devices to couple MBP and RS 485 transmission technologies.
 Fibre optics.
There are two medium access control (MAC) mechanisms (Fig. 15.4):
1. Master–master traffic using token passing
2. Master–slave traffic using polling.
PROFIBUS differentiates between two types of masters:
1. Master class 1, which is basically a central controller that cyclically
exchanges information with the distributed stations (slaves) at a
specified message cycle.
2. Master class 2, which comprises engineering, configuration, or operating devices.
Slave-to-slave communication is based on the publisher/subscriber application model, using the same MAC mechanisms (a simplified sketch of the combined token-passing/polling scheme follows this list).
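The following Python sketch illustrates the combined medium access scheme in a few lines: masters pass a logical token among themselves, and the token holder polls its assigned slaves before forwarding the token. Station names, the assignment of slaves, and the textual output are purely illustrative assumptions, not part of the PROFIBUS specification.

# Minimal illustration of PROFIBUS-style medium access: token passing among
# masters, polling of slaves by the current token holder.
masters = {
    "PLC-1": ["slave-1", "slave-2"],   # class-1 master with cyclic I/O slaves
    "PLC-2": ["slave-3"],
    "ENG-STATION": [],                 # class-2 master, no cyclic slaves
}

def run_token_rotation(cycles: int = 1) -> None:
    ring = list(masters)                       # logical token ring of masters
    for cycle in range(cycles):
        for holder in ring:                    # master-master traffic (token)
            for slave in masters[holder]:      # master-slave traffic (polling)
                print(f"cycle {cycle}: {holder} polls {slave}")
            nxt = ring[(ring.index(holder) + 1) % len(ring)]
            print(f"cycle {cycle}: {holder} passes token to {nxt}")

run_token_rotation(1)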
The dominating PROFIBUS protocol is the application protocol DP (decentralized periphery), embedded into the protocol suite (Fig. 15.5).
Depending upon the functionality of the masters, there are different volumes
of DP specifications. There are various profiles, which are grouped as
follows:
1. Common application profiles (regarding functional safety,
synchronization, redundancy etc.)
2. Application field specific profiles (e.g. process automation,
semiconductor industries, motion control).
These profiles reflect the broad experience of the PROFIBUS International
organization.
DeviceNet


DeviceNet is a digital, multi-drop network that connects and serves as a communication network between industrial controllers and I/O devices. Each
device and/or controller is a node on the network. DeviceNet uses a
trunkline/drop-line topology that provides separate twisted pair busses for
both signal and power distribution. The possible variants of this topology are
shown in [15.14].
Thick or thin cables can be used for either trunklines or droplines. The
maximum end-to-end network length varies with data rate and cable
thickness. DeviceNet allows transmission of the necessary power on the
network. This allows devices with limited power requirements to be powered
directly from the network, reducing connection points and physical size.
DeviceNet systems can be configured to operate in a master-slave or a
distributed control architecture using peer-to-peer communication. At the
application layer, DeviceNet uses a producer/consumer application model.
DeviceNet systems offer a single point of connection for configuration and
control by supporting both I/O and explicit messaging.
DeviceNet uses CAN (controller area network [15.7]) for its data link layer,
and CIP (common industrial protocol) for the upper network layers. As with
all CIP networks, DeviceNet implements CIP at the session (i. e. data
management services) layer and above and adapts CIP to the specific
DeviceNet technology at the network and transport layer, and below. Figure
15.6 depicts the DeviceNet protocol suite.

Figure 15.5 PROFIBUS protocol suite

The data link layer is defined by the CAN specification and by the
implementation of CAN controller chips. The CAN specification [15.7]
defines two bus states called dominant (logic 0) and recessive (logic 1). Any
transmitter can drive the bus to a dominant state. The bus can only be in the
recessive state when no transmitter is in the dominant state. A connection with
a device must first be established in order to exchange information with that
device. To establish a connection, each DeviceNet node will implement either
an unconnected message manager (UCMM) or a Group 2 unconnected port.


Both perform their function by reserving some of the available CAN identifiers. When either the UCMM or the Group 2 unconnected port is
selected to establish an explicit messaging connection, that connection is then
used to move information from one node to the other (using a
publisher/subscriber application model), or to establish additional I/O
connections. Once I/O connections have been established, I/O data may be
moved among devices on the network.
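A DeviceNet I/O message must fit the constraints of the underlying CAN data frame: an 11-bit identifier (which, as noted below, also carries the protocol information of the message) and at most 8 data bytes. The following sketch only models those framing constraints; the class and field names are illustrative assumptions, not taken from the DeviceNet specification.

from dataclasses import dataclass

@dataclass
class CanDataFrame:
    """Minimal model of a CAN 2.0A data frame as used by DeviceNet I/O messages."""
    can_id: int      # 11-bit identifier; DeviceNet encodes the connection here
    data: bytes      # 0..8 bytes of I/O or explicit-message payload

    def __post_init__(self) -> None:
        if not 0 <= self.can_id < 2**11:
            raise ValueError("DeviceNet I/O messages use 11-bit CAN identifiers")
        if len(self.data) > 8:
            raise ValueError("a single CAN data frame carries at most 8 data bytes")

# Example: a 4-byte cyclic I/O payload on an (illustrative) identifier
frame = CanDataFrame(can_id=0x182, data=bytes([0x01, 0x00, 0xFF, 0x7A]))
print(frame)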

Figure 15.6 DeviceNet protocol suite

At this point, all the protocol variants of the DeviceNet I/O message are
contained within the 11-bit CAN identifier. CIP is strictly object oriented.
Each object has attributes (data), services (commands), and behavior
(reaction to events). Two different types of objects are defined in the CIP
specification: communication objects and application-specific objects.
Vendor-specific objects can also be defined by vendors for situations where
a product requires functionality that is not in the specification. For a given
device type, a minimum set of common objects will be implemented. An
important advantage of using CIP is that for other CIP-based networks the
application data remains the same regardless of which network hosts the device. The application programmer does not even need to know to which
network a device is connected.
CIP also defines device profiles, which identify the minimum set of objects,
configuration options, and the I/O data formats for different types of devices.
Devices that follow one of the standard profiles will have the same I/O data
and configuration options, will respond to all the same commands, and will
have the same behavior as other devices that follow that same profile. For
more information on DeviceNet readers are referred to www.odva.org.

15.3.3 Controller Networks


This network class requires powerful communication technology. Considering controller networks based on Ethernet technology, one can distinguish between (related to the real-time classes, see Sect. 15.1):
1. Local soft real-time approaches (real-time class 1)
2. Deterministic real-time approaches (real-time class 2)
3. Isochronous real-time approaches (real-time class 3).
The standardization process started in 2004. There were many candidates to
become part of the extended Fieldbus standard IEC 61158 (edition 4): high
speed Ethernet HSE (Emerson, Fieldbus Foundation); Ethernet/IP (Rockwell,
ODVA); and PROFINET/CBA (Siemens, PROFIBUS International). Nine Ethernet-based solutions have been added. In this section a short survey of
the previously mentioned real-time classes will be given, and two practical
examples will be examined.
Local Soft Real-Time Approaches (Real-Time Class 1)
These approaches use TCP (UDP)/IP mechanisms over shared and/or
switched Ethernet networks. They can be distinguished by different
functionalities on top of TCP (UDP)/IP, as well as by their object models and
application process mechanisms. Protocols based on Ethernet-TCP/IP offer
response times in the lower millisecond range but are not deterministic, since
data transmission is based on the best effort principle. Some examples are
given below.
MODBUS TCP/IP (Schneider) [15.15]. MODBUS is an application layer
messaging protocol for client/server communication between devices
connected via different types of buses or networks. Using Ethernet as the
transmission technology, the application layer protocol data unit (A-PDU)
of MODBUS (function code and data) is encapsulated into an Ethernet frame.
The connection management on top of TCP/IP controls the access to TCP.
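To make this encapsulation concrete, the sketch below assembles a MODBUS TCP request: the MODBUS PDU (function code plus data) is prefixed with the MBAP header (transaction identifier, protocol identifier 0, length, unit identifier), and the resulting ADU becomes the TCP payload carried in an Ethernet frame. The request values are arbitrary example data.

import struct

def modbus_read_holding_registers(transaction_id: int, unit_id: int,
                                  start_addr: int, quantity: int) -> bytes:
    """Build a MODBUS TCP ADU for function 0x03 (Read Holding Registers)."""
    pdu = struct.pack(">BHH", 0x03, start_addr, quantity)   # function code + data
    mbap = struct.pack(">HHHB",
                       transaction_id,   # echoed by the server in its response
                       0x0000,           # protocol identifier: 0 = MODBUS
                       len(pdu) + 1,     # remaining byte count: unit id + PDU
                       unit_id)
    return mbap + pdu                    # this ADU is the TCP payload

# Example: read 10 registers starting at address 0 from unit 1
print(modbus_read_holding_registers(1, 1, 0, 10).hex())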
Ethernet/IP (Rockwell, ControlNet International, Open DeviceNet Vendor
Association) uses a common industrial protocol CIP [15.16]. In this context,
IP stands for industrial protocol (not for Internet protocol). CIP represents a
common application layer for all physical networks of Ethernet/IP,
ControlNet and DeviceNet. Data packets are transmitted via a CIP router
between the networks. For the real-time I/O data transfer, CIP works on top
of UDP/IP. For the explicit messaging, CIP works on top of TCP/IP. The
application process is based on a producer/consumer model.

High Speed Ethernet HSE (Fieldbus Foundation) [15.17]. A field device agent represents a specific Fieldbus Foundation application layer function (including Fieldbus message specification). Additionally, there are HSE communication profiles to support the different device categories: host device, linking device, I/O gateway, and field device. These devices share
the tasks of the system using distributed function block applications.
PROFINET (PNO PROFIBUS User Organization, Siemens) [15.19]. This approach uses the object model CBA (component-based architecture) and the DCOM wire protocol with remote procedure call mechanisms (DCE RPC, OSF C 706) to transmit the soft real-time data. Open source code and various exemplary implementations/ports for different operating systems are available on the PNO web site.
P-Net on IP (Process Data) [15.20]. Based on the P-Net Fieldbus standard IEC 61158 Type 4 [15.11], P-Net on IP contains the mechanism to use P-Net in an IP environment. Therefore, P-Net PDUs are wrapped into UDP/IP packets, which can be routed through IP networks. Nodes on the IP
network are addressed with two P-Net route elements. P-Net clients (master)
can access servers on an IP network without knowing anything about IP
addresses.
All of the above mentioned approaches are able to support widely used office
domain protocols, such as SMTP, SNMP, and HTTP. Some of the approaches
support BOOTP and DHCP for web access and/or for engineering data
exchange. But the object models of the approaches differ.
Deterministic Real-Time Approaches (Real-Time Class 2)
These approaches use a middleware on top of the MAC layer to implement
scheduling and smoothing functions. The middleware is normally represented
by a software implementation. Industrial examples include the following.
PROFINET (PROFIBUS International, Siemens) [15.19]. This variant of the
Ethernet-based PROFINET IO system (using the main application model
background of the Fieldbus PROFIBUS DP) uses the object model IO
(input/output). Figure 15.7 roughly depicts the PROFINET protocol suite,
containing the connection establishment for PROFINET/CBA via
connection-oriented RPC on the left side, as well as for the PROFINET IO
via connectionless RPC on the right side. The exchange of (mostly cyclic)
productive data uses the real-time functions in the center.
The PROFINET IO service definition and protocol specification [15.21]
covers the communication between programmable logical controllers (PLCs),
supervisory systems, and field devices or remote input and output devices.
The PROFINET IO specification complies with IEC 61158, Parts 5 and 6,
especially the Fieldbus application layer (FAL). The PROFINET protocol is
defined by a set of protocol machines.


Figure 15.7 PROFINET protocol suite (active control connection object (ACCO), connection-oriented (CO), connectionless (CL), remote procedure call (RPC))

Time-Critical Control Network (Tcnet, Toshiba) [15.23]. Tcnet specifies in the application layer a so-called common memory for time-critical applications, and uses the same mechanisms as mentioned for PROFINET IO
for TCP(UDP)/IP-based non real-time applications. An extended data link
layer contains the scheduling functionality. The common memory is a virtual
memory globally shared by participating nodes as well as application
processes running on each node. It provides a temporal and spatial coherence
of data distribution. The common memory is divided into blocks with several
memory lengths. Each block is transmitted to member nodes using multicast
services, supported by a publisher node. A cyclic broadcast transmission
mechanism is responsible for refreshing the data blocks. Therefore, the
common memory consists of dedicated areas for the transmitting data to be
refreshed in each node. Thus, the application program of a node has quick
access to all (distributed) data.
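The common-memory idea can be illustrated with a few lines of Python: every node holds a local copy of the shared blocks, and each block is refreshed cyclically by its publisher node and distributed to all members. The block layout, node names, and data values below are illustrative assumptions, not details of the Tcnet protocol.

# Illustrative Tcnet-style common memory with cyclic multicast refresh.
BLOCK_SIZES = {"block_A": 8, "block_B": 4}      # block name -> length in bytes

class Node:
    def __init__(self, name: str, publishes: str) -> None:
        self.name = name
        self.publishes = publishes               # block refreshed by this node
        self.common_memory = {b: bytearray(n) for b, n in BLOCK_SIZES.items()}

nodes = [Node("ctrl-1", "block_A"), Node("ctrl-2", "block_B")]

def refresh_cycle(fresh_data: dict) -> None:
    """One cyclic broadcast: each publisher's block is copied into every node."""
    for publisher in nodes:
        payload = fresh_data[publisher.publishes]
        for member in nodes:                      # multicast to all member nodes
            block = member.common_memory[publisher.publishes]
            block[:len(payload)] = payload

refresh_cycle({"block_A": b"\x01\x02\x03", "block_B": b"\xff"})
print(nodes[1].common_memory["block_A"].hex())    # ctrl-2 now sees ctrl-1's data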
The application layer protocol (FAL) consists of three protocol machines: the
FAL service protocol machine (FSPM), the application relationship protocol
machine (ARPM), and the data link mapping protocol machine (DMPM). The
scheduling mechanism in the data link layer follows a token passing
mechanism.
Vnet (Yokogawa) [15.24]. Vnet supports up to 254 subnetworks with up to
254 nodes each. In its application layer, three kinds of application data
transfers are supported:
 A one-way communication path used by an endpoint for inputs or
outputs (conveyance paths)
 A trigger policy
 Data transfer using a buffer model or a queue model (conveyance
policy).


The application layer FAL contains three types of protocol machines:
the FSPM FAL service protocol machine, ARPMs application
relationship protocol machines, and the DMPM data link layer mapping
protocol machine. For real-time data transfer, the data link layer offers
three services:
1. Connection-less DL service
2. DL-SAP management service
3. DL management service.
Real-time and non-real-time traffic scheduling is located on top of the MAC
layer. Therefore, one or more timeslots can be used within a macro-cycle
(depending on the service subtype). The data can be ordered by four priorities:
urgent, high, normal, time available. Each node has its own synchronized
macro-cycle. The data link layer is responsible for clock synchronization.
Isochronous Real-Time Approaches (Real-Time Class 3)
The main examples are as follows.
Powerlink (Ethernet PowerLink Standardization Group (EPSG), Bernecker
and Rainer), developed for motion control [15.25]. Powerlink offers two
modes: protected mode and open mode. The protected mode uses a
proprietary (B&R) real-time protocol on top of the shared Ethernet for
protected subnet- works. These subnetworks can be connected to an open
standard network via a router. Within the protected subnetwork the nodes
cyclically exchange real-time data avoiding collisions. The scheduling
mechanism is a time-division scheme. Every node uses its own time slot [slot
communication network management (SCNM)] to send its data. The
mechanism uses a manager node, which acts comparably to a bus master, and managed nodes, which act similarly to slaves. This mechanism avoids Ethernet
collisions. The Powerlink protocol transfers the real-time data isochronously.
The open mode can be used for TCP(UDP)/IP based applications. The
network normally uses switches. The traffic has to be transmitted within an
asynchronous period of the cycle.
EtherCAT [EtherCAT Technology Group (ETG), Beckhoff] developed as a
fast backplane communication system [15.26]. EtherCAT distinguishes two
modes: direct mode and open mode. Using the direct mode, a master device
uses a standard Ethernet port between the Ethernet master and an EtherCAT
segment. EtherCAT uses a ring topology within the segment. The medium
access control adopts the master/slave principle, where the master node
(typically the control system) sends the Ethernet frame to the slave nodes
(Ethernet device). One single Ethernet device is the head node of an EtherCAT segment consisting of a large number of EtherCAT slaves with their
own transmission technology. The Ethernet MAC address of the first node of
a segment is used for addressing the EtherCAT segment. For the segment, special hardware can be used. The Ethernet frame passes each node. Each node identifies its subframe and receives/sends the suitable information using that subframe. Within the EtherCAT segment, the EtherCAT slave devices
extract data from and insert data into these frames. Using the open mode, one
or several EtherCAT segments can be connected via switches with one or more master devices and Ethernet-based basic slave devices.
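The "processing on the fly" idea, in which a single frame visits every slave in the ring and each slave reads and writes only its own sub-range of the frame, can be illustrated with a short Python sketch. The slave names, offsets, and data sizes below are arbitrary example values, not taken from the EtherCAT specification.

# Illustrative on-the-fly processing of one EtherCAT-style frame: the frame
# travels through the ring and every slave extracts its output data from, and
# inserts its input data into, a fixed sub-range of the process-data area.
frame = bytearray(16)                       # process-data area of one frame

slaves = [                                  # (name, offset, input bytes to insert)
    ("drive-1", 0, b"\x11\x22"),
    ("io-block", 2, b"\xaa"),
    ("encoder", 3, b"\x05\x06\x07"),
]

for name, offset, inputs in slaves:         # the frame passes each node once
    outputs = bytes(frame[offset:offset + len(inputs)])   # slave reads its data
    frame[offset:offset + len(inputs)] = inputs           # and writes its inputs
    print(f"{name}: read {outputs.hex()}, wrote {inputs.hex()}")

print("frame returned to master:", frame.hex())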
PROFINET IO/Isochronous Technology (PROFIBUS User Organization,
Siemens) developed for any industrial application [15.27]. PROFINET
IO/Isochronous Technology uses a middleware on top of the Ethernet MAC
layer to enable high-performance transfers, cyclic data exchange and event-controlled signal transmission. The layer 7 functionality is directly linked to
the middleware. The middleware itself contains the scheduling and smoothing
functions. This means that TCP/IP does not influence the PDU structure. A
special Ethertype is used to identify real-time PDUs (only one PDU type for
real-time communication). This enables easy hardware support for the real-time PDUs. The technical background is a 100 Mbps full-duplex Ethernet
(switched Ethernet). PROFINET IO adds an isochronous real-time channel to
the RT channels of real-time class 2 option channels. This channel enables a
high-performance transfer of cyclic data in an isochronous mode [15.28].
Time synchronization and node scheduling mechanisms are located within
and on top of the Ethernet MAC layer. The offered bandwidth is separated
for cyclic hard real-time and soft/non real-time traffic. This means that within
a cycle there are separate time domains for cyclic hard real-time, for soft/non
real-time over TCP/IP traffic, and for the synchronization mechanism, see
also Fig. 15.8.

Figure 15.8 LMPM MAC access used in PROFINET IO (cyclic real-time (cRT), acyclic real-time
(aRT), nonreal-time (non RT), medium access control (MAC), link layer mapping protocol machine
(LMPM))

The cycle time should be in the range of 250 μs (35 nodes) up to 1 ms (150 nodes) when TCP/IP traffic of about 6 Mbps is transmitted simultaneously. The jitter will be less than 1 μs. PROFINET IO/IRT uses switched Ethernet (full duplex). Special four-port and two-port switch ASICs have been developed and allow the integration of the switches into the devices (nodes), substituting the legacy communication controllers of Fieldbus systems. Distances of 100 m per segment (electrical) and 3 km per segment (fiber-optic) can be bridged.
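The bandwidth partitioning described above can be illustrated with a simple budget calculation: part of each send cycle is reserved for the isochronous (cyclic hard real-time) phase and the remainder is left for soft/non-real-time TCP/IP traffic. The figures used below are assumptions for illustration, not values prescribed by the PROFINET specification.

def cycle_budget(cycle_us: float, irt_share: float, link_mbps: float = 100.0) -> dict:
    """Split a send cycle into an isochronous phase and an open (TCP/IP) phase.

    irt_share is the assumed fraction of the cycle reserved for cyclic
    hard real-time traffic; the rest is available for TCP/IP frames.
    """
    irt_us = cycle_us * irt_share
    open_us = cycle_us - irt_us
    open_bandwidth_mbps = link_mbps * open_us / cycle_us   # average usable rate
    return {"irt_phase_us": irt_us,
            "open_phase_us": open_us,
            "tcp_ip_bandwidth_mbps": open_bandwidth_mbps}

# Example: a 1 ms cycle with half the cycle reserved for isochronous traffic
# still leaves roughly 50 Mbps on a 100 Mbps full-duplex link, comfortably
# above the ~6 Mbps TCP/IP load quoted in the text.
print(cycle_budget(1000.0, 0.5))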


Ethernet/IP with Time Synchronization (ODVA, Rockwell Automation). Ethernet/IP with time synchronization [15.29], an extension of Ethernet/IP,
uses the CIP Synch protocol to enable the isochronous data transfer. Since the
CIP Synch protocol is fully compatible with standard Ethernet, additional
devices without CIP Synch features can be used in the same Ethernet system.
The CIP Synch protocol uses the precision clock synchronization protocol
[15.30] to synchronize the node clocks using an additional hardware function.
CIP Synch can deliver a time-synchronization accuracy of less than 500 ns
between devices, which meets the requirements of the most demanding real-time applications. The jitter between master and slave clocks can be less than
200 ns.
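Precision clock synchronization of this kind relies on timestamped message exchanges between a master clock and a slave clock. Assuming the usual two-step offset/delay calculation used by such protocols (the concrete protocol is specified in [15.30]), the slave derives its clock offset and the mean path delay from four timestamps, as sketched below with illustrative values.

def ptp_offset_and_delay(t1: float, t2: float, t3: float, t4: float) -> tuple:
    """Two-step offset/delay estimate from a sync + delay-request exchange.

    t1: master sends Sync          (master clock)
    t2: slave receives Sync        (slave clock)
    t3: slave sends Delay_Req      (slave clock)
    t4: master receives Delay_Req  (master clock)
    """
    offset = ((t2 - t1) - (t4 - t3)) / 2.0   # slave clock minus master clock
    delay = ((t2 - t1) + (t4 - t3)) / 2.0    # mean one-way path delay
    return offset, delay

# Example with illustrative timestamps in microseconds
offset, delay = ptp_offset_and_delay(t1=0.0, t2=10.3, t3=50.0, t4=60.1)
print(f"offset = {offset:.2f} us, delay = {delay:.2f} us")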
SERCOS III (IG SERCOS Interface e.V.). A SERCOS network [15.31], developed for motion control, consists of masters and slaves. Slaves contain integrated repeaters, which have a constant delay time Trep (input/output).
The nodes are connected via point-to-point transmission lines. Each node
(participant) has two communication ports, which are interchangeable. The
topology can be either a ring or a line structure. The ring structure consists of
a primary and a secondary channel. All slaves work in forwarding mode. The
redundancy provided by the ring structure prevents any downtime caused by
a broken cable. The line structure consists of either a primary or a secondary
channel. The last physical slave performs the loopback function. All other
slaves work in forwarding mode. No redundancy against cable breakage is
achieved. It is also possible to insert and remove slaves during operation (hot
plug). This is restricted to the last physical slave.

15.4 Wireless Industrial Communications

Wireless communication networks are increasingly penetrating the application area of wired communication systems and are therefore faced with the requirements of industrial automation. Wireless technology has
been introduced in automation as wireless local area networks (WLAN) and
wireless personal area networks (WPAN). Currently, the wireless sensor
networks (WSN) are under discussion especially for process automation. The
basic standards are the following:
 Mobile communications standards: GSM, GPRS, and UMTS; wireless telephones (DECT)
 Lower layer standards (IEEE 802.11: Wireless LAN [15.32], and
802.15 [15.33]: personal area networks) as a basis of radio-based local
networks (WLANs, Pico networks and sensor/actuator networks)
 Higher layer standards (application layers on top of IEEE 802.11 and 802.15.4, e.g. Wi-Fi, Bluetooth [15.34], WirelessHART, and ZigBee [15.35])
 Proprietary protocols for radio technologies (e.g. wireless interface for sensors and actuators (WISA) [15.36])
 Upcoming radio technologies such as ultra-wide band (UWB) and
WiMedia
15.4.1 Wireless Local Area Networks (WLAN)

The term WLAN refers to a wireless version of the Ethernet used to build
computer networks for office and home applications. The original standard
(IEEE 802.11) specified an infrared, a direct sequence spread spectrum (DSSS), and a frequency hopping spread spectrum (FHSS) physical layer. WLAN has approval to use special frequency bands; however, it has to share the medium with other users. The Wi-Fi Alliance was founded to assure interoperability between WLAN clients and access points of different vendors. Therefore, a certification procedure and a Wi-Fi logo are provided.
WLANs use a license-free frequency band, and no service provider is
necessary.
WLAN is a mature technology, and it is implemented in PCs, laptops, and
PDAs. Modules for embedded systems development are also available.
WLAN can be used almost worldwide. Embedded WLAN devices need a
powerful microcontroller. WLAN enables wireless access to Ethernet based
LANs and is helpful for the vertical integration in an automated
manufacturing environment. It offers high speed data transmission that can
be used to transmit productive data and management data in parallel. The
WLAN propagation characteristics fit into a number of possible automation
applications. WLAN enables more flexibility and a cost-effective installation
in automation associated with mobility and localization. The transition to
Ethernet is simple and other gateways are possible. The largest part of the
implementation is achieved in hardware; however, improvements can be
made above the MAC layer.
15.4.2 Wireless Sensor/Actuator Networks

Various wireless sensor network (WSN) concepts are under discussion, especially in the area of industrial automation. Features such as time
synchronized operation, frequency hopping, self-organization (with respect
to star, tree, and mesh network topologies), redundant routing, and secure data
transmission are desired. Interesting surveys on this topic are available in
[15.41–44]. Process automation requirements can be generally fulfilled by
two mesh network technologies:


 ZigBee (ZigBee Alliance) [15.35]
 WirelessHART [15.45, 46].
Both technologies use the standard IEEE 802.15.4 (2003) low-rate wireless personal area network (WPAN) [15.33], specifying the physical layer and parts of the data link layer (medium access control).
ZigBee
ZigBee distinguishes between three device types:
 Coordinator ZC: root of the network tree, storing the network information and security keys. It is responsible for connecting the ZigBee network to other networks.
 Router ZR: transmits data of other devices.
 End device ZED: automation device (e.g. sensor), which can
communicate with ZR and ZC, but is unable to transmit data of other
devices.
An enhanced version allows one to group devices and to store data for
neighboring devices. Additionally, to save energy, there are full-function
devices and reduced-function devices. The ZigBee application layer (APL)
consists of three sublayers: application support layer (APS) (containing the
connection lists of the connected devices), an application framework (AF),
and Zigbee device objects (ZDO) (definition of devices roles, handling of
connection requests, and establishment of communication relations between
devices).
For process automation, the ZigBee application model and the ZigBee
profiles are very interesting. The application functions are represented by
application objects (AO), and the generic device functions by device objects
(DO). Each object of a ZigBee profile can contain one or more clusters and
attributes, transferred to the target AO (in the target device) directly or to a
coordinator, which transfers them to one or more target objects.
WirelessHART
Revision 7 of HART protocol includes the specification of WirelessHART
[15.46]. The mesh type network allows the use of redundant communication
paths between the radio-based nodes. The temporal behavior is determined
by the time synchronized mesh protocol (TSMP) [15.47, 48]. TSMP enables
a synchronous operation of the network nodes (called motes) based on a time
slot mechanism. It uses various radio channels (supported by the MAC layer)
for end-to-end communication between distributed devices. It works comparably to a frequency hopping mechanism, which is missing from the basic standard IEEE 802.15.4.


TSMP supports star, tree, as well as mesh topologies. All nodes have the
complete routing function (contrary to ZigBee). A self-organization
mechanism enables devices to acquire information of neighboring nodes and
to establish connections between them. The messages have their own network
identifier. Thus, different networks can work together in the same radio area.
Each node has its own list of neighbors, which can be updated when failures have been recognized.
To support security, TSMP uses mechanisms for encryption (128-bit
symmetric key), authentication (32-bit MIC for the source address), and integrity (32-bit MIC for the message content). Additionally, the frequency hopping
mechanism improves the security features. For detailed information see
[15.46].
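Time-synchronized, channel-hopping MAC schemes of this kind typically derive the radio channel for each link from the absolute slot number and a per-link channel offset. The mapping below shows the generic form used by such protocols; it is given only to illustrate the principle and is not quoted from the WirelessHART or TSMP specifications.

# Illustrative slot-and-channel schedule for a time-synchronized,
# channel-hopping mesh MAC (TSMP-like).
HOP_SEQUENCE = list(range(16))          # e.g. 16 channel indices of IEEE 802.15.4

def channel_for_slot(absolute_slot: int, channel_offset: int) -> int:
    """Pick the channel index for a link in a given time slot."""
    return HOP_SEQUENCE[(absolute_slot + channel_offset) % len(HOP_SEQUENCE)]

# Two links with different offsets never collide in the same slot:
for slot in range(3):
    print(slot, channel_for_slot(slot, 0), channel_for_slot(slot, 5))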

15.5 Wide Area Communications

With the application of remote automation mechanisms (remote supervision, operation, service) using wide area networks, the stock of existing
communication technology becomes broader and includes the following
[15.18]:
 All appearances of the Internet (mostly supporting best effort quality
of services)
 Public digital wired telecommunication systems: either line-switched
[e.g. integrated services digital network (ISDN)] or packet-switched
[such as asymmetric/symmetrical digital subscriber line (ADSL,
SDSL)]
 Public digital wireless telecommunication systems (GSM-based, GPRS-based, UMTS-based)
 Private wireless telecommunication systems, e.g. trunk radio systems.
The transition between different network technologies can be made easier by
using multiprotocol label switching (MPLS) and synchronous digital
hierarchy (SDH). There are several private protocols (over leased lines,
tunneling mechanisms, etc.) that have been used in the automation domain
using these technologies. Most of the wireless radio networks can be used in
non-real-time applications, some of them in soft real-time applications;
however, industrial environments and the industrial, scientific, and medical (ISM) band limit the applications. Figure 15.9 depicts the necessary remote
channels.
The end-to-end connection behavior via these telecommunication systems
depends on the currently offered quality of service (QoS). This strongly limits the
use of these systems within the automation domains. Therefore, the following
application areas have to be distinguished:


Non-Real-Time Communication in Automation
 Non-real-time communication (standard IT: upload/download, SNMP) with lower priority than real-time communication: for configuration, diagnostics, automation-specific up/download.
 Manufacturing-specific functions, context management,
establishment of application relationships and connection
relationships to configure IO devices, application monitoring to read
status data (diagnostics, I&M), read/write services (HMI, application
program), open loop control.
The automation domain has the following impact on non-real-time WAN
connections: addressing between multiple distributed address spaces, and
redundant transmission for minimized downtime to ensure its availability for
a running distributed application.
Real-Time Communication in Automation
 Cyclic real-time communications (i. e. PROFINET IO data) for closed
loop control and acyclic alarms (i. e. PROFINET IO alarms) as major
manufacturing-specific services
 Transfer (and addressing) methods for RT data across WAN can be
distinguished as follows:

Figure 15.9 Remote communication channels over WAN (input/output (IO); communication relation (CR))

– MAC based: Tunnel (real-time class 1, partially real-time class 2 for longer send intervals, e.g. 512 ms), clock across WAN
and reserved send phase for real-time transmission
– IP based: Real-time over UDP (routed); web services based
[15.49, 50].
The automation domain has the following impact on real-time WAN
connections: a constant delay-sensitive and jitter-sensitive real-time base load
(e.g. in LAN: up to 50% bandwidth reservation for real-time transmission).


To use a wide area network for geographically distributed automation functions, the following basic design decisions were made following the
definitions in Sect. 15.2:
 A virtual automation network (VAN) is an infrastructure for standard
LAN-based distributed industrial automation concepts (e.g.
PROFINET or other) in an extended environment. The productive
automation functions (applications) are described by their object
models used in existing industrial communications. The application
service elements (ASEs), as they are specified in the IEC 61158
standard, can additionally be used.
 The establishment of the end-to-end connections between distributed
objects within a heterogeneous network is based on web services.
Once this connection has been established, the runtime channel
between these objects is equivalent to the runtime channel within the
local area by using PROFINET (or other) runtime mechanisms.
 The VAN addressing scheme is based on names to avoid the use of
IP and MAC addresses during establishment of the end-to-end path
between logically connected applications within a VAN domain.
Therefore, the IP and MAC addresses remain transparent to the
connected application objects.
 Since there is no new Fieldbus or real-time Ethernet protocol, no new
specified application layer is necessary. Thus, the well-tried models
of industrial communications (as they are specified in the IEC 61158
standard) can be used. Only the additional requirements caused by the
influence of wide area networks have to be considered and they lead
to additional functionality following the above-mentioned design
guidelines.
Most of the WAN systems that offer quality-of-service (QoS) support cannot
provide real guarantees, and this strongly limits the use of these systems
within the automation domain. To guarantee a defined QoS for data
transmission between two application access points via a wide area network,
written agreements between customer and service provider [service level
agreements (SLA)] must be contracted. In cases where the provider cannot
deliver the promised QoS, an alternative line must be established to hold the
connection for the distributed automation function (this operation is fully
transparent to the application function). This line should be available from
another provider, independent of the currently used provider. The automation devices [so-called VAN access points (VAN-APs)] should support functions to switch (either manually or automatically) to an alternative line [15.51].
There are different mechanisms to realize a connection between remote
entities:


 The VAN switching connection: the logical connection between two VAN-APs over a WAN or a public network. One VAN switching connection owns one or more communication paths. A VAN switching line is defined as one physical communication path between two VAN-APs over a WAN or a public network. The endpoints of a switching connection are VAN-APs.
 The VAN switching line: the physical communication path between two VAN-APs over a WAN or a public network. A VAN switching line has its own provider and QoS parameters. If a provider offers connections with different warranted QoS, each of these shall be a new VAN switching line.
 VAN switching endpoint access: the physical communication path
between one VAN-AP and a WAN or a public network. This is a
newly introduced class for using the switching application service
elements of virtual automation networks for communication via WAN
or public networks.
These mechanisms are very important for the concept of VANs using
heterogeneous networks for automation. Depending on the priority and
importance of the data transmitted between distributed communications
partners, the kind of transportation service and communication technology is
selected based on economical aspects. The VAN provider switching considers
the following alternatives:
 Use case 1: For packet-oriented data transmission via public networks
a connection from a corresponding VAN-AP to the public network
has to be established. The crossover from/to the public network is
represented by the VAN switching endpoint access. The requirements
made for this line have to be fulfilled by the service level agreements
from the chosen provider. Within the public network it is not possible
to influence the quality of service. The data package leaves the public network when the VAN switching endpoint access of the opposite communication partner is reached. The connection from the public network to that VAN-AP is also provided by the same or an alternative provider and guarantees defined requirements. The data
exchange between two communication partners is independent of
each other.
 Use case 2: For a connection-oriented data transmission (or data packages with high-level priority) the use of manageable data transport technology is needed. The VAN switching line represents a manageable connection. A direct known connection between two
VAN-APs has to be established and a VAN switching endpoint access
is not needed. The chosen provider guarantees the defined
requirements for the complete line. When the current line loses the
promised requirements, it is possible to define the VAN-APs to build up an alternative line and to hold or disconnect the current line automatically.

15.6 Emerging Trends

The number of commercially available industrial communication protocols has continued to increase, despite some attempts to converge to a single, unified protocol, in particular during the definition of the IEC 61158 standard;
the automation community has started to accept that no single protocol will
be able to meet all different communication requirements from different
application areas. This trend will be continued by the emerging wireless
sensor networks as well as the integration of wireless communication
technologies in all mentioned automation-related communication concepts.
Therefore, increasing attention has been given to concepts and techniques to
allow integration among heterogeneous networks, and within this context
virtual automation networks are playing an increasing role.
With the proliferation of networked devices with increasing computing
capabilities, the trend of decentralization in industrial automation systems
will increase in the future (Figs. 15.10 and 15.11). This situation will lead to
an increased interest in autonomic systems with self-X capabilities, where X stands for alternatives such as configuration, organizing, optimizing, healing, etc.
The idea is to develop automation systems and devices that are able to manage
themselves given high level objectives. Those systems should have sufficient
degrees of freedom to allow a self-organized behavior, which will adapt to
dynamically changing requirements. The ability to deal with widely varying
time and resource demands while still delivering dependable and adaptable
services with guaranteed temporal qualities is a key aspect for future
automation systems.

Figure 15.10 The wireless factory


Figure 15.11 Indoor positioning systems in the Smart Factory


Session 16
Communication for distributed control
systems
Content
Introduction ............................................................................................... 28
16.1 Motivations for the Fieldbus .......................................................... 28
16.2 Fieldbus Topology ......................................................................... 31
16.3 Architecture of the Fieldbus .......................................................... 32
16.4 The Physical Layer ........................................................................ 33
16.5 The Data Link Layer ...................................................................... 34
16.5.1 The Link Active Scheduler (LAS).......................................... 35
16.5.2 Cyclic Communication ........................................................... 36
16.5.3 Acyclic/Unscheduled Communication ................................... 37
16.5.4 Macro Cycle and Elementary Cycle ....................................... 38
16.6 The Application Layer ................................................................... 39
16.6.1 Fieldbus Access Sublayer ....................................................... 39
16.6.2 The Fieldbus Message Sublayer (FMS) .................................40
16.7 Fieldbus Devices ............................................................................ 41
16.7.1 Communications Stack ........................................................... 41
16.7.2 Transducer Block .................................................................... 42
16.7.3 Resource Block ....................................................................... 42
16.7.4 Function Block ....................................................................... 42
16.8 Network Management Structure .................................................... 46
16.9 System Management Functions ..................................................... 46
16.10 Device Description ..................................................................... 47


Introduction

Embedded electronics technology has led to a significant rise in the
number of automatic devices for industrial data acquisition, transmission,
monitoring, diagnostics, control and supervision. Each of these devices is
configurable and capable of two-way communication with other devices.
Effective use of their capabilities can only be enabled by reliable and high-
speed communication architecture for extensive and rapid information
exchange among automation devices for coordination and control. Below we
introduce some of the major motivations that led to major users and suppliers
from the U.S., Japan and Europe coming together to establish the Fieldbus
Foundation in 1994. Their objective has been to develop a worldwide, unified
specification of "Fieldbus", a network communication architecture for field
devices for process control and manufacturing automation.

16.1 Motivations for the Fieldbus

Among the major motivations for the Fieldbus are the following.
 Replacement of analog and digital (serial) point-to-point
communication technology with a much superior digital
communication network for high-speed, ubiquitous and reliable
communication within a harsh industrial environment.
 Enhanced data availability from smart field bus devices needed for
advanced automation functions such as control, monitoring,
supervision etc.
 Easy configurability and interoperability of system components
leading to an easily installable, maintainable and upgradeable open
system that leverages the computing and networking hardware and
software solutions
In industrial automation systems, the field signals have been traditionally
transmitted to the control room using point-to-point communication methods
that employ analog technologies such as the 4-20 mA current loop or, more
recently, digital ones such as RS-422 or RS-485. The main disadvantage
of this is the greatly increased cost of cabling, due to the need for a separate
pair of wires for each device connected to the mainframe. Apart from this,
with the 4-20 mA analog current loop, signals can be transmitted only in one
direction. With the need for more complex monitoring and control of a
process plant, installation and maintenance of these point to point
communication media and their signal integrity become more and more
difficult. As an alternative the network communication architecture presents
an attractive option. Firstly, the cabling requirements are marginally


increased as more and more devices are added to the network. Secondly, a
vast array of high-speed networking technologies is available at attractive
costs from the computer market. Thirdly, with the addition of intelligent
devices, such a system enables advanced monitoring supervision and control,
leading to improvements in productivity, quality and reliability of industrial
operations.

Figure 16.1 Wiring system for conventional point-to-point communication systems and the Fieldbus

Fieldbus is a standard for a Local Area Network (LAN) of industrial
automation field devices that enables them to intercommunicate. Typical
Fieldbus devices are sensors, actuators, controllers of various types, such as
PLCs, and DCS, and other computer systems such as human-machine
interfaces, process management servers etc. It includes standards for the
network protocol as well as standards for the devices on the network.
Fieldbus allows many input and output variables to be transmitted on the same
medium such as, a pair of metallic wires, optical fibre or even radio, using
standard digital communication technologies such as baseband time-division
multiplexing or frequency-division multiplexing. Thus, sensors transmit the
measured signal values as well as other diagnostic information; the controllers
compute the control signals based on these and transmit them to actuators.
Further, advanced features such as process monitoring can be carried out
leading to increased fault tolerance. Online process auto-tuning can be
performed leading to optimized performance of control loops.


Table 16.1 compares some of the key features of 4-20mA and Fieldbus
technology. It should be mentioned that Fieldbus becomes cost-effective only
beyond a certain scale of operations.
Table 16.1 Comparison of Fieldbus with 4-20mA current loop

Item No.  Specification                          4-20mA                 Fieldbus
1         No. of devices per wire                1                      32
2         Qty. of data/variables per device      1                      Up to thousands
3         Control functions in field             No                     Yes
4         Device failure notification            Minimal (O/C, S/C)     Yes, detailed
5         Signal degradation over wire           Possible               None
6         Power distribution over wire           Yes                    Yes
7         Interchangeability of field devices    Yes                    Yes (with some restrictions)
8         Maximum run-length                     2 km                   1.9 km (5.7 km with repeaters)
9         Failure diagnosis                      Technician required    Operator informed at console
10        Intrinsic safety                       With barriers          With barriers
11        Sampling delay                         Vendor defined         User defined (within limits)

Fieldbus technology was designed for the geographically distributed, harsh
environments of process control applications. Also, it was anticipated that there
would be frequent changes in the installations. To meet these requirements
the protocol includes the following aspects which are not necessarily found
in other Protocols:
 Control algorithms may reside in field-mounted Devices, in central
controllers, or in a combination of both.
 The End User does not have to be concerned with Device numerical
address allocation. The Protocol handles this task, so 'plug and play'
services are available for commissioning, modification and
replacement.
 Devices do not have to be 'configured' before they are attached to the
network.
 Device Definition and Function Blocks create a standard vendor-
independent device interface for each device type which, in turn,
facilitate installation, commissioning and upgradation of multi-vendor
applications.


 The Physical Layer of the Protocol was designed from the outset to
cope with installed cables and flammable atmospheres (hazardous
areas).
 Both precise cyclic updates as well as acyclic and sporadic
communications are catered for within the Protocol.
 Each variable transmitted on the Fieldbus carries with it tags
indicating the current health of the source. Using this information,
recipient Devices can take appropriate action immediately (for
example switch to Manual, Off-line, etc.).

16.2 Fieldbus Topology

As shown in Figure 16.2, Fieldbus generally uses one of two topologies -
Bus and Tree. With the Bus Topology, devices are connected to the network
'backbone', either through a 'Drop Cable' or directly to
the Bus by a 'Splice' connection. The Tree arrangement is used where a
number of Devices share a similar location remote from the equipment room.
A junction box, installed at the geographic center of gravity of the Devices,
has each Device connected to it via a cable. In general,
Fieldbuses can use a combination of both topologies. Thus, trees can be hung
from network buses.

Figure 16.2 Tree and Bus Structures for Fieldbus


16.3 Architecture of the Fieldbus

The Open Systems Interconnection (OSI) model published by the International
Organization for Standardization (ISO) is a well-known reference model for network
communications. It defines the seven generic
'Layers' required by a communication standard capable of supporting vast
networks.
The first two layers, namely the Physical and the Data Link Layer (DLL), incorporate
the technologies to realize a reliable, relatively error-free and high-speed
communication channel among the communicating devices. The Physical Layer provides
support for all standard and medium-dependent functions for physical
communication. The DLL manages the basic communication protocol as
well as error control over the data handed down by the higher layers.
In Fieldbus, since the communication takes place over a fixed network, the routing
and transport layers are redundant. Moreover, in an industrial control
environment, the network software entities or processes are also generally
invariant. Under such a situation, the requirements of the session and the
presentation layers are also minimized. Therefore, the third, fourth, fifth and
sixth layers of the ISO protocols have been omitted in the Fieldbus protocol.
In fact, the requirements of the omitted layers, although limited, have been
included within the Fieldbus Application Layer (FAL) (7), which is
subdivided into two sub-layers: the Fieldbus Message Sub-layer (FMS),
which builds up a message data structure for communication as per the
requirements of the user layer and includes the roles of the session and
presentation layers of the ISO-OSI model, and the Fieldbus
Access Sub-layer (FAS), which manages the functionality of the networking
and transport layers to the extent needed and provides a virtual
communication channel. Thus, the Foundation Fieldbus utilizes only three
ISO model Layers (1, 2 and 7), plus an additional Layer referred to as the
User Layer (8).
In the Fieldbus standard, the User Layer (8) is also included in the
specification. In this it differs from other communication standards. A typical
function of the User Layer is to define control tasks for a process plant. These
are achieved through abstract software units called Function Blocks. Defining
the User Layer functionality in terms of the open and published standards of
Function Blocks enables interoperability of devices from different vendors.
This is because any two devices that implement the standard abstract function
block interface would interoperate, irrespective of their internal
implementations.


The Fieldbus Foundation has standardised a range of Function Block
communication interfaces. The content of a Function Block is not
standardised. For example, Company A and Company B may both supply
PID control algorithms within their products. The Fieldbus Foundation
specification dictates how each vendor's PID Function Block shall
communicate the Set-point, Controlled Variable, P, I & D constants, etc., but not
how the Function Block's internal algorithm is realised. This separation is
illustrated by the sketch below.
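The following Python sketch illustrates this separation between a standardized Function Block interface and vendor-specific internals. The class names and the simplified PI algorithm are invented for illustration only; real Foundation Fieldbus function blocks are configured in devices, not written in Python.

from abc import ABC, abstractmethod

class PIDFunctionBlock(ABC):
    """Vendor-independent view of a PID Function Block: the externally visible
    parameters are standardized, the internal algorithm is not."""
    def __init__(self, kp: float, ki: float, kd: float):
        self.setpoint = 0.0           # SP, network-visible
        self.process_variable = 0.0   # PV, network-visible
        self.output = 0.0             # OUT, network-visible
        self.kp, self.ki, self.kd = kp, ki, kd

    @abstractmethod
    def execute(self, dt: float) -> float:
        """Vendor-specific control algorithm; only its interface is standardized."""

class VendorA_PID(PIDFunctionBlock):
    """One vendor's internal realization (here a trivial PI law, for illustration)."""
    def __init__(self, kp, ki, kd):
        super().__init__(kp, ki, kd)
        self._integral = 0.0

    def execute(self, dt: float) -> float:
        error = self.setpoint - self.process_variable
        self._integral += error * dt
        self.output = self.kp * error + self.ki * self._integral
        return self.output

pid = VendorA_PID(kp=2.0, ki=0.5, kd=0.0)
pid.setpoint, pid.process_variable = 50.0, 48.0
print(pid.execute(dt=0.5))   # the host only ever touches the standardized parameters

Any two devices that expose this same interface would interoperate, regardless of how differently their execute() methods are implemented internally.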
The Fieldbus protocol structure is shown alongside that of the ISO-OSI
model in Fig. 16.3.
Below we discuss each of the above layers of the Fieldbus in more detail.

Figure 16.3 Fieldbus Network Architecture vis-à-vis OSI

16.4 The Physical Layer

Fieldbus allows options for three types of communication media at this layer,
namely, Wire, Fiber-optic and Radio. The Physical Layer is sub-divided into
an upper section (the Media Independent Sub-Layer MIS) and a lower section
which is media specific.
The MIS ensures that the selected Media interfaces in a consistent way with
the Data Link Layer (2), regardless of the media used. The lower sections
define the communications mechanism and media. For example, for wire


medium they describe signal amplitudes, communication rate, waveform,
wire types, etc.
An area-wide network can be implemented by partitioning the bus system
into bus segments that are connected via repeaters.
Standard transmission rates range from about 10 kBaud to 10
MBaud. The topology of a single bus segment is a line structure (up to
1200 m) with short drop cables (<0.3 m). Transmission distances of up to 12 km are
possible with electrical cabling and up to 23.8 km with an optical configuration.
The achievable distance depends on the transmission rate. With the help of
repeaters, a tree structure can also be constructed, as shown below:

Figure 16.4 Multi-bus segment Fieldbus network topology

The maximum number of nodes per bus segment is 32. Further segments
can be connected to one another through repeaters, whereby
each repeater counts as a node. In total, a
maximum of 128 nodes can be connected (over all bus segments).
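These wiring limits can be expressed as a simple consistency check. The sketch below uses a hypothetical helper (check_fieldbus_layout); it assumes that each per-segment node count already includes any repeaters attached to that segment, as noted in the comments.

def check_fieldbus_layout(nodes_per_segment: list[int]) -> list[str]:
    """Check the limits quoted above: at most 32 nodes per bus segment
    (each repeater counts as a node and must be included in the counts)
    and at most 128 nodes over all segments."""
    problems = []
    for i, n in enumerate(nodes_per_segment, start=1):
        if n > 32:
            problems.append(f"segment {i}: {n} nodes exceeds the 32-node limit")
    total = sum(nodes_per_segment)
    if total > 128:
        problems.append(f"total of {total} nodes exceeds the 128-node limit")
    return problems

# Example: three segments of 30, 25 and 32 nodes (repeaters already counted)
print(check_fieldbus_layout([30, 25, 32]) or "layout within limits")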

16.5 The Data Link Layer

As the medium of transmission is a bus network, all device communications
take place over the same physical medium. A mechanism is therefore
necessary to ensure that it is shared effectively without collisions, i.e., that when
one device transmits, no other device does. The Fieldbus Data Link Layer protocol
is a hybrid protocol that is capable of supporting both scheduled and
asynchronous transfers. Its maximum packet size is 255 bytes.


It defines three types of data link layer entities: a Link Master (LM), a Basic
Device (BD), and a Bridge. Link Master devices are capable of assuming the
role of the bus master, called the Link Active Scheduler (LAS). At any point of
time only one of the LM devices acts as the LAS. This is depicted in Figure
16.5.

Figure 16.5 Link Active Scheduler, Link Masters and Basic Devices for a Fieldbus implemented on a
High-Speed Ethernet

Basic devices are those devices not capable of becoming the LAS. They
receive and send published data, and they receive and use tokens. When they
hold the token, they are capable of initiating communications with all devices
on the network.
Bridge devices connect link segments together. Bridged networks are
configured into a spanning tree in which there is a single root link segment
and a series of downstream link segments. Bridges interconnect the link
segments. Each bridge may have a single upstream port (in the direction of
the root) and multiple downstream ports (away from the root). The root port
behaves as a basic device and the downstream ports are each the LAS for their
downstream link.
Bridges are responsible for republishing scheduled transfers and forwarding
all other traffic. Configured republishing and forwarding tables identify the
packets that they are to receive and republish or forward. Bridges are also
responsible for synchronizing time messages received on their root port
before regenerating them on their downstream ports.
16.5.1 The Link Active Scheduler (LAS)

One of the devices connected to the Fieldbus acts as the Link Active
Scheduler (LAS). This decides which Device transmits next and for how long,
thereby avoiding the collision of messages on the Bus. The LAS is
responsible for the following list of tasks.


1. It detects the connection and disconnection of devices to the network,
in order to maintain a "Live List" of functional devices and ensure
they receive the "right to transmit" when appropriate. Redundant
LASs maintain their own Live Lists in readiness to take over when
the on-line LAS fails.
2. It distributes time on the bus that can be used for scheduling and time
stamping.
3. It polls device buffers for data according to a predefined schedule.
This capability is used to support publisher/subscriber virtual
communication relationships.
4. It distributes a token to devices in its live list that they can use for
asynchronous transfers. This capability is used to support client/server
and report distribution virtual communication relationships.
The LAS controls all cyclic data transmissions in this manner. In free time
the LAS passes a message called the Pass Token (PT) to each Device in turn
allowing them to use this idle period.
As mentioned before, the Link Active Scheduler (LAS) controls
communications traffic on the Fieldbus. This is also called "Bus master
function". The active LAS grants a "right to transmit" to each device on
Fieldbus in a pre-defined manner. Devices other than LAS can communicate
only when they have the "right to transmit". There are two ways of granting
a "right to transmit". One is a polling method, which grants a right to transmit
in sequence to each device. Another is a time slot method, which grants a
right to transmit at a fixed time interval. The LAS uses these two methods
combined to meet the requirements of precise cyclic updates and unscheduled
traffic, for example, alarm reporting.
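A much simplified picture of how the LAS mixes these two granting mechanisms is sketched below. The function las_elementary_cycle and the device tags are invented for illustration; a real LAS also maintains the Live List, distributes time and handles many further details.

def las_elementary_cycle(cd_schedule, live_list, idle_slots, next_pt_index=0):
    """One highly simplified elementary cycle of the LAS (illustrative only).

    Devices in cd_schedule are compelled to publish first (CD tokens); the
    remaining idle slots are used to hand the Pass Token (PT) to live-list
    devices in round-robin order, continuing where the previous cycle stopped."""
    actions = [("CD", device) for device in cd_schedule]
    for _ in range(idle_slots):
        actions.append(("PT", live_list[next_pt_index % len(live_list)]))
        next_pt_index += 1
    return actions, next_pt_index

actions, pos = las_elementary_cycle(["PT-101", "FT-102"],
                                    ["PT-101", "FT-102", "TV-103"], idle_slots=2)
print(actions)   # CD tokens first, then PTs in the remaining idle time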
16.5.2 Cyclic Communication

Typically, cyclic communications in industrial operations involve input-
output operations related to process control loops or PLC scan cycles. Such
communications must be performed at precise update rates. The LAS meets
the requirement of precise cyclic updates of variables by issuing a "Compel
Data" message (called the CD Token), to each source of data according to a
fixed schedule. On receiving the CD, the addressed device transmits the
current data on the bus. This message contains a reference to the source of the
data. Any other device on the bus requiring the data takes a copy for its own
use, for example an HMI or a control loop. Note that only one transmission
is required to satisfy many destinations.

The device transmitting the data is referred to as the "Publisher" and those
who take copies are called "Subscribers". The publisher may not know which


devices are subscribers. The publisher's data is referred to as a Data Transfer
Process Data Unit, or DT for short.

Figure 16.6 Communication within a Control Loop

If a control loop requires a measured variable to be updated on a cyclic basis,
the LAS instructs the source of the signal to transmit the variable by sending
a special message called the Compel Data (CD) token. On receiving this
message, the source transmits the variable on the bus. All devices on the bus
receive the message, but only those with a use for the information take a copy.
In Figure 16.6, the Process Variable (PV) sensor transmits the measured
variable when it receives the CD token. This is referred to as 'Publishing' the
data. The control algorithm in the control valve copies it, as it is a Subscriber
to this information. The HMI may also copy it for display and archiving
purposes, but only one transmission of the PV is required.
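The publisher/subscriber pattern triggered by the CD token can be illustrated with a toy model. The Bus class below and its tag names are hypothetical; the sketch only shows how a single transmission reaches several recipients.

class Bus:
    """Toy model of one CD-triggered publication: the publisher transmits once,
    and every subscriber that has a use for the value takes its own copy."""
    def __init__(self):
        self.subscribers = {}   # data name -> list of receiving callbacks

    def subscribe(self, name, callback):
        self.subscribers.setdefault(name, []).append(callback)

    def compel_data(self, name, publisher):
        value = publisher()                    # device answers the CD token with a DT
        for deliver in self.subscribers.get(name, []):
            deliver(value)                     # each subscriber copies the same transmission

bus = Bus()
bus.subscribe("PV-101", lambda v: print("PID block received", v))
bus.subscribe("PV-101", lambda v: print("HMI archived", v))
bus.compel_data("PV-101", publisher=lambda: 42.7)   # one transmission, two recipients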
16.5.3 Acyclic/Unscheduled Communication

Apart from cyclic communications, requirements for acyclic communications
arise to handle sporadic process-related events, such as:
 Alarm
 Operator Data Update
 Trend Data Update
 Set Point changes
 Controller Tuning

Once the requirements for cyclic data transmission have been met, the LAS
will issue a Pass Token (PT) to each device in turn, thereby allowing them


access to the bus to transmit data (a DT) or request data from another device,
utilizing the bus up to an allocated time limit.
16.5.4 Macro Cycle and Elementary Cycle

A basic requirement of process control applications is that precise cyclic
updates of process variables should be possible, to ensure good quality
continuous control. Generally, the number of such tasks in the system remains
more or less fixed. Apart from these, communication tasks related to sporadic
events, such as alarm reporting and operator changes of set points, must be
scheduled. The LAS therefore organizes the overall schedule of communication
tasks in the system in “Macro Cycles”. The duration of each Macro Cycle is
further subdivided into a number of “Elementary Cycles”. This is shown in
Figure 16.7.

Figure 16.7 Macro Cycles and Elementary Cycles

Each EC within an MC begins with the set of periodic tasks that is to be
scheduled within that EC according to its update time period. The EC is
chosen to be of such a duration that even after processing of the periodic tasks
some time is left for servicing aperiodic tasks, should it be necessary, due to
the occurrence of some event in the system.

Figure 16.8 An example task schedule

This is shown in Figure 16.8 in the case of a simple example of a system
containing two devices requiring cyclic updates. The update requirements of
the two devices are 0.5 sec. and 1 sec. respectively. The LAS sets the EC
period equal to the shortest update time requirement (0.5 second in this
case). Similarly, the longest update time sets the MC period (1 second in this
case).
The CD for Device 1 is generated at the beginning of each Elementary Cycle
and the CD for Device 2 at the second time slot of alternate Elementary
Cycles. In the 'free time' in each Elementary Cycle the LAS transmits PT's to
devices on the Fieldbus segment in turn, allowing them to transmit
unscheduled information. This is the unscheduled portion of the Elementary
Cycle. There may be insufficient free time for all Devices to receive a PT
before the end of the Elementary Cycle. In this case the LAS continues from
where it left off in subsequent cycles.
Note that the time available for unscheduled traffic varies from one
Elementary Cycle to another. For example, in the first EC of an MC in the
example, both periodic updates take place, while in the second EC only one
does, since the update rate required by Device 2 is lower. Also, the CDs
requiring the shortest update intervals are dealt with first in each Elementary
Cycle, thereby ensuring that the interval between successive updates remains
constant.
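The schedule of this example can be reproduced with a short calculation. The sketch below (build_schedule is an invented helper) assumes, as in the example above, that all update periods are integer multiples of the shortest one.

def build_schedule(update_periods):
    """Derive EC and MC lengths from the device update periods (seconds) and
    list which devices are compelled (CD) in each elementary cycle."""
    ec = min(update_periods.values())      # elementary cycle = shortest period
    mc = max(update_periods.values())      # macro cycle = longest period
    cycles = []
    for k in range(int(round(mc / ec))):
        t = k * ec
        due = [dev for dev, p in sorted(update_periods.items(), key=lambda x: x[1])
               if (t % p) < 1e-9]           # shortest-period devices are served first
        cycles.append(due)
    return ec, mc, cycles

ec, mc, cycles = build_schedule({"Device 1": 0.5, "Device 2": 1.0})
print(ec, mc, cycles)   # 0.5 1.0 [['Device 1', 'Device 2'], ['Device 1']]

The first elementary cycle carries both CDs, the second carries only the CD for Device 1, and the remaining time in each EC is available for Pass Tokens.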

16.6 The Application Layer

The objective of the Application Layer is to convert data and requests for
services coming from the User Application (Layer 8), into demands on the
communication system in the Layers below, and to provide the reverse service
for received messages. Thus, the application layer abstracts the technical
details of the network from the user layer which can view the network devices
to which communication is needed as if they are connected by virtual point
to point communication channels. The Application Layer is subdivided into
two sublayers namely the Fieldbus Access Sublayer (FAS) and the Fieldbus
Message Sublayer (FMS). These are described below.
16.6.1 Fieldbus Access Sublayer

The FAS sits in between the FMS and the DLL. The FAS provides three
fundamental kinds of communication. The
services offered by the higher layers such as the FMS are realized by the FAS
using one of these modes of communication. They are described below.


16.6.1.1 One-to-one Bi-directional (QUB)


QUB is used for the communication between a device, which requests data
on Fieldbus, and a device which provides the data. A typical example of such
a communication is screen display updates and the change of setting of the
function block parameter, etc. through an operator's station. QUB is initiated
by a device (client) requesting read/write of parameters and is terminated
when another device (server) returns a response. Therefore, the
communication is bi-directional and confirmed, that is, with an
acknowledgement from the server.
16.6.1.2 One-to-one Unidirectional 1 (BNU)
This type of communication is used for the distribution of data, which is
generated in one device (publisher) and is transmitted to one or more devices
(subscribers). The publishing application writes the data into a distributed
network buffer. The network is responsible for copying the data to
corresponding network buffers in subscriber devices. Subscribing
applications subscribe to published data asynchronously by opening buffers
for the receipt of the published data and identifying the associated publisher.
A typical example is a pressure transmitter sending measurement data as a
Process Variable (PV), and a valve positioner receiving it and using it to
modulate a valve.
Unlike QUB, BNU is initiated cyclically according to schedule, not by a
request for data. Neither does it involve a response from the server. This is an
unconfirmed communication service in a single direction. BNU uses a
connection-oriented service in the data link layer.
16.6.1.3 One-to-one Unidirectional 2 (QUU)
With QUU, one device on Fieldbus generates data, and interested recipients
take a copy. The transmitter is referred to as the 'Source' and the recipients as
'Sinks'. This is typically for multicasting event reports and trend reports.
Unlike published data, reports are sent to preconfigured group addresses
when the bus is not scheduled for the transfer of published data. These virtual
communication relationships use connectionless transfers. Note that QUU
differs from BNU in that communication is unscheduled and new data does
not overwrite older data in the recipient devices.
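The distinguishing properties of the three FAS communication modes, as described above, can be tabulated compactly. The VcrType structure below is simply a summary device invented for this sketch, not part of any Fieldbus specification.

from dataclasses import dataclass

@dataclass(frozen=True)
class VcrType:
    name: str
    pattern: str      # who talks to whom
    scheduled: bool   # triggered by the LAS schedule, or by a token holder
    confirmed: bool   # does the receiver return a response?

# Properties of the three FAS communication modes as described above
FAS_VCR_TYPES = [
    VcrType("QUB (client/server)",        "request/response between two devices",    scheduled=False, confirmed=True),
    VcrType("BNU (publisher/subscriber)", "buffered data to one or more subscribers", scheduled=True,  confirmed=False),
    VcrType("QUU (report distribution)",  "source to sinks via group address",        scheduled=False, confirmed=False),
]

for vcr in FAS_VCR_TYPES:
    print(f"{vcr.name}: {vcr.pattern}; scheduled={vcr.scheduled}, confirmed={vcr.confirmed}")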
16.6.2 The Fieldbus Message Sublayer (FMS)

The FMS acts as the interface between the User Layer and FAS. There is a
logical framework called Virtual Field Device (VFD), which manages various
functions and parameters at the user layer.


A Fieldbus Device must have at least two VFD's, one for administering the
network, the other for the control of the system or function blocks. The former
has the parameters related to setting up the communication, the latter has the
parameters related to Function Blocks defined by user layer and required by
the control application.
The process control oriented VFD in a Fieldbus device is its Function Block
Application Process (FBAP). Conceptually the Fieldbus specification allows
for the development of other Application Processes in the future, for example
a PLC Application Process might be defined.
A single field device may contain hundreds of parameters, such as the device
name, an address, status variables, operating modes, function blocks,
and data files. These parameters are defined as objects
in a VFD. They can be treated systematically and are independent of the
specification of the physical device.
Each VFD is an "object" and within it there are other objects. An index of
these objects referred to as the Object Dictionary (OD) is provided within the
VFD. It details each object within the VFD, their data types and definition.
When another device, say an HMI host, wishes to access this data it can
interrogate the VFD to determine what is available, its format etc. This facility
aids interoperability as well as automated configurability.

16.7 Fieldbus Devices

Field devices are control devices connected to the Fieldbus network. They
execute analog and discrete I/O functions plus the algorithms necessary for
closed-loop distributed control.
From a communications perspective, field devices are composed of three
components, namely, the function block application, the system management
agent, and the communication stack, which includes the network management
agent. This architecture for a field device, and its components, is described in
detail by the Fieldbus Foundation Specifications. An overview is presented
below.
16.7.1 Communications Stack

The communication stack of a field device is a three-layer stack comprising
the Fieldbus physical, data link, and application layer protocols described
above. The communication stack also contains a network management agent
that provides for the configuration and management of the stack.


16.7.2 Transducer Block

Transducer Blocks may be output, input or a combination of the two. They
interface between Fieldbus and the real world of sensors and actuators. An
input Transducer Block converts signals coming from the plant into Fieldbus
compatible variable and status messages. Output Transducer Blocks do the
reverse. The content of Transducer Block implementations is specific to the
hardware technology they represent and consequently varies from vendor to
vendor. They insulate function blocks from these specifics, making it
possible to define and implement technology-independent function blocks.
Thus, while standardization is achieved through the function blocks described
below, technological innovation in terms of electronics or signal processing
is not stifled.
16.7.3 Resource Block

The Resource Block contains the resource information for the hardware and software
within the Fieldbus device. For example, the device type, the manufacturer's
name, and data such as the serial number and available memory capacity are
stored as parameters. Only one Resource Block exists in each Fieldbus device.
16.7.4 Function Block

The primary purpose of a field device is to perform low level I/O and control
operations. The Function Block Application Process (FBAP), as defined by
the Fieldbus Foundation, models these operations of a field device. The
structure of an FBAP is shown in Figure 16.9. The FBAP is composed of a
set of function blocks configured to communicate with each other. Outputs
from one function block are linked to the inputs of another through
configuration parameters called link objects. Function blocks may be linked
within a device, or across the network. Function blocks are scheduled to
execute their algorithms at predefined times that are coordinated with the
transfer of their inputs and outputs. During the execution of the function
block, the algorithm may detect events and trend parameter values (collect a
series of values for subsequent reporting). Reporting of events and trends can
be performed by multicasting them onto the bus to a group of devices. In
addition, some other types of objects such as View Objects, Alert Objects etc.
may also be associated with an FBAP. These objects perform typical tasks
related to the Function Blocks in the device. For example,
 The 'Alert Object' monitors the status of various kinds of blocks, and
reports to the upper system with a time stamp, if a configured alarm
or event is detected.


 The ‘Trend Object' stores trend data within a device and sends it in
one file upon request. This improves the communication efficiency.
 Similarly, the 'View Objects' construct dynamic files of variables,
status indications etc., collected from various blocks, and required by
external devices for monitoring and control purposes. By bundling
together, the required information it can be sent in a single
transmission, thereby saving communication time.
 Finally, the 'Link Object' interfaces the function blocks and the other
objects to the FMS for implementing the configured Virtual
Communication Channels between FBAPs residing within network
devices.
A function block is essentially a program that contains a set of network visible
input parameters, output parameters, internally contained parameters, and an
algorithm to process them. Parameters are identified by an index or a name
(not recommended) that locates them in the object dictionary associated with
the function block application. The object dictionary contains information
used to encode and decode parameters, such as type and length, and also is
used to map the parameter index to a local memory address. To promote
interoperability, interface devices can access the object dictionary.
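A miniature object dictionary might be modelled as in the sketch below. The parameter names, sizes and addresses are made up for illustration; the point is only that interface devices address parameters by index, while the dictionary supplies type, length and the mapping to a local address.

from dataclasses import dataclass

@dataclass
class OdEntry:
    name: str
    data_type: str
    length: int        # bytes, used to encode/decode the parameter
    address: int       # local memory address the index maps to

# A miniature object dictionary for a hypothetical analog-input function block
object_dictionary = {
    1: OdEntry("OUT",       "float",  4, 0x0100),
    2: OdEntry("OUT_SCALE", "record", 8, 0x0104),
    3: OdEntry("CHANNEL",   "uint16", 2, 0x0110),
}

def read_parameter(index: int) -> OdEntry:
    """An interface device looks a parameter up by index, not by memory address."""
    return object_dictionary[index]

print(read_parameter(1))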
Function blocks are connected to the physical hardware they represent
through transducer blocks. Devices can be configured across the network
through the use of contained block parameters. Contained block parameters
are those that can be written to the device by interface devices. Interface
devices are not able to write values to input and output parameters.
Figure 16.10 shows how a communication relating to a physical variable takes
place over the network. The value of the local physical variable is acquired
by the function block through the transducer block firmware. This is
processed by the function block and the output is communicated to another
field device with the help of the Link Object. The Link object locates, from
the object directory, the network address of the destination device as well as
the mode of communication service to be used for the communication task.
These are then realized by the FMS and the FAS sub-layers, in turn using the
lower layers.


Figure 16.9 Architecture of a Function Block Application Process

16.7.4.1 Realization of Distributed Control Functions using Function Blocks in Fieldbus
Control functionality is realized over the network by the configured sequential
execution of function blocks and communication tasks among them. For
example, consider the control loop in Figure 16.6. The execution sequence is
shown in Fig. 16.11. Note that there are three function blocks involved,
namely, AI 101, PID 101 and AO 101. The first FB that executes is AI 101.
This is followed by a cyclic communication of the process variable value to
the PID 101 function block. The computation of the PID law in the FB PID
101 is followed by the computation of the valve stem position command to
the positioner in the FB AO 101. Note there is no communication involved
between these two FB executions, since both PID 101 and AO 101 are shown
to be residing on the same Fieldbus device. Finally, there is a communication
between AO 101 and the host for the HMI station. This basic execution cycle
repeats.
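The sequence just described can be condensed into a short, purely illustrative script. All numeric values and the stand-in control law below are invented; they merely mirror the order AI 101 → bus → PID 101/AO 101 (same device) → bus → HMI.

def run_loop_cycle(bus_publish):
    """One macro cycle of the AI 101 -> PID 101 -> AO 101 loop sketched in the
    figures above (all numeric values are made up for illustration)."""
    # AI 101 executes in the transmitter and publishes PV over the bus
    pv = 48.7                               # value obtained via the transducer block
    bus_publish("PV from AI 101", pv)       # scheduled (cyclic) communication

    # PID 101 and AO 101 reside in the same device (the valve positioner),
    # so no bus communication is needed between them
    setpoint = 50.0
    out = 0.8 * (setpoint - pv)             # stand-in for the vendor's PID algorithm
    stem_position = max(0.0, min(100.0, 50.0 + out))

    # AO 101 reports back to the HMI host
    bus_publish("AO 101 readback", stem_position)

run_loop_cycle(lambda tag, value: print(f"bus: {tag} = {value}"))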


Figure 16.10 Access to function block parameters through the object dictionary

Figure 16.11 Function Block execution and communication sequence for a control loop.


16.8 Network Management Structure

Network management is the function of managing various parameters to carry
out Fieldbus communication. Generally, execution of the communication
function is performed by communications software that resides in a
communications ASIC.
The parameters for determining actual operation are called the Network
Management Information Base (NMIB) and are grouped as one object. These
parameters are accessed through the Link Management Entity by the execution
software at each layer. This function is transparent to the End User of the
system.

16.9 System Management Functions

System Management (SM) performs the management of the parameters
needed for the construction of a functional control system, rather than
communication. The System Management Kernel is also modeled as an
FBAP.
The System Management Kernel performs two primary functions. The first is
to assign End User defined names, called tags, and data link layer addresses
to devices as they are added to the fieldbus. It contains an object dictionary
and can be configured and interrogated using FMS operating over
client/server virtual communication relationships.
The second is to maintain distributed application time so that function block
execution can be synchronized among devices. Fieldbus has a common clock
called Link Schedule Time (LS-Time). The LAS uses this to frequently synchronize
all devices on the bus. Using LS-Time as a reference, the system
management FBAP triggers each function block and synchronizes operation
among Function Blocks in different devices on the same bus. Furthermore,
system management provides a real time reference, called Application time
(AP-time). This time is used as the source of alarm or event timestamps.
To support these functions, the System Management Kernel communicates
directly with the data link layer.


16.10 Device Description

Electronic Device Descriptions (EDDs), created with the Electronic Device Description
Language (EDDL) for a field device, support the management of intelligent
field devices. Typical tasks such as operation, parameterization and diagnostics
can thus be carried out efficiently. An EDD describes product features which serve
as a basis for the entire electronic product data management, from planning to
engineering, set-up, maintenance and diagnostics through to the decommissioning of a
plant. EDDs are ASCII files. They primarily contain the description of all
device parameters and their attributes (e.g. lower/upper value range, default
value, write rights) and device functions, e.g. for the plausibility check,
scaling, mode changes or tank characteristics. EDDs also include a grouping
of device parameters and functions for visualization and a description of
transferable data records.
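The kind of information an EDD carries for a single parameter can be pictured with an ordinary data structure. The sketch below is not EDDL syntax; the names, limits and the plausibility check are invented to show how a host can work purely from the description.

# What an EDD captures for one device parameter, expressed here as a plain
# Python dictionary rather than real EDDL syntax (names and values are made up)
edd_parameter = {
    "name": "upper_range_value",
    "label": "Upper Range Value",
    "data_type": "float",
    "lower_limit": -200.0,        # plausibility-check bounds
    "upper_limit": 850.0,
    "default_value": 100.0,
    "write_access": "maintenance engineer",
    "group": "Range settings",    # grouping used for visualization in the host
}

def plausible(value: float, descr: dict) -> bool:
    """Host-side plausibility check driven purely by the device description."""
    return descr["lower_limit"] <= value <= descr["upper_limit"]

print(plausible(900.0, edd_parameter))   # False: exceeds the described upper limit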
The Electronic Device Description Language (EDDL) is the mechanism that
allows vendors to describe their products in a way that may be interpreted by
any compliant host system, thereby enabling compatibility and
interoperability of devices. Also, the language allows vendors to include their
specific product features while remaining compatible. Furthermore, the use
of EDDL allows the development of new devices while still maintaining
compatibility.
The Device Description (DD) may be supplied with the device on a disk, or
downloaded from the Fieldbus Foundation web site, and loaded into the host
system.


Session 17
Communication protocols for building
automation
Content
17.1 Introduction .................................................................................... 49
17.2 Building services, automation, and integration ............................. 50
17.2.1 Building Services .................................................................... 52
17.2.2 System Integration .................................................................. 56
17.2.3 Automation and Control ......................................................... 57
17.2.4 Automation Hierarchy ............................................................ 59
17.3 Building automation and control networks .................................... 61
17.3.1 Basic Characteristics............................................................... 62
17.3.2 Application Model and Services............................................. 64
17.3.3 Network Architecture ............................................................. 68
17.3.4 Network Interconnection ........................................................ 70
17.4 Standards overview ........................................................................ 74
17.4.1 Subsystem Solutions ............................................................... 76
17.4.2 Open Management Integration ............................................... 78
17.5 Open system solutions ................................................................... 79
17.5.1 BACnet ................................................................................... 79
17.5.2 LonWorks ............................................................................... 85
17.5.3 EIB/KNX ................................................................................ 90


17.1 Introduction

Home and building automation systems are in the broadest sense concerned
with improving interaction with and between devices typically found in an
indoor habitat. As such, they provide a topic with many facets and range from
small networks with only a handful of devices to very large installations with
thousands of devices. This session, however, narrows its focus to the
automation of large functional buildings, which in the following will be
referred to as “buildings” for simplicity. Examples include office buildings,
hospitals, warehouses, or department stores as well as large, distributed
complexes of smaller installations such as retail chains or gas stations. These
types of buildings are especially interesting since their size, scale, and
complexity hold considerable potential for optimization, but also challenges.
The key driver of the building automation market is the promise of increased
user comfort at reduced operation cost. To this end, building automation
systems (BAS) make use of optimized control schemes for heating,
ventilation, and air-conditioning (HVAC) systems, lighting, and shading.
Improvements in energy efficiency will also contribute to environmental
protection. For this reason, related regulations sometimes mandate the use of
BAS.
Costs can further be reduced by providing access to all building service
systems in a centralized monitoring and control center. This allows abnormal
or faulty conditions to be detected, localized and corrected at an early stage
and with minimum personnel effort. This is especially true when access to the
site is offered through a remote connection. A unified visualization scheme
for all systems further eases the task of the operator. Direct access to BAS
data from the corporate management level eases data acquisition for facility
management tasks such as cost allocation and accounting.
Besides the immediate savings, indirect benefits may be expected due to
higher expected workforce productivity or by the increased perceived value
of the automated building (the “prestige factor,” for both building owner and
tenant).
Although investment in building automation systems will result in higher
construction cost, their use is mostly economically feasible as soon as the
entire building life cycle is considered. Typically, the operational cost of a
building over its lifetime is about seven times the initial investment for
construction. Therefore, it is important to choose a building concept that
ensures optimal life-cycle cost, not minimum investment cost. The
considerable number of available performance contracting offers strongly
emphasizes that advanced BAS are indeed economical. In these models, the
contractor takes the financial risk that prospective savings will offset the
investment within a given time.


Benefits both in terms of (life cycle) cost and functionality will be maximized
as more systems are combined. This requires that expertise from different
fields is brought together. Integrating fire alarm and security functions is
particularly challenging due to the high demands made on their dependability.
Engineers and consultants who used to work separately are forced to
collaborate with each other and the design engineer as a team.
Integration is obviously far easier when systems that shall be joined talk the
same language. For example, unified presentation is achieved at no additional
engineering effort this way, potentially reducing investment cost. Especially
large corporations with hundreds or thousands of establishments spread out
over large distances certainly would want to harmonize their building network
infrastructure by using a certain standard technology throughout. Yet this goal
is effectively out of reach as long as different manufacturers’ systems use
proprietary communication interfaces, with no manufacturer covering the
entire spectrum of applications. Here, open standards step into the breach to
close the gap; moreover, they help avoid vendor lock-in
situations.
In the past years, LAN technologies have been pushing down the network
hierarchy from the management level while fieldbus technologies have been pushing
upwards. This battle is still not over, but what has already emerged from this
rivalry is a new trend of combining fieldbus protocols with LAN technologies
to better utilize an existing LAN infrastructure. Most approaches follow the
principle of running the upper protocol layers of the fieldbus protocol over
the lower layers of a typical LAN protocol such as IP over Ethernet. The
synergies arising out of this very attractive combination are manifold.
For example, most corporations have established their own Intranet and are
now able to leverage this infrastructure for managing their buildings. Still, all
the device profiles developed with great effort over many years can be reused.
Also, technicians trained on particular tools for many years do not find their
existing knowledge rendered worthless despite the switch to IP-based
building automation networks. IP-based communication also opens up new
dimensions in remote management and remote maintenance.

17.2 Building services, automation, and integration

Building automation (BA) is concerned with the control of building services.
Its historical roots are in the automatic control of HVAC systems, which have
been subject to automation since the early 20th century. The domain of indoor
climate control still is the main focus of this discipline due to its key role in
making buildings a comfortable environment.


Initially, controllers were based on pneumatics. These were replaced by
electric and analog electronic circuits. Finally, microprocessors were
included in the control loop. This concept was called direct digital control
(DDC), a term which is still widely used for programmable logic controllers
(PLCs) intended for building automation purposes.
The oil price shock of the early 1970s triggered interest in the energy savings
potential of automated systems, whereas only comfort criteria had been
considered before. As a consequence, the term “energy management system”
(EMS) appeared, which highlights automation functionality related to power-
saving operation, like optimum start and stop control.
Further, supervisory control and data acquisition (SCADA) systems for
buildings, referred to as central control and monitoring systems (CCMS),
were introduced. They extended the operator’s reach from having to handle
each piece of equipment locally over a whole building or complex, allowing
the detection of abnormal conditions without being on-site. Besides
environmental parameters, such conditions include technical alarms
indicating the need for repair and maintenance.
Also, the service of accumulating historical operational data was added. This
aids in assessing the cost of operation and in scheduling maintenance. Trend
logs provide valuable information for improving control strategies as well.
Often, BA systems with these capabilities were referred to as building
management systems (BMS).
Other building service systems benefit from automation as well. For example,
demand control of lighting systems can significantly contribute to energy
saving. Recognizing the head start of the BA systems of the HVAC domain
with regard to control and presentation, they provided the natural base for the
successive integration of other systems (sometimes then termed “integrated
BMS” (IBMS)).
Today’s comprehensive automation systems generally go by the all-
encompassing name of BAS, although EMS, building EMS (BEMS), and
BMS/IBMS are still in use, sometimes intentionally to refer to specific
functional aspects, but often by habit. Fig. 17.1 illustrates these different
dimensions. The relevant international standard chooses building automation
and control systems (BACS) as an umbrella term.
Comprehensive automation is instrumental to the demands of an intelligent
building. This buzzword has been associated with various concepts over the
past 25 years, and comprehensive reviews are available in the literature. Although there is still no
canonical definition, the current notion of intelligent buildings targets the
demands of users and investors alike. Buildings should provide a productive
and attractive environment to users while maintaining cost efficiency to
maximize the investors’ revenue over the whole life cycle. This specifically


includes management issues. As facility management has to become more
efficient, BAS services have to be tightly integrated into office and workflow
automation. As an example, consider conference rooms to be air conditioned
only (and automatically) when booked. Also, hotel management systems can
automatically adjust HVAC operation depending on whether a room is
currently rented or vacant. Cost allocation for climate control and lighting
with live metering data from the BAS and optimum scheduling of preventive
(or even predictive) maintenance based on automatic equipment monitoring
and service hour metering are possible as well.

Figure 17.1 Functional aspects of BAS.

Other dimensions of intelligent buildings are advanced infrastructures for
data communication and information sharing to promote productivity, but
also advanced structural design and innovative materials. For example,
systems have been proposed that improve the response of a building to earthquakes.
Intelligent buildings are also expected to easily adapt to changing user
requirements. Recent approaches even include the demand for them to
automatically learn the behavior of the tenants and adjust the system
performance accordingly.
17.2.1 Building Services

Buildings should provide supportive conditions for people to work and relax.
This means they will usually be tuned toward human comfort parameters
(comfort HVAC). Sometimes, zones or entire buildings are optimized for the
particular demands of machines, processes, or goods, which may differ from
human comfort (industrial HVAC). In any case, the environment needs to be
safe, secure and provide the necessary infrastructure, including


supply/disposal, communication/data exchange and transportation. These
requirements vary significantly depending on the purpose of the building.
Buildings fulfill these demands through appropriate design of building
structure and technical infrastructure, the latter being known as building
services. For example, ventilation can be achieved through opening windows
(a structural design measure) or forced ventilation (a mechanical building
service).
Building services include elements usually perceived as passive technical
infrastructure (such as fresh and wastewater management and power
distribution) as well as controllable, “active” systems such as HVAC. The
boundary is not clear-cut, however. For example, water supply may include
pressurization pumps, and power distribution may be extended with power
factor monitoring or on-site cogeneration.
Different building types will have different requirements regarding the presence
and performance of these services. Table 17.1 highlights examples grouped
by building discipline; comprehensive references on building service
systems are available in the literature. The remainder of this section will highlight selected properties of
key domains where control is involved to a significant extent.
While the permissible environmental conditions for goods, machinery and
processes are usually clearly specified, ensuring human comfort is a more
complex affair. For example, thermal comfort does not only depend on air
temperature, but also air humidity, air flow, and radiant temperature. More-
over, the level of physical activity and the clothing worn have to be taken into
account. One and the same amount of air flow can be perceived as a pleasant
breeze as well as a draft depending on thermal sensation. Also, the amount of
control available to individuals influences whether they will consider
otherwise identical conditions as comfortable or not. This for instance applies
to the ability to open windows and having control over air delivery devices.
Still, the thermal regulation system of the human body ensures comfort over
a certain range of these parameters.
Space heating and cooling can be achieved in different ways. One possibility
is to install convectors fed with hot or chilled water. Cooling ceilings are a
special form of such convectors. Flow meters and valves are necessary to
measure and control the amount of energy distributed. Convection may be
fan-assisted, in which case the convector is referred to as a fan-coil unit
(FCU). The feed water is centrally prepared in boilers and chiller plants.
Electric heating elements are often substituted for hot water coils, especially
where oil or gas is not available.
When forced ventilation is used, heating and cooling is usually provided with
the supply air. In this case, central air handling units (AHUs) contain the
convector coils (or cooling coil and heating element) together with air filters


to remove dust and smoke particles, a humidifier and the necessary dampers
and pressure sensors to control the amount of air exchange with the outside.
With variable air volume (VAV) boxes instead of fixed outlets it is possible
to finely control the amount of air released into the conditioned space in
addition to its temperature, which allows saving energy.
The amount of air which needs to be exchanged to maintain proper air quality
varies with the number of people present. Most frequently, a static value is
assumed for smaller rooms and manual intervention is required for larger ones
like lecture halls. Nevertheless, air quality sensors (cf.) are available for
automation.
Not all sections of a building can (or need to) be treated equally with respect
to environmental conditioning. As an example, for access spaces like
stairways, thermal comfort parameters are relaxed in comparison with
habitable spaces.
Table 17.1 Building Service Domains

Also, the sunlit south side of a building may require different treatment than
the one facing north. Therefore, and for reasons of manageability in large
complexes, buildings are split into control zones. With room control, every
room forms a zone of its own. Conditions can then be optimized for taste or
presence, using presence buttons or detectors.
Good HVAC control strategies can optimize the consumption of primary
energy by capitalizing on information about thermal comfort conditions as
well as properties of the building structure (e.g., high or low thermal inertia)
and systems. Comprehensive sensor data and provisions for fine-grained
control also work toward this goal.
Lighting systems fall into two subdomains: artificial lighting, where
luminaires are switched and dimmed (by means of load switches,
incandescent dimmers, and controllable ballasts) and daylighting. The latter
is concerned with limiting the amount of daylight which enters the interior to
avoid excessive light intensity and glare. Motorized blinds allow automation
of this task. Lighting is traditionally dominated by simple open-loop control
relationships in response to manual switches. Only recently, complexity has
increased. Artificial light can be centrally switched off during nonoffice
hours, also automatically on a given schedule. In this period, a time-limited
mode of operation can be entered. Presence detector devices can be used to


automatically turn off the lights in unused rooms. Both luminaires and blinds
can be adjusted for the sun position according to the time of day. Advanced
daylighting systems follow the sun to adjust mirrors which reflect daylight
into interior zones. Also, luminaires and blinds can adapt to sky conditions to
yield constant lighting conditions with optimum energy efficiency. Lumen
maintenance can be achieved either in an open-loop manner (using a rooftop
daylight detector) or in a closed-loop manner (with lighting sensors placed in the
interior). Anemometers and weathervanes allow determining when outside
blinds have to be retracted to avoid damage. Recently, electrochromic
windows have become available commercially. The translucence of
electrochromic glass is continuously adjustable by applying a low voltage.
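As an illustration of the closed-loop variant, the following minimal Python sketch nudges the dimming level of a luminaire group toward an interior lux set point; all names, the gain and the numeric values are hypothetical and not taken from any product or standard.

def lumen_maintenance_step(measured_lux, setpoint_lux, dim_level, gain=0.05):
    """Return the new dimming level (0.0 .. 1.0) after one control step."""
    error = setpoint_lux - measured_lux                   # positive means the room is too dark
    dim_level += gain * error / max(setpoint_lux, 1.0)    # normalized incremental correction
    return min(1.0, max(0.0, dim_level))                  # clamp to the valid dimming range

# Example: target 500 lx, measured 420 lx, luminaires currently at 60 %.
level = lumen_maintenance_step(measured_lux=420, setpoint_lux=500, dim_level=0.60)
print(f"new dimming level: {level:.2f}")

In a real installation the step would be executed by the room controller at a slow, fixed rate, which is sufficient given the relaxed timing requirements of lighting comfort control.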
In safety and security alarm systems, no closed control loops exist. Alarm
conditions have to be detected and passed on to appropriate receiving
instances. This includes local alarms as well as automatically alerting an
appropriate intervention force. Precisely distinguishing nonalarm from alarm
situations is essential. Example sensors are motion and glass break sensors
from the security domain; water sensors for false floors from the property
safety domain; and smoke detectors, heat detectors and gas sensors from the
life safety domain. Emergency communication can include klaxons or
playback of prerecorded evacuation messages. Emergency lighting is also
related to this field. Generally, high reliability is required in this domain, the
exact requirements depending on the precise application. The requirements
are highest for handling life-threatening conditions in the safety domain, most
notably fire alarms. Also, no system components can be allowed to fail
without being noticed. The inspections necessary to ensure this can be aided
by automatic monitoring.
Like BAS, alarm systems gradually have implemented communication
capabilities that reduce the cost of installation and operation. Traditionally,
sensors had their alarm limits preset in hardware and were daisy-chained into
loops. An alarm was triggered whenever a sensor broke the current loop, with
the precise location and reason unknown to the system. This technique is still
used in smaller systems. More recent systems allow communication with
individual sensors, which may provide even more detailed information about
the alarm condition this way, for example the gas concentration measured.
Overviews of safety and security system technologies are available in the literature. As a final
example, conveying systems are of significant complexity in their own right
regarding control. Yet there is no need for modification of most of their
parameters (like car speed or light level) in response to daily changes in
building use, like it is the case in HVAC. Therefore, control interaction occurs
on a high level only. Examples include putting the system into a reduced
operation mode during night hours or controlled shutdown in case of a fire
alarm. Additionally, signaling equipment on the landings (e.g., hall and
direction lanterns) could be accessed through an open interface.
17.2.2 System Integration

Building engineering disciplines have evolved separately and are traditionally
handled by independent contractors. Consequently, their respective
automation systems are still entirely separate in most buildings today.
Another good reason for this separation is that few companies currently cover
all domains. Yet there are benefits when information exchange between
building systems is possible. For example, window blinds have considerable
impact on HVAC control strategy, as incident solar radiation causes an
increase in air temperature as well as in immediate human thermal sensation.
Automatically shutting the blinds on the sunlit side of a building can
significantly decrease the energy consumption for cooling.
A second area of overlap comprises doors and windows. Their state is of
importance to both the HVAC system (to avoid heating or cooling leakage to
the outdoor environment) and the security system (to ensure proper intrusion
protection at night). The same holds true for motion or presence detectors.
Also, motion detectors can provide intrusion detection at night and automatic
control of lights during business hours. Such common use of sensors in
multiple control domains can reduce investment and operational cost. On the
other hand, it increases the complexity as different contractors needs to handle
the functional overlap in their engineering systems.
As an important step, building control systems also need to accept control
information from systems which are more closely related to the information
technology (IT) world. This especially concerns access control systems. Data
exchange is not limited to “pass/do not pass” signals sent to doors by RFID
readers, card readers, biometric authentication devices, or simple key
controls. Increasingly, scenarios such as lighting a pathway through the
building and controlling elevators based on card access control at the gate are
requested.
As a future prospect, data of multiple sensors may be fused for additional
benefit. As an example, consider using the data provided by indoor air quality
sensors for presence detection. Control information can also be derived from
CCTV imagery through image processing techniques. For example, presence
detection and people counting for better HVAC or elevator control can be
achieved this way. As another benefit, the state of doors and windows can be
detected.
Yet, in all cases, the benefits reached by tighter integration come with a
drawback. In an integrated system, examining groups of functionalities in an
isolated manner becomes more difficult. This introduces additional
challenges in fault analysis and debugging as well as functionality
assessment. Additionally, if multiple contractors are working on a single
integrated system, problems in determining liability may arise.
The assessment problem is of special concern where life safety is involved.
For this reason, fire alarm systems traditionally have been kept completely
separate from other building control systems. Although a considerable degree
of integration has been achieved in some projects, building codes still often
explicitly prohibit BAS from assuming the function of life safety systems. This of
course does not extend to less critical property safety alarms (e.g., water
leakage detection). Similar considerations apply to building security systems.
These issues need to be addressed by carefully selecting the points of
interaction between the different subsystems, with the general goal of making
the flow of control traceable. First, this requires limiting the number of such
points to the amount absolutely necessary to achieve a given task. Second,
interfaces have to be defined clearly to ensure that no repercussive influence
is possible. This may necessitate special measures to limit the direction of the
control flow (dry contacts and the 4–20 mA interface remain classic
examples). Third, points of contact have to be selected in a way that
reasonably self-contained subsystems emerge when the links between them
are cut. Such divisions may be vertical (e.g., separation into functional
domains) as well as horizontal (e.g., a building wing).
Considerable benefits can already be achieved by establishing a highly
limited number of interaction points at the highest system level. One prime
example is that elevators only need the information that an evacuation
condition is present (a single bit transfer) to be able to automatically stop loaded
elevator cabins at the next floor level and shut down in case of a fire alarm.
Integration at the device level, however, such as in the examples presented
above, introduces a level of complexity that remains a challenge to be
handled.
It was stated above that the number of interaction points should be limited to
the necessary minimum. While this is correct, it is also necessary to keep the
system design flexible enough for future integration requirements. Since
building installations are long-lived, system evolution is an important issue.
A rigid system that solely satisfies the demands identified at design time often
makes future extensions or tighter integration impossible.
17.2.3 Automation and Control

Building automation can be regarded as a special case of process automation,
with the process being the building indoor environment (and its closer
surroundings). The process consists of numerous subprocesses, both discrete
and continuous. The most complex processes by far are present in the HVAC
domain. Since HVAC processes involve large (thermal) capacities, changes
in system parameters occur only gradually. Quick transients typically only
have to be detected when optimizing system behavior. Since the process
behavior is slow, requirements on controller response times are relaxed
compared to industrial control applications. Despite the general absence of
high-speed control loops, HVAC control is not without challenges. It has to
deal with disturbances, which change over time as a function of load, weather
conditions, and building occupancy. These influences are of stochastic nature
and therefore not exactly predictable, although certain assumptions can be
made.
Closed-loop control is barely present in other building systems. Interestingly
enough, timing constraints are tightest in certain open-loop control relations
(most notably simple light control functions), where the response time is put
in relation with the human perception time in the range of a few hundreds of
milliseconds.

Figure 17.2 Building automation, three-level functional hierarchy.

Regarding the required reliability (defined as the probability of a system to
perform as designed) and availability (defined as the degree to which a system
is operable at any given point in time), basic functions of automated systems
have to measure up against conventional installations. As with timing
constraints, demands are moderate, since the consequences for failing to meet
them are merely annoying in the vast majority of cases. Exceptions do exist,
however, most notably in industrial HVAC (e.g., refrigerated warehouses)
and medical applications. Dependable operation is also required when the
integration of safety and security functions is desired.
Definitely, one key challenge in BAS is that large areas need to be covered
especially in high-rise buildings or larger building complexes. Another
challenge is that the domain is highly cost sensitive when compared with
industrial automation. Also, systems have to be long-lived (at least in
comparison with the IT world). They are required to be “future proof,” which
favors proven, technologically conservative approaches. Hence, the domain
is very slow to accept and adopt new technological developments. Bid
invitations often require systems to adhere to international standards, which
lengthens the innovation cycle due to the delays inherent to such
standardization procedures.
Finally, operators will seldom receive intensive training, which is why ease
of use and robust operation are of significant importance. This is especially
the case for all system components which are meant to be operated by tenants.
17.2.4 Automation Hierarchy

A general system model has been designed to accommodate all kinds of BAS.
Key elements are shown in Fig. 17.2. In this model, aspects of
system functionality are broken up into three levels, presenting the
incarnation of the automation pyramid for BAS.
At the field level, interaction with the physical world takes place.
Environmental data are collected (measurement, counting, metering) and
transformed into a representation suitable for transmission and processing.
Likewise, parameters of the environment are physically controlled
(switching, setting, positioning) in response to commands received from the
system.
Automatic control, including all kinds of autonomously executed sequences,
is assigned to the automation level. It operates on data prepared by the field
level, establishing logical connections and control loops. Processing entities
may also communicate values of more global interest to each other, for
example the outside temperature or whether night purge is to be activated.
This type of process data exchange is referred to as horizontal
communication. In addition, the automation level prepares (possibly
aggregate) values for vertical access by the management level. This includes
the accumulation of historical data at specified rates (trending).
At the management level, information from throughout the entire system is
accessible. A unified interface is presented to the operator for manual
intervention. Vertical access to automation-level values is provided,
including the modification of parameters such as schedules. Alerts are
generated for exceptional situations like technical faults or critical conditions.
Long-term historical data storage with the possibility to generate reports and
statistics is also considered part of this level.
It is evident that the amount of (current and historical) data present for access
within a given device increases when ascending through the levels. The task
of the field level is a distributed one by nature. Automation is typically
handled in a distributed manner as well, with multiple processing units
responsible for locally contained (or functionally separate) subprocesses. The
benefits of distribution are manifold, such as reducing latencies in control
loops, avoiding single points of failure, reducing the risk of performance
bottlenecks and allowing for subsystems to be out of service due to failures
or scheduled maintenance without affecting other parts. Certainly, distributed
systems are harder to design and handle than centralized ones. Yet the
increase in complexity for the overall system will be mitigated when “divide
and conquer” is applied properly, with the added benefit of the resulting
subsystems being more transparent.
A BACS design could choose to actually distribute the functions described
above over separate devices. As illustrated in Fig. 17.2, sensors and actuators
are either directly connected to controllers via standard interfaces (like dry
contacts, 0–10 V, or 4–20 mA) or by means of a field network. Process control
is performed by DDC stations (unit controllers). A server station performs
supervisory control, logging and trending for a group of unit controllers (e.g.,
in the central plant room or a building wing). Supervisory and unit controllers
are connected via their own automation network. In addition, dedicated
special systems (DSS) can connect at this level. For instance, a fire/security
panel could put HVAC unit controllers into smoke extraction mode when a
fire alarm is raised on its line. An operator workstation uses the data prepared
by the server stations to present the user interface. DSS which are not to be
integrated into a tight automatic control scheme can be tied in at this level as
well. This can, on the one hand, be done with the goal of achieving single-
workstation visualization for all systems. On the other hand, metering and
other usage data can be transferred into enterprise-level databases such as
computer-aided facility management (CAFM) systems for predictive
maintenance and cost allocation. Remote stations are integrated into the
management network on demand via a dial-up connection (or other WAN
tunnel) when data exchange is required. Alert messages may be forwarded to
the operator via cellular short message gateways or electronic mail.
The system architecture of today’s BACS, however, seldom coincides so
closely with the functional architecture described by the three-level model.
For example, visualization software packages usually include soft PLC
functionality. This allows the integration effort spent on diverse systems to
be leveraged for uniform visualization from a single
workstation, which is a standard requirement on many projects. Intelligent
field devices, such as those connected to a field network, can easily perform simple
control functions as well.
A trend toward a flatter hierarchy can be observed. Automation-level
functions are being assumed by devices typically associated with the adjacent
levels: supervisory control and data aggregation are integrated with
management-level functions while continuous control is incorporated in field
devices. Still, dedicated controllers will help to address the complexity
inherent in larger installations or where special performance requirements
exist. Depending on the particular demands and structure of a project,
multiple approaches to distributing the necessary functionality are viable.

17.3 Building automation and control networks

In distributed control applications, there is an inherent need to communicate.
Actual and actuating values need to be transferred between sensors,
controllers and actuators. As building automation has changed over the years,
the exchange of control information has changed as well.
Pneumatic control systems transmitted information in the form of air pressure
levels, typically in the industry-standard 0.2 to 1 bar (3–15 psi) pressure
range. In electrical and electronic systems, voltage or current levels, e.g., the
well-known 4–20 mA interface, served (and still serve) this purpose.
However, monitoring and control from a central location can only be
achieved for a limited number of values this way. To reduce the amount of
cabling necessary, CCMS used matrix multiplexing. Soon, wires were even
more efficiently used by data networking.
As a consequence of this otherwise desirable evolution, achieving
interoperability between controllers, sensors and actuators by different
manufacturers has become a significantly more complex issue than simply
setting up value range mappings in an identical way.
This section covers how the characteristics of building automation
applications translate into requirements on the underlying networks used for
this purpose. This encompasses quality-of-service aspects as well as
appropriate services and the standard “point” data model. It also touches on
aspects of network architecture, integration through gateways and routers,
and the topic of open systems.
17.3.1 Basic Characteristics

General demands on a building automation system (whether in the traditional
sense or as an integrated system) were already discussed in Section 17.2.3.
These are immediately related to the requirements on data networks within
such a system, which are either instrumental in achieving these objectives or
will improve the price/performance ratio in doing so.
Key criteria regarding the required quality-of-service are throughput,
timeliness, dependability and security. As for necessary throughput, BA
applications usually do not generate high traffic load at the field level due to
the absence of high-speed control loops. Also, event load from stochastic
sources (e.g., light switches) is low. Moreover, the spatial locality of control
relationships is high. Still, considerable amounts of traffic can accumulate
when data have to be collected in a central location from all over a large
system. Data, however, seldom need to be available with full spatial and
temporal resolution in real-time at a management workstation.
For example, it can be perfectly acceptable for the state of a luminaire to be
updated in the central monitoring application only every two minutes. Proper
response time to tenants’ requests is ensured by the local unit controller.
Supervisory controllers can summarize the heating or cooling loads
determined by subordinate HVAC zone controllers for the purpose of calculating
the necessary amount of primary energy conversion, but still log the
information in detail for future operator review. Nevertheless, as a general
rule, management and automation level functions are more demanding in
terms of network throughput to provide acceptable speeds for larger block
data transfers like trend logs or DDC program files.
The previous example already hints at the fact that timeliness is of different
concern for the three layers. Actually, real-time data is only exchanged on the
field and automation level. Here, moderate requirements apply to all time
constraints (periodicity, jitter, response time, freshness/promptness, time
coherence). No special mechanisms (e.g., dynamic scheduling) for handling
these constraints are necessary. It is sufficient even for more demanding
applications to be able to state certain upper bounds on transmission delays.
Dependability (robustness, reliability, availability, safety) translates into the
ability of the network to detect transmission errors, recover from any such
error or other equipment failure and meet time constraints. Guaranteed
performance (still with relaxed timing requirements) is only mandatory for
life-safety applications. Loss of control has no catastrophic consequences
otherwise. Still, a certain amount of fault tolerance is desirable on the field
and automation level in the sense that a single failing unit should not bring
down the whole system. As long as these layers remain operational, having
management functions unavailable for some time is usually acceptable.
The network should also provide appropriate noise immunity (and not
generate unacceptable levels of noise itself). Robustness in this respect is
desirable especially at the field level, where cables are laid in the immediate
vicinity of the mains wiring. Apart from this, the environment of BACS
networks is not particularly noisy, especially in office buildings.
Table 17.2 Selected Service Requirements and Related Mechanisms in Industrial and Building
Automation

Reviewing these requirements, peer-to-peer, event-driven communication
schemes appear well suited to BACS. Medium access control using
deterministic Carrier Sense Multiple Access (CSMA) variants, possibly
supporting frame prioritization, will allow efficient use of the “raw”
throughput capacity available as well as fulfill timeliness requirements for the
lower levels.
This is different when compared to industrial automation, where high-speed
control loops favor time-driven master–slave approaches. Also, regarding
fault tolerance, the focus typically is on redundant design (if necessary) rather
than graceful degradation of functionality as systems need every sensor,
actuator or controller to be operational to fulfill their purpose.
Table 17.2 summarizes the main differences with respect to functions
involving real-time data. Management-level operations may use any “office-
type” network.
For managing the large scale of BA systems, network protocols need to
support hierarchical subdivisions and appropriate address spaces. Larger
installations will run into thousands of meters of network span as well as
thousands of nodes. Networks should also be able to transparently include
wide-area connections, possibly dial-on-demand.
Historically, the level of communications security provided by the variety of
proprietary, undocumented protocols mostly proved to be appropriate for
isolated building automation systems. Nowadays security concerns are
increasing rapidly, however. In part, this is due to the fact that more sensitive
systems like access control and intrusion alarm systems are being integrated.
Moreover, office networks are used to transport automation system data and
remote access is standard on present day systems (as will be discussed in more
detail below). Protection against denial-of-service attacks becomes more of
an issue as buildings get more dependent on automation systems. In any case,
the security focus is on authentication. For example, it is usually not a secret
that a door was unlocked; however, only a trusted entity should be able to do
so.
Securing connection points for remote access is of particular importance.
Since they often allow access to management level functions, attacks on them
will have a higher chance of global effect. BACS field networks are exposed
to (inside) attack as well, especially when run through publicly accessible
spaces. Open media such as wireless and powerline signaling further
increase vulnerability, since access to the medium can be gained in an
unobtrusive manner. Further, the shift to open systems reduces the knowledge
barrier for intruders.
Considering cost, many sensors and actuators (e.g., light switches or
controllable breakers) are cheap. Providing them with fieldbus connectivity
must not be inappropriately expensive. Cost is also an issue in terms of the
manpower involved. Therefore, installation and configuration have to be as simple as
possible.
Wiring can be significantly simplified when a network supports free
topology. One can think of free topology as increasing the stub length in a
bus topology until the bus character disappears. The two bus terminators in a
bus topology will be replaced by a single bus terminator for the free topology
network installed anywhere on the network. Cables should also be easy to run
through ducts. Supplying power to the nodes over the network cable (also
known as link power) both saves additional power wires and allows compact,
inexpensive power supplies. For inaccessible or hazardous areas or special
aesthetic requirements, wireless technologies are deployed. Wireless access
is also interesting for management functions like log file access for service
technicians or presenting user interfaces to tenants on their personal mobile
devices. Retrofit applications will also profit from the ability to use power-
line communication.
17.3.2 Application Model and Services

In the distributed system constituted by a building automation and control
application, a number of nodes (sensors, actuators and controllers) are
connected over a network and communicate through a certain protocol. The
data transported are values from the sensors, which are processed and sent
to the actuators (horizontal communication). In addition, some nodes also
send data directly to actuators (e.g., a manual override or set point change
from an operator workstation), or only consume data from sensors (e.g., for
trend logging; vertical communication).

To the application developer the network represents itself as a set of
elementary data elements, called data points (or simply points). These data
points are the logical representation of the underlying physical process, which
control network nodes drive or measure. Each node can be associated with
one or more data points. In the logical view each data point represents a single
datum of the application. It can correspond to an aspect of the real world (such
as a certain room temperature or the state of a switch) or be of more abstract
nature (e.g., a temperature set point).
The data points are connected through a directed graph, distinguishing output
points and input points. The application is defined by this graph and a set of
processing rules describing the interactions caused by the change of a point
value. The logical links which this graph defines can be entirely different
from the physical connections between the nodes.
The main characteristic of a data point is its present value. How the digital
value is represented is determined by the basic point type, such as integer,
floating point, Boolean or enumeration types. To further qualify their value,
data points are associated with additional meta data (attributes), which are
important in the context of the control application.
A unit attribute adds a semantic meaning to the present value by describing
the engineering unit of the value. This attribute is often implied by a certain
complex point type defined for a specific application, such as
“Temperature.” These type attributes are often used to ensure compatible
connections between data points.
A precision attribute specifies the smallest increment that can be represented.
Attributes such as minimum value, maximum value, and resolution may
describe the observable value range of the data point more precisely. The
resolution can be the actual resolution of the physical sensor and may be less
than the precision.
A key attribute is the location of a point, which is often correlated to a name.
Building planners may design the point name space according to
geographical aspects, such as building, floor or room and/or according to
functional domain aspects, such as air conditioning or heating. The name
space hierarchy need not correspond with the network topology (although it
often does, especially with a geographic hierarchy). An example pattern is
Facility/System/Point, e.g., “Depot/Chiller1/Flow Temperature.”
Often, alarm indicator attributes are used. By presetting certain bounds on a
data point value, the data point can switch from normal mode to alarm mode,
e.g., when a temperature limit has been exceeded. This attribute can be
persistent so that it can be used to detect alarms even after the value has
returned within bounds.
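The following minimal Python sketch illustrates how a data point with such meta data might be modeled; the attribute names and the latching alarm behavior are illustrative assumptions, not taken from any particular protocol.

from dataclasses import dataclass
from typing import Optional

@dataclass
class DataPoint:
    name: str                       # e.g., "Depot/Chiller1/Flow Temperature"
    present_value: float
    unit: str                       # engineering unit, e.g., "degC"
    precision: float = 0.1          # smallest representable increment
    min_value: float = -50.0
    max_value: float = 150.0
    high_alarm_limit: Optional[float] = None
    in_alarm: bool = False          # persistent alarm indicator

    def update(self, value: float):
        self.present_value = value
        if self.high_alarm_limit is not None and value > self.high_alarm_limit:
            self.in_alarm = True    # latches until explicitly acknowledged/reset

flow_temp = DataPoint("Depot/Chiller1/Flow Temperature", 6.2, "degC",
                      high_alarm_limit=12.0)
flow_temp.update(13.5)
print(flow_temp.in_alarm)           # True: the limit was exceeded and the indicator stays set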
An additional, important concept for data points is that of point priorities. In
building automation applications, it is common that multiple output points are
associated with a single input point. If the output point values conflict
with each other, the one with the higher priority prevails, e.g., a window contact
overrides the air-conditioning thermostat.
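A minimal Python sketch of such priority arbitration is shown below; the eight-level priority array and the relinquish default are illustrative assumptions loosely inspired by common practice, not the definition of any standard.

class PrioritizedInput:
    """Input point written by several sources; the highest asserted priority wins."""
    def __init__(self, levels=8, relinquish_default=None):
        self.slots = [None] * levels          # index 0 = highest priority
        self.relinquish_default = relinquish_default

    def write(self, value, priority):
        self.slots[priority] = value

    def relinquish(self, priority):
        self.slots[priority] = None

    @property
    def effective_value(self):
        for value in self.slots:
            if value is not None:
                return value
        return self.relinquish_default

valve = PrioritizedInput(relinquish_default=0.0)
valve.write(0.6, priority=5)   # thermostat requests 60 % cooling
valve.write(0.0, priority=1)   # open window contact overrides at higher priority
print(valve.effective_value)   # 0.0 -- the window contact wins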
Typically, points in the data point graph can be logically grouped to describe
specific functions of the system. Such groups forming a coherent subset of
the entire application (both data points and the processing rules that belong to
them) are referred to as functional blocks. While these profiles do not
influence the graph as such, they allow a functional breakdown of the system
and aid in the planning and design process by giving the planner a set of
building blocks for the distributed application. Functional blocks can also be
grouped to form larger functional blocks.
This concept is illustrated in Fig. 17.3. The vertices in the graph represent the
data points, the thin-line edges represent network connections, and the bold
edges represent processing (data flow) connections within a field unit. By grouping certain points
by their functional relation, the functional blocks FB1 and FB2 are formed.
These may or may not coincide with actual physical nodes. At higher levels
of abstraction, the application engineer may work with aggregates of
functional blocks. The aggregate behaves like its own functional entity with
the bold vertices being the interfacing data points. Using this technique
planners can construct templates of complex functionality and instantiate
them multiple times without repeated engineering effort.

Figure 17.3 An example of a data point graph using different functional blocks and aggregation.

This high-level view, which is an accepted standard for BA applications,
should be reflected by the data model and services of the network protocol.
Data points then serve as the external interface for accessing device
functionality. Establishing the communication links for horizontal
communication to build the application graph at installation time is known as
binding.
Since the time characteristics of horizontal traffic are known at the design
stage, the application can be subjected to a priori performance analysis, and
the amount of such identified data in building automation applications can be
quantified.
As one output data point will often be bound to multiple input data points,
horizontal communication benefits from protocol support for multicast
relationships. Such support also may facilitate obtaining set coherence
(which, for example, can be of interest when switching groups of luminaires).
Since the multicast destination groups will be static as well, labeling them
with logical identifiers can simplify addressing. This also enables a
publisher–subscriber style of communication where producers and consumers
of information need not be aware of each other.
Generally, producer–consumer relationships seem most suitable for horizontal
communication. For the node application programmer, a shared-variable
model is particularly convenient. On every change of a specially designated
variable (possibly holding the present value of an output data point), the
nodes’ system software will automatically initiate the necessary message
exchange to propagate the updated value to the appropriate receivers.
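The following Python sketch illustrates the shared-variable idea: binding an output point to several input points and propagating the value automatically on assignment. The in-process "network" and all names are illustrative only; a real node's system software would send the corresponding messages on the control network.

class InputPoint:
    def __init__(self, name):
        self.name = name
        self.value = None

    def receive(self, value):
        self.value = value
        print(f"{self.name} updated to {value}")

class OutputPoint:
    def __init__(self):
        self._value = None
        self._subscribers = []            # bound input points (the "binding")

    def bind(self, input_point):
        self._subscribers.append(input_point)

    @property
    def value(self):
        return self._value

    @value.setter
    def value(self, new_value):
        self._value = new_value
        for inp in self._subscribers:     # system software propagates the update
            inp.receive(new_value)

# One switch output bound to a group of luminaires (a multicast-style relation).
switch = OutputPoint()
for lamp in ("Lamp_A", "Lamp_B", "Lamp_C"):
    switch.bind(InputPoint(lamp))
switch.value = True                       # a single assignment updates every bound receiver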
When bindings can be defined without changing the node application, the
latter can be created independent of its particular use in the overall system.
This is especially useful for smart field devices. Unlike DDC stations, their
programming is more or less fixed due to resource limitations.
Standardization of the functional blocks they represent (“device profiles”) is
instrumental for enabling interworking between such nodes.
Table 17.3 Horizontal Versus Vertical Communication

Vertical communication can be divided into services related to accessing and
modifying data from within the application, for example adjusting a set point
or retrieving trend logs (frequently referred to as management
communication), and others concerned with modifying the application itself,
for example changing binding information or program transfer (engineering
communication).
While horizontal communication only involves the exchange of present
values (or alarm indicators) since a consistent interpretation of their semantics
by all communication partners was ensured at setup time, for both
management and engineering tasks access to the meta data (descriptive
names, units, limits, etc.) pertaining to a data point is relevant as well.
Vertical communication typically relates to information stored within a single
node, which suggests unicasting as the prevalent mode of communication.
Engineering communication is supported by the availability of reliable point-
to-point connections. Still, broadcasts are needed to support functions like
device or service discovery and clock synchronization.
Vertical communication is initiated on-demand, i.e., the communication
targets are chosen ad hoc. Related services therefore most often follow a
client-server model. Table 17.3 compares the different properties of
horizontal and vertical communication.
Data points which need continuous monitoring can be polled cyclically.
Additionally, more elaborate protocols provide an event-based mode of
communication. In such a model, services exist for clients to subscribe to (and
unsubscribe from) change-of-value (COV) notifications, which are generated
when selected point values change by a specified amount. Alternatively,
notifications may only be generated when the value exceeds or falls below
certain limits (coming/going alarms).
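A minimal Python sketch of such change-of-value reporting could look as follows; the names, the increment semantics and the callback interface are illustrative assumptions rather than the services of any specific protocol.

class CovPoint:
    def __init__(self, value=0.0):
        self._value = value
        self._subs = []   # list of [callback, increment, last_reported_value]

    def subscribe(self, callback, increment):
        self._subs.append([callback, increment, self._value])

    def update(self, new_value):
        self._value = new_value
        for sub in self._subs:
            callback, increment, last_reported = sub
            if abs(new_value - last_reported) >= increment:
                callback(new_value)       # send the COV notification
                sub[2] = new_value        # remember what was last reported

room_temp = CovPoint(21.0)
room_temp.subscribe(lambda v: print(f"COV notification: {v} degC"), increment=0.5)
room_temp.update(21.2)   # change below the increment -> no notification
room_temp.update(21.6)   # changed by 0.6 since last report -> notification sent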
For engineering tasks, it is desirable that services are provided which allow
devices present on the network to be discovered automatically. They should
also be able to provide descriptive information pertaining to the data points
(and possibly functional blocks) they provide. Configuration information
(e.g., binding information or the device location) should be retrievable as well
to minimize dependence on external, possibly inaccurate databases.
In addition to the manual configuration of bindings, system concepts may
include support for devices to provide self-binding capability. Usually, the
system integrator is responsible that the processing rules associated with data
points bound to each other yield a sensible combination. Automatic binding
schemes may use standardized identifiers for particular functional blocks to
replace this knowledge. This necessarily reduces flexibility as it requires a
stringent high-level application model. As an aside, such self-binding
capabilities are a prime example of “vertical” communication between
devices of the same stratum, illustrating that the three-level model can be
considered a functional classification only.
17.3.3 Network Architecture

Although the three-level model from Fig. 17.2 suggests a matching three-
level hierarchical network architecture, strictly implementing this concept is
not appropriate in many cases. It was already discussed that devices
implement a mix of appropriate functionality from all three levels. Network
architectures have to cater to this mix of services and appropriate
requirements.
In particular, intelligent field devices incorporating controller functionality
render the notion of a separate automation network absurd. A strictly three-
tier network would also unnecessarily complicate sharing devices (like
sensors in particular) between functional domains.
Still, cost-efficient device networking technologies cannot accommodate the
throughput requirements created by log file transfers or central real-time
monitoring of numerous event sources. Therefore, a two-tier architecture has
become popular where local control networks are interconnected by a high-
performance network backbone.
A typical building network infrastructure consists of independent control
networks on every floor, which connect sensors and actuators at the room
level. Control networking technologies are geared toward cost-efficient
implementation of field-level and automation level tasks where throughput is
less an issue than timeliness.
These networks are connected through a backbone channel for central
monitoring and control, remote maintenance and diagnostics, which may also
span building complexes. Plant networks may use a separate controller
network, although DDC stations will often connect to the backbone directly.
While BACS traditionally use dedicated transmission media, most modern
buildings are also equipped with structured cabling for office data networking
throughout the building. The IT infrastructure has become an integral part in
modern buildings. The attempt to leverage this infrastructure for automation
purposes is a natural consequence.
Since management level services do not impose any timeliness constraints
worth mentioning, office networks will always be able to assume functions
of this level. Given the fact that BA applications are not exceedingly
demanding in terms of timeliness and reliability, IT technology is also in a
position to handle automation level services. Extending the unification
process to the field level is still a theoretical possibility though, as cost
efficiency, robustness and ease of installation cannot yet match dedicated
solutions.
It should be noted that adopting “office-standard” technologies need not
necessarily mean having office and automation traffic use the same wires.
Adopting IT networks for control purposes is actually a three-fold decision.
First, one can employ IT technology at the physical and data link layer only,
running custom protocols above. In this case, mainly questions of design
performance have to be considered. Second, one can choose to adopt standard
office networking protocols. This facilitates integration, but already has
security implications, as standard protocols and especially their off-the-shelf
implementations provide a broader area for attack; the ability to make use of
approved and tested security measures is generally considered to offset this
disadvantage. Third, control and IT communications can actually be run
over the same network. This makes an integrated assessment of network
quality of service necessary. They may or may not use the same upper-layer
protocols in this case, although adopting standard IT practice will certainly
make administration easier.
Today, “IT network” has effectively become a synonym for “IP network.”
Making use of the associated standard application-related protocols as well
holds considerable potential for building integrated systems, including greatly
facilitating remote connections via the Internet.
Although IP networks cannot yet fulfill the quality-of-service requirements of
more demanding control applications, since delay cannot be fully controlled,
they are suitable (and also applied in practice) for use as a backbone network
backbone network in building automation systems. Still, individual control
networks should depend on the backbone just as little as unit controllers
should on a central station. To provide additional reliability (for example, for
safety-related functions), an additional control backbone (possibly using a
fiber optic ring network) may be installed in parallel to the common office
network backbone.
17.3.4 Network Interconnection

Building automation systems may span a variety of different networks, which
again may or may not share a common notion of their distributed application
(i.e., resource models, services, and namespaces). Discontinuities especially
occur when integrating special-purpose systems, no matter whether
centralized or distributed.
In the general case, gateways are needed to handle the interconnection.
Gateways effectively need to maintain a database of mappings between
network entities on either side. This translation does not only introduce
considerable engineering effort, but also has to be provided with a multitude
of application-related parameters to fill the gaps which will necessarily occur
in mapping protocol constructs between both sides. Also, it uses considerable
processing power.
Therefore, gateway functionality is usually integrated in nodes which are
designed to perform customizable processing anyway. Traditionally, this
applies to controllers and server stations (which therefore also handle network
transitions in the classic three-tier model).
Figure 17.4 System with control network islands and horizontal proxy nodes/gateways.

In the two-tier model, gateways are in an ideal position to assume additional
tasks as well. At the intersection of control network and backbone, they can
for example perform trend logging, thus freeing the backbone network from
real-time concerns and taking load off the control network. They could also
perform logic control. Dynamic application frameworks such as OSGi allow
providing gateways with the necessary flexibility.
With the gateway approach, control applications on every network use their
native protocols to communicate with each other, with the gateway
establishing the semantic connection. No half needs to deal with any protocol
specifics of the other half. All the intricacies of the specific protocol can be
abstracted and hidden behind the gateway.
This approach is desirable when applications need to be working across the
boundaries of different control network systems and can operate with the
common denominator of the present services. In building automation,
gateways typically operate on the abstraction level of data points, which
represent the common denominator regarding the application model thanks to
the real-world orientation of their concept. This is especially the case for data
point connectivity during regular operation. The gateway functions needed
for this type of integration are limited to a small set of services, such as read
value, write value and change-of-value subscription.
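The following Python sketch illustrates such a data-point-level gateway; the mapping table, the scaling parameters and the placeholder driver call are illustrative assumptions, not the interface of any real product.

class Gateway:
    def __init__(self):
        # Mapping table engineered at contract time: side-A point -> side-B point.
        self.mappings = {}

    def add_mapping(self, point_a, point_b, scale=1.0, offset=0.0):
        self.mappings[point_a] = (point_b, scale, offset)

    def on_cov_from_a(self, point_a, value):
        """COV received on network A: translate the value and write it to network B."""
        point_b, scale, offset = self.mappings[point_a]
        translated = value * scale + offset
        self.write_to_b(point_b, translated)

    def write_to_b(self, point_b, value):
        # Placeholder for the real protocol driver of network B.
        print(f"write {value} to {point_b} on network B")

gw = Gateway()
gw.add_mapping("A/PresenceDetector.Room101", "B/HVAC.Room101.Occupied")
gw.on_cov_from_a("A/PresenceDetector.Room101", 1)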
Gateways can directly translate between two control networks, providing
horizontal connections from data points in one system to data points in
another system as depicted in Fig. 17.4. This is especially appropriate for
decentralized control tasks. As an example, consider a lighting system using
control network A using information from presence detectors connected to
the HVAC system using control network B. This is, however, a less
commonly used technique.
More frequently, both control networks use gateways for vertical connection
to a third, common standard. This may be the backbone network or also a
software platform on a management server. This is more frequently done,
since accepted common standards for integration exist. Thus, different control
networks only need to provide one mapping to the common standard each
instead of multiples to each other to achieve integration. Fig. 17.5 illustrates
this concept.

Figure 17.5 System with control network islands and a common backbone.

A key limitation of the gateway approach is that mapping all intricacies of a
protocol is extremely hard (and thus often a theoretical possibility only).
While the data point abstraction will serve as the common denominator for
the exchange of process-related data, most engineering services are
impossible to translate because these services are usually highly technology
specific. Actually, they typically already require the communication partners
on both ends to know the protocol in full detail. The only problem which
remains is that the intermediate network does not support those services
natively. A beneficial approach in this case is to transfer all protocol layers of
the control network over the intermediate (backbone) network. The
intermediate network basically functions as a transmission medium for the
control network protocol. This method is known as the tunneling of a control
network protocol over an intermediate network (e.g., an IP backbone). The
devices at the boundaries of the intermediate network, which form the ends
of the tunnel between two or more control network segments, are called
tunneling routers.
Figure 17.6 System with tunneling routers.

The main advantage of the tunneling approach is the transparent connection
of control nodes over IP networks. This is especially convenient for two
purposes. First, separate control network segments may be connected over a
higher performance backbone network. Second, for remote administration, a
system’s native tools can be run on a node on the host network to manage
(e.g., commission, monitor, control, visualize) the control network. The
software in this case is specifically written for a given control network
protocol. In this case, the host network node usually implements the tunneling
router functionality itself. Fig. 17.6 depicts a system with tunneling routers
and IP-based control nodes.
Technically, the tunneling approach principally has to overcome the problem
that packets on an arbitrary packet-switched network may be reordered,
delayed, duplicated, or dropped. A number of techniques have been
standardized to address these problems for all field network protocols of
relevance in building automation.
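A minimal, purely illustrative Python sketch of such an encapsulation is shown below: a small header carrying a sequence number lets the receiving tunneling router detect duplicated or stale packets. The header layout is an assumption made for this example and is not that of any standardized tunneling protocol.

import struct

HEADER = struct.Struct("!HI")   # 16-bit protocol identifier, 32-bit sequence number

def encapsulate(protocol_id, seq, control_frame):
    """Wrap a native control network frame for transport over UDP/IP."""
    return HEADER.pack(protocol_id, seq) + control_frame

def decapsulate(packet, last_seq):
    """Unwrap a packet; drop duplicates and stale (out-of-order) packets."""
    protocol_id, seq = HEADER.unpack_from(packet)   # protocol_id identifies the tunneled fieldbus
    payload = packet[HEADER.size:]
    if seq <= last_seq:
        return None, last_seq            # duplicate or stale: discard
    return payload, seq                  # deliver the frame to the local segment

# Example: a dummy fieldbus frame is tunneled and received in order.
packet = encapsulate(protocol_id=0x0101, seq=42, control_frame=b"\x10\x20\x30")
frame, last_seq = decapsulate(packet, last_seq=41)
print(frame is not None, last_seq)       # True 42 -- frame accepted, sequence advanced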
With the tunneling approach, control network segments are not decoupled.
They have to be considered as a whole both for troubleshooting and
functionality assessment. When gateways are used, the coupled systems stay
independent. They can be commissioned (and, if necessary, repaired)
separately. As an important side note, this independence is sometimes a
desirable property (cf. the discussion regarding “loose coupling” in Section
17.2.2). Therefore, even connections between systems with identical network
stacks may be established on the application layer in certain cases by proxy
nodes. A proxy node is also included in Fig. 17.4.
Gateways and tunneling routers are no panacea, however. For high-level
integration, the common semantics of data points suffice.
Intelligent buildings and reaping the benefits of sensor synergy, however,
demand deeper integration on the device level. Obviously, it is not feasible to
integrate complex gateway functionality into every device.
Also, customers want to mix and match components from different vendors
to build best-of-breed systems and realize hitherto unattained levels of
functionality. Escaping vendor lock-in is especially significant given the fact
that BACS have high life expectancy and need to be capable of continued
evolution. Not being bound to one original vendor can significantly lower the
total cost of ownership.
To achieve this, all aspects of interfacing with a system have to be open. Very
different notions exist concerning the meaning of the word “open.” For the
purposes of this discussion, a system technology is considered open if its full
specifications are made available to the general public and can be
implemented at nondiscriminatory conditions. Such systems can be repaired,
modified, and extended by everyone with the necessary basic qualifications
without having to rely on the original manufacturer. Unlike gateways, which
need only expose data points defined at contract time, open systems are
indeed future-proof. Besides the specification of the network stack with its
protocols and services, data point attributes and functional profiles have a key
role in the specification of open systems whose parts will interwork and
interoperate, respectively.
The effort to engineer an open system is still considerable, since many
parameters still have to be aligned to achieve interoperability. “Open” does
not mean “plug and play”; it merely ensures that interoperability can be
achieved without further involving equipment manufacturers. To the end
user, a system must always appear homogeneous, no matter how complex the
interplay of its components may be.
Therefore, the benefits of open systems are not free. The reduction in lifecycle
cost thanks to the flexibility gained, however, is generally considered to offset
the initial additional hardware and engineering cost.

17.4 Standards overview

The field of building automation has been dominated by a plethora of
proprietary solutions for a long time. Its moderate performance requirements
still encourage ad hoc approaches. Yet pushed by market demand for open
systems, even market leaders are gradually abandoning proprietary designs.
Official standards bodies ensure that the standards they maintain and publish
fulfill the conditions of open systems as outlined, i.e., nondiscriminatory
access to specification and licensing. Hence, adherence of equipment to such
formal standards is required in an increasing number of tenders. Standards
directly related to building automation system technology are created in the
United States and in a number of European and international standards
bodies.
ISO TC 205 (Building Environment Design) is publishing a series of
international standards under the general title of Building Automation and
Control Systems (BACS). The series includes a generic system model
describing hardware, functions, applications and project
specification/implementation of a BACS (the latter parts still to appear). It
also contains the BACnet standard discussed in the following section.
CEN TC 247 (Building Automation, Controls, and Building Management)
is responsible for paving the way in European BA protocol standardization
through cumulative prestandards of industry-standard protocols for the
automation and field level [18], [19] which also included a collection of
standardized object types for the field level. TC 247 also made significant
contributions.
CENELEC TC 205 (Home and Building Electronic Systems, HBES)
oversees the EN 50090 series, a standard for all aspects of HBES tightly
coupled to KNX (which will also be presented in the following section). Its
scope is the integration of a wide spectrum of control applications and the
control and management aspects of other applications in and around homes
and buildings, including the gateways to different transmission media and
public networks. Moreover, it takes into account all matters of EMC and
electrical and functional safety.
ISO/IEC JTC1 SC25 WG1 (Information Technology, Home Electronic
System) focuses on the standardization of control communication within
homes. Its work specifically includes residential gateways between the
internal Home Electronic System network and external wide-area networks
such as the Internet. Despite its focus on the home environment, the work of
WG1 may be relevant since it also looks at similar management functions in
commercial buildings.
A number of standards—closed and open company standards as well as
formal ones—further contribute to the overall picture by providing important
directions for BACS subsystems. These will be covered in the remainder of
this section.
17.4.1 Subsystem Solutions

On the management level, IT standards prevail for connectivity, as was
already discussed. Application level issues will be covered in the next
subsection. On the automation level, EIA-485 is very popular, with many
(proprietary) protocol variants on top. The most notable example which also
provides a certain degree of openness is Johnson Controls Metasys N2.
Fieldbuses which are well-established in factory and process automation (like
Interbus, CAN-based protocols such as DeviceNet or CANopen, and Profibus
DP) are largely irrelevant in BA, except for occasional use in “plant room
network” controller-to-controller communication (specifically including
variable frequency drives for fans and pumps).
Although never formally standardized, Modbus can definitely be regarded as
an open protocol. This protocol was designed in the late 1970s and is
currently supported by most programmable logic controllers (PLCs) in some
form. Implementation of the Modbus protocol is license-free, which makes it
especially interesting for integration and interfacing between BAS and other
systems. It still is supported to some extent by numerous BA controllers,
especially for the purpose of HVAC controller-to-controller-communication
(e.g., with chillers). Moreover, Modbus is also present in devices belonging
to other building disciplines, like electricity meters or fire alarm systems.
The Modbus application layer is basically confined to reading and writing of
register values using a simple request/response protocol. This yields a very
flexible/versatile application layer, but causes high engineering effort, since
even the format of primitive data types has to be coordinated. Modbus
supports serial communication using a simple master–slave protocol over
EIA-485. A total of 247 different slaves can be addressed. The typical data
rate is 19,200 b/s. A mode of transmission over TCP/IP is also defined, in
which every node can be both client and server.
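As an illustration of how simple the serial variant is, the following Python sketch assembles a Modbus RTU "Read Holding Registers" request (function code 0x03) together with the standard Modbus CRC-16; the slave address and register numbers are arbitrary example values.

import struct

def modbus_crc16(data: bytes) -> bytes:
    """CRC-16 with the Modbus polynomial, returned low byte first as required for RTU frames."""
    crc = 0xFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            if crc & 1:
                crc = (crc >> 1) ^ 0xA001
            else:
                crc >>= 1
    return struct.pack("<H", crc)

def read_holding_registers(slave: int, start: int, count: int) -> bytes:
    """Build the request frame: slave address, function 0x03, start register, register count, CRC."""
    pdu = struct.pack(">BBHH", slave, 0x03, start, count)
    return pdu + modbus_crc16(pdu)

# Poll 2 registers starting at address 0 from slave 17.
frame = read_holding_registers(slave=17, start=0, count=2)
print(frame.hex(" "))

Because the application layer only exchanges raw register values, the meaning, scaling and data type of each register still have to be agreed between both parties, which is the source of the engineering effort mentioned above.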
At the field level, wireless technologies hold great promises for reducing the
effort spent on sensor cabling and installation. Yet to realize this benefit,
nodes have to run on batteries for months, or even better years. Control
applications in BA do not require high bandwidth, but still demand reasonably
low latency. Support for large device arrays is an added benefit. Popular
office wireless standards like IEEE 802.11 are obviously not optimized for
these requirements. Even Bluetooth is designed for being embedded in
devices which consume more power. Therefore, these technologies are better
suited to management-level functions.

IEEE 802.15.4 defines physical and medium access control layers for low-rate
wireless personal area networks. It contains methods to provide
(cumulatively) long periods of deep sleep, which are necessary to save power
(making use of the quick transitions between sleep mode and active state
possible with current silicon). A coordinator periodically can transmit beacon
frames, which among other things are used to synchronize attached devices.
Devices which expect data (periodically, at an application-defined rate, e.g.,
sensors) can wake up only for the beacon frame, which indicates whether data
is actually available for them. Devices which only intermittently have data to
transmit (at an application/external stimulus defined rate, e.g., light switches)
can wake up, synchronize with the beacon, transmit and go to sleep again.
Small packets and CSMA ensure that nodes only transmit when necessary.
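The following conceptual Python sketch is a simulation of this idea only, not an implementation of the IEEE 802.15.4 MAC: a battery-powered sensor sleeps between beacons, wakes up to receive each beacon, and stays awake only when the beacon flags pending data.

import random

def coordinator_beacon():
    """Simulated beacon content: whether data is pending for our device (random here)."""
    return {"pending_data_for_us": random.random() < 0.2}

def device_cycle(beacon_interval_s=1.0, cycles=5):
    for _ in range(cycles):
        # ... deep sleep for beacon_interval_s would happen here ...
        beacon = coordinator_beacon()          # wake up just in time to receive the beacon
        if beacon["pending_data_for_us"]:
            print("stay awake: request and receive the pending frame")
        else:
            print("nothing pending: back to deep sleep immediately")

device_cycle()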
The Zigbee alliance adds additional layers (whose specification is not openly
available) to IEEE 802.15.4. They provide network layer functionality with
additional security including AES (Advanced Encryption Standard) and
routing functionality for extending the typical 50 m range of a “segment” by
supporting mesh topologies for dynamic creation, consolidation and splits.
Zigbee also adds an application support layer with discovery and binding plus
“application objects” (functional blocks), which currently cover building
automation, plant control and home control applications. Latencies of 15 ms
from sleep to actual transmission, as well as a significantly smaller and less
resource-consuming stack than with Bluetooth, are advertised.
As for standards covering specific building service domains only, the Digital
Addressable Lighting Interface (DALI) is an IEC standard and widely
accepted for lighting applications. Its primary focus is on replacing the
traditional 0–10 V interface for dimmable electronic ballasts. A DALI loop
can contain up to 64 individually addressable devices. Additionally, each
device can be a member of 16 possible groups. Devices can store lighting
levels for power-on, system failure and 16 scene values, plus fading times.
There are also immediate commands (without store functionality) and
commands for information recall (like lamp status). Loops can be up to 300
m long, with free topology. The data rate is 2400 b/s using a master–slave-
based protocol.
DALI also accommodates operation buttons, light and presence detectors.
Addresses and all other settings are assigned over the bus. The necessary
functionality can be provided by hand-held programming devices, gateways,
or wall-box controllers which add it to their operation button functionality.
Finally, for remote meter reading, M-Bus has gained a certain degree of
importance in Europe. Its application layer supports various metering
applications and includes support for advanced functionality like multiple
tariffs. It operates on low-cost twisted pair cabling, with the data link layer
based on the IEC 870-5-1/-5-2 standard for telecontrol transmission
protocols. A serial master–slave protocol with data rates between 300 and
9600 b/s is used. A segment can contain up to 250 devices and cover a
maximum distance of 1000 m (multiple segments are possible). In the
master-to-slave direction, data is transmitted using voltage modulation, while
in the reverse direction, current modulation signaling is used.
17.4.2 Open Management Integration

At the management level, office network and automation standards prevail.
Mapping BA functionality and system states to protocols and representation
formats used in the IP-dominated IT networks is of particular interest.
A variety of Web servers for BACS visualization and control are available.
HTML/Java applet user interfaces are especially convenient in office
environments where lightweight walls are used, reconfiguration is frequent and
room control from the desktop is desired, since they eliminate the need for
dedicated room control units. Rights management can be implemented on a
per-workstation basis (for functions with local scope) as well as on a
per-user basis for administrative-level functions.
Formats and protocols like HTML and HTTP are designed for operator-machine
communication, not for transmitting information from one machine to another.
For the integration of BACS with other enterprise computing applications such
as, for example, facility scheduling, maintenance management, and energy
accounting, a suitable data model and corresponding services are needed.
For manipulating single control variables over a gateway, the use of the
Simple Network Management Protocol (SNMP) proved to be a practical
approach. In this case, the control variables are mapped to management
information base (MIB) variables that can be accessed over the Internet via
SNMP. While this method illustrates the gateway concept, it has less practical
relevance in building automation.
Today, it is common practice to model data structures as objects, including
those in the control domain. Several standards for distributed object-oriented
systems are commonly used in the Internet and thus are candidates for usage
in application layer gateways. Object access protocols over IP networks are
provided by the Common Object Request Broker Architecture (CORBA), the
Java Remote Method Invocation (RMI) interface, the Microsoft Distributed
Component Object Model (DCOM), or the Simple Object Access Protocol
(SOAP) using XML notation [27], [28]. All these technologies are found in
proprietary gateway solutions.

One of the first open standards for accessing process data using an object-
oriented approach which found broader acceptance by different vendors is
OLE for Process Control (OPC). OPC, which is based on DCOM, is also widely
used in building automation. The OPC gateway acts as a server providing data
from the control network to the client. The namespace is organized as a tree.


Services implemented are not limited to data access and exchange, but also
include alarms and events and historical data access. A number of PC-based
OPC servers and clients (e.g., visualization tools) for BAS are available.
The main disadvantage of plain OPC is its tight relation to Windows-based
systems. Because of this platform dependence, a newer trend in standardization
focuses on SOAP/XML for accessing the data objects in the building. In the
recent past, a number of initiatives have produced platform-independent
gateway standards based on XML/SOAP.
For the OPC data access services, the OPC XML-DA standard enables access to
data on an OPC server through Web services. Two upcoming standards designed
specifically for the building automation domain are of particular interest and
are supported by important manufacturers in the area: oBIX [33] and BACnet/WS
(covered in the following section).

17.5 Open system solutions

BACnet, LonWorks and EIB/KNX are open systems claiming the ability to
cover BA applications in their entirety. They all have achieved considerable
significance in the worldwide market (in case of BACnet and LonWorks) or
in the European market (in the case of EIB/KNX) and are often chosen by
both customers and system integrators for complete system solutions.
This section introduces the following aspects of these systems:
standardization and certification; physical characteristics including supported
media and network topologies; communication paradigms; application data
model; and services. In addition, standard hardware components and
commissioning tools are discussed where appropriate.
17.5.1 BACnet

The Building Automation and Control Networking Protocol (BACnet) was
developed specifically to address the needs of building automation and
control systems of all sizes and types. Capabilities vital to BA applications
were built into BACnet from the beginning in order to ensure the highest
possible level of interoperability in an environment possibly involving
multiple vendors and multiple types of building systems.

The development of BACnet began in 1987, when an American Society of
Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) project
committee could not find an existing protocol that satisfactorily met all of the
criteria the committee members had in mind for a suitable standard
communication protocol for building automation applications.


The development effort was finally completed in 1995, when BACnet was
first published as an ANSI/ASHRAE standard. In 2003 BACnet was adopted
as both a CEN and ISO standard and has been, or will be, adopted as a national
standard by the 28 member countries of the EU pursuant to CEN regulations.
It has also been adopted as a national standard by Korea and is presently under
active consideration in many other countries including Russia, China and
Japan.
The BACnet specification is under continuous maintenance and further
development. It is maintained by
ASHRAE Standing Standard Project Committee (SSPC) 135. SSPC members
represent all sectors of the industry. Additionally, participation in the
development process is completely open. Any interested parties are actively
encouraged to provide comments and suggested changes.
Based on surveys conducted in the United States, Europe, and Japan in 2003,
there are now more than 28 000 installations in 82 countries and on all
continents. The use of BACnet is free of any licenses or fees.
“BACnet Interest Groups” (BIGs) exist in Europe (BIG-EU), North America
(BIG-NA), AustralAsia (BIG-AA) and the Middle East (BIG-ME).
Additional BIGs are in various stages of development in China, Japan, Russia
and Sweden. The Swedish group may expand to include the other
Scandinavian countries. Each BIG has its own unique character: while the
majority of BIG-EU members represent corporations, for example, almost all
members of BIG-NA come from colleges and universities. In the United
States, BACnet manufacturers have formed the BACnet Manufacturers
Association (BMA) which, in turn, operates the BACnet Testing Laboratories
(BTL). All these organizations are, to varying degrees, active in promotional
activities, educational programs, the exchange of practical field experiences,
interoperability issues, testing and certification and, last but not least,
standards development.
While BACnet messages can, in principle, be conveyed over any network, a
small number of network types were standardized for BACnet’s use in order
to maximize the probability that any two devices of comparable functionality
would use the same type. The network types were chosen to cover a range of
speed and throughput. They are Ethernet, ARCNET, Master–Slave/Token-Passing
(MS/TP), LonTalk, and Point-to-Point (PTP). Each local area network type,
except MS/TP and PTP, is a standard, off-the-shelf technology. MS/TP
addresses connectivity over twisted pairs using EIA-485 signaling while PTP
supports dial-up communications and other point-to-point applications using
EIA-232 and, possibly, modems or other data communication equipment.
Note that the use of the LonTalk protocol is limited to transporting BACnet-
specific messages. In particular, BACnet does not make use of the LON
standard network variable type (SNVT) concept. Analyses of MS/TP
performance, as well as of the optimum packet length and buffer sizes for
BACnet over Ethernet, have been published in the literature.
The desire to be able to make use of the Internet Protocol (IP) was recognized
early on and in 1999 “BACnet/IP” was finalized. The protocol stack was
extended with a “BACnet Virtual Link Layer” (BVLL) which allows
underlying protocols, such as the User Datagram Protocol over IP, to be used
as if they were in themselves a datalink layer. Thus, IP networks are now
natively supported by the existing BACnet network layer which allows
BACnet devices to communicate using IP directly, rather than via tunneling
routers, as had been specified in the original standard. The MS/TP EIA-485
medium provides a low-cost, well-established means for communication up
to 78.4 kb/s and is useful with traffic loads such as would typically be
experienced with unitary or application specific controllers. BACnet/IP and
BACnet over Ethernet are more suited to communications involving higher
data volumes. ARCNET is also widely employed for controller-to-controller
communication in the United States and Asia due to the advent of the low-
cost and relatively high-speed (156 kb/s) twisted pair version. PTP is still
occasionally used but has been largely superseded by the Internet, at least for
workstation traffic. Only two companies are known to have offered BACnet
over LonTalk simply because more cost-effective alternatives, such as twisted
pair MS/TP or ARCNET, are readily available. As for wireless
communications, a transparent bridging solution based on IEEE 802.11 has
been recently presented [39] while BACnet over wireless Ethernet has been
around for years, proven largely at trade show exhibitions.
The base element in the BACnet network topology is the segment. Segments
are physical runs of cable, which can be coupled using repeaters and bridges
to form a network. BACnet networks (of possibly different media types) are
connected by routers to form a BACnet internetwork. Only one path may exist
between any two devices on an internetwork. BACnet also provides support
for intermittent connections (like PTP) managed by half-routers.
A BACnet network address consists of a 2-byte BACnet network number and
a local address of up to 255 bytes. The local address is specific to the link
layer medium, e.g., an IP address for BACnet/IP or a MAC address for LANs.
The BACnet routers which connect the individual networks route packets
based on the network numbers. These routers are required to be self-learning.
Provided with the network numbers for each of their ports, they are able to
learn the topology by using appropriate router network management services,
such as Who-Is-Router-To-Network.
BACnet represents the functionality of a BACS as a set of objects. Each
BACnet object is a collection of data elements, all of which relate to a
particular function. These objects correspond to the data points of the control
application. The individual data elements are called the properties of the
object. For example, an analog input object that reports room temperature
will first of all have a “present-value” property (which is associated with the
actual space temperature read from the physical input). Other properties
describe the sensor, minimum and maximum values of the input, resolution
and engineering units of the value, and indicate the reliability status of the
sensor. The definition of each object type indicates via a “conformance code”
whether a given property is required or optional, read-only, or required to be
writable.
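To make the object/property model concrete, the following Python sketch
represents such an analog input object as a simple dictionary. The property
names follow the terminology used above, but the values and the helper
function are invented for illustration and are not an actual BACnet
implementation.

# Sketch of a BACnet-style Analog Input object holding a room temperature.
analog_input_1 = {
    "object-identifier": ("analog-input", 1),
    "object-name": "Room 101 temperature",
    "object-type": "analog-input",
    "present-value": 22.4,          # value read from the physical input
    "units": "degrees-Celsius",
    "min-pres-value": -10.0,
    "max-pres-value": 50.0,
    "reliability": "no-fault-detected",
}

def read_property(obj, prop):
    # Greatly simplified stand-in for the ReadProperty service
    return obj[prop]

print(read_property(analog_input_1, "present-value"))   # 22.4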
BACnet presently defines 25 different object types. They include simple
object types such as binary input and output, analog input and output,
multistate inputs and outputs, as well as a number of more complex (yet still
generic) types related to scheduling, trending, alarming, and life safety
capabilities.
Any given building automation device may have zero, one, or many objects
of each object type with the exception of the “Device” object, which must be
present in every device. This object is used to present or control various
characteristics of the device and, among other things, contains an enumeration
of all other objects existing in the device. The properties “object-identifier”
(unique to each object in a given BACnet device), “object-name” and “object-
type” have to be present in every object. Nearly two hundred standard
properties, and their use in each of the standard object types, are currently
defined.
The BACnet object model can be easily extended to include new objects or
properties as needed. This can be done by any implementer without obtaining
anyone’s approval and such new capabilities will not interfere with similar
extensions made by others provided the implementer makes use of its “vendor
ID,” freely available from ASHRAE.
While objects provide an abstract representation of the “network-visible”
portion of a building automation device, BACnet services provide messages
for accessing and manipulating this information as well as providing
additional functionality. Communication follows a client/server model.
BACnet currently defines 40 application services which are grouped into five
categories: Alarm and Event, File Access, Object Access, Remote Device
Management, and Virtual Terminal, although these latter services have
largely been supplanted by web-based tools.
Among the Object Access services are ReadProperty (the only service
mandatory for all devices), WriteProperty, ReadPropertyMultiple and
WritePropertyMultiple, which collectively can read or manipulate any
individual or group of property values.
BACnet provides three distinct, but complementary, methods for handling
“events,” including those considered important enough to be designated as
“alarms.” The first is called “Intrinsic Reporting” and makes use of
parameters embedded in individual objects. Intrinsic reporting makes use of
standardized event type algorithms (nine are currently defined, such as
“OUT_OF_RANGE,” “CHANGE_OF_STATE,” etc.) but applies them
rigidly to specified properties of the standard objects.
“Algorithmic Change Reporting” makes use of the same algorithms but
allows them to be more broadly applied to any property of any object. The
parameters associated with the selected algorithm (e.g., high limit, low limit,
dead band, time delay, etc.) are contained in an Event Enrollment object,
rather than “intrinsically” in the referenced object, thus allowing different
algorithms to be applied, if needed, to the same property. Both intrinsic and
algorithmic change reporting can make use of a Notification Class object
which contains information on how event notifications, either confirmed
(acknowledged) or unconfirmed (unacknowledged), are to be distributed. This
combination of capabilities allows for extremely powerful alarm and event
recognition and distribution: notifications can be tailored to different
recipients at different times of the day or week, assigned varying priorities,
and so on. A life safety alarm, for example, could be directed to specific
workstations during the workday but cause a dial-out procedure to be invoked
after working hours or on the weekend.
The third type of reporting is called “Change of Value” (COV). It causes a
COV notification to be sent when a particular property changes by a
predefined amount or, at the discretion of the COV server, at predefined time
intervals. Clients may use the SubscribeCOV service to register for
notifications of the default properties of standard objects, or the
SubscribeCOVProperty service to request COV notification for any property
of any object with any desired COV increment. “Unsubscribed” COV
notifications, usually broadcast, provide a mechanism to distribute the current
value of globally significant information, such as a site’s outside air
temperature or occupancy status, at a repetition rate determined by the server.
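The essence of COV reporting is that subscribers are notified only when the
monitored value moves by more than the agreed increment. The sketch below
mimics that behaviour in plain Python; it is not the actual BACnet service
encoding, and the class and callback names are invented.

# Minimal sketch of the Change-of-Value idea.
class CovServer:
    def __init__(self, increment):
        self.increment = increment
        self.last_reported = None
        self.subscribers = []

    def subscribe(self, callback):           # cf. SubscribeCOV
        self.subscribers.append(callback)

    def update(self, value):
        if (self.last_reported is None
                or abs(value - self.last_reported) >= self.increment):
            self.last_reported = value
            for notify in self.subscribers:  # cf. COV notification
                notify(value)

server = CovServer(increment=0.5)
server.subscribe(lambda v: print("COV notification:", v))
for reading in (21.0, 21.2, 21.6, 21.7, 22.3):
    server.update(reading)    # notifies at 21.0, 21.6 and 22.3 only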
Of the eleven BACnet services dedicated to alarm and event handling, only
the AcknowledgeAlarm service is aimed exclusively at the human operator:
it provides a means to convey to an alarm-originating device that a human has
actually seen, and is responding to, an alarm event. The originating device
may use the receipt of such an acknowledgment, or the lack thereof within a
specified time interval, to invoke other application-specific logic to deal with
the alarm condition, such as initiating a precautionary system shutdown,
performing a dial-out notification to some additional recipients, and so forth.
Remote device management services include Who-Is/I-Am and Who-Has/I-
Have for dynamically discovering the network addressing information of peer
devices and particular objects by way of their object names and/or object
identifiers. Other services in this category allow for time synchronization,
reinitialization of devices, and the suppression of spurious communications
due to hardware or software malfunctions.


The BACnet application layer also directly supports other services relevant
to BA tasks such as the prioritized writing of start/stop commands, setpoint
changes, time of day scheduling, and trend log processing.
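The prioritized writing mentioned above relies on BACnet's command
prioritization scheme, in which a commandable property holds an array of 16
priority slots and the lowest-numbered occupied slot determines the effective
value. The sketch below illustrates only that arbitration rule; the priority
assignments shown are examples, not prescribed values.

# Sketch of command prioritization for a commandable property.
priority_array = [None] * 16        # index 0 = priority 1 (highest)
relinquish_default = 0.0            # used when all slots are empty

def write(priority, value):
    priority_array[priority - 1] = value

def relinquish(priority):
    priority_array[priority - 1] = None

def effective_value():
    for slot in priority_array:
        if slot is not None:
            return slot
    return relinquish_default

write(8, 55.0)              # e.g. a normal operator setpoint
write(5, 100.0)             # e.g. a higher-priority override
print(effective_value())    # 100.0 - priority 5 wins over priority 8
relinquish(5)
print(effective_value())    # 55.0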
While the BACnet standard defines a sizable set of services, only a subset is
necessary for most devices. Requiring them all to be implemented would
unnecessarily increase complexity and cost without providing any particular
benefit. In order to be able to concisely describe the capabilities offered by,
or required in, a particular device, the concept of BIBBs (BACnet
Interoperability Building Blocks) was introduced to the standard in 2000.
A BIBB describes a particular functional capability in one of five
interoperability areas: data sharing, alarm and event management, scheduling,
trending and device and network management.
BIBBs come in client/server pairs (designated A and B), allowing the precise
specification of whether a given device functions as the initiator of a service
request, the responder to a service request, or both. A BIBB may also require
the presence of one or more objects, or that specific properties be supported.
For example, the BIBB “Trending-Viewing and Modifying Trends Internal-
B” requires that the server side of the ReadRange-Service be implemented
and that a Trend Log object be provided.
To ease the work of specifiers, several standard BACnet device profiles have
been defined. Each profile is a collection of BIBBs that is intended to map to
commonly available BA equipment: operator workstations; building
controllers; advanced application controllers; application specific controllers;
smart actuators; and smart sensors. The BIBBs were selected to serve as a
baseline for the given type of device. In order to claim conformance to a given
profile, a manufacturer must offer at least the capabilities contained in the
profile—but is free to add any additional functionality that is appropriate to
the intended application of the device. Details about the portions of BACnet
that are implemented in a device are documented in its protocol
implementation conformance statement (PICS). This includes the precise set
of services implemented in client or server role, proprietary and optional
objects and properties, supported network media, and support for the dynamic
creation and deletion of objects, among other things.
Interoperability testing and certification programs have been pursued by both
the BMA and BIG-EU. The BMA’s testing program began in 2002. BIG-EU
followed two years later with its own test lab. Both testing programs have
been harmonized and test results are expected to be mutually recognized.
The main focus of both groups has been to develop suitable software tools to
test BACnet products and the procedures that will be used for specific kinds
of devices. The procedures are mostly based on the companion testing
standard to BACnet. In an effort to go beyond simply verifying that a device
has implemented its BACnet capabilities correctly and to actually improve
“interoperability,” a BTL working group was established that has developed
a set of guidelines for implementers to help them avoid problems discovered
in the course of early testing or the “interoperability workshops” that the
BMA has sponsored since 2000.
The most recent additions to BACnet are proposed annexes that describe the
use of XML and Web services for the integration of BACS with other
management-level enterprise systems (BACnet/WS). BACnet/WS will be
protocol neutral, and thus equally applicable to non-BACnet systems
(although a comprehensive mapping between BACnet and BACnet/WS
services is included in the draft standard).
Fig. 17.7 shows an example of a possible BACnet/IP configuration that
illustrates the use of a Web server for both a graphical user interface and Web
services along with a workstation that contains a traditional BACnet client
application.
17.5.2 LonWorks

The LonWorks system was originally designed by Echelon Corp. as an
event-triggered control network system. The system consists of the
LonTalk communication protocol, a dedicated controller (Neuron Chip) and
a network management tool. In 1999 the LonTalk protocol was published as
a formal standard, ANSI/EIA-709. While it has already been included in a
European prestandard, it is planned to be published as a separate European
standard in 2005. Below, the term EIA-709 is used to refer to the standardized
communication protocol.


Figure 17.7 BACnet/IP configuration example.

EIA-709 supports a variety of different communication media and different
wiring topologies. Since it was designed as a generic control network, many
protocol parameters are free to choose for the designer. To achieve
interoperability (see below), a number of communication channel profiles
were defined. These still include a variety of twisted-pair (TP), powerline
(PL), and fiber optic (FO) channels. RF (radio frequency) solutions are
available as well, albeit no standard interoperability profile exists for these.
The most popular channel for building automation purposes is the 78.1 kb/s
free topology TP profile (FT-10), which allows physical segments of up to
500 m using low-cost TP cable. A variant providing link power (LP-10) is
also available. Often the 1.25 Mb/s bus topology TP (TP-1250) is used as a
backbone to connect the lower speed FT-10 buses. FO is sometimes used for
backbones as well.
For the TP medium, a unique medium access mechanism labeled predictive
p-persistent CSMA is used. Its key mechanism is that when confirmed
multicast services are used, a certain prediction on the future network load
(i.e., the confirmations to be expected) can be made. The length of the
arbitration phase is modified accordingly. Thus, the rise of the collision ratio
with increasing load is mitigated. This helps to ensure an acceptable minimum
packet rate even under heavy load, unlike in Ethernet-style networks using
CSMA/CD, where the network load has to be kept well below 50%. At the
start of the arbitration phase, priority time slots are available for urgent
messages. The mechanism, its properties and effectiveness are further
discussed in [44]– [46].


More recently, building backbones have been turning from TP-1250 to IP
tunneling mechanisms. Standardized in ANSI/EIA-852 (also known as LonWorks/IP),
IP tunneling is readily supported as a standard channel for EIA-709. Both
tunneling routers and fully IP-based LonWorks/IP nodes are used. Channel
configuration data including channel membership are managed by a central
configuration server on the IP channel.
The entire routable address space of an EIA-709 network is referred to as the
domain. Domains are identified by an ID whose length can be chosen up to
48 bit corresponding to requirements (as short as possible, since it is included
in every frame; as long as necessary to avoid logical interference, especially
on open media). A domain can hold up to 255 sub-nets with a maximum of
127 nodes each. Hence, up to 32 385 nodes can be addressed within a single
domain. A subnet will usually correspond to a physical channel, although it
is both possible for multiple physical channels to be linked into a subnet by
bridges or repeaters as well as for multiple subnets to coexist on the same
physical segment. Routing is performed between different subnets only. In
particular, domain boundaries can be crossed by proxy nodes only (which
transfer the information on the application layer). Subnets are usually
arranged in a tree hierarchy as shown in Fig. 17.8.
Every domain can host up to 256 multicast groups. Groups can include nodes
from any subnet. Broadcasts can be directed to a single subnet or the entire
domain. Each node carries a world-wide unique 48-bit identification, the
Node ID. It can be used for addressing individual nodes for management and
configuration purposes, while regular unicast communication is handled
through logical subnet and node addresses.
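A small sketch, based only on the figures quoted above (255 subnets of up to
127 nodes per domain), shows the resulting address capacity and a simple
validity check for a logical subnet/node pair; the helper function is purely
illustrative.

# Sketch of EIA-709 logical addressing capacity within one domain.
MAX_SUBNETS = 255
MAX_NODES_PER_SUBNET = 127
print(MAX_SUBNETS * MAX_NODES_PER_SUBNET)    # 32385 addressable nodes

def valid_logical_address(subnet, node):
    # Subnet and node numbers are assumed to start at 1 in this sketch.
    return 1 <= subnet <= MAX_SUBNETS and 1 <= node <= MAX_NODES_PER_SUBNET

print(valid_logical_address(12, 99))   # True
print(valid_logical_address(0, 5))     # False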

Figure 17.8 Logical segmentation in EIA-709.


For both unicast and multicast, a reliable transmission mode (acknowledged)
with end-to-end acknowledgments can be selected. In addition to the “one-
shot” unacknowledged mode, an unacknowledged-repeated mode is
provided, where every transmission is automatically repeated a fixed number
of times. When acknowledged multicast is used, groups are limited to a
maximum of 64 members each. Otherwise, they can contain an arbitrary
number of nodes from the domain.
For acknowledged transmissions, a challenge-response authentication
mechanism is provided. The challenge consists of enciphering a 64-bit
random number using a 48-bit shared secret. The usefulness of this
mechanism is limited since the algorithm is not published, 48-bit keys are not
considered to be strong enough for attacks on high-bandwidth channels and
the integrity of the message is not protected. LonWorks/IP allows
calculating a secure message authentication code for each encapsulated
message (MD5 digest with a 128-bit key). Although this mechanism also does
not encrypt the transmitted data, it protects the system from the injection of
tampered messages.
The EIA-709 application layer allows generic application-specific
messaging, but offers particular support for the propagation of network
variables. Network variables are bound via 14-bit unique identifiers
(selectors). The management and diagnostic services include querying the
content type of the network variables (self-identification), the node status,
querying and updating addressing information and network variable bindings,
reading and writing memory, device identification and configuring routers.
Network nodes can be based on a chip from the Neuron series by Echelon or
other embedded controllers like the LC3020 controller by Loytec. A typical
network node architecture is shown in Fig. 17.9. The controller executes the
seven OSI protocol layers and the application program, which interfaces with
sensors and actuators connected through the I/O interface. A derivative of
ANSI C called Neuron C is used to program the Neuron chips, whereas
standard ANSI C can be used to program controllers like the LC3020.
Both provide implicit language support for network variables. Network
variables are represented as standard C variables with the unique property that
a data packet is automatically created and transmitted whenever the value of
the C variable changes. Likewise, the value of the C variable will
automatically be updated whenever a data packet has been received from the
network.
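The implicit network-variable support described above can be mimicked in any
language by publishing a value whenever it is assigned. The following plain
Python sketch (not Neuron C, and with invented names) illustrates the
propagate-on-write behaviour.

# Sketch of the network-variable idea: assigning a new value automatically
# "transmits" it, here by invoking a publish callback.
class NetworkVariable:
    def __init__(self, publish, initial=None):
        self._publish = publish       # called whenever the value changes
        self._value = initial

    @property
    def value(self):
        return self._value

    @value.setter
    def value(self, new_value):
        if new_value != self._value:
            self._value = new_value
            self._publish(new_value)  # a data packet would be sent here

nv_temp = NetworkVariable(publish=lambda v: print("propagate update:", v))
nv_temp.value = 21.5    # triggers propagation
nv_temp.value = 21.5    # unchanged, nothing sent
nv_temp.value = 22.0    # triggers propagation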

A variety of installation and management tools are available for EIA-709
networks. The wide majority, however, is based on the LonWorks Network
Operating System (LNS) management middleware by Echelon. Besides APIs for
commissioning, testing and maintenance, LNS provides a common project
database, avoiding vendor lock-in of these valuable data. For configuration
of vendor-specific parameters, LNS provides a plug-in interface.

Figure 17.9 Typical EIA-709 node architecture.

For performance analysis and troubleshooting, various protocol analyzers are


available, including remote logging over IP networks. Modern network
infrastructure components also have built-in statistics and diagnostics
capabilities to allow remote monitoring and maintenance.
Some approaches exist for the automatic configuration of messaging
relationships (“self-binding” or “auto-binding”). They are, however, confined
to applications of limited complexity (single-vendor systems or very basic
functionality only).

It should be noted that entirely non-open systems can be (and are being) built
using LonWorks technology. The LonMark Interoperability Association (now
LonMark International), founded in 1994, defines guidelines for manufacturing
and integrating interoperable devices. These guidelines shall guarantee a
smooth integration and operation of devices designed, produced, and installed
by different manufacturers. They include LonTalk channel profiles, standard
network variable types (SNVT) and functional profiles. A
SNVT comprises syntactic as well as semantic information, like the
associated engineering unit. Over 60 functional profiles have already been
published. They relate to a number of application domains, most of them with
a strong relation to building automation. Examples include “VAV
Controller,” “Constant Light Controller,” “Scheduler,” “Variable Speed
Motor Drive” and “Occupancy Sensor.” Although freely available, the
LonMark guidelines and profiles are not part of any formal standard.
Interoperability certification is provided on the basis of inspection of
resource description files only; no laboratory tests are performed. More
recently, LonMark has provided a self-certification tool, which LonMark
members can use over the Web to certify their products.
17.5.3 EIB/KNX

The European Installation Bus (EIB) is a fieldbus designed to enhance
electrical installations in homes and buildings of all sizes by separating the
transmission of control information from the traditional mains wiring. EIB is
based on an open specification maintained until recently by EIB Association
(EIBA). Key parts of it were included in [18] and [53]. In 2002, EIB was
merged with Batibus and EHS (European Home System). The new KNX
standard seeks to combine their best aspects. The target of this merger was to
create a single European home and building electronic system standard.
Likewise, EIBA joined forces with the European Home Systems Association
and Batibus Club International to form Konnex Association. Still, the EIB
system technology continues to exist unchanged as a set of profiles within
KNX, frequently referred to as EIB/KNX.

Regarding physical media, EIB already provided the choice of dedicated
twisted-pair cabling and powerline transmission as well as a simple form of
IP tunneling. RF communication and advanced IP tunneling were added under the
KNX umbrella (albeit not yet published in this context). The KNX specification
also includes additional TP and PL variants which could be used for future
devices.
The main EIB/KNX medium is the twisted-pair cabling variant now known
as KNX TP1. The single twisted pair carries the signal as well as 29 V DC
link power. Data is transmitted using a balanced baseband signal at 9600
b/s. TP1 allows free topology wiring with up to 1000 m cable length per
physical segment. Up to four segments can be concatenated using bridges
(called line repeaters), forming a line. As in CAN, medium access on TP1 is
controlled using CSMA with bit-wise arbitration on message priority and
station address. Four priority levels are provided.
KNX RF uses a subband in the 868 MHz frequency band reserved for short-
range devices (telecommand, -control, telemetry and alarms) by European
regulatory bodies, which is limited by a duty cycle requirement of less than
one percent. Particular attention was given to minimizing hardware
requirements. To this end, KNX RF supports not only bidirectional
communication but also transmit-only devices. This reduces cost for
simple sensors and switches without status indicators. KNX RF devices
communicate peer-to-peer.
EIBnet/IP addresses tunneling over IP networks. Its core framework supports
discovery and self-description of EIBnet/IP devices. It currently
accommodates the specialized “Service Protocols” Tunneling and Routing.
Actually, both of them follow the tunneling principle as presented earlier but
differ in their primary application focus. EIBnet/IP Tunneling is to provide
remote maintenance access to EIB/KNX installations in an easy-to-use
manner and therefore restricted to point-to-point communication. EIBnet/IP
Routing allows the use of an IP backbone to connect multiple EIB/KNX
subinstallations. Routers using this protocol are designed to work “out-of-
the-box” as far as possible. They communicate using UDP (User Datagram
Protocol) multicast. Group management relies on IGMP (Internet Group
Management Protocol). No central configuration server is necessary.


Figure 17.10 EIB/KNX network topology.

As outlined above, the basic building block of an EIB network is the line,
which holds up to 254 devices in free topology. Following a three-level tree
structure, sublines are connected by main lines via routers (termed line
couplers) to form a zone. Zones can in turn be coupled by a backbone line, as
illustrated in Fig. 17.10. Network partitions on open media are typically
linked into the topology as a separate line or zone. IP tunneling is typically
used for main lines and the backbone, with EIBnet/IP routers acting as
couplers. Overall, the network can contain roughly 60 000 devices at
maximum.


Every node in an EIB/KNX network is assigned an individual address which
corresponds to its position within the topological structure of the network
(zone/line/device). This address is exclusively used for unicast
communication. Reliable connections are possible. Multicast addressing is
implemented in the data link layer. For this purpose, nodes are assigned
additional nonunique MAC addresses (group addresses). The group addressing
and propagation mechanism is thus extremely efficient. Yet acknowledgment is
provided on layer 2 (i.e., within an electrical segment) only. The entire
group answers at once, with negative acknowledgments overriding positive
ones. Group addresses are routed through the whole network.
Routers are preprogrammed with the necessary tables. Broadcasts always
span the entire network.
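Individual addresses pack the zone/line/device position into 16 bits, and
three-level group addresses are commonly written in a similar dotted or
slash-separated style. The sketch below assumes the usual 4/4/8-bit split for
individual addresses and the 5/3/8-bit split used for three-level group
addresses; it is meant only to illustrate the encoding, not any particular
tool's behaviour.

# Sketch of EIB/KNX address encoding (assumed bit splits: 4/4/8 and 5/3/8).
def individual_address(area, line, device):
    # e.g. zone 1, line 3, device 20  ->  written 1.3.20
    return (area << 12) | (line << 8) | device

def group_address(main, middle, sub):
    # e.g. 2/1/14 in three-level notation
    return (main << 11) | (middle << 8) | sub

print(hex(individual_address(1, 3, 20)))   # 0x1314
print(hex(group_address(2, 1, 14)))        # 0x110e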
EIB/KNX uses a shared variable model to express the functionality of
individual nodes and combine them into a working system. Although this
model uses state-based semantics, communication remains event-driven. Net-
work-visible variables of a node application are referred to as group objects.
They can be readable, writable or both (although the latter is discouraged to
better keep track of communication dependencies). Each group of
communication objects is assigned a unique group address. This address is
used to handle all network traffic pertaining to the shared value in a peer-to-
peer manner. Group membership is defined individually for each group object
of a node, which can belong to multiple groups.
Usually, data sources will actively publish new values, although a query
mechanism is provided as well. Since group addressing is used for these
notifications, the publisher/subscriber model applies: the group address is all
a node needs to know about its communication partners. Its multicast nature
also means, however, that no authentication or authorization can take place
this way.
Horizontal communication using shared variables between EIB/KNX nodes
exclusively uses group addressing. Individual addressing is reserved for
client-server style communication supporting vertical access. System
management data like network binding information or the loaded application
program are accessible through the properties of system interface objects. In
addition, every device can provide any number of application interface
objects related to the behavior of the user application. On the one hand, their
properties can hold application parameters that are normally modified during
setup time only. On the other hand, they can contain run-time values normally
accessed through group objects. Basic engineering functions like the
assignment of individual addresses are handled by dedicated services.

The specification also encompasses standard system components, the most
important being the bus coupling units (BCUs). BCUs provide an implementation
of the complete network stack and application environment.


They can host simple user applications, supporting the use of group objects
in a way similar to local variables. Application modules can connect via a
standardized 10-pin external interface (PEI), which can be configured in a
number of ways. Simple application modules such as wall switches may use
it for parallel digital I/O or ADC input. More complex user applications will
have to use a separate microprocessor since the processing power of the
MC68HC05 family microcontroller employed in BCUs is limited. In this
case, the application processor can use the PEI for high-level access to the
network stack via a serial protocol. As an alternative, TP-based device
designs can opt for the so-called TP-UART IC. This IC handles most of the
EIB/KNX data link layer. Unlike the transceiver ICs used in BCUs, it relieves
the attached host controller from having to deal with network bit timings.
These design options are illustrated in Fig. 17.11.


Figure 17.11 Typical EIB/KNX node architectures.

For commissioning, diagnosis and maintenance of EIB/KNX installations, a
single PC-based software tool called ETS (Engineering Tool Software), which
can handle every certified EIB/KNX product, is maintained by EIBA. KNX
devices may support additional setup modes defined by the standard which
do not require the use of ETS. A-Mode devices are preconfigured to
automatically connect to each other (“plug and play”). In E-Mode, devices
whose group objects are to be bound together are designated by either pushing
special buttons, assigning identical code numbers via DIP switches or code
wheels, or via a handheld configuration device.
When using ETS, group objects are bound individually. The EIB
Interworking Standard (EIS) merely defined a standardized bit-level
representation for various types of shared variables. Functional blocks were

95
Session 17: Communication protocols for building automation

provided for dimming and control of motorized blinds to ensure a base level
of inter-changeability. Such an approach is no longer viable with the E- and
A-Modes. Therefore, numerous semantic data type and functional block
definitions for various application domains are being added to the KNX
specification. E- and A-Modes always link group objects and interface object
properties at the granularity of these functional blocks (or channels of
multiple blocks).


Session 18
Overview of Programmable controllers and
PLCs
Content
18.1 What is Sequence and Logic control? ............................................ 98
18.2 Industrial Example of Discrete Sensors and Actuators ................. 99
18.3 Comparing Logic and Sequence Control with Analog Control ... 102
18.4 Programmable Logic Controllers (PLC)...................................... 102
18.5 Evolution of the PLC ................................................................... 103
18.6 Application Areas ........................................................................ 104
18.7 Architecture of PLCs ................................................................... 105
18.8 Central controller ......................................................................... 106
18.9 Central Processing units............................................................... 106
18.10 Communications processors..................................................... 107
18.11 Program and Data memory ...................................................... 107
18.12 Expansion units ........................................................................ 107
18.13 Input/Output Units.................................................................... 108
18.14 Programmers ............................................................................ 109
18.15 Components of PLC ................................................................. 109
18.16 Processor Module ..................................................................... 110
18.17 Input Module ............................................................................ 111
18.18 Analog input modules .............................................................. 112
18.19 Digital Input Modules .............................................................. 113
18.20 Output Modules ........................................................................ 113
18.21 Analog Output Module............................................................. 114
18.22 Digital Output Module ............................................................. 115
18.23 Function Modules..................................................................... 116
18.24 Count Module........................................................................... 116


18.1 What is Sequence and Logic control?

Many control applications do not involve analog process variables, that is, the
ones which can assume a continuous range of values, but instead variables
that are set valued, that is they only assume values belonging to a finite set.
The simplest examples of such variables are binary variables, that can have
either of two possible values, (such as 1 or 0, on or off, open or closed etc.).
These control systems operate by turning on and off switches, motors, valves,
and other devices in response to operating conditions and as a function of
time. Such systems are referred to as sequence/logic control systems. For
example, in the operation of transfer lines and automated assembly machines,
sequence control is used to coordinate the various actions of the production
system (e.g., transfer of parts, changing of the tool, feeding of the metal
cutting tool, etc.).
Typically, the control problem is to cause/prevent the occurrence of:
 particular values of output process variables
 particular values of outputs obeying timing restrictions
 given sequences of discrete outputs
 given orders between various discrete outputs
Note that some of these can also be operated using analog control methods.
However, in specific applications they may be viewed as discrete control or
sensing devices for two reasons, namely,
A. The inputs to these devices only belong to two specific sets. For
example, in the control of a reciprocating conveyor system, analog
motor control is not applied. Simple on-off control is adequate.
Therefore, for this application, the motor-starter actuation system may
be considered as discrete.
B. Often the control problem considered is supervisory in nature, where
the problem is to provide different types of supervisory commands to
automatic control systems, which in turn carry out analog control
tasks, such that overall system operating modes can be maintained and
coordinated to achieve system objectives.
Examples of some such devices are given below.


18.2 Industrial Example of Discrete Sensors and Actuators

There are many industrial sensors which provide discrete outputs that may be
interpreted as the presence/absence of an object in close proximity, the
passing of parts on a conveyor, and so on. For example, Tables 18.1 and 18.2
below show typical devices which provide or accept a discrete set of values
corresponding to process variables.
Table 18.1 Discrete Sensors


Figure 18.1 Example Industrial Discrete Input and Sensing Devices

Table 18.2 Example Industrial Discrete Output and Actuation Devices

Below we provide an industrial example of Industrial Sequence Control.


Figure 18.2 An Industrial Logic Control Example

The die stamping process is shown in Fig. 18.2. This process consists of
a metal stamping die fixed to the end of a piston. The piston is extended to
stamp a work piece and retracted to allow the work piece to be removed. The
process has 2 actuators: an up solenoid and a down solenoid, which
respectively control the hydraulics for the extension and retraction of the
stamping piston and die. The process also has 2 sensors: an upper limit switch
that indicates when the piston is fully retracted and a lower limit switch that
indicates when the piston is fully extended. Lastly, the process has a master
switch which is used to start the process and to shut it down.
The control computer for the process has 3 inputs (2 from the limit sensors
and 1 from the master switch) and controls 2 outputs (1 to each actuator
solenoid).
The desired control algorithm for the process is simply as follows. When the
master switch is turned on the die-stamping piston is to reciprocate between
the extended and retracted positions, stamping parts that have been placed in
the machine. When the master switch is switched off, the piston is to return
to a shutdown configuration with the actuators off and the piston fully
retracted.
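This desired behaviour can be expressed as a small state machine that reads
the three inputs and drives the two solenoid outputs. The sketch below is one
possible rendering of that logic in Python (the signal names and the scan
function are invented for illustration); an actual implementation would
normally be written in a PLC language such as ladder logic.

# One possible rendering of the die-stamping logic as a per-scan function.
# Inputs: master switch, upper limit switch, lower limit switch.
# Outputs: down solenoid (extend), up solenoid (retract).
def stamping_logic(master_on, at_top, at_bottom, state):
    if not master_on:
        # Shutdown: retract until the upper limit switch is reached
        return ("RETRACT" if not at_top else "IDLE"), (False, not at_top)
    if state in ("IDLE", "RETRACT") and at_top:
        state = "EXTEND"
    elif state == "EXTEND" and at_bottom:
        state = "RETRACT"
    down_solenoid = state == "EXTEND"
    up_solenoid = state == "RETRACT"
    return state, (down_solenoid, up_solenoid)

# Example scan sequence: switch on at the top; piston travels down, then up.
state = "IDLE"
for inputs in [(True, True, False), (True, False, False),
               (True, False, True), (True, False, False)]:
    state, outputs = stamping_logic(*inputs, state)
    print(inputs, "->", state, outputs)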


18.3 Comparing Logic and Sequence Control with Analog Control

The salient points of difference between Analog Control and Logic/Sequence
control are presented in the table below.
Table 18.3 A Comparison of Continuous Variable (Analog) and Discrete Event (Logic/Sequence) Control

18.4 Programmable Logic Controllers (PLC)

A modern controller device used extensively for sequence control today in
transfer lines, robotics, process control, and many other automated systems is
the Programmable Logic Controller (PLC). In essence, a PLC is a
special-purpose, industrial, microprocessor-based real-time computing system
which performs the following functions in the context of industrial
operations:
 Monitor Input/Sensors
 Execute logic, sequencing, timing, counting functions for
Control/Diagnostics
 Drives Actuators/Indicators
 Communicates with other computers
PLCs offer the following advantages, owing to standardized hardware
technology, modular design, communication capabilities and improved program
development environments:
 Easy to use due to simple modular assembly and connection.
 Modular expansion capacity of the inputs, outputs and memory.
 Simple programming environments and the use of standardized task
libraries and debugging aids.


 Communication capability with other programmable controllers and computers.

18.5 Evolution of the PLC

Before the advent of microprocessors, industrial logic and sequence control
used to be performed using elaborate control panels containing
electromechanical or solid-state relays, contactors and switches, indicator
lamps, mechanical or electronic timers and counters etc., all hardwired by
complex and elaborate wiring. In fact, for many applications such control
panels are used even today. However, the development of microprocessors in
the early 1970s quickly led to the development of the PLC, which has
significant advantages over conventional control panels. Some of these are:
 Programming the PLC is easier than wiring physical components; the
only wiring required is that of connecting the I/O terminals.
 The PLC can be reprogrammed using user-friendly programming devices;
hard-wired controls, in contrast, must be physically rewired.
 PLCs take up much less space.
 Installation and maintenance of PLCs is easier, and with present day
solid-state technology, reliability is greater.
 The PLC can be connected to a distributed plant automation system,
supervised and monitored.
 Beyond a certain size and complexity of the process, a PLC-based
system compares favorably with control panels.
 The ability of PLCs to accept digital data in serial, parallel and network
modes implies a drastic reduction in plant sensor and actuator wiring,
since single cable runs to remote terminal I/O units can be made.
Wiring only needs to be made locally from that point.
 Special diagnostic and maintenance modes for quick troubleshooting
and servicing, without disrupting plant operations.
However, since they evolved out of relay control panels, PLCs adopted legacy
concepts which were applicable to such panels. To facilitate maintenance and
modification of the physically wired control logic, the control panel was
systematically organized so that each control circuit formed a rung, much like
a rung on a ladder. The development of PLCs retained this ladder logic
concept: control circuits are defined as rungs on a ladder, where each rung
begins with one or more inputs and usually ends with only one output. A
typical PLC ladder structure is shown in Fig. 18.4 below.
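To connect the rung idea to conventional code: each rung is essentially a
Boolean expression over input and internal bits that drives one output bit.
A minimal Python sketch evaluating two such rungs is shown below; the contact
and coil names (start, stop, motor, run_lamp) are invented, and a real PLC
would express the same logic graphically in ladder form.

# Sketch of how a PLC evaluates ladder rungs: each rung is a Boolean
# combination of contacts that energizes one coil.
def evaluate_rungs(inputs, outputs):
    # Rung 1: motor runs if start is pressed (or motor is already running)
    #         and stop is not pressed (classic seal-in / latch rung)
    outputs["motor"] = (inputs["start"] or outputs["motor"]) and not inputs["stop"]
    # Rung 2: indicator lamp simply follows the motor output
    outputs["run_lamp"] = outputs["motor"]
    return outputs

io = {"motor": False, "run_lamp": False}
print(evaluate_rungs({"start": True, "stop": False}, io))    # motor latches on
print(evaluate_rungs({"start": False, "stop": False}, io))   # stays on (seal-in)
print(evaluate_rungs({"start": False, "stop": True}, io))    # stop drops it out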


Figure 18.3 The structure of Relay Logic Circuits

Figure 18.4 The structure of Relay Ladder Logic Programs for PLCs

18.6 Application Areas

Programmable Logic Controllers are suitable for a variety of automation
tasks. They provide a simple and economic solution to many automation tasks
such as
such as
 Logic/Sequence control
 PID control and computing
 Coordination and communication
 Operator control and monitoring
 Plant start-up, shutdown


Any manufacturing application that involves controlling repetitive, discrete
operations is a potential candidate for PLC usage, e.g. machine tools,
automatic assembly equipment, molding and extrusion machinery, textile
machinery and automatic test equipment. Some typical industrial areas that
widely deploy PLC controls are named in Table 18.4. The list is only
illustrative and by no means exhaustive.
Table 18.4 Some Industrial Areas for Programmable Controller Applications

18.7 Architecture of PLCs

The PLC is essentially a microprocessor-based real-time computing system
that often has to handle significant I/O and communication activities, bit-
oriented computing, as well as normal floating-point arithmetic. A typical set
of components that make a PLC System is shown in Fig. 18.5 below.

Figure 18.5 Conventional PLC Architecture

The components of the PLC subsystem shown in Fig. 18.5 are described
below.


18.8 Central controller

The central controller (CC) contains the modules necessary for the main
computing operation of the Programmable controller (PC). The central
controller can be equipped with the following:
 Memory modules with RAM or EPROM (in the memory sub
modules) for the program (main memory).
 Interface modules for programmers, expansion units, standard
peripherals etc.
 Communications processors for operator communication and
visualization, communication with other systems and configuring of
local area networks.
A bus connects the CPUs with the other modules.

18.9 Central Processing units

The CPUs are generally microprogrammed processors, sometimes capable of
handling multiple data widths of 8, 16 or 24 bits. In addition, sometimes
additional circuitry, such as for bit processing is provided, since much of the
computing involves logical operations involving digital inputs and auxiliary
quantities. Memory with battery backup is also provided for the following:
 Flags (internal relays), timers and counters.
 Operating system data
 Process image for the signal states of binary inputs and outputs.
The user program is stored in memory modules. During each program scan,
the processor reads the statements in the program memory and executes the
corresponding operations. The bit processor, if it exists, executes binary
operations. Often multiple central controllers can be configured in hot
standby mode, such that if one processor fails the other can immediately pick
up the computing tasks without any failure in plant operations.
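The cyclic behaviour described above (read the inputs into a process image,
execute the user program against that frozen image, then write the outputs)
can be sketched as a simple loop. The I/O access functions, signal names and
the 10 ms cycle time below are illustrative assumptions, not part of any
particular PLC's firmware.

# Sketch of the PLC scan cycle: input image -> user program -> output image.
import time

def read_physical_inputs():
    return {"start_pb": True, "level_switch": False}   # placeholder values

def write_physical_outputs(outputs):
    pass   # the output modules would be driven here

def user_program(inputs, flags):
    # The logic solved once per scan against the frozen input image
    return {"pump": inputs["start_pb"] and not inputs["level_switch"]}

flags = {}
for _ in range(3):                                   # three example scans
    input_image = read_physical_inputs()             # 1. read inputs
    output_image = user_program(input_image, flags)  # 2. execute program
    write_physical_outputs(output_image)             # 3. write outputs
    time.sleep(0.01)                                 # 4. next cycle (~10 ms)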


18.10 Communications processors

Communications processors autonomously handle data communication with the
following:
 Standard peripherals such as printers, keyboards and CRTs,
 Supervisory Computer Systems,
 Other Programmable controllers,
The data required for each communications processor is stored in a RAM or
EPROM sub module so that they do not load the processor memories. A local
area network can also be configured using communications processors. This
enables the connection of various PLCs over a wide distance in various
configurations. The network protocols are often proprietary. However, over
the last decade, interoperable network protocol standards are also supported
in modern PLCs.

18.11 Program and Data memory

The program and data needed for execution are stored in RAM or EPROM
sub modules. These sub modules are plugged into the processors. Additional
RAM memory modules can also be connected.

18.12 Expansion units

Modules for the input and output of signals are plugged into expansion units.
The latter are connected to the central controller via interface modules.
Expansion units can be connected in two configurations.
A. Centralized configuration
In the centralized configuration, the expansion units (EUs) are located in the
same cabinet as the central controller or in an adjacent cabinet, and several
expansion units can be connected to one central controller. The length of the
cable from the central controller to the most distant expansion unit is often
limited based on data transfer speeds.
B. Distributed configuration
The expansion units can be located at a distance of up to 1000 m from the
central controller. In the distributed configuration, up to 16 expansion units
can be connected to one central controller. Four additional expansion units


can be connected in the centralized configuration to each distributed


expansion unit and to the central controller.

18.13 Input/Output Units

A host of input and output modules are connected to the PLC bus to exchange
data with the processor unit. These can be broadly categorized into Digital
Input Modules, Digital Output Modules, Analog Input Modules, Analog
Output Modules and Special Purpose Modules.
Digital Input Modules
The digital input modules convert the external binary signals from the
process to the internal digital signal level of the programmable controller.
Digital Output Modules
The digital output modules convert the internal signal levels of the
programmable controllers into the binary signal levels required externally by
the process.
Analog Input Modules
The analog input modules convert the analog signals from the process into
digital values which are then processed by the programmable controller.
Analog Output Modules
The analog output modules convert digital values from the programmable
controller into the analog signals required by the process.
Special Purpose Modules
These may include special units for:

 High speed counting


 High accuracy positioning
 On-line self-optimizing control
 Multi axis synchronization, interpolation
These modules contain additional processors, and are used to relieve the main
CPU from the high computational loads involved in the corresponding tasks.
These are discussed in more detail later in this session.


18.14 Programmers

External programming units can be used to download programs into the


program memory of the CPU. The external field programmers provide several
software features that facilitate program entry in graphical form. The
programmers also provide comprehensive aids for debugging and for monitoring
the execution of logic and sequence control systems. A printer can be
connected to the programmer for the purpose of documenting the program.
In some cases, special programming packages that run on Personal
Computers, can also be used as programming units. There are two ways of
entering the program:
A. Direct program entry to the program memory (RAM) plugged into the
central controller. For this purpose, the programmer is connected to
the processor or to the programmer interface modules.
B. Programming the EPROM sub modules in the programmer without
connecting it to the PC (off-line). The memory sub modules are then
plugged into the central controller.

18.15 Components of PLC

In this section the hardware characteristics of the components of a PLC


system and their physical organization are discussed in some detail. PLC
systems are available in many hardware configurations, even from a single
vendor, to cater to a variety of customer requirements and affordability.
However, there are some common components present in each of these. These
components are:
A. Power Supply - This module can be built into the PLC processor
module or be an external unit. Common voltage levels required by the
PLC are 5 V DC, 24 V DC and 220 V AC. The voltage levels are stabilized and
often the power supply monitors its own health.
B. Processor - This is the main computing module where ladder logic and
other application programs are stored and processed.
C. Input/Output - A number of input/output modules must be provided
so that the PLC can monitor the process and initiate control actions as
specified in the application control programs. Depending on the size
of the PLC system, the input/output subsystem can either span
several cards or be integrated on the processor module.
Input/output cards generate/accept clean, TTL-level signals. Output
modules provide the necessary power to drive the signals. Input
modules convert voltage levels, clean up RF noise and isolate the
signal from common mode voltages. I/O modules may also prevent
over-voltages from reaching the CPU or low-level TTL circuitry.
D. Indicator lights - These indicate the status of the PLC including power
on, program running, and a fault. These are essential when diagnosing
problems.
E. Rack, Slot, Backplane – These physically house and connect the
electronic components of a PLC.

Figure 0.6 Typical Subsystems for a PLC system

18.16 Processor Module

A wide range of processor modules, scalable in terms of performance and


capacity, are available to meet the different needs of users. Processors manage
the whole PLC station consisting of discrete input/output modules, analog
modules and application-specific function modules (counting, axis control,
stepper control, communication, etc.) located on one or more racks connected
to the backplane. In terms of hardware, besides a CPU and possible co-
processor, each processor module typically includes:
 a protected internal RAM memory which can hold the application
program and can be extended by memory extension cards (RAM or
Flash EPROM)
 a real-time clock
 ports for connecting several devices simultaneously for purposes such
as programming, human-machine interface etc.
 communication cards for various industrial communication standards
such as Modbus+ or Fieldbus, as well as serial links and Ethernet
links
 a display block with LEDs and a RESET button, used to activate a cold
restart of the PLC system.


Typical specifications for a high end and a low end PLC processor module
for a rack-based PLC system are given below.
Table 0.5 Typical Features of high end and low-end processor modules

Processor modules contain function block libraries, which can be configured


to work with other modules, to realize various automation-related
functionality, such as:
 Counting up to 10 – 100 kHz
 PID Control with algorithms realized in different forms
 Controlled positioning for manufacturing by CNC machines with
stepper / servo drives, and features such as rapid traverse / creep speed
for high accuracy positioning of point to point axes, interpolation and
multi axis synchronization for contouring axes
 Input/output: These may be categorized as digital / analog depending
on the nature of the signal or as local/remote/networked, depending
on the interface through which it is acquired. These are described in
detail below.

18.17 Input Module

Input modules convert process-level signals from sensors (e.g. voltage-free
contacts, 0–24 V DC, 4–20 mA) to processor-level digital signals such as 5 V
or 3.3 V. They also accept direct inputs from thermocouples and RTDs in the
analog case, and limit switches or encoders in the digital case. Naturally,
therefore, these modules include circuitry for galvanic isolation, such as that
provided by optocouplers.

Galvanic isolation refers to the electrical separation of the field-side signals
from the PLC's internal electronics (for example through optocouplers), so that
there is no direct conductive path between them and field-side surges or ground
potential differences cannot reach the controller.


18.18 Analog input modules

Analog input modules convert analog process level signals to digital values,
which are then processed by the digital electronic hardware of the
programmable controller. A set of typical parameters that define an analog
input module are shown in Table 22.2. The analog modules sense 8/16 analog
signals in the range ± 5 V, ± 10 V or 0 to 10 V. Each channel can either be
single-ended, or differential. For single ended channels only one wire is
connected to a channel terminal. The analog voltage on each channel terminal
that is sensed is referred to a common ground. In the case of differential
channels, each channel terminal involves two wires and the voltage between
the pair of wires is sensed. Thus, both the wires can be at different voltages
and only their difference is sensed and converted to digital. Differential
channels are more accurate but consume more electronic resources of the
module for their processing. Often these modules also house channels that
output analog/digital signals, as well as excitation circuitry for sensors such
as RTDs.
An analog module typically contains:
 Analog to digital (A/D) converters
 Analog multiplexers and simultaneous sample-hold (S/H)
 Analog signal termination
 PLC bus ports
 Synchronization
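As a rough, hypothetical illustration of the conversion such a module performs, the following Python sketch scales a signed A/D reading into volts for an assumed ±10 V, 16-bit channel and then maps the voltage onto an assumed 0–400 °C engineering span; the resolution, ranges and function names are illustrative and not taken from any particular module.

def counts_to_volts(raw, full_scale_volts=10.0, bits=16):
    # Assumed signed (two's-complement) A/D result for a +/-10 V input range.
    return raw * full_scale_volts / (2 ** (bits - 1))

def volts_to_engineering(volts, lo=0.0, hi=400.0, v_lo=0.0, v_hi=10.0):
    # Linear scaling of a 0-10 V signal onto an assumed 0-400 degC span.
    return lo + (volts - v_lo) * (hi - lo) / (v_hi - v_lo)

raw_reading = 16384                              # half of positive full scale
volts = counts_to_volts(raw_reading)             # 5.0 V
print(volts, volts_to_engineering(volts))        # 5.0  200.0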

Figure 0.7 Analog IO Module


Table 0.6 Typical Parameters for an Analog Input Module

18.19 Digital Input Modules

The digital input modules convert the external binary signals from the
process to the internal digital signal level of the programmable controller.
Digital input channel processing involves isolation and signal conditioning
before inputting to a comparator for conversion to a 0 or a 1. The typical
parameters that define a digital input module are shown in tabular form along
with typical values in Table.
Table 0.7 Typical Parameters for a Digital Input Module

18.20 Output Modules

Outputs to actuators allow a PLC to cause something to happen in a process.


Common actuators include:
1. Solenoid Valves - logical outputs that can switch a hydraulic or
pneumatic flow.
2. Lights - logical outputs that can often be powered directly from PLC
boards.


3. Motor Starters - motors often draw a large amount of current when


started, so they require motor starters, which are basically large relays.
4. Servo Motors - a continuous output from the PLC can command a
variable speed or position to a servo motor drive system.
The outputs from these modules may be used to drive such actuators.
Consequently, they include circuitry for current / power drive using solid-
state electronics such as transistors for DC outputs or triacs for AC outputs.
Continuous outputs require output cards with D/A converters. Sometimes
they also provide potential free relay contacts (NO/NC), which may be used
to drive higher power actuators using a separate power source. Since these
modules straddle across the processor and the output power circuit, these must
provide isolation. However, most often, output modules act as modulators of
the actuator power, which is actually applied to the equipment, machine or
plant. External power supplies are connected to the output card and the card
will switch the power on or off for each output. Typical output voltages are
120V ac, 24V dc, 12-48V ac/dc, 5V dc (TTL) or 230V ac. These cards
typically have 8 to 16 outputs of the same type and can be purchased with
different current ratings. A common choice when purchasing output cards is
relays, transistors or triacs. Relays are the most flexible output devices. They
are capable of switching both AC and DC outputs. But, they are slower (about
10ms switching is typical), they are bulkier, they cost more, and they wear
out after a large number of cycles. Relays can switch high DC and AC voltage
levels while maintaining isolation. Transistors are limited to DC outputs, and
triacs are limited to AC outputs. Transistor and triac outputs are called
switched outputs. In this case, a voltage is supplied to the PLC card, and the
card switches it to different outputs using solid-state circuitry (transistors,
triacs, etc.). Triacs are well suited to AC devices requiring less than 1A.
Transistor outputs use NPN or PNP transistors up to 1A typically. Their
response time is well under 1ms.

18.21 Analog Output Module

Analog output modules convert digital values from the PLC processor module
into an analog signal required by the process. These modules therefore require
a D/A converter for providing analog outputs. However, typically, servo-
amplifiers for power amplification, required for driving high current loads
directly, are not integrated on-board. Front connectors are used for
terminating the signal cables. Modules and front connectors may be inserted
and removed under power. The output signals can be disabled by means of an
enable input. The last value then remains latched. Typical parameters that
define an analog output module are shown in Table 18.8 along with typical
values.


18.22 Digital Output Module

Digital output modules convert internal signal levels of the programmable


controllers into the binary signal levels required externally by the process.
Outputs can be DC or AC. Up to 16 outputs can be connected in parallel.
Indications for short-circuits, fuse blowing, etc. are often provided.
Table 0.8 Typical Parameters for an Analog Output Module

The typical parameters that define a digital output module are shown in Table
18.9 along with typical values.
Table 0.9 Typical Parameters for a Digital Output Module


18.23 Function Modules

For high-speed I/O tasks, such as measuring speed by counting pulses from
shaft angle encoders, or for precision position control applications,
independent I/O modules that execute their tasks independently of the
central processor are required to meet the timing requirements of the I/O.
These signal-preprocessing, "intelligent" I/O modules make it possible to
count fast pulse trains and to acquire and process travel increments, speed
and time measurements, etc.; that is, they take on the critical timing and
control tasks which normally cannot be carried out fast enough by the central
processor alongside its primary logic control functions.
These modules not only relieve the central processor of additional tasks, but
they also provide fast and specialized solutions to some common control
problems. The processing of the signals is carried out primarily by the
appropriate I/O modules, which frequently operate with their own processor.
Below we discuss two such modules, which are used with PLCs to handle
specific high-performance automation functions.
A. A Count Module is employed where pulses at high frequency have
to be counted, i.e. when machines run fast. It can also be applied to
output fast pulse trains or realize accurate timing signals.
B. A Loop Controller Module is primarily used where high speed
closed loop control is required, such as with controlled drives. The
preprogrammed, parameterized functions available with the module
(e.g. for ramp-function generation, speed regulation, signal limit
monitoring) can be easily parameterized via graphical interfaces by a
programmer.

18.24 Count Module

A count module senses fast pulses, from sources such as shaft angle encoders,
through several input ports. The counting frequency can be as high as 2 MHz
and, typically, a counter of length 16 bits or more can count both up and down.
Counter modules can often also be applied for time and frequency measurement
and as frequency dividers.


Figure 0.8 A high speed counter module

Typical counter module hardware contains, among other things, an
interface to the processor through the system bus, a counter electronics block,
a quartz-controlled frequency generator and a frequency divider. For example,
it may contain, say, 5 counters of, say, 16 bits each, which are
cascadable. In this way, up to 80 bits can be counted in various codes, so that
decimal values up to about 10^24 (roughly 2^80) can be counted. Each port input
can be switched to any of the counters. It is also possible to place a frequency
divider, programmable from 1 to 16, between a port input and its counter. The
frequency of an internal frequency generator can be directed either straight to
a counter or via the frequency divider to a port input. On reaching the terminal
count, the counter outputs a level or edge signal.
For each counter there are a number of different operating modes, which can
be set by a user program. With a comparator and an alarm register, a number
of count values can be compared and under defined conditions configured to
turn on a process alarm. A counter can be programmed in many ways, such
as:
 Count mode binary or BCD coded
 Count once or cyclically
 Count on rising or falling edge
 Count up or down
 Counting of internal clock or external pulses
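As an illustration of how a counter module might be used for frequency measurement, the short sketch below converts a gated pulse count back into the input frequency, taking an optional pre-divider into account; the gate time, divider value and function name are assumptions made for the example only.

def measured_frequency(pulse_count, gate_time_s, divider=1):
    # Frequency at the port input, given the count accumulated over the
    # gate time and the programmed pre-divider (assumed range 1..16).
    return pulse_count * divider / gate_time_s

# Example: 25 000 counts collected during a 0.1 s gate behind a divide-by-4 pre-scaler
print(measured_frequency(25_000, 0.1, divider=4))   # 1 000 000.0 Hz, i.e. 1 MHz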


Session 19
PLC hardware selection and Programming
Content
19.1 Structure of a PLC Program......................................................... 119
19.2 Program Execution ...................................................................... 120
19.3 Interrupt Driven and Clock Driven Execution Modes ................. 121
19.4 The Relay Ladder Logic (RLL) Diagram .................................... 121
19.5 RLL Programming Paradigms: Merits and Demerits .................. 122
19.5.1 Example: Forward Reverse Control ..................................... 123
19.6 Inputs I, Output Q ........................................................................ 124
19.7 Internal Variable Operands or Flags ............................................ 125
19.8 Timer ............................................................................................ 125
19.9 Counter......................................................................................... 131
19.10 Addressing ................................................................................ 132
19.11 IEC 1131-3: The International Programmable Controller
Language Standard ................................................................................. 132
19.12 Major Features of IEC 1131-3 ................................................. 133
19.13 IEC 1131-3 Programming Languages ...................................... 133
19.14 Function Block Diagram (FBD) ............................................... 134
19.15 Structured Text (ST)................................................................. 134
19.16 Instruction List (IL) .................................................................. 135
19.17 Sequential Function Chart (SFC) ............................................. 135
19.18 Steps ......................................................................................... 136
19.19 Transitions ................................................................................ 137
19.20 Simple Sequence ...................................................................... 138
19.21 SFC-based Implementation of the Stamping Process Controller
139


19.1 Structure of a PLC Program

There are several options for programming a PLC, as discussed earlier. Common
to all of them is that PLC programs are structured in their composition, i.e.
they consist of individual, separately defined program sections which are
executed in sequence. These program sections are called "blocks". Each program
section contains statements. The blocks are supposed to be functionally
independent. Assigning a particular (technical) function to a specific block,
which has clearly defined and simple interfaces with other blocks, yields a
clear program structure. The testing of such programs in sections is
substantially simplified.
Various types of blocks are available according to the function of the program
section.
In general a major part of the program is contained in blocks that contain the
program logic graphically represented. For improved modularity, these
blocks can be called in a sequence or in nested configurations.
Special function blocks, which are similar to application library modules, are
used to realize either frequently recurring or extremely complex functions.
A function block can be "parameterized".
The interfaces to the operating system of the PLC, which are similar to the
system calls in application programming for personal computers, are defined
in special blocks. These are called only by the system program, for
particular modes of execution and in the case of faults.
Function blocks are also used where the realization of the control logic with
STEP 5 statements cannot be carried out graphically. Similarly, individual
steps of a control sequence can be programmed into such a block and reused at
various points in a program or by various programs. PLC manufacturers offer
standard function blocks for complex functions, already tested and
documented. With adequate expertise the user can produce his or her own
function blocks. Some very common function blocks (analog input/output,
interface function blocks for communication processors and others) may be
integrated as standard function blocks and supported by the operating system
of the PLC.
Users can also define separate data blocks for special purposes, such as
monitoring, trending etc., and perform read/write on such areas.
Such facilities of structured programming result in programs, which are easier
to read, write, debug and maintain.


19.2 Program Execution

There are different ways and means of executing a user program. Normally,
cyclic execution of the program is preferred, and these cyclic operations are
given due priorities. Program processing in a PLC happens cyclically with the
following steps:
1. After the PLC is initialized, the processor reads the individual inputs.
The status of the inputs is stored in the process-image input table (PII).
2. The processor then processes the program stored in the program memory.
This consists of a list of logic functions and instructions, which are
processed in succession; the required input information is accessed
from the previously read-in PII, and the corresponding results are
written into a process-image output table (PIQ). Other storage
areas for counters, timers and memory bits are also accessed during
program processing by the processor if necessary.
3. In the third step, after the processing of the user program, the status
from the PIQ is transferred to the outputs, which are then switched on
and/or off accordingly. Afterwards the execution of the next cycle begins
again from step 1.
The same cyclic process also acts upon an RLL program.
The time required by the microprocessor to complete one cycle is known as
the scan time. After all rungs have been tested, the PLC then starts over again
with the first rung. Of course, the scan time for a particular processor is a
function of the processor speed, the number of rungs, and the complexity of
each rung.
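To make the cyclic PII / program / PIQ model concrete, the following minimal Python sketch simulates one scan cycle. The helper names, the address labels and the single example rung (Q4.0 = I0.0 AND NOT I0.1) are assumptions made purely for illustration and do not represent any vendor's runtime.

def read_physical_inputs():
    # Step 1: in a real PLC this would sample the input modules into the PII.
    return {"I0.0": True, "I0.1": False}

def user_program(pii, piq):
    # Step 2: user logic works only on the process images, never on the I/O directly.
    piq["Q4.0"] = pii["I0.0"] and not pii["I0.1"]    # illustrative rung

def write_physical_outputs(piq):
    # Step 3: in a real PLC this would drive the output modules from the PIQ.
    print("outputs:", piq)

def scan_cycle(piq):
    pii = read_physical_inputs()
    user_program(pii, piq)
    write_physical_outputs(piq)

piq = {"Q4.0": False}
for _ in range(3):          # three passes of the (normally endless) scan loop
    scan_cycle(piq)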


Figure 0.1 The cyclic execution of PLC Programs

19.3 Interrupt Driven and Clock Driven Execution Modes

A cyclically executing program can however be interrupted by a suitably


defined signal resulting in an interrupt driven mode of program execution
(when fast reaction time is required). If the interrupting signal occurs at fixed
intervals, we can also realize time-synchronous execution (e.g. for closed-
loop control functions). Cyclic execution, synchronized by a real-time
clock, is the most common program structure for a PLC.
Similarly, programmers can also define error-handling routines in their
programs. Specific, defined error procedures are then invoked if the PLC
operating system encounters faults of given types during execution.

19.4 The Relay Ladder Logic (RLL) Diagram

A Relay Ladder Logic (RLL) diagram, also referred to as a Ladder diagram


is a visual and logical method of displaying the control logic which, based on
the inputs determine the outputs of the program. The ladder is made up of a
series of “rungs” of logical expressions expressed graphically as series and
parallel circuits of relay logic elements such as contacts, timers etc. Each rung


consists of a set of inputs on the left end of the rung and a single output at
the right end. The structure of a rung is shown below in Fig. 19.2(a)
& (b). Fig. 19.2(b) shows the internal structure of a simple rung in terms of
its element contacts connected in a series-parallel circuit.

Figure 0.2 (a) The structure of Relay Ladder Logic Programs for PLCs, (b) The internal structure of a
simple Rung

19.5 RLL Programming Paradigms: Merits and Demerits

For the programs of small PLC systems, RLL programming technique has
been regarded as the best choice because a programmer can understand the
relations of the contacts and coils intuitively. Additionally, a maintenance
engineer can easily monitor the operation of the RLL program on its graphical
representation because most PLC manufacturers provide an animated display
that clearly identifies the states of the contacts and coils. Although RLL is
still an important language of IEC 1131-3, as the memory size of today's PLC
systems increases, a large RLL program brings some significant
problems because RLL is not particularly suitable for well-structured
programming: it is difficult to structure an RLL program hierarchically.


19.5.1 Example: Forward Reverse Control

Figure 0.3 RLL Diagram for the Forward Reverse Control Problem

This example explains the control process of moving a motor either in the
forward direction or in the reverse direction. The direction of the motor
depends on the polarity of the supply. So in order to control the motor, either
in the forward direction or in the reverse direction, we have to provide the
supply with the corresponding polarity. Fig. 19.3 depicts the procedure to
achieve this using Relay Ladder Logic. Here, the ladder consists of two rungs
corresponding to forward and reverse motions.
The rung corresponding to forward motion consists of
1. A normally closed stop push-button (IN001),
2. A normally open forward run push-button (IN002) in parallel with
a normally open auxiliary contact (OP001),
3. A normally closed auxiliary contact (OP002) and
4. The contactor coil (OP001).
Similarly, the rung corresponding to reverse motion consists of
1. A normally closed stop push-button (IN001),
2. A normally open reverse run push-button (IN003) in parallel with
a normally open auxiliary contact (OP002),
3. A normally closed auxiliary contact (OP001) and
4. The contactor coil (OP002).
Operation: The push-buttons (PB) represented by IN--- are real input push-
buttons, which are to be manually operated. The auxiliary contacts are
operated through the program. Initially the machine is at standstill, no
voltage supply is present at the coils, and the PBs are as shown in the figure.
The stop PB is initially closed; the motor will not move until the forward run
PB or the reverse run PB is pressed. Suppose we want to run the motor in the
forward direction from standstill: the outputs of the coil contactors have logic '0'

and hence both the normally closed auxiliary contacts are closed (conducting).
If we press and release the forward run PB, the positive voltage from the +ve
voltage rail is passed to the coil. Once the coil contactor gives logic '1',
the following consequences take place simultaneously:
A. The auxiliary contact OP001 in the second rung becomes open,
which blocks the voltage path for reverse motion of the motor. At this
stage, the second rung cannot be turned on even if the reverse run PB is
pressed by mistake.
B. The auxiliary contact OP001 in the first rung closes, which provides the
path for the positive voltage until the stop PB is pressed. Here the
auxiliary contact OP001 acts as a 'latch', which keeps the coil OP001
energized even after the PB IN002 is released.
If we want to rotate the motor in the reverse direction, the stop PB has to be
pressed first so that no voltage is present at the coil; then we can turn on
the PB corresponding to reverse run. This is a simple example of 'interlocking',
where each rung locks out the operation of the other rung.
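The latching and interlocking behaviour described above can be mimicked in a few lines of ordinary code, evaluated once per scan. The sketch below is only an illustration of the rung logic of Fig. 19.3, with the stop push-button treated as normally closed (True while not pressed); it is not PLC code.

def forward_reverse_scan(stop_nc, fwd_pb, rev_pb, op001, op002):
    # One scan of the two interlocked rungs: each coil is sealed in by its own
    # NO auxiliary contact and blocked by the NC auxiliary contact of the other coil.
    new_op001 = stop_nc and (fwd_pb or op001) and not op002   # forward rung
    new_op002 = stop_nc and (rev_pb or op002) and not op001   # reverse rung
    return new_op001, new_op002

op001 = op002 = False
op001, op002 = forward_reverse_scan(True, True, False, op001, op002)   # press forward PB
op001, op002 = forward_reverse_scan(True, False, False, op001, op002)  # release PB: latch holds
op001, op002 = forward_reverse_scan(True, False, True, op001, op002)   # reverse PB is ignored
print(op001, op002)   # True False: the motor keeps running forward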
There are several other programming paradigms for PLCs. Two of them are
briefly mentioned here.

19.6 Inputs I, Output Q

Generally, the operands of a PLC program can be classified as inputs (I) and
outputs (Q). The input operands refer to external signals of the controlled
system, whose values are acquired from the input signal modules. The
operating system of the PLC assigns the signal status of the input and output
modules into the process image of the inputs and outputs at the beginning of
the program. The operand area is located within the process image of the
central controller RAM. Within the program, the signal status of the operand
area is scanned and processed into logic functions in accordance with the user
program; the individual bits within the process output image are fixed. When
the program has been executed the operating system transfers the signal status
of the process image independently to the output modules. This method
enables faster program execution because access to the process image is
executed much faster than access to the I/O – modules.
In RLL Programs, inputs are represented as contacts. Two types of contacts
are used, namely, normally open and normally closed contacts. The difference
in the sense of interpretation between these contacts is shown below.
The switch shown here is a NO contact, i.e. it is closed when it is active.


The switch shown here is an NC contact, i.e. it is closed when it is not active.

Figure 0.4 (b) NC Contact interpretation

19.7 Internal Variable Operands or Flags

In addition to the inputs and outputs, which correspond to physical signals in


the controlled system, internal variables are required to save the intermediate
computational values of the program. These are referred to as flags, or as
auxiliary contacts in Relay Ladder Logic parlance. The number of such
variables admissible in a program may be limited. Such auxiliary contacts
correspond to output values and are assumed to be activated by the
corresponding output values. They may be either of an NO or an NC type.
Therefore, an NO auxiliary contact would be closed if the corresponding
output is active i.e. has value “1”.

19.8 Timer

These are special operands of a PLC, which represent a time delay relay in a
relay logic system. The time functions are a fixed component of the central
processing unit. The number of these varies from manufacturer to
manufacturer and from product to product. It is possible to achieve time
delays in the range of few milliseconds to few hours.


Figure 0.5 Structure of a Typical Timer

The representation of timers is shown in Fig. 19.5. Timers have a preset
register value, which represents the maximum count the timer can hold and
which can be set from the software/program. The structure shown has an
'enable/reset logic' and a 'run logic' in connection with the timer. The
counter does not run and the register holds 'zero' until the enable/reset
logic is 'on'. Once the 'enable/reset logic' is 'on', the counter starts
counting when the 'run logic' is 'on'. The output is 'on' only when the
counter reaches the maximum count.
Various kinds of timers are explained as follows.
On-delay timer: The input and output signals of the on-delay timer are as
shown in Fig. 19.6. When the input signal turns on, the output signal turns
on after a certain delay. When the input signal turns off, however, the
output signal turns off at the same instant. If the input turns on and off
within a time shorter than the delay time, there is no change in the output,
which remains in the 'off' condition even though the input was turned on and
off; i.e., no output is observed unless the input pulse width is greater than
the delay time.

Figure 0.6 A Typical Input output waveform for an On-Delay Timer


Figure 0.7 The realization of an On-delay timer from a general timer.

Realization of on-delay timer: The realization of the on-delay timer using the
basic timer of the previous figure is explained here. The realization is as
shown in Fig. 19.7, which shows a real input switch (IN001),
coil 1 (OP002), two normally open auxiliary contacts (OP002), and
coil 2 (OP002). When the real input switch is 'on', the coil (OP002) is 'on'
and hence both the auxiliary switches are 'on'. Now the counter value starts
increasing and the output of the timer turns 'on' only after it reaches the
maximum preset count. The behaviour of this timer is shown in the waveform
figure for the on-delay timer. The value in the counter is 'reset' when the
input switch (IN001) is off, as the 'enable/reset logic' is then 'off'. This
is a non-retentive timer.
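The enable/run/count behaviour of an on-delay timer can be sketched generically as follows, with the preset expressed as a number of scan cycles. This is only an illustration of the idea, not any manufacturer's timer implementation.

class OnDelayTimer:
    # Illustrative, non-retentive on-delay timer evaluated once per scan.
    def __init__(self, preset_scans):
        self.preset = preset_scans
        self.count = 0
        self.output = False

    def scan(self, run):
        if run:
            self.count = min(self.count + 1, self.preset)   # count while the input is on
        else:
            self.count = 0                                  # non-retentive reset
        self.output = self.count >= self.preset
        return self.output

timer = OnDelayTimer(preset_scans=3)
for state in [1, 1, 1, 1, 0, 1, 1]:               # input waveform, one value per scan
    print(int(timer.scan(bool(state))), end=" ")  # 0 0 1 1 0 0 0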
Off-delay timer: The input and output signals of the off-delay timer are as
shown in Fig. 19.8. When the input signal turns on, the output signal turns
on at the same time. When the input signal turns off, however, the output
signal turns 'off' only after a certain delay. If the input turns off and on
again within a time shorter than the delay time, there is no change in the
output, which remains in the 'on' condition; i.e., the delayed turn-off of the
output is observed only when the input stays off for longer than the delay
time.


Figure 0.8 A Typical Input output waveform for an Off-Delay Timer

Realization of off-delay timer: The realization of the off-delay timer using
the basic timer of the previous figure is explained here. The realization is
as shown in Fig. 19.9, which shows a real input switch (IN001),
coil 1 (OP002), two normally closed input contacts (IN001), and output
contacts (OP002, OP003). When the real input switch is 'on', the coil (OP002)
is 'on' and both the auxiliary input switches are 'off'. Now the output
contact (OP002) becomes 'off', which in turn makes the auxiliary contact
(OP002) in the third rung become 'on', and hence the output contact (OP003)
is 'on'. When the real input switch is 'off', the counter value starts
increasing and the output of the contact becomes 'on' after the timer reaches
the maximum preset count. At this time the auxiliary contact in the third
rung becomes 'off' and so does the output contact (OP003). The input and
output signals are as shown in the figure, which illustrates the off-delay
behaviour.


Figure 0.9 The realization of an Off-delay timer from a general timer.

Fixed pulse width timer: The input and output signals of the fixed pulse
width timer are as shown in Fig. 19.10. When the input signal turns on,
the output signal turns on at the same time, remains on for a fixed time and
then turns 'off'. The output pulse width is independent of the input pulse
width.

Figure 0.10 A Typical Input output waveform for a Fixed Width Timer


Figure 0.11 The realization of a Fixed width timer from a general timer.

Retentive timer: The input and output signals of the retentive timer are as
shown in Fig. 19.12. This is also implemented internally in a register, as in
the previous case. When the input is 'on', the internal counter counts until
the input goes 'off'; at this point the counter holds its value until the next
input pulse is applied, and then resumes counting from the value existing in
the register. Hence it is named a 'retentive' timer. The output is 'on' only
when the counter reaches its 'terminal count'.

Figure 0.12 A Typical Input output waveform for a Retentive Timer

Non-retentive timer: The input and output signals of the non-retentive timer
are as shown in Fig. 19.13. This is implemented internally in a register.
When the input is 'on', the internal counter counts until the input goes
'off', at which point the value in the counter is reset to zero. Hence it is
named a non-retentive timer. The output is 'on' only when the counter reaches
its 'terminal count'.

Figure 0.13 A Typical Input output waveform for a Non-Retentive Timer

19.9 Counter

The counting functions (C) operate like hardware counters but are a fixed
component of the central processing unit. The number of these varies for each
of the programmable controllers. It is possible to count up as well as to
count down. The counting range is from 0 to 999. The count is either binary
or BCD coded for further processing.
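To show what "BCD coded" means for such a three-digit (0 to 999) count value, the small helper below unpacks a packed-BCD word into an ordinary integer; it is a generic sketch, not tied to any specific controller.

def bcd_to_int(bcd_word):
    # Convert a packed-BCD value (e.g. 0x999 for a count of 999) to an integer.
    value, place = 0, 1
    while bcd_word:
        digit = bcd_word & 0xF
        if digit > 9:
            raise ValueError("not a valid BCD digit")
        value += digit * place
        bcd_word >>= 4
        place *= 10
    return value

print(bcd_to_int(0x999))   # 999
print(bcd_to_int(0x147))   # 147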

Figure 0.14 Structure of a Typical Counter


19.10 Addressing

The designation of a certain input or output within the program is referred to


as addressing. Different PLC manufacturers adopt different conventions for
specifying the address of a specific input or output signal. A typical
addressing scheme adopted in PLCs manufactured by Siemens is illustrated
in the sequel.
The inputs and outputs of the PLCs are mostly defined in groups of eight on
digital input and/or digital output devices. Such a group of eight is called a
byte, and every such group receives a number as its byte address. Each
input/output byte is divided into 8 individual bits, which are addressed
individually. These bits are numbered from bit 0 to bit 7, giving a bit
address. For example, in the address I0.4, I denotes that the address type is
specified as Input, 0 is the byte address and 4 the bit address. Similarly, in
the address Q5.7, Q denotes that the address type is specified as Output, 5 is
the byte address and 7 is the bit address.
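The byte.bit convention can be made concrete with a small parser. The accepted prefixes (I for input, Q for output) follow the examples above; the function name and return format are assumptions made for illustration.

def parse_address(addr):
    # Split a Siemens-style bit address such as 'I0.4' or 'Q5.7'
    # into (signal type, byte address, bit address).
    kind = {"I": "input", "Q": "output"}[addr[0].upper()]
    byte_part, bit_part = addr[1:].split(".")
    byte_addr, bit_addr = int(byte_part), int(bit_part)
    if not 0 <= bit_addr <= 7:
        raise ValueError("bit address must be between 0 and 7")
    return kind, byte_addr, bit_addr

print(parse_address("I0.4"))   # ('input', 0, 4)
print(parse_address("Q5.7"))   # ('output', 5, 7)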

19.11 IEC 1131-3: The International Programmable


Controller Language Standard

IEC 1131 is an international standard for PLCs formulated by the


International Electrotechnical Commission (IEC). As regards PLC
programming, it specifies the syntax, semantics and graphics symbols for the
following PLC programming languages:
 Ladder diagram (LD)
 Sequential Function Charts (SFC)
 Function Block Diagram (FBD)
 Structured Text (ST)
 Instruction List (IL)
IEC 1131 was developed to address the industry demands for greater
interoperability and standardization among PLC hardware and software
products and was completed in 1993. A component of IEC 1131, IEC
1131-3 defines the standards for data types and programming. The goal for
developing the standard was to propose a programming paradigm that would
contain features to suit a large variety of control applications, which would
eliminate proprietary barriers for the customer and their associated training
costs. The language specification takes into account modern software
engineering principles for developing clean, readable and modular code. One
of the benefits of the standard is that it allows multiple languages to be used


simultaneously, thus enabling the program developer to use the language best
suited to each control task.

19.12 Major Features of IEC 1131-3

The following are some of the major features of the standard.


1. Multiple Language Support: One of the main features of the standard
is that it allows multiple languages to be used simultaneously, thus
enabling the program developer to use the language best suited to each
control task.
2. Code Reusability: The control algorithm can include reusable entities
referred to as "program organization units (POUs)" which include
Functions, Function Blocks, and Programs. These POUs are reusable
within a program and can be stored in user-declared libraries for
import into other control programs.
3. Library Support: The IEC 1131 Standard includes a library of pre-
programmed functions and function blocks. An IEC-compliant
controller supports these as a "firmware" library, that is, the library is
pre-coded in executable form into a PROM or flash RAM on the device.
Additionally, manufacturers can supply libraries of their own
functions. Users can also develop their own libraries, which can
include calls to the IEC standard library and any applicable
manufacturers' libraries.
4. Execution Models: The general construct of a control algorithm
includes the use of "tasks", each of which can have one or more
Program POUs. A task is an independently schedulable software
entity and can be assigned a cyclic rate of execution, can be event
driven, or be triggered by specific system functions, such as startup.

19.13 IEC 1131-3 Programming Languages

IEC 1131-3 defines two graphical programming languages (Ladder Diagram


and Function Block Diagram), two textual languages (Instruction List and
Structured Text), and a fifth language (Sequential Function Chart) that is a
tool to define the program architecture and execution semantics. The set of
languages include assembly-like low-level language like the Instruction List,
as well as Structured Text having features similar to those of a high-level
programming language. Using these, different computational tasks of a
control algorithm can be programmed in different languages, then linked into
a single executable file. Below we first describe FBD, IL and ST in brief. The
SFC is discussed in greater detail later in this lesson.


19.14 Function Block Diagram (FBD)

The function block diagram is a key product of the standard IEC 1131-3. FBD
is a graphical language that lets users easily describe complex procedures by
simply wiring together function blocks, much like drawing a circuit diagram
with the help of a graphical editor. Function blocks are basically algorithms
that can retain their internal state and compute their outputs using the
persistent internal state and the input arguments. Thus, while a static
mathematical function will always return the same output given the same
input (e.g. sine, cosine), a function block can return a different value given
the same input, depending on its internal state (e.g. filters, PID control). This
graphical language clearly indicates the information or data flow among the
different computational blocks and how the overall computation is
decomposed among smaller blocks, each computing a well-defined operation.
It also provides good program documentation. The IEC 1131 standard
includes a wide range of standard function blocks for performing a variety of
operations, and both users and vendors can create their own. A typical simple
function block is shown below in Fig. 19.15.

Figure 0.15 Combinational logic programmed with function blocks
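To underline the difference between a static function and a function block, the following Python sketch models a tiny first-order filter block: called twice with the same input it returns different outputs, because it retains internal state between calls. The class name and filter coefficient are illustrative assumptions, not part of the IEC standard library.

class FirstOrderFilter:
    # Illustrative function block: its output depends on retained internal state.
    def __init__(self, alpha=0.5):
        self.alpha = alpha     # smoothing coefficient (assumed value)
        self.state = 0.0       # persistent internal state of the block

    def __call__(self, x):
        self.state += self.alpha * (x - self.state)
        return self.state

fb = FirstOrderFilter()
print(fb(10.0))   # 5.0  - first call
print(fb(10.0))   # 7.5  - same input, different output, because the state changed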

19.15 Structured Text (ST)

Structured Text (ST) is a high-level structured programming language


designed for expressing algorithms with complex statements not suitable for
description in a graphical format. ST supports a set of data types to
accommodate analog and digital values, times, dates, and other data. It has
operators to allow logical branching (IF), multiple branching (CASE), and
looping (FOR, WHILE…DO and REPEAT…UNTIL). Typically, a
programmer would create his own algorithms as Functions or Function
Blocks in Structured Text and use them as callable procedures in any


program. A typical simple program segment written in Structured Text is


shown below in Fig.19.16.

Figure 0.16 Simple program segment written in Structured Text.

19.16 Instruction List (IL)

A low-level assembly-like language, IL is useful for relatively simple


applications, and works on simple digital data types such as Boolean, integer.
It is tedious and error prone to write large programs in such low-level
languages. However, because complete control of the implementation,
including elementary arithmetic and logical operations, rests with the
programmer, it is used for optimizing small parts of a program in terms of
execution times and memory. A typical simple program segment written in
Instruction List is shown below in Fig. 19.17.

Figure 0.17 Simple program segment written in Instruction List

19.17 Sequential Function Chart (SFC)

SFC is a graphical method, which represents the functions of a sequential


automated system as a sequence of steps and transitions.


SFC may also be viewed as an organizational language for structuring a


program into well-defined steps, which are similar conceptually to states, and
conditioned transitions between steps to form a sequential control algorithm.
While an SFC defines the architecture of the software modules and how they
are to be executed, the other four languages are used to code the action logic
that exercises the outputs, within the modules to be executed within each step.
Similar modules are used for computation of the logical enabling conditions
for each transition.
Each step of SFC comprises actions that are executed, depending on whether
the step is active or inactive. A step is active when the flow of control passes
from one step to the next through a conditional transition that is enabled when
the corresponding transition logic evaluates to true. If the transition condition
is true, control passes from the current step, which becomes inactive, to the
next step, which then becomes active. Each control function can, therefore,
be represented by a group of steps and transitions in the form of a graph with
steps labeling the nodes and transitions labeling the edges. This graph is
called a Sequential Function Chart (SFC).

19.18 Steps

Each step is a control program module which may be programmed in RLL or


any other language. Two types of steps may be used in a sequential function
chart: initial and regular. They are represented graphically as shown below in
Fig. 19.18.

Figure 0.18 An initial step (a) and a regular step (b) in an SFC

The initial step is executed the first time the SFC block is executed or as a
result of a reset operation performed by a special function named
SFC_RESET. There can be one and only one initial step in an SFC. The initial
step cannot appear within a simultaneous branch construct, (which is
described later in this section) but it may appear anywhere else.
A regular step is executed if the transitional logic preceding the step makes
the step active. There can be one or many regular steps in an SFC network,


one or more of which may be active at a time. Only the active steps are
evaluated during a scan.
Each step may have action logic consisting, say, of zero or more rungs
programmed in Relay Ladder Diagram (RLD) logic language. Action Logic
is the logic associated with a step, i.e., the logic, programmed by RLL or any
other logic, which is executed when the step is active. When a step becomes
inactive, its state is initialized to its default state. A collection of steps may be
labeled together as a macro-step.

Figure 0.19 A step with action logic (a) and a macro-step (b) in an SFC

19.19 Transitions

Each transition is a program module like a step that finally evaluates a


transition variable. Once a transition variable evaluates to true the step(s)
following it are activated and those preceding it are deactivated. Only
transitions following active states are considered active and evaluated during
a scan. A transition can also be a simpler entity, such as a variable whose
value is set by a simple digital input. Transition logic can be programmed
in any language. If programmed in RLL, each transition must contain a rung
that ends with an output coil to set its transition variable.

Figure 0.20 Transitions connect steps in an SFC


The SFC in Fig. 19.20 shows how the transitions connect steps in an SFC.
Initially, step S1 is active. Thus, transition T1 is also active. When the
transition variable T1 becomes true, immediately, S1 becomes inactive, S2
becomes active while T1 becomes inactive and T2 becomes active.

19.20 Simple Sequence

In a simple sequence, control passes from step S2 to step S3 only if step S2 is


active and transition T2 evaluates true.

Figure 0.21 A simple sequence in an SFC (a) and its execution over scans (b)

The table in Fig. 19.21 (b) indicates the status (A: active; I: inactive) of the
steps and transitions over scan cycles.
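The execution rule described above (only transitions following active steps are evaluated; a true transition deactivates its predecessor and activates its successor) can be captured in a compact, generic sketch. The step names and transition conditions below are placeholders chosen for illustration, not part of the stamping example.

# Minimal sketch of step/transition evaluation for a simple sequence S1 -> S2 -> S3.
transitions = [
    ("S1", "S2", lambda io: io["start"]),        # T1
    ("S2", "S3", lambda io: io["part_ready"]),   # T2
]
active_steps = {"S1"}

def sfc_scan(io):
    global active_steps
    for prev_step, next_step, condition in transitions:
        if prev_step in active_steps and condition(io):   # only active transitions matter
            active_steps = (active_steps - {prev_step}) | {next_step}
    # the action logic of the currently active steps would be executed here

sfc_scan({"start": True, "part_ready": False})
print(active_steps)   # {'S2'}
sfc_scan({"start": False, "part_ready": True})
print(active_steps)   # {'S3'}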

Figure 0.22 The State Diagram for the industrial stamping press


Figure 0.23 The Output Table for the industrial stamping press

19.21 SFC-based Implementation of the Stamping Process


Controller

The SFC for the industrial stamping process control problem is shown in Fig.
19.24. It is seen to be identical to the state diagram drawn for the process in
Fig. 19.22, except possibly for graphical syntax used in the two diagrams.
Thus, for strictly sequential processes the SFC is nothing but the state
diagram. However, it can also model concurrency, which cannot be captured
in a simple FSM. There exist many DES modelling formalisms that capture
concurrent FSM dynamics (such as Statecharts, widely used for dynamic
modelling of software under the Unified Modelling Language formalism).
One can now develop the logic for the steps, transitions and actions. In this
lesson RLL is used for this purpose. Some of the ladder logic for the SFC is
shown in Figure 19.25.


Figure 0.24 SFC for Controlling a Stamping Press


Figure 0.25 Sample Ladder Logic for a Graphical SFC Program.

Note the following distinctions between the SFC-based implementation and
the state-based implementation with pure RLL.
A. The initialization logic can now be organized within the initial step of
the SFC, which is explicitly meant for this purpose. Note that in the
RLL implementation this had to be included within the logic for State
1. The SFC-based implementation is cleaner in this respect.
B. The order of execution of the state and transition logic is determined
by SFC execution semantics. Thus, there is no need to worry about
the order in which these program segments are physically ordered
within the overall program.
C. In the RLL implementation each and every step, transition and action
logic is evaluated in every scan cycle. In SFC only the active states,
their action logic and the active transitions are evaluated. This results
in significant saving in processor time, which may be utilized in the
system for other purposes.
D. The ladder logic includes a new instruction, EOT, which will tell the
PLC when a transition has completed. When the rung of ladder logic
with the EOT output becomes true the SFC will move to the next step
or transition.


Session 20
Industrial controllers-selection of
hardware
Content
20.1 Fans: Characteristics and Operation ............................................ 143
20.2 On-Off Control ............................................................................ 146
20.3 Outlet Dampers ............................................................................ 147
20.4 Variable Speed Drive ................................................................... 148
20.5 Energy Savings by Different Flow Control Methods Outlet
Damper.................................................................................................... 149
20.6 Variable Speed Drive ................................................................... 150
20.7 Pumps: Characteristics and Operation ......................................... 151
20.8 Flow Control ................................................................................ 153
20.9 Throttling ..................................................................................... 153
20.10 Variable Speed Drive ............................................................... 153
20.11 Static Head ............................................................................... 155
20.12 Introduction to Electrical Actuators ......................................... 157
20.12.1 DC Servomotors ................................................................... 158
20.12.2 Mechanical Construction ...................................................... 158
20.13 Equivalent Circuit .................................................................... 160
20.14 Torque-Speed Curve ................................................................ 161
20.15 Speed Control ........................................................................... 162
20.16 Variable-Voltage, Constant-Frequency Operation................... 162
20.17 Variable-Frequency Operation ................................................. 163
20.18 Variable Voltage Variable Frequency Supply ......................... 164
20.19 Voltage-source Inverter-driven Induction Motor ..................... 166
20.20 Square wave inverters .............................................................. 166
20.21 PWM Principle ......................................................................... 169
20.22 Sinusoidal PWM ...................................................................... 169
20.23 Implementation of a constant voltage/constant frequency
strategy 170


Introduction
The AC induction motor is the major converter of electrical energy into
mechanical and other useable forms. For this purpose, about two thirds of the
electrical energy produced is fed to motors. Much of the power that is
consumed by AC motors goes into the operation of fans, blowers and pumps.
It has been estimated that approximately 50% of the motors in use are for
these types of loads. These particular loads (fans, blowers and pumps) are
particularly attractive to look at for energy savings. Several alternate methods
of control for fans and pumps have been advanced recently that show
substantial energy savings over traditional methods.
Basically, fans and pumps are designed to be capable of meeting the
maximum demand of the system in which they are installed. However, quite
often the actual demand could vary and be much less than the designed
capacity. These conditions are accommodated by adding outlet dampers to
fans or throttling valves to pumps. These control methods are effective,
inexpensive and simple, but severely affect the efficiency of the system.
Other forms of control are now available to adapt fans and pumps to varying
demands, which do not decrease the efficiency of the system as much. Newer
methods include direct variable speed control of the fan or pump motor. This
method produces a more efficient means of flow control than the existing
methods. In addition, adjustable frequency drives offer a distinct advantage
over other forms of variable speed control.

20.1 Fans: Characteristics and Operation

Large fans and blowers are routinely used in central air conditioning systems,
boilers, drives and chemical process operations. The most common fan is the
centrifugal fan, which imparts energy to the air by centrifugal force. This results
in an increase in pressure and produces air flow at the outlet of the fan.


Figure 0.1 Fan curve

Fig. 20.1 is a plot of outlet pressure versus the flow of air of a typical
centrifugal fan at a given speed. Standard fan curves usually show a number
of curves for different fan speeds and include the loci of constant fan
efficiencies and power requirements on the operating characteristics. These
are all useful for selecting the optimum fan for any application. They also are
needed to predict fan operation and other parameters when the fan operation
is changed. Appendix 1 gives an example of a typical fan curve for an
industrial fan.
Fig. 20.2 shows a typical system pressure-flow characteristics curve
intersecting a typical fan curve.


Figure 0.2 System curve

The system curve shows the requirements of the vent system that the fan is
used on. It shows how much pressure is required from the fan to overcome
system losses and produce airflow. The fan curve is a plot of fan capability
independent of a system. The system curve is a plot of "load" requirement
independent of the fan. The intersection of these two curves is the natural
operating point. It is the actual pressure and flow that will occur at the fan
outlet when this system is operated. Without external control, the fan will
operate at this point.
Many systems however require operation at a wide variety of points. Fig. 20.3
shows a profile of the typical variations in flow experienced in a typical
system.

145
Session 20: Industrial controllers - selection of hardware

Figure 0.3 Typical system flow-duration profile

There are several methods used to modulate or vary the flow to achieve the
optimum points. Apart from the method of cycling, the other methods affect
either the system curve or the fan curve to produce a different natural
operating point. In so doing, they also may change the fan's efficiency and the
power requirements. Below these methods are explained in brief.

20.2 On-Off Control

This is typically done in home heating systems and air conditioners. Here,
depending on temperature of the space in question and the desired
temperature setting, the fan is switched on and off, cyclically. Although the
average temperature can be maintained by this method, this produces erratic
airflow, causes temperature to oscillate and is generally unacceptable for
commercial or industrial use.

146
Session 20: Industrial controllers - selection of hardware

20.3 Outlet Dampers

The outlet dampers affect the system curve by increasing the resistance to air
flow. The system curve is a simple function that can be stated as
P=K×Q2,
where P is the pressure required to produce a given flow Q in the system. K
is a characteristic of the system that represents the resistance to airflow. For
different values of outlet vane opening, different values of K are obtained.
Fig. 20.5 shows several different system curves indicating different outlet
damper positions.

Figure 0.4 Fan with outlet damper

Figure 0.5 Variations of system pressure - Flow characteristics with outlet dampers

The power requirement can be derived from computing the rectangular areas
shown in Fig. 20.5 at any operating point. Figure 20.7 shows the
corresponding variations in power requirement for this type of operation.

147
Session 20: Industrial controllers - selection of hardware

From the figure it can be seen that the power decreases gradually as the flow
is decreased.

Figure 0.6 Outlet damper - power requirements

20.4 Variable Speed Drive

This method changes fan curve by changing the speed of the fan. For a given
load on the fan the pressures and flows at two different speeds, N1 and N2
are given as follows:

where N= Fan Speed, Q= Volume Flow Rate, P= Pressure, W= Power. Note


that eliminating speed from the above equations gives back the law, P = K×Q2
for the system. Thus, by this method, the operating point for a given system
load shifts along the system characteristic curve as the speed of the fan is
varied. Fig. 20.7 is a representation of the variable speed method.

148
Session 20: Industrial controllers - selection of hardware

Figure 0.7 Variable speed method

Fig. 20.7 shows the significant reduction in horsepower achieved by this


method. Thus, in this method a desired amount of flow is achieved with the
minimum of input power. The other two methods modify some system
parameter, which generally results in the reduction of efficiency of the fan.
This is why the power demand is greater than the variable speed method.

20.5 Energy Savings by Different Flow Control


Methods Outlet Damper

In this section the power consumption for two of the above methods, namely
the variable speed method and the outlet damper method, and their associated
costs of operation are estimated for a given load profile and a fan curve
(shown in Appendix 1). Assume the fan selected has a rated speed of 300
RPM and 100% flow is to equal 100,000 CFM as shown on the chart. Assume
the following load profile.

149
Session 20: Industrial controllers - selection of hardware

For each operating point, one can obtain the required power from the fan
curve by locating the corresponding fan pressure. Note that, at all these
operating points the speed is assumed constant. This power is multiplied by
the fraction of the total time, for which the fan operates at this point. These
"weighted horse powers" are then summed to produce an average horsepower
that represents the average energy consumption of the fan.

20.6 Variable Speed Drive

To assess the energy savings, similar calculations are to be carried out to


obtain an average horsepower for variable speed operation. The fan curve
does not directly show the operating characteristics at varying speeds.
However, these can be obtained using the laws of variation of pressure and
flow with speed.
From the fan curve, 100% flow, (say Q1) at 100% speed, (say N1) requires 35
HP. Now since, Q2/Q1 = N2/N1, the new value of speed N2 required to
establish Q2 can be obtained easily. This value of N2, substituted into the
power formula, (P2/ P1) = (N2/N1)3 would then yield the new value of P2
needed to establish Q2 at speed N2. When Q1 = 100% and W1 = 35 HP, the
values of W2 for various values of Q2 are shown below.

These calculated values do match the points available on the fan curve. Now
it is possible to calculate the average horsepower.

150
Session 20: Industrial controllers - selection of hardware

Comparing the above figure with that calculated for the outlet damper method
indicates the difference in energy consumption. The variable speed method
requires less than half the energy of the outlet damper method (based on the
typical duty cycle).
As an example of the cost difference between these methods, let's assume the
system operates twenty-four hours per day (730 hours per month), and the
cost of electricity is Rs. 2.00 per kilowatt-hour.
The cost of electricity is determined in terms of the energy in kilowatt hour
per month.

There is over a 20,000 Rs. per month savings available by using the variable
speed method in-stead of the outlet damper method.
This example only takes into account the operation of the fan. In practice the
motor efficiency and the drive efficiency should also be taken into account.
It is impractical to list or chart the motor and drive efficiencies in this lesson
for all possible load conditions that could occur. However, since the same
motor would be used in both examples shown here, the difference in motor
efficiencies would be minimized and not significantly affect the results
shown.

20.7 Pumps: Characteristics and Operation

Pumps are generally grouped into two broad categories, positive displacement
pumps and centrifugal pumps. Positive displacement pumps use mechanical
means to vary the size (or move), the fluid chamber to cause the fluid to flow.
Centrifugal pumps impart a momentum in the fluid by rotating impellers
immersed in the fluid. The momentum produces an increase in pressure or

151
Session 20: Industrial controllers - selection of hardware

flow at the pump outlet. The vast majority of pumps used today are of the
centrifugal type. Only centrifugal pumps are discussed here.
Fig. 20.8 again shows two independent curves. One is the pump curve which
is solely a function of the physical characteristics of the pump. The other
curve is the system curve. This curve is completely dependent on the size of
pipe, the length of pipe, the number and location of elbows, etc.
Where these two curves intersect is called the natural operating point. That is
where the pump pressure matches the system losses, and everything is
balanced. Note that this balance only occurs at one point (or at least should
for stable system operation). If that point does occur at or at least come close
to the desired point of operation, then the system is acceptable. If it does not
come close enough then either the pump or the system physical arrangement
has to be altered to correct to the desired point.
The following laws govern, similar to the case of fans, the operation of
centrifugal pump characteristics at various pump speeds.

Figure 0.8 Pump and system curves

152
Session 20: Industrial controllers - selection of hardware

20.8 Flow Control

Similar to the case of fans, there are two main methods of flow control in
pumps, namely, use of a control or throttling valve and variable speed control
of the pump. In view of the similarity, these are described in brief below.

20.9 Throttling

Consider a throttling system shown in Fig. 20.9. Two conditions of the system
curve are shown, one with the valve open and the other with the valve
throttled or partially closed. The result is that when the flow in the system is
decreased, the pump head increases.

20.10 Variable Speed Drive

In comparison, the variable speed method takes advantage of the change in


pump characteristics that occur when the pump impeller speed is changed.
The new pump characteristics can be predicted from the laws stated earlier.
In this method is that the pump head decreases as the flow is decreased. Fig.
20.10 on the following page gives an example of the system controlled from
a variable speed pump.

153
Session 20: Industrial controllers - selection of hardware

Figure 0.9 Throttling system for pump

154
Session 20: Industrial controllers - selection of hardware

Figure 0.10 Throttling system - Variable speed pump control

20.11 Static Head

Static head is the pressure required to overcome an elevation change in the


system. To get water from the base to a spout at the top of a vessel would
cause a static head on this pump. A system with a static head does change the
system curve and the horsepower requirements will change from that shown
previously. Fig. 20.11 shows the system curves for systems with different
static heads. Fig. 20.12 shows the horsepower requirements for each system.
The system corresponding to the curve A is without static head. The system
corresponding to the curve B requires a static head. The system corresponding
to the curve C requires a double the static head and still has the same
operations with the pump. Fig. 20.12 shows the power curves for the variable
speed operations for the three systems A, B, and C. Curve D corresponds to
throttle control. The dynamic operation of the throttling system does not

155
Session 20: Industrial controllers - selection of hardware

change with static head. The static head does part of the work for the throttling
valve. However, note that the horsepower requirement for this method
remains above that of the variable speed method.

Figure 0.11 Pump operation for a system with varying static head

156
Session 20: Industrial controllers - selection of hardware

Figure 0.12 Power - Flow characteristics for pumps

20.12 Introduction to Electrical Actuators

Motion control and drives are very important actuation subsystems for
Process and Discrete Manufacturing Industries. The motion control systems
are critical for product quality in discrete manufacturing, while variable speed
drives lead to significant energy savings in common industrial loads such as
pumps, compressors and fans.
Variable speed drives can be categorized into Adjustable Speed Drives and
Servo Drives. In adjustable speed drives the speed set points are changed
relatively infrequently, in response to changes in process operating pints.
Therefore, transient response of the drive system is not of consequence. In
servo drives, as in CNC machines, set points change constantly (as in
contouring systems).
While ac motors have replaced dc motors in most of the adjustable speed drive
applications. For servo drive applications, dc motors are still used, although
they are also being replaced by BLDC motors. In this lesson we discuss speed

157
Session 20: Industrial controllers - selection of hardware

and position control with dc motors. The next lesson discusses adjustable
speed drives using induction motors, while Lesson 35 discusses BLDC servo
drives.
20.12.1 DC Servomotors

Direct current servomotors are used as feed actuators in many machine tool
industries. These motors are generally of the permanent magnet (PM) type in
which the stator magnetic flux remains essentially constant at all levels of the
armature current and the speed-torque relationship is linear.
Direct current servomotors have a high peak torque for quick accelerations.
A cross-sectional view of a typical permanent magnet dc servomotor is shown
in Fig. 20.13.
20.12.2Mechanical Construction

Stator consists of Yoke and Poles and provides mechanical support to the
machine. The yoke provides a highly permeable path for magnetic flux. It is
made of cast steel. Field poles are made of thin laminations stacked together.
This is done to minimize the magnetic losses due to the armature flux. The
cross-sectional area of the field pole is less than that of the pole shoe.
The pole shoe helps to establish a uniform flux density around the air gap.
Field winding: DC excitations are provided to field windings wound on pole
shoes to create electromagnetic poles of alternating polarity. Depending on
the connections of field windings DC motors may be termed as shunt, series,
compound or separately excited. Shunt motors have field winding connected
in parallel with the armature winding while series motors have the field
winding connected in series with the armature winding. A compound dc
machine may have both field windings wound on the same pole. Smaller DC
servomotors generally have permanent magnets for poles.
Armature: The rotating part of a dc machine is called the armature. The
length of the armature is usually the same as that of the pole. It is made of
thin, highly permeable, and electrically insulated circular steel laminations
that are stacked together and rigidly mounted on the shaft. The laminations
have axial slots on their periphery to house the armature coils. Insulated
copper wires are typically used for the armature coils to achieve a low
armature resistance.
Commutator: The commutator is made of wedge – shaped hard-drawn
copper segments. Sheets of mica insulate the copper segments from one
another. One end of the armature coil is electrically connected to a copper
segment of the commutator. The commutators rotate with the armature
keeping a sliding contact with the brushes, which remain stationary.

158
Session 20: Industrial controllers - selection of hardware

Brushes: Brushes are held in a fixed position by means of brush holders and
remain in sliding contact with the commutator segments. An adjustable spring
inside the brush holder exerts a constant pressure on the brush in order to
maintain a proper contact between the brush and the commutator. The brushes
are connected to the armature terminals of the machine. The material for the
brush is normally carbon or carbon-graphite.

Figure 0.13 Cross-section of a permanent magnet-excited dc servomotor

Figure 0.14 Diagrammatic sketch of a D.C. machine.

For adjustable speed applications, the induction machine, particularly the


cage rotor type, is most commonly used in industry. These machines are very
cheap and rugged and are available from fractional horsepower to multi-

159
Session 20: Industrial controllers - selection of hardware

megawatt capacity, both in single -phase and poly-phase versions. In this


lesson, the basic fundamentals of construction, operation and speed control
for induction motors are presented.
In cage rotor type induction motors the rotor has a squirrel cage-like structure
with shorted end rings. The stator has a three-phase winding and embedded
in slots distributed sinusoidally. It can be shown that a sinusoidal three-phase
balanced set of ac voltages applied to the three-phase stator windings creates
a magnetic field rotating at angular speed ωs = 4πfs /P where fs is the supply
frequency in Hz and P is the number of stator poles.
If the rotor is rotating at an angular speed ωr , i.e. at an angular speed (ωs - ωr)
with respect to the rotating stator mmf, its conductors will be subjected to a
sweeping magnetic field, inducing voltages and current and mmf in the short-
circuited rotor bars at a frequency (ωs - ωr)P/4π, known as the slip speed. The
interaction of air gap flux and rotor mmf produces torque. The per unit slip ω
is defined as
A. = ωs − ωr

20.13 Equivalent Circuit

Figure 20.15 shows the equivalent circuit with respect to the stator, where Ir
is given as

and parameters Rr and Llr stand for the resistance and inductance parameters
referred to the stator.
Since the output power is the product of developed electrical torque Te and
speed ωm, Te can be expressed as

160
Session 20: Industrial controllers - selection of hardware

Figure 0.15 Approximate per phase equivalent circuit

In Figure 20.15, the magnitude of the rotor current Ir can be written as

This yields that,

20.14 Torque-Speed Curve

The torque Te can be calculated as a function of slip S from the equation 1.


Figure 20.16 shows the torque-speed (ωr /ωe = 1− S) curve. The various
operating zones in the figure can be defined as
plugging (1.0 < S < 2.0), motoring (0 < S < 1.0), and regenerating (S< 0). In
the normal motoring region, Te = 0 at S = 0, and as S increases (i.e., speed
decreases), Te increases in a quasi-linear curve until breakdown, or maximum
torque Tem is reached. Beyond this point, Te decreases with the increase in S.

161
Session 20: Industrial controllers - selection of hardware

Figure 0.16 Torque-speed curve of induction motor

In the regenerating region, as the name indicates, the machine acts as a


generator. The rotor moves at super synchronous speed in the same direction
as that of the air gap flux so that the slip becomes negative, creating negative,
or regenerating torque (Teg). With a variable-frequency power supply, the
machine stator frequency can be controlled to be lower than the rotor speed
(ωe < ωr) to obtain a regenerative braking effect.

20.15 Speed Control

From the torque speed characteristics in Fig. 20.16, it can be seen that at any
rotor speed the magnitude and/or frequency of the supply voltage can be
controlled for obtaining a desired torque. The three possible modes of speed
control are discussed below.

20.16 Variable-Voltage, Constant-Frequency Operation

A simple method of controlling speed in a cage-type induction motor is by


varying the stator voltage at constant supply frequency. Stator voltage control
is also used for “soft start” to limit the stator current during periods of low
rotor speeds.
Figure 20.17 shows the torque-speed curves with variable stator voltage.
Often, low-power motor drives use this type of speed control due to the
simplicity of the drive circuit.

162
Session 20: Industrial controllers - selection of hardware

20.17 Variable-Frequency Operation

Figure 20.18 shows the torque-speed curve, if the stator supply frequency is
increased with constant supply voltage, where ωe is the base angular speed.
Note, however, that beyond the rated frequency ωb , there is fall in maximum
torque developed, while the speed rises.

Figure 0.17 Torque-speed curves at variable supply voltage

Figure 0.18 Torque-speed curves at variable stator frequency

Variable voltage variable frequency operation with constant V/f

Figure 20.19 shows the torque-speed curves for constant V/f operation. Note
that the maximum torque Tem remains approximately constant. Since the air

163
Session 20: Industrial controllers - selection of hardware

gap flux of the machine is kept at the rated value, the torque per ampere is
high. Therefore, fast variations in acceleration can be achieved by stator
current control. Since the supply frequency is lowered at low speeds, the
machine operates at low slip always, so the energy efficiency does not suffer.

Figure 0.19 Torque-speed curves at constant V/f

Majority of industrial variable-speed ac drives operate with a variable voltage


variable frequency power supply.

20.18 Variable Voltage Variable Frequency Supply

Figure 0.20 PWM inverter fed induction motor drive

The variable voltage variable frequency supply for an induction motor drive
consists of a uncontrolled (Fig. 20.20) or controlled rectifier (Fig. 20.21)
(fixed voltage fixed frequency ac to variable/fixed voltage dc) and an inverter
(dc to variable voltage/variable frequency ac). If rectification is uncontrolled,
as in diode rectifiers, the voltage and frequency can both be controlled in a
pulse-width-modulated (PWM) inverter as shown in Figure 20.20. The dc link
filter consists of a capacitor to keep the input voltage to the inverter stable
and ripple-free.

164
Session 20: Industrial controllers - selection of hardware

Figure 0.21 Variable-voltage, variable-frequency (VVVF) induction motor drive

On the other hand, a controlled rectifier can be used to vary the dc link
voltage, while a square wave inverter can be used to change the frequency.
This configuration is shown in Fig. 20.21.

Figure 0.22 Regenerative voltage-source inverter-fed ac drive.

To recover the regenerative energy in the dc link, an antiparallel-controlled


rectifier is required to handle the regenerative energy, as shown in Fig. 20.22.
The above are basically controlled voltage sources. These can however be
operated as controlled current sources by incorporating an outer current
feedback loop as shown in Fig. 20.23.

165
Session 20: Industrial controllers - selection of hardware

Figure 0.23 Current-controlled voltage-source-driven induction motor drive

20.19 Voltage-source Inverter-driven Induction Motor

A three-phase variable frequency inverter supplying an induction motor is


shown in Figure 20.24. The power devices are assumed to be ideal switches.
There are two major types of switching schemes for the inverters, namely,
square wave switching and PWM switching.

20.20 Square wave inverters

The gating signals and the resulting line voltages for square wave switching
are shown in Figure 20.25. The phase voltages are derived from the line
voltages assuming a balanced three-phase system.

166
Session 20: Industrial controllers - selection of hardware

Figure 0.24 A schematic of the generic inverter-fed induction motor drive.

167
Session 20: Industrial controllers - selection of hardware

Figure 0.25 Inverter gate (base) signals and line-and phase-voltage waveforms

The square wave inverter control is simple and the switching frequency and
consequently, switching losses are low. However, significant energies of the
lower order harmonics and large distortions in current wave require bulky
low-pass filters. Moreover, this scheme can only achieve frequency control.
For voltage control a controlled rectifier is needed, which offsets some of the
cost advantages of the simple inverter.

168
Session 20: Industrial controllers - selection of hardware

20.21 PWM Principle

It is possible to control the output voltage and frequency of the PWM inverter
simultaneously, as well as optimize the harmonics by performing multiple
switching within the inverter major cycle which determines frequency. For
example, the fundamental voltage for a square wave has the maximum
amplitude (4Vd/π) but by intermediate switching, as shown in Fig. 20.26, the
magnitude can be reduced. This determines the principle of simultaneous
voltage control by PWM. Different possible strategies for PWM switching
exist. They have different harmonic contents. In the following only a
sinusoidal PWM is discussed.

Figure 0.26 PWM principle to control output voltage.

20.22 Sinusoidal PWM

Figure 20.27(a) explains the general principle of SPWM, where an isosceles


triangle carrier wave of frequency fc is compared with the sinusoidal
modulating wave of fundamental frequency f, and the points of intersection
determine the switching points of power devices. For example, for phase-a,
voltage (Va0) is obtained by switching ON Q1 and Q4 of half-bridge inverter,
as shown in the figure 20.27. Assuming that f << fc, the pulse widths of va0
wave vary in a sinusoidal manner. Thus, the fundamental frequency is
controlled by varying f and its amplitude is proportional to the command
modulating voltage. The Fourier analysis of the va0 wave can be shown to be
of the form:
va0 = 0.5mVd sin (2πft + φ) + harmonic frequency terms

169
Session 20: Industrial controllers - selection of hardware

Figure 0.27(b) Line voltage waves of PWM inverter

where m = modulation index and φ = phase shift of output, depending on the


position of the modulating wave. The modulation index m is defined as

where Vp = peak value of the modulating wave and VT = peak value of the
carrier wave. Ideally, m can be varied between 0 and 1 to give a linear relation
between the modulating and output wave. The inverter basically acts as a
linear amplifier. The line voltage waveform is shown in Fig. 20.27(b).

20.23 Implementation of a constant voltage/constant


frequency strategy

An implementation of the constant volts/Hz control strategy for the inverter-


fed induction motor in close loop is shown in Figure 20.28. The frequency
command fs* is enforced in the inverter and the corresponding dc link voltage
is controlled through the front-end converter.

170
Session 20: Industrial controllers - selection of hardware

Figure 0.28 Closed-loop induction motor drive with constant volts/Hz control strategy.

An outer speed PI control loop in the induction motor drive, shown in Figure
20.28 computes the frequency and voltage set points for the inverter and the
converter respectively. The limiter ensures that the slip-speed command is
within the maximum allowable slip speed of the induction motor. The slip-
speed command is added to electrical rotor speed to obtain the stator
frequency command. Thereafter, the stator frequency command is processed
in an open-loop drive. Kdc is the constant of proportionality between the dc
load voltage and the stator frequency.

171
Session 21: Overview of DCS

Session 21
Overview of DCS
Content
Introduction ............................................................................................. 173
21.1 Historical Review ........................................................................ 173
21.2 Modes of Computer control ......................................................... 174
21.3 Computer Control Networks ........................................................ 175
21.3.1 Small Computer Network ..................................................... 175
21.3.2 Programmable Logic Controllers ......................................... 176
21.3.3 Commercial Distributed Control Systems ............................ 177
21.4 Description of the DCS elements ................................................. 179
21.5 The advantages of DCS systems .................................................. 180
21.6 Important consideration regarding DCS systems. ....................... 180
21.6.1 The control loops .................................................................. 180
21.6.2 The basic units of a digital computer ................................... 181
21.6.3 Digital control software ........................................................ 185
21.7 Conclusion ................................................................................... 186

172
Session 21: Overview of DCS

Introduction

Generally, the concept of automatic control includes accomplishing two


major operations: the transmission of signals (information flow) back and
forth and the calculation of control actions (decision making). Carrying out
these operations in real plant requires a set of hardware and instrumentation
that serve as the platform for these tasks. Distributed control system (DCS) is
the most modern control platform. It stands as the infrastructure not only for
all advanced control strategies but also for the lowliest control system. The
idea of control infrastructure is old. The next section discusses how the
control platform progressed through time to follow the advancement in
control algorithms and instrumentation technologies.

21.1 Historical Review

To fully appreciate and select the current status of affairs in industrial practice
it is of interest to understand the historical perspective on the evolution of
control systems implementation philosophy and hardware elements. The
evolution concerns the heart of any control system which is how information
flow and decision making advanced.
1. Pneumatic Implementation: In the early implementation of automatic
control systems, information flow was accomplished by pneumatic
transmission, and computation was done by mechanical devices using
bellows, spring etc. The pneumatic controller has high margin for
safety since they are explosion proof. However, there are two
fundamental problems associated with pneumatic implementation:
 Transmission: the signals transmitted pneumatically (via air
pressure) are slow responding and susceptible to interference.
 Calculation: Mechanical computation devices must be
relatively simple and tend to wear out quickly.
2. Electron analog implementation: Electrons are used as the medium of
transmission in his type of implementation mode. Computation
devices are still the same as before. Electrical signals to pressure
signals converter (E/P transducers) and vice versa (P/E transducers)
are used to communicate between the mechanical devices and electron
flow. The primary problems associated with electronic analog
implementation are:
 Transmission: analog signals are susceptible to contamination
from stray fields, and signal quality tends to degrade over long
transmission line.

173
Session 21: Overview of DCS

Calculation: the type of computations possible with electronic


analog devices is still limited.
3. Digital Implementation: the transmission medium is still electron, but
the signals are transmitted as binary numbers. Such digital signals are
far less sensitive to noise. The computational devices are digital
computers. Digital computers are more flexible because they are
programmable. They are more versatile because there is virtually no
limitation to the complexity of the computations it can carry out.
Moreover, it is possible to carry out computation with a single
computing device, or with a network of such devices.
Many field sensors naturally produce analog voltage or current signals. For
this reason, transducers that convert analog signals to digital signals (A/D)
and vice versa (D/A) are used as interface between the analog and digital
elements of the modern control system. With the development of digital
implementation systems, which DCS are based on, it is possible to implement
many sophisticated control strategies on a very fast timescale.

21.2 Modes of Computer control

Computer control is usually carried out in two modes: supervisory control or


direct digital control. Both are shown in Figure 21.1. Supervisory control
involves resetting the set point for a local controller according to some
computer calculation. Direct digital control, by contrast, requires that all
control actions be carried out by the digital computer. Both modes are in wide
use in industrial applications, and both allow incorporating modern control
technologies. Measurements are transmitted to computer and control signals
are sent from computer to control valves at specific time interval known as
sampling time. The latter should be chosen with care.

Figure 0.1 Computer control modes.

174
Session 21: Overview of DCS

21.3 Computer Control Networks

The computer control network performs a wide variety of tasks: data


acquisition, servicing of video display units in various laboratories and
control rooms, data logging from analytical laboratories, control of plant
processes or pilot plant, etc. The computer network can be as simple as an
array of inexpensive PC's or it could be a large commercial distributed control
system (DCS).
21.3.1 Small Computer Network

In small processes such as laboratory prototype or pilot plants, the number of


control loops is relatively small. An inexpensive and straightforward way to
deal with the systems is to configure a network of personal computers for data
acquisition and control. An example configuration of a PC network control
system is depicted in Figure 21.2 the network consists of a main computer
linked directly to the process in two-way channels. Other local computers are
linked to the main computer and are also connected to the process through
one-way or two-way links. Some of these local computers can be
interconnected. Each of the local computers has a video display and a specific
function. For example, some local computers are dedicated for data
acquisition only, some for local control only and some other for both data
acquisition and local control. The main computer could have a multiple
display.
All computers operate with a multitasking operating system. They would be
normally configured with local memory, local disk storage, and often have
shared disk storage with a server.

175
Session 21: Overview of DCS

Figure 0.2 PC network

21.3.2 Programmable Logic Controllers

Programmable logic controller (PLC) is another type of digital technology


used in process control. It is exclusively specialized for non-continuous
systems such as batch processes or that contains equipment or control
elements that operate discontinuously. It can also be used for many instants
where interlocks are required; for example, a flow control loop cannot be
actuated unless a pump has been turned on. Similarly, during startup or
shutdown of continuous processes many elements must be correctly
sequenced; that is, upstream flows and levels must be established before
downstream pumps can be turned on.
The PLC concept is based on designing a sequence of logical decisions to
implement the control for the above-mentioned cases. Such a system uses a
special purpose computer called programmable logic controllers because the
computer is programmed to execute the desired Boolean logic and to
implement the desired sequencing. In this case, the inputs to the computer are
a set of relay contacts representing the state of various process elements.
Various operator inputs are also provided. The outputs from the computer are
a set of relays energized (activated) by the computer that can turn a pump on
or off, activate lights on a display panel, operate solenoid valve, and so on.

176
Session 21: Overview of DCS

PLCs can handle thousands of digital I/O and hundreds of analog I/O and
continuous PID control. PLC has many features besides the digital system
capabilities. However, PLC lacks the flexibility for expansion and
reconfiguration. The operator interface in PLC systems is also limited.
Moreover, programming PLC by a higher-level language and/or capability of
implementing advanced control algorithms is also limited.
PLCs are not typical in a traditional process plant, but there some operations,
such as sequencing, and interlock operations, that can use the powerful
capabilities of a PLC. They are also quite frequently a cost-effective
alternative to DCSs (discussed next) where sophisticated process control
strategies are not needed. Nevertheless, PLCs and DCSs can be combined in
a hybrid system where PLC connected through link to a controller or
connected directly to network.
21.3.3 Commercial Distributed Control Systems

In more complex pilot plants and full-scale plants, the control loops are of the
order of hundreds. For such large processes, the commercial distributed
control system is more appropriate. There are many vendors who provide
these DCS systems such as Baily, Foxboro, Honeywell, Rosemont,
Yokogawa, etc. In the following only an overview of the role of DCS is
outlined.
Conceptually, the DCS is similar to the simple PC network. However, there
are some differences. First, the hardware and software of the DCS is made
more flexible, i.e. easy to modify and configure, and to be able to handle a
large number of loops. Secondly, the modern DCS are equipped with
optimization, high-performance model-building and control software as
options. Therefore, an imaginative engineer who has theoretical background
on modern control systems can quickly configure the DCS network to
implement high performance controllers.
A schematic of the DCS network is shown in figure 21.3. Basically, various
parts of the plant processes and several parts of the DCS network elements
are connected to each other’s via the data highway (fieldbus). Although figure
3 shows one data highway, in practice there could be several levels of data
highways. A large number of local data acquisition, video display and
computers can be found distributed around the plant. They all communicate
to each other’s through the data highway. These distributed elements may
vary in their responsibilities. For example, those closest to the process handle
high raw data traffic to the local computers while those farther away from the
process deal only with processed data but for a wider audience.
The data highway is thus the backbone for the DCS system. It provides
information to the multi-displays on various operator control panels sends

177
Session 21: Overview of DCS

new data and retrieve historical data from archival storage and serves as a
data link between the main control computer and other parts of the network.
On the top of the hierarchy, a supervisory (host) computer is set. The host
computer is responsible for performing many higher-level functions. These
could include optimization of the process operation over varying time
horizons (days, weeks, or months), carrying out special control procedure
such as plant start up or product grade transition, and providing feedback on
economic performance.

Figure 0.3 The elements of a commercial distributed control system network

A DCS is then a powerful tool for any large commercial plant. The engineer
or operator can immediately utilize such a system to:
 Access a large amount of current information from the data highway.
 See trends of past process conditions by calling archival data storage.
 Readily install new on-line measurements together with local
computers for data acquisition and then use the new data immediately
for controlling all loops of the process.
 Alternate quickly among standard control strategies and readjust
controller parameters in software.
 A sight full engineer can use the flexibility of the framework to
implement his latest controller design ideas on the host computer or
on the main control computer.

178
Session 21: Overview of DCS

In the common DCS architecture, the microcomputer attached to the process


are known as front-end computers and are usually less sophisticated
equipment employed for low level functions. Typically, such equipment
would acquire process data from the measuring devices and convert them to
standard engineering units. The results at this level are passed upward to the
larger computers that are responsible for more complex operations. These
upper-level computers can be programmed to perform more advanced
calculations.

21.4 Description of the DCS elements

The typical DCS system shown in Figure 3 can consists of one or more of the
following elements:
 Local Control Unit (LCU). This is denoted as local computer in Figure
21.3. This unit can handle 8 to 16 individual PID loops, with 16 to 32
analog input lines, 8 to 16 analog output signals and some a limited
number of digital inputs and outputs.
 Data Acquisition Unit. This unit may contain 2 to 16 times as many
analog input/outputs channels as the LCU. Digital (discrete) and
analog I/O can be handled. Typically, no control functions are
available.
 Batch Sequencing Unit. Typically, this unit contains a number of
external events, timing counters, arbitrary function generators, and
internal logic.
 Local Display. This device usually provides analog display stations,
analog trend recorder, and sometime video display for readout.
 Bulk Memory Unit. This unit is used to store and recall process data.
Usually mass storage disks or magnetic tape are used.
 General Purpose Computer. This unit is programmed by a customer
or third party to perform sophisticated functions such as optimization,
advance control, expert system, etc.
 Central Operator Display. This unit typically will contain one or more
consoles for operator communication with the system, and multiple
video color graphics display units.
 Data Highway. A serial digital data transmission link connecting all
other components in the system may consist of coaxial cable. Most
commercial DCS allow for redundant data highway to reduce the risk
of data loss.
 Local area Network (LAN). Many manufacturers supply a port device
to allow connection to remote devices through a standard local area
network.

179
Session 21: Overview of DCS

21.5 The advantages of DCS systems

The major advantages of functional hardware distribution are flexibility in


system design, ease of expansion, reliability, and ease of maintenance. A big
advantage compared to a single-computer system is that the user can start out
at a low level of investment. Another obvious advantage of this type of
distributed architecture is that complete loss of the data highway will not
cause complete loss of system capability. Often local units can continue
operation with no significant loss of function over moderate or extended
periods of time.
Moreover, the DCS network allows different modes of control
implementation such as manual/auto/supervisory/computer operation for
each local control loop. In the manual mode, the operator manipulates the
final control element directly. In the auto mode, the final control element is
manipulated automatically through a low-level controller usually a PID. The
set point for this control loop is entered by the operator. In the supervisory
mode, an advanced digital controller is placed on the top of the low-level
controller (Figure 21.1). The advanced controller sets the set point for the
low-level controller. The set point for the advanced controller can be set either
by the operator or a steady state optimization. In the computer mode, the
control system operates in the direct digital mode shown in Figure 21.1.
One of the main goals of using DCS system is allowing the implementation
of digital control algorithms. The benefit of digital control application can
include:
 Digital systems are more precise.
 Digital systems are more flexible. This means that control algorithms
can be changed, and control configuration can be modified without
having rewired the system.
 Digital system cost less to install and maintain.
 Digital data in electronic files are easier to deal with. Operating results
can be printed out, displayed on color terminals, stored in highly
compressed form.

21.6 Important consideration regarding DCS systems.

21.6.1 The control loops

The control loop remains the same as the conventional feedback control loop,
but with the addition of some digital components. Figure 21.4 shows a typical
single direct digital control-loop. Digital computer is used to take care of all

180
Session 21: Overview of DCS

control calculations. Since the computer is a digital (binary) machine and the
information coming out of the process in an analog for, they had to be
digitized before entering the computer. Similarly, the commands issued by
the computer are in binary, they should be converted to analog (continuous)
signals before implemented on the final control element. This is the
philosophy behind installing the A/D and D/A converter on the control loop.
Signal conditioning is used to remove noise and smooth transmitted data.
Amplifier can also be used to scale the transmitted data if the signals gain is
small. Signal generators (transducer) are used to convert the process
measurements into analog signals. The most common analog signals used are
0-5 Volts and 4 -20mA. Some of the process variables are represented in
millivolts such as those form thermocouples, strain gauges, pH meters, etc.
Multiplexers are often used to switch selectively a number of analog signals.

Figure 0.4 The component of a digital control loop

All instrumentation hardware (1-9) is designed, selected, installed and


maintained by an instrumentation engineer. The computer is responsible for
making decisions (control actions). It can host a simple control algorithm or
a more advanced one. The latter can either purchased from a commercial
vendor or developed in-house by a process/control engineer (See section
21.6.3). The terminal is the main operator interface with the control system.
The operator can use the terminal to monitor the control performance, adjust
the set points and tune the controller parameters.

21.6.2 The basic units of a digital computer

The digital computer used in DCS systems is a regular microcomputer with


the simplified components shown in Figure 21.5. It includes the arithmetic
unit, which carry out arithmetic and logic commands. The control unit is the
part of the computer responsible for reading program statements from
memory, interpreting them, and causing the appropriate action to take place.

181
Session 21: Overview of DCS

The memory unit is used for storing data and programs. Typical computers
have Random-Access-Memory (RAM) and Read-Only-Memory (ROM). The
final unit is the input/output interface. The I/O interface is necessary for the
computer to communicate with the external world. This interface is the most
important in the control implementation. The process information is fed to the
computer through the I/O interface and the commands made by the computer
are sent to the final control element through the I/O interface.

Figure 0.5 A general purpose digital computer

In control application, the design of the I/O devices and interface is an


important part of the overall digital control philosophy. The following
subsections discuss some of these issues.

Information presentation and accuracy.


The modern digital computer is a binary machine. This means that internal
data and arithmetic and logic must be represented in binary format. Therefore,
all process information flowing into and out of the computer must also be
converted to that form. Traditionally, the computer memory location is made
up of a collection of bits called a word (register). A typical computer word

182
Session 21: Overview of DCS

consists of 16 bits (new computers carry 32-bits word). Consider, for


example, the following machine number:
16-bit computer word: 1011001100010100
The base for this word is 2. Therefore, each bit has the following decimal
equivalent:

Each single bit consists of binary elements, i.e. 0 or 1. Therefore, any integer
number from 0 to 7 can be represented by a three-bit word as follows:

In this case, analog process information should be first changed to voltage or


current as mentioned earlier. Then it is converted to digital form by an
electronic device called analog to digital converter (A/D). Similarly, digital
information is converted to analog form (Voltage or current) by a digital to
analog converter (D/A). The accuracy (resolution) of such digitization
process depends on the number of bits used to for representation. The degree
of resolution is given by:

where m is the number of bits in the representation. Obviously, higher


resolution can be obtained at higher number of bits. For example, consider a
sensor sends an analog signal between 0 and 1 volt and assume only a three-
bit computer word is available, and then the full range of the signal can be
recognized as follows:
This means that eight specific values for the analog signal can be exactly
recognized. Any values interim values will be approximated according to the
covered analog range shown in the fourth column of Table 21.1. In this way,
the error in resolution is said to be in the order of 1/14. Assume now a 4-bit
word is available for the same analog signal. Then the full range will be

183
Session 21: Overview of DCS

divided over 15 points, i.e. sixteen equally spaced values between 0 and 1 can
be recognized, and the error in resolution will be in the order of 1/30. Most
current control-oriented ADC and DAC utilize a 10 to 12-bit representation
(resolution better than 0.1%). Since most micro- and minicomputers utilize at
least a 16-bit word, the value of an analog variable can be stored in one
memory word. New computers are capable of using 32-bit word. Therefore,
new generation of ADC and DAC with higher resolution (up to 16 to 20 bit)
are emerging.
Table 0.1 Representation of a 0 to 1volt analog variable using a 3-bit word

Process interface
A typical plant with large number of variables contains abundance of process
information (data). Therefore, process information can be classified under
several classes (groups). Then a specialized device can be used to transfer all
information of a specific class into and out of the computer. This way
designing different I/O interface for each I/O device to be connected to the
computer is avoided. In fact, most process data can be grouped into four major
categories as listed in Table 21.2.
Table 0.2 Categories of process information

The digital input/output signals can be easily handled because the match the
computer representation format. The digital interface can be designed to have
multiple registers, each with the same number of bits as the basic computer

184
Session 21: Overview of DCS

word. In this way a full word of 16-bit can represent 16 separate process
binary variables and can be transmitted to the computer at one time and
stored. Each bit will determine the state of a specific process input lie. For
example, a state of 1 means the input is on and 0 means off or vice versa.
The generalized digital information usually uses binary coded decimal and
ranges from 0000 to 9999. Hence, a 16-bit register can be used as interface
device to transmit 4 digits of result because four-bits are necessary to
represent one digit (0-9) of binary coded decimal.
In the input pulse information case, a single register (interface device) is
designed for each input line. The register ordinarily consists of pulse counter.
The accumulated pulses over a specified length of time are transferred to the
computer in binary or BCD count. The output pulse interface consists of a
device to generate a continuous train of pulses followed by a gate. The gate
is turned on and off by the computer.
The analog input information must be digitized by ADC before fed to the
computer. Since the process has a large number of analog sensing devices, a
multiplexer is used to switch selectively among various analog signals. The
main purpose of a multiplexer is to avoid the necessity of using a single ADC
for each input line. The DAC devise performs the reverse operation. Each
analog output line from the computer has its own dedicated DAC. The DAC
is designed such that it holds (freeze) a previous output signal until another
command is issued by the computer.
Timing
The control computer must be able to keep track of time (real time) in order
to be able to initiate data acquisition operations and calculate control outputs
or to initiate supervisory optimization on a desired schedule. Hence, all
control computers will contain at least one hardware timing device. The so-
called real-time clock represents one technique. This device is nothing more
than a pulse generator that interrupts the computer on a periodic basis and
identifies itself as interrupting device.
Operator interface.
The operator interface is generally a terminal upon which the operator can
communicate with the system. Such terminals usually permit displaying
graphical information. Often these display consoles are color terminals for
better visibility and recognition of key variables. The operator will use the
keyboard portion of the terminal to perform specific tasks. For example, the
operator can type in requests for information or displaying trends, changing
controller parameters or set points, adding new control loop, and so on.
21.6.3 Digital control software

185
Session 21: Overview of DCS

To make the best use of a DCS system, an advance control strategy or


supervisory optimization can be incorporated in the main host computer. In
the past, computer control projects are written in assembly language, an
extremely tedious procedure. Nowadays most user software is written in
higher-level languages such as BASIC, FORTRAN, C etc. In many cases, the
user is able to utilize the template routines supplied by the vendor and is
required only to duplicate these routines and interconnect them to fit his own
application purposes. Another way is to write his own complete control
program and implement it.
Other software in the form of control-oriented programming languages is
supplied by the vendor of process control computers. A simpler approach for
the user is to utilize vendor-supplied firmware or software to avoid writing
programs. Currently, most DCS manufacturers develop their own advance
control and optimization software, which can include in the package as
options. Similarly, many control algorithm developers; (DMC, ASPEN, etc.)
design a special interface to allow incorporating their own control programs
into most of the commercial DCS network.

21.7 Conclusion

Digitally based control instrumentation represents a revolutionary change in


the process control paradigm. With digital systems the control engineer has
the opportunity to go beyond the narrow limitation of standard analog control
components to construct a system that is optimum for the information
processing and control requirements of large processes or even of entire
plants. This is why many industrial plants are updating their hardware and
instrumentation systems bearing in mind that the payout times for installation
and commissioning costs is as a low as three to four months.

186
Session 22: DCS integration with PLC and computers

Session 22
DCS integration with PLC and computers
Content
Introduction ............................................................................................. 188
22.1 Comparative Study of Different Techniques in Automation .......189
22.1.1 Relay and Contactor Logic ................................................... 189
22.1.2 Supervisory Control and Data Acquisition (SCADA) .........190
22.1.3 Distributed Control System (DCS) ....................................... 191
22.2 Programmable Logic Controllers (PLC)...................................... 192
22.2.1 The Program Memory .......................................................... 193
22.2.2 The Data Memory ................................................................. 193
22.2.3 The Input Devices................................................................. 193
22.2.4 The Output Devices .............................................................. 193
22.3 Elements of Ladder Logic and a Practical Example .................... 195
22.4 Conclusions .................................................................................. 198

187
Session 22: DCS integration with PLC and computers

Introduction

The programmable Logic Controller (PLC) is the central controlling unit in


the industry or a process. The effective operation of the process and safety
considerations if programmed appropriately can meet the required objectives.
The present technical paper briefly distinguishes the present automation
systems and the past technologies to identify and explore the capabilities of
PLCs for any process. The relay logic and contactor logics (RLC) were
practiced in the olden days which include the human intervention and errors.
The advent and application of microprocessors, microcontrollers and new
specific tools such as PLCs, Supervisory control and data acquisition
(SCADA) and Distributed control systems (DCS) have increased
productivity, accuracy, precision and efficiency. These systems reduced
human intervention and increased the flexibility in the process control. The
keyword automation clearly states that the working of a process or repetition
in an efficient manner by incorporating mechanisms and control sequences in
the proper order several times with acceptable deviations in the output of the
process.
The word automation derives from Greek roots meaning self-acting. Automation
helps to improve productivity by modernizing work and increasing its
efficiency. It is the process of having machines follow a predetermined
sequence of operations, with or without human intervention, in a manufacturing
process. The main objectives of automation are integration of manufacturing
processes and increased safety of the operator and the workpiece, so as to
increase productivity, improve quality and efficiency, and reduce labor cost as
well as human error. The basic requirements for automating a process are a
power source, suitable inputs and outputs, proper feedback and commands.
Automation has progressed in steps from relay and contactor logic through the
programmable logic controller (PLC) and supervisory control and data
acquisition (SCADA) to the distributed control system (DCS). The choice of a
specific method depends on the problem and the area of application. The gain in
output is clearly noticeable after the installation of automatic controls
incorporating suitable techniques. Currently, PLC-based automation is expanding
rapidly in all sectors because of its efficiency and profitability. The MODICON
084 was the world's first commercially produced PLC, made by Bedford Associates.


22.1 Comparative Study of Different Techniques in Automation

From the literature, it is clear that every process needs a specific set of
instructions and the necessary infrastructure for its effective operation.
Processes operated manually, or otherwise non-automated, yield lower
productivity and may not be energy efficient, but such practices were
inevitable until the advent of automation. Automation brought a revolution to
every field of application by incorporating technologies and machines that do
the work efficiently while reducing human intervention. The following sections
compare these methods.
22.1.1 Relay and Contactor Logic

Relay and contactor logic uses relays. A relay is an electromagnetic switch
that opens and closes contacts to control an electrical circuit, as shown in
Figure 22.1. The coil, energized from a suitable supply, controls the circuit.
A simple RC circuit is usually installed across the coil to absorb and
dissipate voltage spikes that could otherwise damage the coil winding.
Similarly, a contactor is an electrically controlled switch used for switching
a power circuit, activated by a control input. Unlike a circuit breaker, a
contactor is not intended to interrupt a short-circuit current. A contactor
generally consists of power contacts, auxiliary contacts, contact springs, etc.

Figure 22.1 Sample star-delta starter circuit

The electromagnet is the main driving element that closes the contacts. It is
generally enclosed in a housing made of insulating material. The major
drawbacks of relay and contactor logic are that failures need immediate
rectification and that the system has no redundancy.


22.1.2 Supervisory Control and Data Acquisition (SCADA)

SCADA is an acronym for Supervisory Control and Data Acquisition. It is
software, with the necessary hardware, to accomplish an assigned task. SCADA is
a computer-based system that gathers and analyzes data in real time. SCADA is
used to monitor and control plant or equipment in industries such as energy,
sugar, ceramics, cement, power, telecommunications, water and waste control,
and oil and gas refining and transportation. SCADA in turn needs PLCs, the
necessary control mechanisms and communication systems to fetch data from the
field and exercise effective control. The role of the operator in a SCADA
system is very important. A well-organized SCADA installation maximizes the
system benefits, and present-generation SCADA systems provide strategic control
features that optimize the operational benefits of the installation. Figure
22.2 shows a typical SCADA system with the components required for its proper
functioning. The individual sub-blocks are described in the following
paragraphs.

Figure 22.2 SCADA with field instrumentation

22.1.2.1 Remote Terminal Unit (RTU)


An RTU is a device installed in the field or at a remote location, where it
collects data, encodes it and transmits it to the central station or master.
The RTU may be a PLC gathering the required field data and the status of all
installed devices. Its other role is to receive commands from the master device
and implement them in the field as required. The mode of communication may be
wired or wireless, depending on the application.
22.1.2.2 Master Terminal Unit (MTU)
The MTU is the infrastructure installed at the master station for communicating
with the RTUs, PLCs, etc., through a human-machine interface (HMI/MMI) with
suitable software running on computer terminals in the control room. This unit
preprocesses the data it receives and stores it in the database. The main
program written to control the entire process scans, uses and updates this
data. The MTU communicates with the field via the RTUs.


22.1.2.3 Field Instrumentation


SCADA needs a great deal of instrumentation: the sensors, switches, actuators,
valves and other feedback devices that are connected to the equipment or
machines being controlled and monitored by the SCADA system. The SCADA RTU is a
PLC or a small industrial computer which allows the central SCADA system to
communicate with the field devices.
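The flow of data from field devices through RTUs to the MTU can be pictured with a small simulation. The following Python sketch is purely illustrative: the RTU and MTU classes, the tag names and the one-second scan interval are invented for this example, and a real installation would use an industrial protocol such as Modbus or DNP3 rather than in-memory calls.

import time
import random

class RTU:
    """Simulated remote terminal unit that samples local field instruments."""
    def __init__(self, name):
        self.name = name

    def read_field_data(self):
        # In a real RTU these values would come from wired sensor inputs.
        return {"level_percent": round(random.uniform(0, 100), 1),
                "pump_running": random.choice([True, False])}

class MTU:
    """Simulated master terminal unit that polls every RTU and archives the data."""
    def __init__(self, rtus):
        self.rtus = rtus
        self.database = []          # stand-in for the SCADA historian/database

    def scan_cycle(self):
        for rtu in self.rtus:
            record = {"rtu": rtu.name, "time": time.time(), **rtu.read_field_data()}
            self.database.append(record)   # preprocess/store for HMI displays
            print(record)

mtu = MTU([RTU("pump_station_1"), RTU("tank_farm_2")])
for _ in range(3):                  # three polling cycles, one per second
    mtu.scan_cycle()
    time.sleep(1)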
22.1.3 Distributed Control System (DCS)

A distributed control system (DCS) is a control system in which the controller
elements are not centralized but are distributed throughout the system for ease
of control and management. In such a system, each component or subsystem is
controlled by one or more controllers, and the entire set of controllers is
connected by a network for communication and monitoring. Typical examples are
large processing units, manufacturing systems and other dynamic systems. A
typical DCS is shown in Figure 22.3 and its subcomponents are described below.

Figure 22.3 Architecture of distributed control systems

22.1.3.1 Engineering Workstation


The Engineering Workstation (EWS) is used for project development and for the
configuration of graphics, logic, alarms, security, etc., for system-wide use
and operation.


22.1.3.2 Process Historical Archives


The Process Historical Archives (PHA) stores and retrieves historical data
collected by the FCU, micro FCU, or any other intelligent device in the
system.
22.1.3.3 Controllers to Monitor Field Devices
Monitoring of the field devices on a real-time basis is essential, and a Field
Control Unit (FCU) is used for this purpose. It is typically a PLC or an
industrial computer. The FCU executes sequential and regulatory logic and
directly scans the I/O of the field devices, depending on the FCU's
configuration.
22.1.3.4 Networking and Communications
For good control and operation, a widespread communication network, consisting
of fiber-optic links and local Ethernet using the TCP/IP networking protocol
with the necessary firewalls, needs to be installed and used to secure the data
and ensure safe operation.

22.2 Programmable Logic Controllers (PLC)

The Programmable Logic Controller (PLC) is known globally as the 'work horse'
of industrial automation. It was invented to replace the large sequential relay
circuits used for machine control. PLCs were first introduced in the late
1960s, when Bedford Associates (Bedford, MA) proposed a Modular Digital
Controller (MODICON) to a major US car manufacturer. The MODICON 084 was the
world's first commercially produced PLC, made by Bedford Associates [8].
Sequencer state machines were in use in the mid-1970s. The standardization of
communications among different PLCs was initiated in the 1980s and finalized in
1990. A large choice of PLCs is now available from several original equipment
manufacturers (OEMs), and for a specific requirement, function or input/output
count, one system from a particular manufacturer may stand out as superior or
more cost-effective than the others. Determining the most suitable PLC for an
automation task requires several basic considerations, namely the number of
inputs/outputs, digital/analog I/O, the memory capacity needed, the speed and
processing power required of the CPU, the instruction set, the manufacturer's
service support, etc. All these parameters are interdependent, and the choice
needs to be judicious. The PLC mainly consists of a central processing unit
(CPU), memory and I/O modules to handle input/output data. PLCs have the basic
structure shown in Figure 22.4. A PLC has four main units, discussed below.


22.2.1 The Program Memory

This is the memory space where the program instructions for the logical control
sequence are stored.
22.2.2 The Data Memory

The status of inputs/outputs such as switches and interlocks, previous values
of data, and other working data are stored here.
22.2.3 The Input Devices

These are the hardware/software inputs coming to the PLC from the industrial
process. The signals may come from sensors, switches, proximity detectors,
interlock settings, etc. These inputs trigger the sequences in the user program
that produce the required output or process action. For example, the
emergency-stop input is always monitored by the PLC program, and whenever this
switch is operated because of an incident or accident, the whole PLC-controlled
process is brought to a halt.

Figure 22.4 Block diagram of a PLC with I/O

22.2.4 The Output Devices

Solenoid valves, pneumatic actuators, motors, heaters, cooling-fan motors,
alarm indicators and buzzers are typical output devices. These devices drive
the industrial process. The alarm-indicator outputs, usually audio-visual, warn
the process operator of any unexpected event in the sequential process
currently running so that proper attention can be given. In order to program
the PLC, a programming unit is necessary; this may be a personal computer with
suitable software interfaced to the PLC. The programming unit is used to build,
test and edit the logical sequences that the PLC will execute repeatedly in the
real process. The IEC 61131-3 standard defines the


different programming methods for PLCs, namely Sequential Function Chart,
Function Block Diagram and Ladder Logic. Standards are needed for the exchange
of information or data among PLCs from different manufacturers. A PLC contains
both random-access memory (RAM) and read-only memory (ROM) in varying
capacities depending on the application and design. The PLC works by scanning
its inputs and, depending on their state, turning the relevant outputs ON or
OFF. The PLC continuously scans the user program, as presented in Figure 22.5.

Figure 22.5 Scanning sequences in a PLC

22.2.4.1 Input Scan


During the input scan, the current status of every input module is stored in
the input image table. This is done by monitoring every input device connected
to the input modules and recording its current state in the input image table.
When the PLC program runs, it checks the conditions of these inputs and
exercises its control via the outputs. An up-to-date input image is therefore
essential for the PLC.
22.2.4.2 Program Scan
Upon completion of the input scan, the CPU enters user-program execution, or
simply the program scan. Execution involves step-by-step processing of
instructions from the program's first instruction to its last. During
user-program execution the CPU continually updates the output image table, so
that the desired actions are performed whenever they are initiated by a
suitable condition.
22.2.4.3 Output Scan
During the program scan, the output modules themselves are not kept continually
up to date. Instead, the entire output image table is transferred to the output
modules during the output scan, which comes after the program execution. Thus,
the output devices are activated accordingly during the output scan. In
summary, the PLC checks each of its inputs to see whether its status is ON or
OFF, and the resulting action may be the activation of certain outputs. Changes
are made on the basis of the input status read during the first step and the
result of the program execution in the second step; after the third step, the
PLC returns to the beginning of the cycle and continually repeats these steps,
as shown in Figure 22.6.

Figure 22.6 Scan time in a PLC
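The repeating input-scan, program-scan, output-scan cycle can be summarised in a few lines of Python. This is only a conceptual sketch: the image tables, the single start/stop rung and the 10 ms scan delay are assumptions made for illustration and do not correspond to any particular PLC.

import time

input_image = {"start_pb": False, "stop_pb": False}    # input image table
output_image = {"motor": False}                        # output image table

def read_physical_inputs():
    # Input scan: a real PLC reads its input modules here; we fake the values.
    return {"start_pb": True, "stop_pb": False}

def write_physical_outputs(outputs):
    # Output scan: transfer the output image table to the output modules.
    print("outputs ->", outputs)

def user_program(inputs, outputs):
    # Program scan: a single start/stop seal-in rung expressed in Python.
    outputs["motor"] = (inputs["start_pb"] or outputs["motor"]) and not inputs["stop_pb"]

for _ in range(3):                           # the PLC repeats this cycle indefinitely
    input_image = read_physical_inputs()     # 1. input scan
    user_program(input_image, output_image)  # 2. program scan
    write_physical_outputs(output_image)     # 3. output scan
    time.sleep(0.01)                         # scan time of roughly 10 ms assumed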

Programmable logic control is very useful in production processes that undergo
a fixed, repetitive sequence of operations involving logical steps and
decisions. A PLC is used to control, time and regulate the sequence. Small PLCs
can control a medium-scale automatic machining station or chemical process,
while large PLC systems are capable of running the automation of an entire
factory. Industrial production processes follow a fixed sequence of actions
determined by the identified steps in the production assembly line, the
processing of raw materials, the formation of chemical or pharmaceutical
products in a chemical process, and so on.

22.3 Elements of Ladder Logic and a Practical Example

The basic components of a ladder logic program are the contact and the coil.
The contact is the name given to a general input, set by an external switch, an
internal logic state or a timer function. The coil is the name given to an
output and is used to drive relays, contactors, motors, solenoids and other
process actuators. Figure 22.7 shows a few of the contacts used in ladder
programming, namely normally open, normally closed, logic-high output and
logic-low output. A typical oil-tank level-control mechanism, shown in Figure
22.8, is taken as a study example for understanding the working of a PLC; the
chosen inputs and outputs for the example are listed in Table 22.1. Initially
the tank is empty, so the status of both the low-level and high-level probes is
logic High (no oil). Therefore, input 0000 is TRUE (logic High) and input 0001
is also TRUE (logic High). A sample ladder diagram is shown below.

Figure 22.7 Basic ladder logic components

Table 22.1 I/Os of the PLC-based oil dispenser

Figure 22.8 PLC-based oil dispenser

Figure 22.9 Ladder diagram with I/O


Figure 22.10 Setting output 500 High

Figure 22.9 to Figure 22.14 show, step by step, the logic conditions and
execution of the ladder steps that set up the output and turn it OFF once the
tank is full. The gradual filling of the tank takes place because of output 500
(the pump motor). After the oil level rises above the low-level sensor and that
contact opens (FALSE), there is still a TRUE logic path from left to right.
This is because of the internal relay: relay 1000 latches output 500 ON, and it
stays ON until there is no longer a TRUE logic path from left to right (that
is, when input 0001 becomes FALSE). After the oil level rises above the
high-level sensor, that contact also opens (FALSE). Since there is then no TRUE
logic path, output 500 is no longer energized and the motor turns OFF. When the
oil level later falls and the probes read TRUE again, the entire sequence
repeats. Table 22.2 shows an overall comparison of the different control
methods to guide the choice in automation.
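The behaviour of the latched rung can also be traced in ordinary code. The short Python sketch below assumes, as in the figures, that a probe reads TRUE while it is uncovered (no oil) and FALSE once oil reaches it; the variable names only loosely follow the example addresses (inputs 0000 and 0001, output 500, relay 1000).

def pump_rung(low_probe_uncovered, high_probe_uncovered, latch):
    """One evaluation of the ladder rung controlling output 500 (pump motor).

    The pump runs if the low-level probe is uncovered OR the latch (relay 1000)
    is already set, AND the high-level probe is still uncovered.
    """
    output_500 = (low_probe_uncovered or latch) and high_probe_uncovered
    return output_500, output_500          # relay 1000 follows output 500

latch = False
# (low uncovered, high uncovered) as the tank fills, reaches the top, and is later drawn down
for low, high in [(True, True), (False, True), (False, False), (True, True)]:
    pump, latch = pump_rung(low, high, latch)
    print(f"low={low} high={high} -> pump {'ON' if pump else 'OFF'}")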


Table 22.2 Typical comparison of different control methods

Table 22.3 Pros and cons associated with each control method

22.4 Conclusions

Good automation and process control are essential in today's competitive world.
Accommodating rapid production changes while achieving good output with minimal
waste is genuinely challenging. PLC-based automation will surely help turn
production activities into profit: complex operations can be simplified and
set-up times greatly reduced. Work in the ceramics, cement, chemical,
food-processing and packaging industries, among others, strongly benefits from
PLC systems in terms of profit and performance. The comparative study of the
historical growth of automation illustrates the current state of the field and
the challenges addressed with PLCs. The main hurdle at present is finding the
innovative developments and the investment opportunities for such automation
within the related economy. This session explored control schemes for
industrial automation and system monitoring to improve system operation,
reliability, and so on. Various types of automation systems, relays and
contactor logic, PLCs, SCADA and DCS, have been discussed, and the pros and
cons associated with each control method are summarized in Table 22.3. DCS,
SCADA and communication systems integrate protection, control and monitoring to
maximize the benefits. Truly, automation and system monitoring are the logical
choice to improve system performance and achieve customer and shareholder
satisfaction.


Session 23
Features and advantages of DCS
Content
Introduction ............................................................................................. 201
23.1 Evolution of traditional control systems ...................................... 202
23.1.1 Pneumatic control ................................................................. 202
23.1.2 Electronic analog control ...................................................... 203
23.1.3 Digital control ....................................................................... 203
23.1.4 Modes of computer control .................................................. 204
23.1.5 Direct digital control ............................................................. 204
23.1.6 Supervisory control .............................................................. 205
23.1.7 Hierarchical computer control system .................................. 207
23.2 Distributed control systems ......................................................... 208
23.2.1 Programmable logic controllers............................................ 208
23.2.2 Distributed control systems .................................................. 209
23.2.3 DCS design considerations ................................................... 213
23.2.4 Hierarchy of plant operations ............................................... 215
23.3 Functional components of DCS ................................................... 217
23.3.1 Field communication ............................................................ 219
23.4 Functional features of DCS.......................................................... 222
23.4.1 System configuration/programming ..................................... 222
23.4.2 Communications ................................................................... 224
23.4.3 Control .................................................................................. 226
23.4.4 Alarms and events ................................................................ 226


Introduction

Automatic control typically involves the transmission of signals, commands and
information across the different layers of a system, and the calculation of
control actions as a result of decision-making. The term DCS stands for
distributed control system. DCSs were earlier referred to as distributed
digital control systems (DDCS), implying that all DCSs are digital control
systems: they use digital encoding and transmission of process information and
commands. DCSs are deployed today not only for advanced control strategies but
also for the low-level control loops. The instrumentation used
to implement automatic process control has gone through an evolutionary
process and is still evolving today. In the beginning, plants used local, large-
case pneumatic controllers; these later became miniaturized and centralized
onto control panels and consoles. Their appearance changed very little when
analog electronic instruments were introduced. The first applications of
process control computers resulted in a mix of the traditional analog and the
newer direct digital control (DDC) equipment located in the same control
room. This mix of equipment was not only cumbersome but also rather
inflexible because the changing of control configurations necessitated
changes in the routing of wires. This arrangement gave way in the 1970s to
the distributed control system (DCS).
DCS controllers are distributed geographically and functionally across the
plant, and they communicate among themselves and with operator and supervisor
terminals to carry out all necessary control functions for a large
plant/process. The scope of control is limited to the part of the plant it is
distributed in. DCS is most suited for a plant involving a large number of
continuous control loops, special control functions, process variables, and
alarms. Most of the DCS architectures are generally similar in the way they
are designed and laid out. Operator consoles are connected to controllers
housed in control cubicles through a digital, fast, high-integrity
communications system. Control is distributed across the plant by this powerful
and secure communication system. The process inputs are connected to the
controllers directly or through IO bus systems such as Profibus, Foundation
FieldBus, and so on. Some systems also use proprietary field bus systems.
The DCS offered many advantages over its predecessors. For starters, the
DCS distributed major control functions, such as controllers, I/O, operator
stations, historians, and configuration stations, onto different boxes. The key
system functions were designed to be redundant. As such the DCS tended to
support redundant data highways, redundant controllers, redundant I/O and
I/O networks, and in some cases redundant fault-tolerant workstations. In
such configurations, if any part of the DCS fails the plant can continue to
operate. Much of this change has been driven by the ever-increasing
performance/price ratio of the associated hardware. The evolution of


communication technology and of the supporting components has


dramatically altered the fundamental structure of the control system.
Communication technology such as Ethernet and TCP/UDP/IP combined
with standards such as OPC allowed third-party applications to be integrated
into the control system. Also, the general acceptance of object-oriented
design, software component design, and supporting tools for implementation
has facilitated the development of better user interfaces and the
implementation of reusable software.
With advancing technologies, DCS have rapidly expanded their capabilities
in terms of features, functions, performance and size. The DCSs available
today can perform very advanced control functions, along with powerful
recording, totalizing, mathematical calculations, and decision-making
functions. The DCS can also be tailored to carry out special functions, which
can be designed by the user. An essential feature of modern-day DCS is the
integration with ERP and IT systems through exchange of various pieces of
information.
To understand DCS, it is a good idea to review the evolution of control
systems. This includes hardware elements, system implementation
philosophies, and the drivers behind this evolution. This will help in
understanding how process control, information flow, and decision-making
have evolved over the years.

23.1 Evolution of traditional control systems

23.1.1 Pneumatic control

Earliest implementations of automatic control systems involved pneumatic


transmission of signals. They used compressed air as the medium for signal
transmission and actuation. Actual control commands were computed using
elements such as springs and bellows. Plants used local, pneumatic
controllers, which were large mechanical structures. These later became
miniaturized and centralized onto control panels and consoles.
A pneumatic controller has a high margin of safety and, being explosion proof,
can be used in hazardous environments. However, pneumatic controllers have a
slow response and are susceptible to interference.
The common industry standard pneumatic signal range is 3–15 psig where 3
psig corresponds to the lower-range value (LRV) and the 15 psig corresponds
to the upper-range value (URV).


23.1.2 Electronic analog control

Over time, electronic analog control was introduced, with electrical signals
used as the mode of transmission. As in pneumatic control, the computation
devices are mechanical. Electrical-to-pneumatic converters (E/P transducers)
and pneumatic-to-electrical converters (P/E transducers) are used so that
pneumatic and electrical signals can coexist. A disadvantage of analog signals
is their susceptibility to contamination from stray fields, leading to
degradation of signal quality over long distances.
The most common standard electrical signal is the 4–20 mA current signals.
With this signal, a transmitter sends a small current through a set of wires.
The current signal is a kind of gauge in which 4 mA represents the lowest
possible measurement, or zero, and 20 mA represents the highest possible
measurement.
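Converting such a live-zero signal into an engineering value is a simple linear interpolation. The Python sketch below shows the arithmetic for a hypothetical 4-20 mA level transmitter ranged 0-500 litres; the same formula applies equally to a 3-15 psig pneumatic signal.

def signal_to_engineering(signal, sig_lo, sig_hi, eng_lo, eng_hi):
    """Linearly scale a transmitter signal to its engineering range."""
    fraction = (signal - sig_lo) / (sig_hi - sig_lo)
    return eng_lo + fraction * (eng_hi - eng_lo)

# A 12 mA reading from a 4-20 mA transmitter ranged 0-500 litres:
print(signal_to_engineering(12.0, 4.0, 20.0, 0.0, 500.0))   # -> 250.0 litres

# A 9 psig reading from a 3-15 psig pneumatic transmitter ranged 0-100 %:
print(signal_to_engineering(9.0, 3.0, 15.0, 0.0, 100.0))    # -> 50.0 %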
23.1.3 Digital control

In this mode, the transmission medium is still an electrical signal, but the
signals are transmitted in binary form. Digital signals are discrete levels or
values that are combined in specific ways to represent process variables and
also carry diagnostic information. The methodology used to combine the
digital signals is referred to as protocol. Manufacturers may use either an open
or a proprietary digital protocol. Open protocols are those that anyone who
is developing a control device can use. Proprietary protocols are owned by
specific companies and may be used only with their permission. Open digital
protocols include the HART® (highway addressable remote transducer)
protocol, FOUNDATION™ FieldBus, Profibus, DeviceNet, and the
Modbus® protocol.
Digital signals are far less sensitive to noise. In digital signaling we look
only for two signal levels, and the magnitude of a value is expressed as a
combination of 1s and 0s, that is, as a binary number. The impact of noise is
therefore reduced compared with an analog signal. The computational devices are
digital computers or embedded processors with real-time operating systems.
Digital computers are more flexible because they are programmable, and more
versatile because there is virtually no limit to the complexity of the
computations they can carry out. The limitation is the computing power of the
computer, that is, how many computations it can perform in a given unit of
time. Moreover, it is possible to carry out computation with a single computing
device or with a network of such devices.


Many field sensors naturally produce analog voltage or current signals.


Transducers that convert analog signals to digital signals (A/D) and those that
convert them back to analog signals (D/A) are used as interface between the
analog and digital elements of the modern control system. With the
development of digital implementation systems, on which DCS are based, it
is possible to implement many sophisticated control strategies at very high
speeds.
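The role of the A/D and D/A interfaces can be illustrated with a short numerical sketch. The 12-bit resolution and the 4-20 mA input range below are assumptions chosen only to show how an analog value becomes a binary count and is later converted back.

BITS = 12
COUNTS = 2 ** BITS - 1          # 0 ... 4095 for a 12-bit converter
SIG_LO, SIG_HI = 4.0, 20.0      # assumed 4-20 mA analog input range

def adc(milliamps):
    """Quantize an analog current into a 12-bit binary count."""
    fraction = (milliamps - SIG_LO) / (SIG_HI - SIG_LO)
    return round(fraction * COUNTS)

def dac(count):
    """Convert a binary count back into an analog current."""
    return SIG_LO + (count / COUNTS) * (SIG_HI - SIG_LO)

code = adc(12.0)
print(code, format(code, "012b"), dac(code))   # 2048, its binary form, ~12.0 mA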
23.1.4 Modes of computer control

Computer control is usually carried out in the following modes:


1. Direct digital control (DDC)
2. Supervisory control
3. Hierarchical computer control system.
23.1.5 Direct digital control

In DDC, a digital computer computes control signals that directly operate the
control devices. A single computer digitally performs signal processing,
indication, and control functions and therefore the name “direct digital
control.” Initially computers were very large and housed in large buildings
with substantial environment controls. As the electronics evolved, and
medium scale integration (MSI) and large-scale integration (LSI) integrated
circuits (ICs) became available, powerful and relatively small mini computers
became a reality. These mini computers were first deployed to realize DDC
(Figure 23.1).
23.1.5.1 Disadvantages of DDC
The following are some of the disadvantages of a DDC:
 Centralized control – a single central processor is used to perform
several tasks necessary for control:
o IO scanning
o Data logging
o Control execution
o Alarm generation
o Database update
o Serving operator display updates
o Periodic and on-demand report generation
o Serve peripherals such as printers and recorders
o Process optimization
In addition, DDC systems support software systems (compilers) and
configuration and engineering functions.


Figure 23.1 DDC architecture

 CPU memory requirements and management were challenges.


 Poor system reliability and performance – single failure is
catastrophic to plant control, with no provision for redundancy.
 Costly and complex – difficulty in troubleshooting.
 Creating customized software for advanced process control and
optimization was cumbersome.

23.1.6 Supervisory control

In supervisory control, a digital computer generates signals that are used as


reference (set-point) values for conventional analog controllers. This is
described in the block diagram (Figure 23.2). Measurements are transmitted to
the computer, and control signals are transmitted from the computer to the
control valves, at specific time intervals known as the sampling time. In a
supervisory control system, the analog control subsystem and panel
instrumentation carry out the control but are interfaced to the supervisory
control computer through interfacing hardware. The supervisory control computer
provides the process-monitoring facility (process mimics on the video display
units for the operators, with features such as alarm handling, data storage,
and real-time values).
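The division of labour between the supervisory computer and the local analog controller can be sketched roughly as follows. The plant response, tuning constants and the production-target calculation are all invented; the point is only that the supervisory layer writes set points while the local loop does the fast regulation.

class LocalPIController:
    """Stand-in for the conventional analog (PI) controller holding a process variable at a set point."""
    def __init__(self, kp, ki):
        self.kp, self.ki, self.integral, self.setpoint = kp, ki, 0.0, 0.0

    def update(self, pv, dt):
        error = self.setpoint - pv
        self.integral += error * dt
        return self.kp * error + self.ki * self.integral   # controller output

def supervisory_setpoint(production_target):
    # Slow, less time-critical calculation done in the supervisory computer,
    # e.g. the outcome of an optimization run (made up here).
    return 50.0 + 0.1 * production_target

loop = LocalPIController(kp=2.0, ki=0.5)
loop.setpoint = supervisory_setpoint(production_target=100)   # supervisory layer writes the SP
pv = 40.0
for _ in range(5):                       # the local loop runs at a much faster rate
    out = loop.update(pv, dt=1.0)
    pv += 0.2 * out                      # crude first-order plant response
    print(round(pv, 2))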


23.1.6.1 Advantages of supervisory control


The following are the advantages of supervisory control:
 With supervisory control architecture, primary loop control is
returned to analog controllers, while the supervisory control computer
monitors the process and adjusts the set points. The computer is
relieved of intense computational tasks and therefore could be utilized
for process optimization and plant management functions, which are
less time critical and computation intensive.
 Analog controllers added to DDC computer system enhance the
overall reliability.
 Control algorithms, which provide the set point to the analog control
subsystem, accommodate higher complexity.

Figure 23.2 Architecture of supervisory control

23.1.6.2 Disadvantages of supervisory control


The following are some of the disadvantages of supervisory control:
 Extensive wiring is required between the analog controllers and the
supervisory computer and also between other instrumentation and the
computer system.
 Interfacing equipment from multiple vendors (interfacing one
vendor’s computer system to another vendor’s analog
instrumentation) is difficult.
 Supervisory control is costlier than DDC.


All vendors who later started offering DCS offered both DDC and supervisory
control options, and in the earlier versions both coexisted with the DCS. Many
of the current DCS control technologies had their origins
in DDC and supervisory control systems. However, DDC is rarely used in
current industrial automation scenarios.
23.1.7 Hierarchical computer control system

A hierarchical system is a network of process and/or information management


computer systems integrated to serve common functions such as management
and control of large plants and geographically distributed systems such as
pipeline networks (Figure 23.3).

Figure 23.3 Architecture of hierarchical control

 Information is passed up and down between primary level of process


monitoring and control, through supervisory levels, to decision
making/top management level.
 The computer network architecture usually parallels the
organizational structure of a company itself.
 With hierarchical systems, primary computers provide direct control
of process and they could be a combination of DDC, supervisory
control, or microcontroller-based controllers.


23.2 Distributed control systems

With the advent of microcontrollers, individual controllers became powerful.


They could execute a greater number of control algorithms and more complex
algorithms. They could also control a larger set of control steps. It became
easy to move the intelligence involved in controls to lower levels and improve
the signal processing in transmitters. Powerful microcontrollers also enabled
the design of faster networks. Together, these developments turned the concept
of the DCS into reality.
23.2.1 Programmable logic controllers

Programmable logic controller (PLC) was the first manifestation of a


distributed controller. A PLC is specialized for process control of
noncontinuous systems such as batch processes or discrete manufacturing
systems that encompass equipment or control elements that operate
discontinuously. A PLC is programmed to execute the desired Boolean logic
and implement sequencing of operations. Therefore, a PLC is used in many
instances where interlocks are required. For example, a flow control loop
must not be actuated unless a pump responsible for the flow has been turned
on. Similarly, during startup or shutdown of continuous processes, elements
must be correctly sequenced; for example, upstream flows and levels must be
established before downstream pumps can be turned on. The PLC implements
a sequence of logical decisions to implement a control.
The inputs to the PLC are generally a set of relay contacts representing the
state of various process elements. Various operator inputs are also provided.
The outputs are generally a set of relays energized (activated) by the PLC that
can turn a pump on or off, activate lights on a display panel, operate solenoid
valve, and so on.
Although PLCs were initially conceived to implement simple binary logic,
they started using powerful microcontrollers and could therefore handle
comparatively complex functions such as proportional, integral, and
derivative (PID) control. Currently, PLCs can handle thousands of digital I/O
points, hundreds of analog I/O points and continuous PID control.
Traditionally, however, PLCs lacked flexibility for expansion and
reconfiguration, and the choice of operator interfaces to PLC systems was
limited. PLCs continue to push past these limits and are today positioned for
many complex control tasks.
PLCs are not typically applied in traditional continuous process plants.
However, for operations such as sequencing and interlocks, the speed and
power of PLCs can be used very effectively. Where sophisticated process control
strategies are not needed, PLCs are a cost-effective alternative to DCSs in
places where there is a large number of discrete operations and a small number
of analog controls. PLC architectures have always focused on flexible
and fast local control. Recent advancements in PLC technology have added
process control features. When PLCs and HMI software packages are
integrated, the result looks a lot like a DCS.
PLCs and DCSs can be combined by design in a hybrid system where PLCs
are connected through a link to a controller forming part of a larger DCS or
are connected directly to the network of the DCS.
23.2.2 Distributed control systems

A DCS is defined as a system comprising functionally and physically


separate automatic process controllers, process monitoring and data logging
equipment all of which are interconnected through a fast, digital network.
This ensures sharing of relevant information for optimum control of the plant.
In large-scale manufacturing or process plants, there are hundreds of control
loops to be monitored and controlled. For such large processes, the
commercial DCS is normally the control system of choice. Figure 23.4 shows the
most common architecture of distributed control systems in the world today.
The hardware and software of the DCS are quite flexible and easy to modify
and configure. They are capable of handling a large number of loops. Modern
DCSs are equipped with optional software elements for optimization, and
various controls based on process models. They also come with tools for
defining high-performance models.
As signified by the term “distributed,” DCS architecture enables distribution
of the controllers and the operator input elements through the network, called
process control network, which connects the different parts. Elements closest
to the process transmit and receive raw data between them and the local
computers while those farther away from the process exchange mostly
processed data at lesser frequencies but for a wider set of consumers of the
data. All data exchanged, such as the presentation information for the multiple
displays on the various operator control panels and historical data to and from
archival storage have to pass through the data highway. The data highway is,
therefore, the backbone of the DCS system. A supervisory computer is
normally at the top of the hierarchy and is responsible for performing many
higher-level functions. These could include optimization of the process
operation over varying time horizons (days, weeks, or months), carrying out
special control procedures such as plant startup or product grade transitions,
and calculating the data required to track the economic performance of the
plant.


Figure 23.4 Architecture of a simple DCS

In the common DCS architecture, the microcomputers attached to the process


are known as front-end computers. They are usually less sophisticated
equipment employed for low-level functions. One such example is an IO
module. Typically, an IO module acquires process data from the measuring
devices and converts them to standard engineering units. The results are then
passed upward to the controllers that are responsible for more complex
operations. Processed data is then passed on to computers higher in the
hierarchy, which can be programmed to perform more advanced calculations.
DCSs are not limited to continuous control applications; they are also capable
of carrying out many of the functions of a PLC, in the same way that the PLCs
have evolved to perform several tasks traditionally done by the DCS. A range
of functions are designed into the controllers, such that the entire general plant
control operation functions, whatever the type, could be carried out by the
DCS. These functions include continuous control, cyclic control, logic
control, motor control, and batch control. One key difference is in the
processing speed of sequence functions. A PLC's scan time (the time taken to
scan the inputs) is in the tens of milliseconds, while that of a DCS could be
in the hundreds of milliseconds or seconds (typical fastest speed 0.25 s). In most
process control applications this speed of execution of the DCS is quite
adequate. If higher speeds are necessary in selected operations, PLCs can be
interfaced directly to most DCS using standard components, which make the
PLC transparent to the operator. However, it calls for using different
engineering tools for the DCS and the PLC. On some DCS the need for higher
speeds is addressed through the use of a high-speed controller and IO cards
and they provide the speed of execution necessary for special applications.
These controllers come with a cost premium and are used judiciously to
ensure cost-effectiveness of the solution.


Therefore, the standard elements of functionality available with DCS today


are sufficient to provide an integrated control system capable of controlling
most processes, either by themselves or through integration with other
controllers. They are also capable of providing necessary information for the
overall management of the production facility.
Traditionally, DCSs have been more expensive to purchase than a PLC-based
system and many processing plants had lower demands in terms of production
rates, yield, waste, safety and regulatory compliance than they are
experiencing today. A PLC-based system offered a lower capital investment
and from a functional point of view was “good enough.” Demands on
manufacturing companies have risen – and the purchase price of the DCS has
come down. Yet, DCSs do have advantages over PLC systems.
The DCS architecture has always been focused on distributing control on a
network so that operators can monitor and interact with the entire scope of
the plant and the classic DCS originated from an overall system approach.
Coordination, synchronization, and integrity of process data over a high-
performance and deterministic network are at the core of the DCS
architecture.
The major advantages of functional distribution of hardware and software
characteristic of DCS are:
 Flexibility in system design
 Ease of expansion
 Reliability
 Ease of maintenance.
Local control can be maintained even if central components fail or are
degraded functionally to a substantial extent. We then say the plant is
operating as a set of “islands of automation.” The DCS architecture is such
that complete loss of the data highway does not cause complete loss of system
capability. Often local units can continue operation with no significant loss
of function over moderate or extended periods of time. This greatly enhances
the availability and reliability of the system.
The control network is the most important component of DCS. Suppliers
ensure this through comprehensive maximum topology testing and subject
the network to high levels of message volume in test labs to ensure reliable
network performance in demanding environments. Most of the DCSs are
provided with redundant industrial Ethernet networking technology utilizing
inexpensive off-the-shelf components to provide a high-availability solution.
Industrial Ethernet continuously monitors the process control network (PCN)
by providing network diagnostics that are tracked and reported as a part of
the DCS.


Control performance is another area where DCSs have great advantages.


Good process control is built on reliable and repeatable execution of the
control strategy. While the PLC runs “as fast as it can” the process controller
favors repeatability. That means the control strategy runs on fixed clock
cycles; running faster or slower is not tolerated. Other system
services are also designed to give priority to solving the controller
configuration. Most DCSs come with function blocks (FBs) with a complete
set of parameter-based functions, using which the user can develop and fine-
tune control strategies without designing control functions. All necessary
functions are available and documented as configurable selections. The
application engineer simply assembles the blocks into the desired control
configuration with a minimum of effort. A self-documenting, programming-
free controller configuration makes the DCS architecture efficient to engineer
and troubleshoot.
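The idea of assembling parameterised blocks rather than writing programs can be mimicked in code. The block classes and parameters below are hypothetical and are meant only to suggest how an application engineer wires pre-built blocks together instead of coding the control algorithm by hand.

class AnalogInputBlock:
    def __init__(self, lo, hi):                  # engineering range parameters
        self.lo, self.hi = lo, hi
    def execute(self, raw_fraction):             # raw signal as 0..1 of span
        return self.lo + raw_fraction * (self.hi - self.lo)

class PIDBlock:
    def __init__(self, sp, gain):                # parameter-based configuration
        self.sp, self.gain = sp, gain
    def execute(self, pv):
        return max(0.0, min(100.0, self.gain * (self.sp - pv)))  # % output

class AnalogOutputBlock:
    def execute(self, percent):
        return 4.0 + percent / 100.0 * 16.0      # back to a 4-20 mA signal

# The "configuration": blocks connected in series, no algorithm written by hand.
ai, pid, ao = AnalogInputBlock(0.0, 150.0), PIDBlock(sp=75.0, gain=2.0), AnalogOutputBlock()
pv = ai.execute(raw_fraction=0.40)               # 60.0 engineering units
print(ao.execute(pid.execute(pv)))               # valve command in mA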
Moreover, the DCS network is versatile and allows different modes of control
implementation such as manual/auto/supervisory/computer operation for
each local control loop. In the manual mode, the operator manipulates the
final control element directly. In the auto mode, the final control element is
manipulated automatically through a low-level controller usually a PID block
executed in controller. The set point for this control loop is provided by the
operator. In the supervisory mode, an advanced digital controller is placed on
top of the low-level controller, which sets the set point for the low-level
controller. The set point for the advanced controller can be set either by the
operator or can be the outcome of steady-state optimization.
DCS vendors also supply the control building tools, a data historian, trend
tools, alarm management, asset management, back up and archive, OPC
servers, remote maintenance servers, web servers, documentation servers,
network management server, business integration software, and graphics
needed to run a plant as a single package that can be easily deployed on the
DCS. The capabilities of the DCS architecture ensure that all of the control
applications load correctly, are guaranteed to be the correct version, and are
tested to work together. This becomes very significant when DCSs stay
deployed for longer periods of time and are expanded to meet changing plant
requirements.
On account of the systems approach to DCS design, the software elements can be
integrated to share a single data model: no matter where a data element
resides, it can be used by any element of the architecture, and that particular
data element need not be duplicated. This is a significant advantage given the
integrated nature of a typical industrial automation system. Refer to Figure
23.5 for the DCS components; from the figure it can be seen that a DCS is a
combination of controllers, I/Os, networks, operation consoles and engineering
consoles.


Figure 23.5 Functional components of a DCS

The field instrumentation also plays a role in the architecture of the DCS
owing to the evolution of the various field protocols.
23.2.3 DCS design considerations

Any modern DCS is expected to meet the following requirements from a plant
operations and maintenance perspective. These form the considerations
during the design of the DCS:
 High Reliability: Any control or hardware component failure could
have devastating consequences leading to loss of life and property.
Therefore, reliability of the control system is a life-critical
requirement. During design, development and manufacturing, all
electronic components of the DCS are normally subjected to extensive
periods of cycling at temperatures exceeding the extremes listed in
equipment specifications. This process weeds out the components
most likely to fail. Suppliers provide redundancy in their design, and
as a redundant component in the architecture because reliability is still
probable. Power supplies, data highways, traffic directors, and
controller electronics are important single points of failure in the
system and are considered as candidates for having redundancy. It is
essential to have automatic transfer between redundant parts, so that
if one fails the other takes over without disturbance of the operation
or output. At the same time, there must be some form of alarm to alert
the operator to draw his attention to the fact that a failure has occurred.
 High Availability: High availability is as important as reliability.
Defining availability as the ratio of mean time between failure
(MTBF) to mean time between failure plus mean time to repair
(MTBF + MTTR), it is clear that a system is most available when it is
very reliable (high MTBF) and can be quickly repaired (low MTTR).
Since distributed control equipment is highly modular and contains many printed
circuit cards, the time to repair can be very short if sufficient spare parts
are available; the components can be quickly brought into service and the
necessary software updated online without affecting the control functions in
any way. (A brief worked availability calculation is sketched after this list.)
 Low Cost: If we consider together the initial purchase price, cost of
initial implementation, and the cost of making subsequent changes to
the system over time, the DCS can be much less expensive. The total
project costs include the expenses required to build a working solution
that accomplishes the long-term goal of effective process control.
Maintenance costs and costs of changes to accommodate growth of
operations over time are key factors to consider while estimating the
cost of the system. This is called life cycle cost. This life cycle cost
turns out to be lower in case of DCS compared to providing
comparable level of functionality using PLCs because the built-in
functions and inherent integration capabilities available in a DCS
enable implementation and maintenance of a more effective system
with reduced labor and plant life-cycle cost while avoiding degradation in
functionality over time.
 High Alarm Management Capability: DCS systems must be capable
of intelligent alarm management to aid in abnormal situation
management. Alarm management is necessary in a manufacturing
process environment using a control system, such as a DCS or a
system of PLCs. Such a system may have hundreds of individual
alarms that up until very recently have probably been configured with
limited consideration of the interdependencies among all alarms in the
system. Humans can pay attention to a limited number of stimuli and
messages at a time. Therefore, we need a way to ensure that alarms
are presented at a rate that can be assimilated and acted upon by a
human operator, particularly when the plant is in a disturbed state or
in an emergency condition. Alarms also need to be capable of
directing the operator’s attention to the most important problem that
he or she needs to act upon methodically, using a prioritization
scheme. The capabilities of alarm management in the system must go
beyond the basic level of utilizing multiple alarm priority levels. It is
to be noted that alarm priority itself is often dynamic. Likewise,
disabling an alarm based on unit association or suppressing audible
annunciation based on priority do not meet operational requirements
in a complex plant. We need dynamic, selective alarm annunciation
in such cases. DCSs are designed with an alarm management system
that dynamically filters the process alarms based on the current plant
operation and conditions so that only the currently significant alarms
are annunciated.
 Scalability: A small SCADA/PLC system is easy to design and
configure. As the system grows bigger, the effort involved to properly
engineer and configure the system grows in a nonlinear fashion. It also


increases the risks of errors creeping into the process of engineering.


It is easy to design and implement a single loop PID controller in a
SCADA/PLC system and it can be done quickly. However, to design
and implement the base-layer control for a refinery using a SCADA/PLC system
can be a daunting endeavor. One of the chief
considerations in the design of the engineering tools for a DCS is that
engineering time for system expansion and other changes must be
considerably less. Features such as batch updates, replication of
application programs with suitable substitution are provided in the
engineering tools.
 Distributed Systems: A DCS has to share real-time data across a
network, despite the fact that the components are geographically
distributed. The need for a seamless transfer of control signals among
the distributed controllers, the supervisory controllers, operator
workstations, plant computers, and so on can never be overstated. This
makes networking a major component in the DCS architecture. The
objectives of a networking topology in industrial automation systems
include the following.
o Enable wide distribution of the components
o Connectivity to different machines and nodes
o Reliable data gathering and sharing
o Redundant communication medium
o Deterministic transmission and receipt of data
o Sufficient speed to match the plant requirements

23.2.4 Hierarchy of plant operations


Plant operations could be classified into three control zones:
 PCN (plant/process control network) layer where the process control
operations and data transfer occur.
 DMZ (demilitarized zone) layer – where servers such as OPC, remote
maintenance and web servers are operated.
 PIN (plant information network) layer – for plant and office personnel
access.
Field devices are the first level in the hierarchy, intended to communicate the
field parameters such as flow, level, pressure, temperature, proximity,
analysis, and so on. These are inputs. The outputs are actuators, valves, and
motors, which are the final control elements controlling quantities such as
liquid or gas flow.
They communicate over multiple protocols such as HART/Foundation
FieldBus/Profibus, etc. They also support the traditional 4–20-mA signal. The
adoption of the IEC1158-2 Fieldbus standard by major DCS manufacturers
ushered in the next generation of control and automation products and


systems. Based on this standard, fieldbus capability can be integrated into a


DCS system to provide:

 Advanced function added to field instruments


 Expanded view for the operator
 Better device diagnostics
 Reduced wiring and installation costs
 Reduced I/O equipment by one half or more
 Increased information flow to enable automation of engineering,
maintenance, and support functions
 Lower CAPEX and OPEX
At the control level, the signals from the transmitters in the field are processed
to generate commands to the actuators. The usual equipment is a PLC or
process control system (PCS). Based on the signals received, valves are
opened and closed, or pumps and motors stopped and started. The PCS is
generally a controller customarily offered by DCS vendors. All DCS vendors
have a customized protocol to communicate with their subsystems and the
controllers also support interfacing with multiple vendors through commonly
supported protocols such as Modbus.
Supervisory platforms are intended to monitor downstream control and
translate the control into a user viewable format, by creating the engineering
design and then loading into controllers to view it. The vital components of a
supervisory system are as follows:
 Engineering stations
 Operator stations
 Application stations
The engineering station has software with libraries used for analog/digital
control. Using these libraries, control strategies are written to design the
logic that runs the devices in the field.
This logic flow can be validated from an operator station that mimics the
process in the field. It is not uncommon to use proprietary protocols up to
this layer, to retain the tight hold on control that each vendor supports.
A system has to be flexible enough to support the popular and open protocols
available in the market, because once in the field it is bound to deal with
most of the popular protocols and solutions in use. This also ensures that a
solution that is offered leaves the existing working controls of multiple
vendors undisturbed.
The rate of control is an important aspect at the control level, because
controls are very time critical and require low latency at this level. In the
layer above this, users may have to access data from other DCSs, in which case
technologies such as OPC are used. This data access is not very critical for
control; therefore, even if the latency is higher than at the control level, it
can be tolerated.
Plant diagnostics and integration are often referred to as applications. Plant
diagnostics offers a smarter way of viewing what is happening in the plant. For
instance, plant diagnostics could include an application that consolidates all
the alarms occurring in the lower layers, to warn the user about a trend in the
alarms that could lead to a potential process failure. It may include a
historian dedicated to collecting the entire history of the plant and
maintaining it so that various conclusions can be derived from the stored data.
Integration refers to communicating with third-party devices over technologies
such as OPC; a common example is data from one DCS vendor being transferred to
another over OPC. The distinction between the integration and control levels is
one of criticality: delay in transferring and receiving data cannot be
tolerated at the control layer, whereas at the supervisory layer it can be
tolerated to a certain extent. Data passed up from the control layer is also
filtered and made noncritical, compared with what circulates at the control
layer, for security reasons; a layer that runs over the IT local area network
(LAN) has a higher risk of being hacked. Hence, only noncritical controls or
purely supervisory data such as history are transferred through OPC.
The enterprise level is where process information flows into the office
management world, for example, to aid ordering and billing via systems such as
SAP, or production planning. This is the world of supply chain
management that collates and evaluates such information as the quantities of
raw materials in store, the amount of water being used every day, the energy
used in heating and refrigeration, the quantities of product being processed,
and the amount that can be sold within the next day, week, or month.

23.3 Functional components of DCS

DCSs are made of several components namely:


 Input/Output Subsystems
 Controller Subsystem
 Networks (PCN, PIN)
 Plant Information Network
 Operation and control Subsystems
 Gateways
 Engineering/configuration subsystem
The majority of field devices, even today, are hardwired to the controllers. They
can also be connected via analog, digital, or combined analog/digital buses.
The devices, such as valves, valve positioners, switches, and transmitters or
direct sensors, are located in the field. Smart field devices can communicate
on IO buses while performing control calculations, alarming functions, and
other control functions locally. Control strategies that reside in field-mounted
controllers send their signals over the hardwired cables or over the field buses
to the final control elements, which they control.
IO subsystems perform a variety of functions such as analog-to-digital
conversion, conversion to engineering values, limit checking, quality tagging,
and so on. They also enable assignment of addresses to the field signals for
use by the controllers.
Controller subsystems execute the control logic once in a fixed interval of
time, which constitutes the controller cycle time. During the engineering
process, the tools ensure that the given package of control logic can be
executed in a deterministic way within the cycle time of the controller.
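As an illustration, the minimal Python sketch below mimics such a fixed-interval scan cycle; the cycle time, tag names and helper functions are invented for the example and do not represent any vendor firmware.

import time

CYCLE_TIME = 0.250                     # assumed controller cycle time in seconds

def read_inputs():                     # placeholder: fetch values from the IO subsystem
    return {"FT101": 12.4}

def execute_logic(inputs):             # placeholder: the downloaded application program
    return {"FV101": 35.0}

def write_outputs(outputs):            # placeholder: drive the final control elements
    pass

while True:
    start = time.monotonic()
    write_outputs(execute_logic(read_inputs()))
    elapsed = time.monotonic() - start
    if elapsed > CYCLE_TIME:
        print("cycle overrun:", elapsed)   # a real controller would raise a diagnostic here
    else:
        time.sleep(CYCLE_TIME - elapsed)   # idle until the next cycle is due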
The controllers consist of a base firmware that enables the controllers to
communicate on the network, exchange data, and commands among
themselves or with the operator and supervisory interfaces. The application
programs are plant specific and are engineered for every plant where the DCS
is being implemented. The engineering tools enable the creation and
validation of the application program. The application programs are
downloaded into the controllers after the validation process and are typically
stored and executed from a designated separate memory area in the controller.
Information from the field devices and the controller are transmitted to the
operator workstations, application workstations, data historians, report
generators, centralized databases, and so on over the plant control network.
These workstations run applications that enable the operator to perform a
variety of operations. An operator may interact with the system through the
displays and keyboards to change control parameters, view the current state
of the process, or access and acknowledge alarms that are generated by field
devices and controllers. The operator can also change the control software
executing in the controllers and in some of the field devices. Some systems
also support process simulation for training personnel or for testing the
process control software.
Information from the operator workstations is passed on to the other
workstations, which execute a variety of higher-level applications. The
operator workstations and the supervisory workstations exchange data
through the PIN. The PIN could also be part of the enterprise network,
although there are substantial safeguards to prevent unauthorized access to
the networks in the realm of the control systems. Figure 23.6 provides the
integrated view of a typical closed loop as seen from different sub systems of
a DCS.
Gateways enable two-way data transfer. Gateways can be made available on
any of the networks to enable data exchange from other control systems that
are used in the DCS either to aid control decisions or to store for historical
references. They could alternatively be used to share data from the DCS with
other plant/business systems.
DCSs are always sold as packages because the parts function together as a
system. Since the components of the system communicate over a shared data
highway, no change is required to the wiring when the process and its control
logic are modified. However, standards have enabled interoperability
between components from different vendors. DCS users are no longer tied to
a single vendor.
23.3.1 Field communication

Digital communications technology reduces wiring and improves end-to-end
signal accuracy and integrity in modern digital plants. Digital technology
enables new innovative and more powerful devices, wider measurement
range, elimination of range mismatch, and access to more information.
Overall, the use of digital technology can reduce automation project costs
significantly, in addition to improving plant operation.

Figure 23.6 Data integrity of a DCS

One of the greatest advantages of digital communications over analog is the
ability to communicate vast amounts of data over a limited number of data
channels. Using digital communications and multidrop Fieldbus wiring
instead of conventional wiring also has many advantages including reduction
in cable and connections as many devices connect to a single bus. Other
benefits include better accuracy for control loops as no precision is lost in
D/A and A/D conversion, and higher integrity as distortions can be detected
using 8- or 16-bit error checking. Two-wire devices get more current,
allowing delivery of new and faster diagnostics over the bus, enabling plants
to adopt a predictive maintenance program. Further, digital values may be
transferred in engineering units, allowing transmitters to be used over their
full range and eliminating range mismatch. Access to more information is
also a key to intelligent device management.
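To make the precision argument concrete, the short Python sketch below round-trips a measurement through a simulated 4–20 mA path with a hypothetical 12-bit converter and compares it with the same value transferred digitally in engineering units; the range and resolution are assumed values, not taken from any particular device.

def to_current(pv, lo, hi):
    return 4.0 + 16.0 * (pv - lo) / (hi - lo)            # engineering value -> 4-20 mA

def analog_round_trip(pv, lo, hi, bits=12):
    current = to_current(pv, lo, hi)                      # transmitter D/A stage
    counts = round((current - 4.0) / 16.0 * (2 ** bits - 1))   # controller A/D stage
    return lo + (hi - lo) * counts / (2 ** bits - 1)      # back to engineering units

pv, lo, hi = 73.456, 0.0, 100.0                           # true flow and calibrated range, m3/h
print("analog round trip:", analog_round_trip(pv, lo, hi))   # small quantization error
print("digital transfer :", pv)                           # value sent directly in engineering units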
There exist many different digital communication standards designed to
interconnect industrial instruments and various IO modules that understand
such protocols. Some common and popular digital communication standards
and their description are as follows:
 HART
 Modbus
 FOUNDATION FieldBus
 Profibus
 AS-I
 CANbus
 ControlNET
 DeviceNet
HART: The HART technology is managed by the HART Communication
Foundation (HCF) headquartered in Austin, Texas, USA. HART was
designed specifically for use in process control instrumentation such as
temperature, pressure, level, flow, conductivity, density, concentration,
resistivity, dissolved oxygen, oxygen transmitters as well as final control
elements such as control valve positioners. Because of its importance for
process control, HART is supported in all modern digital automation systems,
in leading SIS logic solvers, and in device management software part of asset
management suites. Additionally, a handheld communicator is available to
work on HART devices in the field. HART is a hybrid, superimposing digital
communications on top of 4–20 mA signals, and is usually used in a point-to-
point scheme, and multidrop in some applications. A separate chapter is included
to discuss the HART protocol in more detail.
Modbus/RTU: Modbus/RTU is one of the bus technologies managed by the
Modbus organization headquartered in North Grafton, Massachusetts, USA.
Modbus/RTU has been adopted in a very wide range of distributed
peripherals such as conventional I/O blocks, flow computers, remote terminal
units (RTU), and weighing scales. Final control elements such as a.c. and d.c.
drives are also available. Because of its simplicity, Modbus/RTU is supported
in all digital automation systems, DCS, and most PLCs. For this reason,
Modbus/RTU is often used to integrate package unit controllers to the main
control system. Refer to the separate chapter to learn more on Modbus.
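To give a flavour of that simplicity, the hedged Python sketch below builds a Modbus/RTU "read holding registers" request, including the CRC-16 that terminates every frame; the slave address and register range are arbitrary example values.

def crc16_modbus(data: bytes) -> bytes:
    crc = 0xFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ 0xA001 if crc & 1 else crc >> 1
    return crc.to_bytes(2, "little")                      # CRC is appended low byte first

def read_holding_registers(slave: int, start: int, count: int) -> bytes:
    pdu = bytes([slave, 0x03]) + start.to_bytes(2, "big") + count.to_bytes(2, "big")
    return pdu + crc16_modbus(pdu)

print(read_holding_registers(slave=1, start=0, count=2).hex(" "))   # 01 03 00 00 00 02 c4 0b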
Foundation FieldBus H1: FOUNDATION FieldBus H1 is one of the bus
technologies managed by the FieldBus Foundation organization
headquartered in Austin, Texas, USA. FOUNDATION FieldBus H1 was
designed specifically for use in process control instrumentation for measuring
temperature, pressure, level, flow, pH/ORP, conductivity, density,
concentration, resistivity, dissolved oxygen, and oxygen transmitters as well
as machinery health monitors. Final control elements such as control valve
positioners, electric actuators, discrete switches, on/off valves, and signal
converters are also available. Because of its importance for process control,
FOUNDATION FieldBus H1 is supported in all modern digital automation
systems and in device management software residing in asset management
suites. Additionally, a handheld communicator is available to work on
Fieldbus devices in the field. Refer to the separate chapter to learn more on
FF.
PROFIBUS: PROFIBUS is one of the bus technologies managed by the
Profibus International (PI) organization headquartered in Karlsruhe,
Germany. PROFIBUS was designed specifically for Distributed Peripherals
(DP) such as conventional I/O blocks and weighing scales. Final control
elements such as drives, motor starters, circuit breakers, and solenoid valve
manifolds are also available.
DeviceNet: DeviceNet is one of the bus technologies managed by the Open
DeviceNet Vendor Association headquartered in Ann Arbor, Michigan,
USA. A wide range of products is available using DeviceNet, including
conventional I/O blocks, inductive and optical switches, encoders and
resolvers, barcode readers and RFID, and final control elements such as electric
and pneumatic actuators and valves, a.c. and d.c. drives, motor starters, and
solenoid valve manifolds.
Finally, since different areas of automation and different levels of the control
system hierarchy have different communication needs, many different
Fieldbus technologies exist. All types of devices are not available with all the
different protocol options, and therefore it is necessary to use more than one
protocol in control systems. For example, transmitters and valves will
communicate using FOUNDATION FieldBus because the bus must be
synchronized for good PID control. Electric drives will use PROFIBUS DP
because of the higher speed possible at short distances, although DeviceNet
is also an option. Discrete I/O may use either DeviceNet or AS-I.
Modbus/RTU is used when integrating real-time control and interlock signals
from OEM-packaged units to the main control system. The control system
must integrate these buses, requiring the digital system to have the
interface cards for direct connection of these buses; using gateways or
multiplexers is costly, time-consuming, error prone, and less reliable. The
engineering station and engineering software must support the different
protocols being integrated, as a fully featured engineering tool will eliminate
the need for special applications software for each protocol, which would be
too difficult to manage.

23.4 Functional features of DCS

The major architectural components of any DCS comprise the following
(refer to Figure 23.8):
 System configuration
 Communications
 Control
 Alarms and events
 Diagnostics
 Redundancy
 Historical data
 Security
 Integration

23.4.1 System configuration/programming

Every DCS controller is a computer, although not endowed with all the
peripherals one usually associates with the word “computer”. The controllers therefore need
instructions to execute the control actions. Two distinct terms must be
distinguished here – programming and configuration. Every controller comes
with inbuilt firmware. In addition, an application program is downloaded into
another partition of the controller memory. In some cases, manufacturers
provide the application programs also and a way to set certain parameters to
make the generic logic work in the particular plant. For example, a DCS
manufacturer may provide an application program to control a set of three
compressors. The same controller might be sold by the same vendor with
another application program, say for control of a set of pumps. In the field, it is
just a matter of defining the minimum and maximum pressures, the number
and type of relay contacts to be operated, and the measurement range for field
signals. This process is called configuration. Consider the case where the
DCS vendor provides the controller with the application program memory in
a blank state. DCS users can write custom application programs to cater to
specific logic; this process is termed as programming.
Engineering tools enable configuration and programming of controllers,
depending on the application. The actual features might be controlled by
licensing mechanisms. The engineering tools also hide the complexities of
programming the microcontrollers (which have their own specific instruction
sets) by providing a common programming language with suitable user
interfaces. Therefore, application and process engineers describe the control
logic mostly graphically, which are translated into the instruction set of the
microcontrollers. Typically, the control strategies are made up of
interconnected FBs, sequential function charts (SFC), and equipment and unit
representations, which perform functions within the control scheme based on
inputs. The FBs also provide outputs to other FBs and/or physical I/O within
the control scheme. The set of FBs are invariably provided by the DCS vendor
as libraries. This eliminates the need for any special software language to
program the microprocessors used in the system. In addition, the users can
create custom FBs by combining the predefined FBs into a more complex library
of functions. The vendor-pretested FBs enable a DCS to be applied to any
plant very quickly, cutting down on the debugging necessary on programmed
software. The engineering tools invariably follow ISO standards on the types
of FBs, the representation of the blocks and the accepted form of
interconnection. The programmed control schemes are stored as files/control
programs in a configuration database. The engineering tool is used to
download these strategies via the control network to distributed controllers,
consoles, and devices. Refer to Figure 23.7 for an engineering view.
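The plain-Python sketch below gives a rough feel for this block-wiring idea; the AI and HI_ALM block names, tags and parameters are invented for illustration and are not vendor library code or IEC 61131-3 syntax.

class AI:                                              # analog input block: raw counts -> engineering units
    def __init__(self, eu_lo, eu_hi, counts=4095):
        self.eu_lo, self.eu_hi, self.counts = eu_lo, eu_hi, counts
    def execute(self, raw):
        return self.eu_lo + (self.eu_hi - self.eu_lo) * raw / self.counts

class HI_ALM:                                          # high-limit alarm block
    def __init__(self, limit):
        self.limit = limit
    def execute(self, pv):
        return pv > self.limit

lt101 = AI(eu_lo=0.0, eu_hi=5.0)                       # level transmitter scaled 0-5 m
lah101 = HI_ALM(limit=4.2)                             # high-level alarm at 4.2 m
level = lt101.execute(raw=3600)                        # "wiring": the AI output feeds the alarm block
print(round(level, 2), lah101.execute(level))          # 4.4 True -> alarm active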
In addition to software configuration, some hardware configuration is also
required, such as setting the addresses of the IO modules and controllers.
Sometimes, depending upon the type of connections, different links might
have to be set to adapt the same card to different field interfaces.
While the hardware configuration needs the actual hardware in the field,
software programming can be accomplished remotely and downloaded to the
controllers at the plant.
The configuration/engineering application also allows a designer to create or
change operator interfaces, such as plant schematics and process control
diagrams viewed on the operator displays through a viewing application.
These diagrams displayed on the screen enable the operator to change settings
within the PCS.
Many DCSs can also be reconfigured without the need to take the system off-line,
which enables plant modifications with minimum downtime,
provided the necessary precautions are taken to ensure integrity of the running
process. This capability is particularly useful in applications where the
process is continuous and shutdown maintenance periods are short or limited.
23.4.2 Communications

DCS systems vary in size from very small to very large, depending on the
size and complexity of the plant being controlled. Today’s systems are
enabled with integrated web services for plant integration while supporting a
variety of open standards, such as OPC, for communicating with data sources
external to the system.

Figure 23.7 Loop configuration in DCS

The communication infrastructure comprising the control network supports:
 Connections between different subsystems in the system
 Unsolicited communications for real-time data changes in the process
 Synchronous and asynchronous read/writes
 Configuration downloads to nodes, controllers, and devices
 Autosensing of workstations, controllers, IO cards, devices
 Diagnostics of system components and control strategies
 Online upgrades of system in operation
 Hot/warm/cold restart of control strategy from backup
 Secure and unsecured access to information in the system
 Alarms and events generated by process and system
 Device alerts generated by devices and equipment in the system
 Time synchronization across nodes, devices, and I/O
 Deterministic communication of plant data so that data exchange is
guaranteed in the system

Figure 23.8 Functional architecture of DCS

The data highway is the communication medium that allows a DCS to permit
distribution of the controlling function through a large plant area.
Depending on the speed of transmission, bandwidth supported, and physical
characteristics of the medium of the data highway, the highway length could
vary. However, data highways are designed as segments with suitable
bridges/extenders connecting segments so that the length of a segment of data
highway is not a limiting factor. The most popular physical medium is
Ethernet CAT5 cable. However, several suppliers still offer communication
over twisted and shielded coax cables. Several modern DCS also have
implemented the data highway using fiber optic cables. Some of them also
have successfully incorporated wireless exchange of data instead of a physical
medium such as the data highway.
Optic fiber cables are used most commonly for point-to-point connection
between switches and hubs. Optical fiber is attractive for use as a data
highway medium because it eliminates problems of electromagnetic and
radiofrequency interference, ground loops, and common mode voltages. It is
safe in explosive or flammable environments. It can carry more information
than copper conductors. It is inert to most chemicals and is lighter and easier
to handle than coaxial cable. However, special equipment and skilled labor
are needed to terminate and connect optical fibers.
23.4.3 Control

The DCS is connected to field sensors and actuators and uses set-point control
to control the process in the plant. The most common example is a set-point
control loop consisting of a pressure sensor, controller, and control valve.
Pressure or flow measurements are transmitted to the controller, usually
through transmitters and signal-conditioning I/O cards. When the measured
variable reaches a certain point, the controller instructs a valve or actuation
device in the field to open or close until the fluidic flow process reaches the
desired set-point. Large oil refineries have many thousands of I/O points and
employ very large DCSs. This is a typical example where processes are not
limited to fluidic flow through pipes. DCS controls can also include things
such as paper machines and their associated quality controls, variable speed
drives, and motor control centers. DCS are widely used in control of cement
kilns, mining operations, and ore-processing facilities, among others.
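The short sketch below, with invented tuning values and an idealised first-order process, imitates such a set-point loop: a PI algorithm adjusts a valve until the simulated flow settles at the set-point.

sp, pv, integral = 50.0, 0.0, 0.0                      # set-point, process variable, PI integral
kp, ki, dt = 0.8, 0.5, 0.5                             # assumed tuning and scan time (s)

for step in range(41):
    error = sp - pv
    integral += error * dt
    out = max(0.0, min(100.0, kp * error + ki * integral))   # valve demand in percent
    pv += dt * (out - pv) / 5.0                        # crude first-order process response
    if step % 10 == 0:
        print(f"t={step * dt:4.1f}s  pv={pv:5.1f}  valve={out:5.1f}%")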
A typical DCS consists of functionally and/or geographically distributed
digital controllers capable of executing several (actual numbers depend upon
the model of the controller and the complexity of the individual loops)
regulatory control loops in one controller. The I/O devices can be either
collocated with the controller or located remotely via a field network. Today’s
controllers have extensive computational capabilities and, in addition to PID
control, can generally perform logic and sequential control. Modern DCSs
also support neural networks and fuzzy applications.
23.4.4 Alarms and events

A critical part of the DCS is the integrated alarm and event processing
subsystem. The engineering software is used to configure which significant
system states generate notifications, enabling operators to monitor those
states and acknowledge them. Priorities can also be associated with the events
to aid in monitoring the plant. Events represent significant changes in state
for which some action is potentially required. An active state indicates that
the condition that caused the event still exists. When an operator has seen the
message and acknowledges the same, the event enters the acknowledged
state.
In most DCS event types can also be defined. The event type specifies the
message to be displayed to an operator for the various alarm states and the
associated attributes whose value should be captured when an event of this
type occurs. Event priorities can also be defined. An event priority type
defines the priority of an event for each of its possible states.
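A minimal sketch of this alarm lifecycle is given below; the state names, priorities and methods are assumptions made for the example rather than the API of any particular DCS.

from enum import Enum

class Priority(Enum):
    URGENT = 1
    HIGH = 2
    LOW = 3

class Alarm:
    def __init__(self, tag, message, priority):
        self.tag, self.message, self.priority = tag, message, priority
        self.active = True                             # the causing condition still exists
        self.acknowledged = False                      # no operator has seen it yet
    def acknowledge(self):
        self.acknowledged = True                       # operator has seen the message
    def clear(self):
        self.active = False                            # the causing condition has gone

alm = Alarm("PT203", "Reactor pressure high", Priority.URGENT)
alm.acknowledge()
print(alm.tag, alm.priority.name, "active:", alm.active, "acked:", alm.acknowledged)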
Many DCS systems also support device and equipment alerts. Like process
alarms, alerts can be assigned priority, can be acknowledged, and convey
information related to the condition that caused them. Unlike process alarms,
however, these alerts are generated by the DCS hardware or devices and
equipment external to the DCS. Alarms and alerts are presented to the
operators in alarm banners and summaries. Operators use these specialized
interfaces to quickly observe and respond to conditions. They then typically
navigate to a specific display to view additional details and take appropriate
actions. Operators can also suppress and filter alarms. Alarm suppression is
typically used to temporarily remove alarms from the system for which some
condition exists that the operator knows about (e.g., a piece of equipment has
been shut down or is under maintenance). Alarm filtering provides a way for
the operator to view collections of alarms, to manage alarms efficiently when
there is a flood of alarm messages, and to suppress several alarms that result
as a consequence of a basic alarm condition.
Alarms are the most vital part of a system. In fact, alarms are a subset of
alerts. Alerts can be broadly classified into three categories:
 Alarms
 Events
 Messages
Alarms, events, and messages attract user attention with high priority when
they appear in the system. Alarms are classified based on the priority;
priorities can be urgent, high, and low. They should have a provision of audio
and display annunciation.
The difference between alarms and annunciation is that alarms appear in the
system by default, whereas the user has to do some engineering to dedicate an
annunciation panel to them. An annunciation panel comprises the
following:
 Hooter
 Push button switches with lights for visual indication.
The hooter provides the audio alert for alarms. Push-button switches with visual
indication allow the alarms on the panel to be acknowledged once the operator has
seen which condition the indication is against.
Alarms that are of much lesser priority can be configured as an event; events
are dedicated for certain operations as per system design. For example, a user
logging in with a certain privilege is logged as an event or if a new batch for
manufacturing is started it is generated as an event; this is as per system
configuration and the user has no control over it. Alternatively, a noncritical process
or equipment malfunction that generates annoying alarms can be marked to be
reported only as events. There is also an advanced feature available for
nuisance alarms: these alarms can be hidden for a certain period to avoid
unnecessary attention. The groups of alarms that result from equipment
malfunction are known to the operators; certain permissions enable alarm-hiding
features that keep these alarms out of the regular list of alarms.
Alarms are mainly classified into two categories:
 Process alarms
 Diagnostic alarms
Process alarms represent a malfunction in the control loop. A process upset
can have multiple causes; for example, when a valve fails it will initialize the
PID controlling it, resulting in a mode initialization, and under this condition a
process alarm is raised. Such process upsets are reported under process alarms.
PV in the following example means process variable. An upset in the process is
represented as a HiHi (High High), Hi, Lo, or LoLo (Low Low) alarm, meaning
that the process variable has strayed beyond the extreme limits defined on the
high and low sides; for instance, if the flow rate exceeds the Hi or HiHi limit
defined for the loop, the corresponding alarm is raised, and the same applies
to the Lo and LoLo limits (Figure 23.9).
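A hedged sketch of such limit checking is given below; the limit values are invented, since real limits are configured per loop during engineering.

def alarm_state(pv, lolo, lo, hi, hihi):
    if pv >= hihi: return "HIHI"
    if pv >= hi:   return "HI"
    if pv <= lolo: return "LOLO"
    if pv <= lo:   return "LO"
    return "NORMAL"

for flow in (12.0, 55.0, 93.0, 99.5):                  # example flow readings, m3/h
    print(flow, alarm_state(flow, lolo=10.0, lo=15.0, hi=90.0, hihi=98.0))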
For example, if the cable connecting to an operator console is disconnected,
then a cable-fault alarm is shown. This is a diagnostic representation of
equipment health and is reported under the separate category of diagnostic alarms.
The following is a sample illustration of diagnostic alarms.
Observe that the last alarm in the list shows a cable fault being reported. Every
alarm generates an event, whereas an event does not necessarily generate an alarm. An event
is representative of multiple happenings in the system. The following is a
representation of events.
There is no representation of priority for events, unlike alarms, because they
are intended for informative purposes. Event logging is heavier than alarm
logging because numerous operations are logged periodically so that information
can be provided to users on demand, allowing them to understand what is
happening in the system even when no alarms are present.
Messages are another feature, used to advise users of certain actions when
needed. Messages are also presented at the station along with alarms, but on a
different page. An alarm is defined on the basis of an abnormal process
condition, whereas a message is used to make the user aware of a certain
operation in the plant and can recommend the action to be taken. Since these
messages can be customized, they can be maintained in a way that makes it
seem as if the equipment is communicating with the user. For example, if a
valve is damaged, the upstream block such as the PID no longer functions in
its regular mode and generates an alarm. Although the alarm is useful, the user
can also be sent a message saying that the valve in area X has malfunctioned
and recommending a check for spares in department Y.
Figure 23.9 Process alarm display

However, there is a flip side to alarms, events, and messages. As with history,
it is important to maintain a record of events; in fact, when a process glitch
is seen in a trend, it is the events that help users understand what went wrong
with the process, so they play a vital role in diagnosis. But maintaining event
records comes at the cost of disk space. If there are too many events, alarms,
and messages over a period, the system is likely to become overloaded; hence
DCS manufacturers advise users to limit their rate of alerts.
Session 24
Overview of modelling and simulation
Content
24.1 Introduction .................................................................................. 231
24.1.1 Review Methodology ........................................................... 231
24.1.2 Definitions ............................................................................ 232
24.1.3 The Historical Trends of the Evolution of Simulation ......... 232
24.1.4 Types of Simulation Models ................................................ 234
24.2 Simulation in product and production lifecycles ......................... 235
24.3 Product and Production Lifecycle ................................................ 235
24.3.1 Product and Production Lifecycle Tools .............................. 237
24.4 Future trends ................................................................................ 247
24.1 Introduction

Manufacturing is defined as the transformation of materials and information
into goods for the satisfaction of human needs. In the current highly
competitive business environment, the manufacturing industry is facing
constant challenges of producing innovative products at shortened time-to-
market. The increasing trend towards globalization and decentralization of
manufacturing requires real-time information exchanges between the various
nodes in a product development life cycle, e.g., design, setup planning,
production scheduling, machining, assembly, etc., as well as seamless
collaboration among these nodes. Product development processes are
becoming increasingly more complex as products become more versatile,
intricate and inherently complicated, and as product variations multiply to
address the needs of mass customization. Simulation modelling and
analysis is conducted in order to gain insight into such complex
systems, to achieve the development and testing of new operating or resource
policies and new concepts or systems, which live up to the expectation of
modern manufacturing, before implementing them and, last but not least, to
gather information and knowledge without disturbing the actual system. It
becomes evident from the total number of directly related papers (15,954)
from the early 70s till today, that simulation is a continuously evolving field
of research with an undoubted contribution to the progress of manufacturing
systems. This session investigates the evolution, advances, current practices
and future trends of simulation methods and tools. More specifically, CAx,
factory layout design, material and information flow design, manufacturing
networks design, manufacturing systems and networks planning and control,
augmented and virtual reality in product and process design, planning and
verification (ergonomics, robotics, etc.) are examined (Figure 24.1).
24.1.1 Review Methodology

This review is based on academic peer-reviewed publications that use
simulation not only in manufacturing applications but also simulation in
general, over a period of 54 years, from 1960 to 2014. The review focuses
mainly on simulation methods and tools as described in the abstract and was
carried out in three stages: (a) search in scientific databases (Scopus, Science
Direct and Google Scholar) with relevant keywords, (b) identification of
relevant papers by abstract reading and (c) full-text reading and grouping into
research topics. The relevant keywords utilized were: simulation and
manufacturing in combination with CAx, layout design, material flow design,
manufacturing networks and systems planning and control, augmented
reality, virtual reality, ergonomics, digital mock up, lifecycle assessment,
product data management, enterprise resource planning, knowledge
management, manufacturing execution systems, process simulation,
supervisory control and data acquisition and supply chain. As a result, the
literature was organized based on keywords enabling the distinction between
the relevant and irrelevant topics of academic papers (Figure 24.2).

Figure 24.1 Number of publications related to simulation technology

24.1.2 Definitions

Hereby, two of the most prominent definitions of simulation in the
manufacturing context are presented and are adopted for the scope of the
present research work. “Simulation modelling and analysis is the process of
creating and experimenting with a computerized mathematical model of a
physical system”. “Simulation is the imitation of the operation of a real-world
process or system over time. Simulation involves the generation of an
artificial history of the system, and the observation of that artificial history to
draw inferences concerning the operating characteristics of the real system
that is represented”.
24.1.3 The Historical Trends of the Evolution of Simulation

It is generally considered that the contemporary meaning of simulation
originated from the work of Comte de Buffon, who proposed a Monte Carlo-like
method in order to determine the outcome of an experiment consisting of
repeatedly tossing a needle onto a ruled sheet of paper. He aimed at
calculating the probability of the needle crossing one of the lines. So, it is
obvious that although the term “Monte Carlo method” was invented in 1947,
at the start of the computer era, stochastic sampling methods were used long
before the evolution of computers. About a century later, Gosset used a
primitive form of manual simulation to verify his assumption about the exact
form of the probability density function for Student's t-distribution [8]. Thirty
years later, Link constructs the first “blue box” flight trainer and a few years
later, the army adopts it in order to facilitate training. In the mid-1940s,
simulation makes a significant leap with the development of the Monte Carlo
method on the first electronic computers. Tocher and Owen develop the General
Simulation Program in 1960, which is the first general purpose simulator to
simulate an industrial plant consisting of a set of machines, each cycling
through states such as busy, idle, unavailable and failed.
Figure 24.2 The investigated domains of contemporary manufacturing

Tocher also introduces the three-phase method for timing executives, publishes
the first textbook on simulation, “The Art of Simulation” (1963), and develops
the wheel chart or activity-cycle diagram (ACD) (1964). During the period
1960-1961, Gordon introduces the General Purpose Simulation System
(GPSS). With use of light, sound motion and even smell to immerse the user
in a motorcycle ride, Heilig designed the Sensorama ride, which is considered
as a predecessor of Virtual Reality (VR). Simultaneously, Nygaard and Dahl
initiate work on SIMULA and they finally release it in 1963 and Kiviat
develops the General Activity Simulation Program (GASP). In 1963, the first
version of SIMSCRIPT is presented for non-experts and OPS-3 is developed
by MIT. Sutherland presents manipulation of objects on a computer screen
with a pointing device. Although, a significant evolution of simulation is
noticed, there are still problems concerning model construction and model
analysis which are mentioned and addressed by Conway et al. General
Precision Equipment Corporation and NASA use analogue and digital
computers to develop Gemini simulators. Lackner proposes the system theory
as a basis for simulation modelling. In 1968, Kiviat introduces the
entity/attribute/set concept in SIMSCRIPT II. At the same time, Sutherland
constructs a head-mounted computer graphics display that also tracks the
position of the user's head movements, and the Grope project explores real-
time force feedback. Two years later, power plant simulators are introduced.
In 1972, an explanatory theory of simulation based on systems-theoretic
concepts is presented by Zeigler. In 1973, Pritsker and Hurst introduce the
capability for combined simulation in GASP IV and Fishman composes the
state-of-the-art on random number generation, random variate generation and
output analysis with his two classical texts. Clementson extended ECSL
(Extended Control and Simulation Language) with the Computer Aided
programming System using ACD representation and Mathewson develops
several versions of DRAFT to produce different programming language
executable representations in 1975. In 1976, Delfosse introduces the
capability for combined simulation in SIMSCRIPT II.5 as C-SIMSCRIPT
and a year later, user interface is added to it. Moreover, Bryant initiates
parallel simulation. In 1978, computer imaging with the introduction of
digital image generation is a significant contribution to the advancement of
simulation. In the beginning of the 1980s, major breakthroughs take place,
military flight simulators, naval and submarine simulators are produced, and
NASA develops relatively low-cost VR equipment. Nance introduces an
object-oriented representational approach in order to join theoretical
modelling issues with program-generation techniques and with software
engineering concepts. Balci and Sargent contribute to formal verification and
validation. Law and Kelton contribute with their first edition which includes
advanced methodologies concerning random number generation, random
variate generation and output analysis. Furthermore, Schruben develops event
graphs in 1983. While Visual Interactive Simulation is initiated in 1976 by
Hurrion and becomes commercially available in 1979 through SEE-WHY, it
is properly described in methodological terms, contrasting the active and
passive forms in model development and experimentation, by Bell and
O’Keefe in 1994. In early 1990s, real-time simulations and interactive
graphics become possible due to the increased computer power and
commercial VR applications become feasible. In 1990, as well, Cota and
Sargent develop a graphical model representation for the process world view,
named Control Flow Graphs which are subsequently extended to Hierarchical
Control Flow Graphs in order to help the control of representational
complexity by Fritz and Sargent in 1995. In addition, the development of
high-resolution graphics comes to be driven by the gaming industry, which in
this way surpasses the military industry. In 1997, Knuth describes comprehensively the random
number generation techniques and tests for randomness. The historic
evolution of simulation is also depicted in (Figure 24.3).

Figure 24.3 Historical Evolution of Simulation.

24.1.4 Types of Simulation Models

Simulation models are categorised based on three basic dimensions: 1) timing
of change, 2) randomness and 3) data organisation. Based on whether the
simulation depends on the time factor or not, it can be classified into static
and dynamic. A static simulation is independent of time while dynamic
simulation evolves over time. Dynamic simulation can be further categorised
to continuous and discrete. In discrete simulation, changes occur at discrete
points in time while in continuous, the variable of time is continuous.
In addition, discrete simulation is divided into time-stepped and event-driven.
Time-stepped consists of regular time intervals and alterations take place after
the passing of a specific amount of time. On the other hand, in event-driven
simulation, updates are linked to scheduled events and time intervals are
irregular. As far as the dimension randomness is concerned simulation can be
deterministic or stochastic. Deterministic means that the repetition of the
same simulation will result to the same output, whereas, stochastic simulation
means that the repetition of the same simulation will not always produce the
same output. Last but not least, simulation is classified into grid-based and
mesh-free according to data organisation. Grid-based means that data are
associated with discrete cells at specific locations in a grid and each cell is
updated according to its previous state and those of its neighbours.
Mesh-free, on the other hand, relates to data carried by individual particles,
and updates consider each pair of particles.
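The contrast between time-stepped and event-driven execution can be sketched in a few lines of Python; the machine, its timings and the events below are invented purely for illustration.

import heapq

# Time-stepped: the clock advances in fixed increments and the state is
# re-evaluated at every step, even when nothing has changed.
state, clock, dt = "idle", 0.0, 0.5
for _ in range(10):
    clock += dt
    if state == "idle" and clock >= 2.0:
        state = "busy"

# Event-driven: the clock jumps straight to the next scheduled event, so the
# intervals between updates are irregular.
events = [(2.0, "machine starts job"), (7.5, "machine finishes job"), (9.0, "next job arrives")]
heapq.heapify(events)
while events:
    now, name = heapq.heappop(events)
    print(f"t={now}: {name}")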

24.2 Simulation in product and production lifecycles

The following two sections present a mapping between the simulation
methods and tools to product and production lifecycles (Figure 24.4).

24.3 Product and Production Lifecycle

Initially, the basic concept or idea for a product is conceived considering the
initial request from a customer and, subsequently, it is transformed into a
working prototype. Satisfaction of the initial customer request is followed by
marketing appraisal of the product in relation to its potential demand from
additional customers. If no further demand is foreseeable, then the product is
retained in the design-and-build facility in order to relate the customers'
voice to product design requirements, and translate these into characteristics
of parts, manufacturing operations, and production requirements.
Figure 24.4 Mapping of Key-Enabling Technologies on Product and Production lifecycle [lifecycle
phases adapted from EFFRA FoF 2020 Consultation Document]

The aim of the product development process is to ensure that the product and
its components meet the required specifications. Thereinafter, innovative
engineering allows collaborating teams to streamline the engineering of the
product and of the production process engineering. The digital factory
concept enables the integration of CAD designs and CAE information and the
synchronisation of the engineering processes that require the participation of
the entire value chain accessing all product information needed. Moreover, it
provides the opportunity to all product-related teams to work together
effectively without regard to physical location.
Following the production of the product, maintenance ensures that a system
continually performs its intended functions at its designed level of reliability
and safety. At the end of product lifecycle, the product is recycled which
means that the product retains its geometrical form and it is reused either for
the same purpose as during its original life- cycle or for secondary purposes.
Instead of recycling, remanufacturing is used. Used or broken-down products
or components are restored to useful life.
As far as production lifecycle is concerned, at first, the design stage starts
with a stakeholder analysis to identify the constraints and degrees of freedom
for the design. It should be mentioned that stakeholders have different
interests in and requirements to the system.
After the specification of the requirements the design and redesign of
manufacturing systems follows which is summarized as manufacturing
system engineering (MSE). It is a complex, multi-disciplinary process that
involves not only people located at different production sites, but also a
variety of tools that support special subtasks of the process.
During the process development phase, the rough solutions from the design
phase are refined to a level that allows investigation with analysis tools. If
system dimensions are fixed, first contact with suppliers is established to
integrate them into the further system definition. In the final system definition
phase, one solution is refined to a level of detail that allows system
implementation to start. This phase is characterised by system integration, where
previously identified subsystems are composed to a complete system. Fine
tuning of subsystem cooperation and fixing of last details lead to the final
system definition and emission of orders to suppliers.
As soon as the integration is completed the production ramp-up begins. It is
defined as the period between the end of product development and full
capacity production. Two conflicting factors are characteristic of this period:
low production capacity, and high demand.
Finally, to face environmental problems, manufacturers undertake efforts on
recycling namely, recovering materials or components of used equipment in
order to make them available for new products or processes.
24.3.1 Product and Production Lifecycle Tools

Augmented Reality (AR)
Augmented Reality (AR) is defined as a real-time direct or indirect view of a
physical real-world environment that has been enhanced/augmented by
adding virtual computer- generated information to it. AR systems aim at
enhancing the way the user perceives and interacts with the real world. In the
modern, highly competitive manufacturing environment, the application of
augmented reality constitutes an innovative and effective solution to simulate,
assist and improve the manufacturing processes. The challenge is to design
and implement integrated AR-assisted manufacturing systems that could
enhance the manufacturing processes, as well as product and process
development, leading to shorter lead-times, reduced costs and improved
quality. In the field of automobile development, rudimentary car prototypes
are completed with virtual components through the use of AR, supporting design decisions.
In addition, the exploitation of AR environment in the rapid creation and
modification of freeform surfaces is introduced and methods for enabling
increased flexibility during exploratory, conceptual industrial product design
through three-dimensional (3D) sketch-based user input are explored. The
facilitation of robot programming and trajectory planning is succeeded with
the use of an AR-based system. The system takes into consideration the
dynamic constraints of the robots but it is still limited as far as the achieved
accuracy is concerned. In order to enhance the perception of the user towards
a product design, a fiducial marker is sent to the customer which can be used
for AR visualisation via handheld devices. Each AR marker is mapped to
specific 3D models and functionalities. AR was applied through integrating a
hybrid rendering of volume data, vision-based calibration, accurate real-time
tracking methods, tangible interfaces, multimedia annotations, and
distributed computation and communications at DaimlerChrysler.

Computer Aided Design (CAD)
Computer-Aided Design (CAD) is the technology related to the use of
computer systems to assist in the creation, modification, analysis and
optimisation of a design. Nowadays, the strong competition in market
increases significantly the level of requirement in terms of functionality and
quality of products. At the same time, the complexity of the design process is
increasing, whereas product development time is decreasing. Such constraints
on design activities require efficient CAD systems and adapted CAD
methodologies. A Life Cycle-CAD (LC-CAD) provides an integrated design
of a product and its life-cycle, manages the consistency between them and
evaluates their performance concerning the environment and the economy
with the use of simulation. In addition, an effort is made in order to partially
retrieve 3D CAD models for design reuse using a semantic-based approach.
A framework of collaborative intelligent CAD, which consists of the
collaborative design protocol and the design history structure is proposed. It
reasons redundant design, reduces design conflicts and is the basis for the
implementation of more intelligent collaborative CAD systems. Attention is
paid to the fundamental activity of engineering using CAD systems with
emphasis on CAD graphical user interfaces (GUIs) and how they can be
potentially enhanced using game mechanics to provide more engaging and
intuitive environments. Innovative CAD methods for complex parts
modelling in parametric CAD system are applied in the industry and
presented by [18].

Computer Aided Process Planning (CAPP)
Process planning deals with the selection of necessary manufacturing
processes and determination of their sequences to ‘transform’ the ideas of
designers into a physical component economically and competitively.
Currently, manufacturing is moving towards a more advanced, intelligent,
flexible and environmentally friendly policy so it demands advanced and
intelligent CAPP systems. A holistic component manufacturing process
planning model based on an integrated approach which combines
technological and business considerations is developed in order to ameliorate
decision support and knowledge management capabilities and advance the
existing CAPP. CAD/CAPP/CAM systems are integrated in order to evaluate
alternative process plans in different levels, through which, the exploitation
of the available resources and their optimal setup can contribute in the overall
sustainability of the production facilities. The authors in [43] introduce a
highly specialised CAPP/CAM integrated system, called Generative Pattern
Machining (GPM), for automatic tool paths generation to cut die pattern from
the CAD model of the stamping die. GPM is being used by DaimlerChrysler
pattern shop very successfully.

Digital Mock Up (DMU)
A digital mock-up (DMU) consists of 3D models which integrate the
mechanical structure of a system. A virtual prototype is created to identify
problems in the initial design and it often leads to design changes and multiple
iterations of the prototype as a means to optimize the design without the need
for a physical model. This eliminates time and money and perhaps more
importantly, the initial design and virtual prototype can be created with
simultaneous input from every engineer involved in the project. Digital mock-
up is a rapidly evolving technology and a lot of advances are presented in this
field. A heterogeneous CAD assembly method constructs a Digital Mock-Up
system which facilitates the avoidance of mismatches and interferences
during precision design processes. Furthermore, a digital mock-up
visualisation system can import giga-scale CAD models into the memory of
a computer simultaneously based on a compression representation with
triangular patches. Also, DMU is applied for the verification and validation
of the ITER remote handling system design utilizing a system engineering
framework. The DMUs represent virtual remote handling tasks and provide
accuracy and facilitation to the integration into the control system. Moreover,
AIRBUS Military researches the implementation of industrial Digital Mock-
Up (iDMU) concept to support the industrialisation process of a medium size
aero-structure [48]. Finally, coloring the DMU enables highlighting the
required attributes from the customers. This method was tested and
implemented in collaboration with Airbus.

Life Cycle Assessment (LCA)
Life cycle assessment is a data intensive analysis conceived to track, store and
assess data over the entire product life cycle. For a product to perform its
function it must be developed, manufactured, distributed to its users and
maintained during use. All these phases include supportive activities that
consume resources and cause environmental impacts. In order to get an
impression of this total impact, the analysis must focus on the product system
or the life cycle of the product. As a result of the environmental awareness,
developments in the field are significantly increasing. The current and
foreseen roles of sustainable bioprocess system engineering and life cycle
inventory and assessment in the design, development and improvement are
explored. The dependence between the manufactured precision of a product
and its environmental impact during its entire lifecycle is estimated with the
use of an extended LCA methodology to evaluate the impact of
manufacturing process precision on the functional performance of a product
during its use phase. Also, a computational approach for the simultaneous
minimisation of the total cost and environmental impact of thermodynamic
cycles is attempted with the exploitation of a combination of process
simulation, multi-objective optimisation and LCA within a unified
framework. A methodology for the development of a reliable gate-to-gate
LCA is integrated in a simulation tool for discrete event modelling of
manufacturing processes and enables the characterisation of single machine
behaviour and the evaluation of environmental implications of industrial
operation management before the real configuration of the manufacturing
line. A methodology, implemented through a software tool, is used for the
investigation of the environmental impact caused by centralised and
decentralised manufacturing networks, under heavy product customisation.
Lastly, a wide variety of industrial applications in the field is presented by
[21].

Product Data Management (PDM)
Product data management integrates and manages all the information that
defines a product, from design to manufacture and to end-user support.
Current manufacturing industry is facing an increasing challenge to satisfy
customers and compete in market. To stay competitive, manufacturing
companies are adopting IT solutions to facilitate collaborations and improve
their product development/production. Among these IT solutions, product
data management (PDM) systems play an essential role by managing product
data electronically. A new concurrency control model for PDM succeeds in
improving the concurrency ability of PDM systems by adapting the
accessibility of entities according to the action that the users will perform and
the product architecture of the entity. A new concept of product as a pivotal
element is proposed. The product incorporates all the information about itself
which refers to a so-called ONTO-PDM “Product Ontology”; as a result,
the information exchange between the related systems is achieved with
minimised semantic uncertainty. Concerning industrial applications,
Unified Modelling Language-based approaches are used for modelling and
implementing PDM systems especially concerning product structure and
workflow.
Virtual Reality
Virtual Reality (VR) is defined as the use of real-time digital computers and
other special hardware and software to generate the simulation of an alternate
world or environment, believable as real or true by the users. VR is a rapidly
developing computer interface that strives to immerse the user completely
within an experimental simulation, thereby enhancing the overall impact and
providing a much more intuitive link between the computer and the human
participants. VR has found application from the design to the process
simulation phase. Currently, new semantic- based techniques are introduced
in order to facilitate the design and review of prototypes by providing
usability and flexibility to the engineer / designer. In the field of collaborative
management and verification of design knowledge, a new platform, called
DiCoDEv (Distributed Collaborative Design Evaluation), eases the
cooperation among distributed design experts with the use of a shared virtual
environment. The VR environment provides the multiple users with the
capability of visualizing, immersing and interacting with the virtual
prototype; managing efficiently the knowledge during the product design
phase; collaborating in real-time on the same virtual object and reviewing it
and making an ergonomic evaluation with the use of digital human
simulation. Furthermore, an intelligent virtual assembly system using an
optimal assembly algorithm provides haptic interactions during the process of
virtual assembly. Another haptic VR platform, named HAMMS, is
introduced in order to facilitate the performance, the planning and the
evaluation of virtual assembly of components. Finally, VR tools are applied
to fusion in ITER project facilitating maintenance and integration aspects
during the early phase design.

Computer Aided Manufacturing (CAM)
Computer Aided Manufacturing (CAM) can be defined as the use of
computer systems to plan, manage and control the operations of a
manufacturing plant through either direct or indirect computer interface with
the production resources of the plant. In other words, the use of computer
systems in non-design activities of the manufacturing process is called CAM.
The application of CAM in the production offers advantages to a company to
develop capabilities by combining traditional economies of scale with
economies of scope resulting in the desired flexibility and efficiency.
An effort is made to optimize the machining of complex-shaped parts
with flat-end tools through a novel five-axis tool path generation
algorithm. A methodology facilitates the determination of global optimum
tool paths for free form surfaces with the incorporation of an algorithm which
aims at finding the optimal tool path and succeeding minimisation of the
average cutting forces without exceeding a pre-set maximum force
magnitude. An integrated system of part modelling, nesting, process
planning, NC programming and simulation and reporting for sheet metal
combination processing functions has been applied in several sheet-metal
manufacturing plants.

Enterprise Resource Planning (ERP)
An Enterprise Resource Planning (ERP) system is a suite of integrated
software applications used to manage transactions through company-wide
business processes, by using a common database, standard procedures and
data sharing between and within functional areas. ERP systems are becoming
more and more prevalent throughout the international business world.
Nowadays, in most production distribution companies, ERP systems are used
to support their production and distribution activities and they are designed
to integrate and partially automate financial, resource management,
commercial, after-sale, manufacturing and other business functions into one
system around a database. A literature-based and theory-driven model was
developed in order to test the relationship between ERP system
implementation status and operational performance. Moreover, a general
risks taxonomy for ERP maintenance is investigated with the use of analytic
hierarchy process. An objectives-oriented approach with one evaluation
model and three optimisation models addresses key management issues in the
implementation of critical success strategies (CSSs) to ensure the success of
an ERP project. In order to deal with the problem of independence in risk
assessment, an approach using Coloured Petri Nets is developed and applied
to model risk factors in ERP systems.

Ergonomics Simulation
Ergonomics are defined as the theoretical and fundamental understanding of
human behaviour and performance in purposeful interacting socio-technical
systems, and the application of that understanding to design of interactions in
the context of real settings. In the past, workplace ergonomic considerations
have often been reactive, time-consuming, incomplete, sporadic, and
difficult. Ergonomic experts who were consulted after problems occurred in
the workplace examined data from injuries that had been observed and
reported. There are now emerging technologies supporting simulation-based
engineering to address this in a proactive manner. These allow the workplaces
and the tasks to be simulated even before the facilities are physically in place.
The comparison between ergonomic measurements in virtual and real
environments during some specific task is analysed in [79]. Also, with the use of the USA's VR Lab, named “HEMAP”, training simulations were explored, ergonomic risks were estimated and spacecraft flight systems were evaluated
as part of the design process. ErgoToolkit implements ergonomic analysis
methods, already available in literature or company practice, into digital tools for ergonomics, namely, Posture Definition and Recognition and Stress Screening, integrated into state-of-the-art virtual manufacturing software. An
approach to human motion analysis and modelling which respects the
anthropometric parameters is tested and the real motion data are collected and
processed with the use of statistical methods and the models that are produced
can predict human motion and direct digital humans in the virtual
environment.

Knowledge Management
Knowledge Management (KM) is defined as the process of continuously
creating new knowledge, disseminating it widely through the organisation,
and embodying it quickly in new products/services, technologies and
systems. KM is about facilitating an environment where work critical
information can be created, structured, shared, distributed and used. To be
effective such environments must provide users with relevant knowledge, that
is, knowledge that enables users to better perform their tasks, at the right time
and in the right form. KM has been a predominant trend in business in the
recent years. Firstly, an explorative study on the Personal Knowledge
Management is conducted and an active knowledge recommender system
model, which is built on distributed members' personal knowledge
repositories in the collaborative team environments, is proposed. Moreover,
the implementation process of Lean Production and forms of classifying knowledge are examined, while a pattern-based approach to knowledge flow design starts from basic concepts, uses a knowledge spiral to model knowledge flow patterns and operations, and lays down principles for knowledge flow network composition and evolution. As far as the industrial applications of knowledge management are concerned, a framework for marketing decision making with
the use of agent technology, fuzzy AHP (Analytical Hierarchy Process) and
fuzzy logic is implemented in a car factory.

Layout Planning Simulation


Facility layout planning (FLP) refers to the design of the allocation plans of
the machines/equipment in a manufacturing shop-floor. Factory layout design
is a multidisciplinary, knowledge-intensive task that is of vital importance to
the survival of manufacturers in modern globally competitive environment.
The need to design and construct a new factory layout or reconfigure the
current one has increased largely because of the fast changes in customer
demand both from product quantity and product variety aspects. This requires
companies to be more agile to plan, design and reconfigure the factory layout
to be able to introduce new products to market and keep their competitive
strength. Using predefined objects, a layout model can be implemented in 3D, avoiding the drawing stage of the equipment, and the virtual reality factory models created provide the user with the ability to move through factory mock-ups, walk through, inspect, and animate motion in a rendered 3D factory model. Moreover, a method implemented in a web-based tool is able
to generate job rotation schedules for human based assembly systems. A
method of deriving assembly line design alternatives and evaluating them
against multiple user-defined criteria is applied in an automotive case. An
AR-based application is developed which aims at meeting the needs of AR-supported factory and manufacturing planning in terms of usability, analysis functionalities and accuracy. It provides the user with the
necessary tools for production planning and measuring tasks.

Manufacturing Execution Systems (MES)


A manufacturing execution system (MES) is a system that helps
manufacturers attain constant product quality, comply with regulatory
requirements, reduce time to market, and lower production costs. As
manufacturers strive to become more competitive and provide world-class
service to their customers, emphasis has been placed on total quality
management (TQM) programs. The need for a quality manufacturing system
solution is a driving factor creating the demand for MES. The functions of
MES are consistent with the goals of TQM applied to industrial
manufacturing companies. A holonic MES that utilizes a given schedule as a
guideline for selecting among task execution alternatives, which are
independent from the original schedule, is proposed. An innovative design
and verification methodology for an autonomic MES is presented. Its basis
consists of well-defined interactions between autonomic agents which perform the monitor-analyse-plan-execute loop while simultaneously managing orders and resources. This system was extended to allow selfish behaviour and adaptive decision-making in distributed execution control and
emergent scheduling. A Radio Frequency Identification (RFID)-enabled real-time manufacturing execution system is proposed and tested in a collaborating company which manufactures large-scale and heavy-duty machinery. On the shop-floor, RFID devices are used to track and trace manufacturing objects, acquire real-time production data, and identify and control disturbances.

Material Flow Simulation


Materials flow within manufacturing is the movement of materials through a
defined process or a value stream within a factory or an industrial unit for the
purpose of producing an end product. In today’s changing manufacturing
world with new paradigms such as mass customization and global manufacturing operations and competition, companies need greater capabilities to respond quicker to market dynamics and varying demands. The
adoption of suitable production and materials flow control (PMFC)
mechanisms, combined with the implementation of emergent technologies,
can be of great value for improving performance and quality of manufacturing
and of service to customers. An automated motion planning framework
integrated into the scene modelling workflow of a material flow simulation framework automatically generates motion paths for moving objects. It
depends on an actual model layout and it avoids collision with other objects.
An integrated planning approach for the evaluation and the improvement of
the changeability of interlinked production processes before the event
actually takes place is proposed. The key element of the approach is the use
of material flow simulation of variant scenarios. An approach is designed to
support the management of a ship repair yard by integrating in an open and
flexible system a number of critical business functions with production
planning, scheduling and control. This approach is implemented in a software
system fully developed in Java and designed by using UML. An assignment
logic of the workload to the resources of a dairy factory has been implemented
in a software system. The system simulates the operation of the factory and
creates both a schedule for its resources and a set of performance measures,
which enable the user to evaluate the proposed schedule. A P3R-driven
modelling and simulation system in Product Lifecycle Management (PLM) is
introduced and implemented in automotive press shops. It consists of a P3R
data structure for simulation-model generation, an application based on the
P3R object-oriented model and a concurrent material flow analysis system.

Process Simulation
A manufacturing process is defined as the use of one or more physical
mechanisms to transform a material's shape and/or form and/or properties. Newly emerging composite manufacturing processes, where only limited industrial experience exists, demonstrate a definite need for
process simulations to reduce the time and cost associated with the product
and process developments. The FEDES software (Finite Element Data
Exchange System) included case studies for simulation of manufacturing
process chains including aero-engine components. A simulation-based
approach for modelling and dimensioning process parameters in a process
chain as well as the corresponding technological interfaces is introduced.
Procedure models for an efficient and target figure dependent analysis
including identification, categorisation, prioritisation and interdependencies
of a big variety of process parameter constellations by means of the developed
simulation models are presented, and a methodology of sequentially simulating
each step in the manufacturing process of a sheet metal assembly is proposed.

Supervisory Control and Data Acquisition (SCADA)


SCADA is the technology that enables a user to collect data from one or more
distant facilities and to send limited control instructions to those facilities.
Certain services in our society are essential to our way of life, including clean
water, electricity, transportation, and others. These services are often
manufactured or delivered using Supervisory Control and Data Acquisition
(SCADA) systems. An integrated framework for control system simulation
and near-real-time regulatory compliance monitoring with respect to
cybersecurity named SCADASim is presented. The vulnerabilities caused by
interdependencies between SCADA and System Under Control are examined
and analysed with the use of a five-step methodical framework. A significant
step of this framework is a hybrid modelling and simulation approach which
is used to realize identification and assessment of hidden vulnerabilities. The
interdependencies between industrial control systems, underlying critical
infrastructures and SCADA are investigated in order to address the
vulnerabilities related to the coupling of these systems. The modelling
alternatives for system-of-systems, integrated versus coupled models, are also
under discussion. SCADA systems are applied worldwide in critical
infrastructures, ranging from power generation, over public transport to
industrial manufacturing systems.

Supply Chain Simulation


A supply chain is the value-adding chain of processes from the initial raw
materials to the ultimate consumption of the finished product spanning across
multiple supplier-customer links. Modern manufacturing enterprises must
collaborate with their business partners through their business process
operations such as design, manufacture, distribution, and after-sales service.
Robust and flexible system mechanisms are required to realize such inter-
enterprise collaboration environments. A generic hybrid-modelling
framework for supply chain simulation is presented, and a method to model,
simulate and optimize supply chain operations by taking into consideration
their end-of-life operations is used to evaluate the capability of OEMs to
achieve quantitative performance targets defined by environmental impacts
and costs of lifecycle. A method of examining the multi-objective re-configurability of an Original Equipment Manufacturer supply chain is presented in order to adapt flexibly to dynamically changing
environmental restrictions and market situations. A discrete-event simulation
model of a capacitated supply chain is developed and a procedure to
dynamically adjust the replenishment parameters based on re-optimisation
during different parts of the seasonal demand cycle is explained. A model is
implemented in the form of Internet enabled software framework, offering a
set of characteristics, including virtual organisation, scheduling and
monitoring, in order to support cooperation and flexible planning and monitoring across the extended manufacturing enterprise. Finally, the application of the mesoscopic simulation approach to a real-world supply chain example
is illustrated utilizing the software MesoSim.
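As a hedged illustration of the kind of discrete-event supply chain study mentioned above, the minimal sketch below simulates a single-echelon, capacity-limited node under an (s, Q) replenishment policy; the demand level, reorder point and order quantity are assumptions chosen only to show how adjusting the replenishment parameters changes the lost-sales figure.

import random

def simulate_reorder_point(days, s, Q, lead_time, capacity, demand_mean, seed=1):
    # Minimal single-echelon (s, Q) replenishment simulation.
    # s: reorder point, Q: order quantity, capacity: max units shipped per day.
    random.seed(seed)
    on_hand, pipeline, lost = 50, [], 0        # pipeline holds (arrival_day, qty) pairs
    for day in range(days):
        arrived = sum(q for (t, q) in pipeline if t == day)   # receive due orders
        pipeline = [(t, q) for (t, q) in pipeline if t != day]
        on_hand += arrived
        demand = int(random.expovariate(1.0 / demand_mean))   # random daily demand
        shipped = min(demand, on_hand, capacity)              # capacity-limited shipments
        lost += demand - shipped
        on_hand -= shipped
        position = on_hand + sum(q for (_, q) in pipeline)    # inventory position
        if position <= s:
            pipeline.append((day + lead_time, Q))             # place a replenishment order
    return lost

# Two different (s, Q) settings give different lost-sales results over a year:
print(simulate_reorder_point(365, s=30, Q=60, lead_time=5, capacity=40, demand_mean=12))
print(simulate_reorder_point(365, s=60, Q=90, lead_time=5, capacity=40, demand_mean=12))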

Manufacturing Systems and Networks Planning and Control


A modern manufacturing network is composed of cooperating OEM plants,
suppliers and dealers that produce and deliver final products to the market.
Original Equipment Manufacturers (OEMs) operate in highly competitive,
volatile markets, with fluctuating demand, increasing labour costs in
developing countries, and new environmental regulation. Driven by the ever
increasing need to reduce cost and delivery times, OEMs are called to
efficiently overcome these issues by designing and operating sustainable and
efficient manufacturing networks. The complexity and the stability of
manufacturing systems are investigated by introducing concepts based on
discrete event simulation and nonlinear dynamics theory. Furthermore, the
evaluation of the performance of automotive manufacturing networks under
highly diversified product demand is achieved through discrete-event
simulation models with the use of multiple conflicting user-defined criteria
such as lead time, final product cost, flexibility, annual production volume
and environmental impact due to product transportation. Alternative network
designs are proposed and evaluated through a set of multiple conflicting
criteria including dynamic complexity, reliability, cost, time, quality and
environmental footprint. A method implemented in a software tool comprises a mechanism for the generation and evaluation of manufacturing network
alternative configurations. A continuous modelling approach for supply chain
simulation was applied in the automotive industry and showed that initial
inventory levels and demand fluctuation can create delivery shortages and
increased lead times.
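A simple way to rank alternative network configurations against multiple conflicting, user-defined criteria such as those named above (lead time, cost, flexibility, environmental impact) is a normalised weighted sum; the configurations, scores and weights in the sketch below are hypothetical and serve only to illustrate the evaluation step.

# Hypothetical network alternatives scored on criteria where lower is better
# (flexibility is entered as "inflexibility" so the same direction applies).
alternatives = {
    "centralised":   {"lead_time": 12, "cost": 0.8, "inflexibility": 0.7, "co2": 0.9},
    "regional hubs": {"lead_time": 7,  "cost": 1.0, "inflexibility": 0.4, "co2": 0.6},
    "distributed":   {"lead_time": 4,  "cost": 1.3, "inflexibility": 0.2, "co2": 0.5},
}
weights = {"lead_time": 0.35, "cost": 0.30, "inflexibility": 0.20, "co2": 0.15}

def normalise(values):
    lo, hi = min(values), max(values)
    return [0.0 if hi == lo else (v - lo) / (hi - lo) for v in values]

criteria = list(weights)
norm = {c: dict(zip(alternatives, normalise([a[c] for a in alternatives.values()])))
        for c in criteria}
scores = {name: sum(weights[c] * norm[c][name] for c in criteria) for name in alternatives}
print(scores, "-> best:", min(scores, key=scores.get))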

24.4 Future trends

Digital manufacturing technologies have been considered an essential part of


the continuous effort towards the reduction in the development time and cost
of a product as well as towards the expansion in customisation options. The
simulation-based technologies constitute a focal point of digital
manufacturing solutions, since they allow for the experimentation and
validation of different product, process and manufacturing system
configurations. The simulation tools reviewed in this research are constantly
evolving and they certainly lead towards more efficient manufacturing
systems. But, in the current highly competitive business environment, which
is constantly facing new challenges, there is always a need for even more efficient and adaptive technologies. Hereafter, the major gaps of each simulation-related key enabling technology are discussed and
future trends are outlined.

Augmented Reality
AR applications in manufacturing and design require a high level of accuracy in tracking and superimposition of augmented information. Very
accurate position and orientation tracking will be needed in operations such
as CNC simulation and robot path planning. Computer-vision-based tracking
will not be able to handle high frequency motion as well as rapid camera
movements. Hybrid systems using laser, RFID and other types of sensing
devices will be required. Another basic issue in AR is the placing of virtual
objects with the correct pose in an augmented space. This is also referred to
as Registration. As different tracking methodologies possess their own
inherent deficiencies and error sources, it is necessary to study the best
tracking method for a particular application which could be subject to poor
lighting condition, moving objects, etc. AR displays require an extremely low
latency to maintain the virtual objects in a stable position. An important
source of alignment errors comes from the difference in time between the
moment an observer moves and the time the corresponding image is
displayed. This time difference is called the end-to-end latency, which is
important as head rotations can be very fast and this would cause significant
changes to the scene being observed. Further research should focus on the
setup of an AR environment which consists of four essential elements: target
places, AR content, tracking module and display system.

Computer Aided Design


The deficiencies and limitations of current CAD tools are the complexity
of menu items or commands, the limitation of active and interactive assistance
while designing in CAD and the integration of informal conceptual design
tools in CAD. Moreover, current tools suffer from inadequate human–computer interface design, a focus on functionality rather than on usability, and fixation on design routines.

Computer Aided Process Planning


To fulfil the needs of modern manufacturing processes, computer-aided
process planning should be responsive and adaptive to the alterations in the
production capacity and functionality. Nowadays, conventional CAPP
systems are incapable of adjusting to dynamic operations and a process plan,
created in advance, is often found improper or unusable for specific resources. This phenomenon results in spending a significant amount of time and effort unnecessarily.

Digital Mock Up
According to the reviewed literature, DMU scope has to be extended to the
services function. Services engineers are also stakeholders of the product
development and should be able to exploit the DMU to design services.

Virtual Reality
VR tools should be integrated not only in the central planning
phases, but in every phase of the factory planning process. VR should not
only be used for visualisation means, but also for collaborative and
communicative means. VR is now used in many industrial applications and
cuts costs during the implementation of a PLM. The main challenges are a
result of the following drawbacks. Implementation of a CAE simulation is a
time-consuming process and VR systems used in industry focus on one or a
few particular steps of a development cycle (e.g. design review), and may be
used in the framework of the corresponding product development project
review. There is no VR tool in the current state of the art which enables us to
deal globally with the different steps of the PLM and the corresponding
projects reviews.

Lifecycle Assessment
A few challenges concerning LCA that should be highlighted are LCA
modularisation and standardisation of environmental profiles for machine
tools. Also, “hidden flows” modelling should be addressed, and data
accuracy should be ameliorated. Last but not least, value stream mapping is
of high importance.

Product Data Management


As far as the future trends in the field of PDM are concerned, the efficiency
of these systems can be further enhanced by studying factors that affect the
accessibility of product data, for example, the nature of data in different
timeframe of a development, the relationship between the maturity of the data
and the probability of them being modified.

Computer Aided Manufacturing


As a result of the dynamically changing and evolving manufacturing
environment, the need is presented for effective coordination, collaboration
and communication amongst all the aspects of production, from humans to
machines. The future CAM systems need to focus on collaborative techniques,
effective communication and efficient data exchange.

Enterprise Resource Planning


Future trends of ERP systems, on technological level, include software as a
service, mobile technology and tightly integrated business intelligence. The
tendency of being able to obtain ERP functionality as a service has to be
mentioned. Especially in the mid-market, the ERP suites will no longer be
hosted internally but instead will be obtained as a service offered by the ERP
provider. New ways of providing software are to be investigated, mainly
linked with the development of cloud computing. In addition, access to
information with the use of mobile devices has become a reality even for end
consumers over the last years. The ERP system providers should face these
challenges by offering mobile-capable ERP solutions. Another important
issue is the reporting and data analysis which grows with the information
needs of users. Business Intelligence (BI) is becoming not only easier to use over time but also more tightly integrated into ERP suites.

Ergonomics Simulation
Many advances have been made during the last decades, and the amount of
possible applications is growing in the field of ergonomics. Yet much
research has still to be conducted for many open issues. Systems’ complexity
and number of features are only increasing and ways of effectively
implementing them should be explored. The interaction between models is
required, as well, for an integrated approach of common daily design
problems which is directly related to an integration of models into an
encompassing DHM. Moreover, techniques for measuring human quantities, which have not become easier despite the evolution of the technical means, should be further developed. Another issue is the harmonisation of the data
representation in different disciplines. Without such agreement it will remain
extremely difficult to develop integrated models.

Knowledge Management
Agent-oriented approaches to knowledge management and collaborative
systems need further development. Methodologies are needed that support the analysis of knowledge management needs of organisations and its specification using software agents and agent societies. Also, reusable agent-
oriented knowledge management frameworks, including the description of
agent roles, interaction forms and knowledge description should be
developed. The existence of agent-based tools for organisational modelling
and simulation that help determine the knowledge processes of the
organisation is crucial. Finally, research should focus on the role of learning
in agent-based KM systems, namely, how to use agent learning to support and
extend knowledge sharing.

Layout Design Simulation


Today, in the field of layout design simulation, some commercial software
can decouple data from the 3D model and export them in XML or HTML format. While this is an export of properties, it cannot fully solve
the interoperability and extensibility issues since the interoperability depends
on how the different software and users define contents of data models.

Manufacturing Execution Systems


In the turbulent manufacturing environment, a key issue of modern
Manufacturing Execution Systems is that they cannot plan ahead of time. This
phenomenon is named decision myopia and undoubtedly causes significant malfunctions in manufacturing.

Process Simulation
The planning, the data transfer and the optimisation of manufacturing process
chains must be integrated into a common model. Moreover, the macro-scale
manufacturing process chains are optimised with simulation tools using
numerical techniques such as the FEM while the micro-scale manufacturing
process chains are mainly optimised by experimental approaches. This shows
that the macro-scale manufacturing process chains are more mature than the
micro-scale manufacturing process chains in terms of modelling and
simulation which indicates that modelling and simulation of micro-scale
manufacturing process chains is still a challenge. Also, the macro-scale
manufacturing process chains are not fully understood and there are still
challenges for improving the manufacturing process chains related to
different industries and development of new manufacturing process chains
for new emerging applications.

Material Flow Simulation


While the steady decline in computational cost renders the use of
simulation very cost-efficient in terms of hardware requirements, commercial
simulation software has not kept up with hardware improvements.
Concerning material flow simulation, it can be very time-consuming to build
and verify large models with standard commercial-off-the-shelf (COTS)
software. Efficient simulation-model generation will allow the user to
simplify and accelerate the process of producing correct and credible
simulation models.

Supervisory Control and Data Acquisition


Whilst SCADA systems are generally designed to be dependable and fail-
safe, the number of security breaches over the last decade shows that their
original design and subsequent evolution failed to adequately consider the
risks of a deliberate attack. Although best practices and emerging standards
are now addressing issues which could have avoided security breaches, the
key problems seem to be the increased connectivity and the loss of separation
between SCADA and other parts of IT infrastructures of organisations.

Supply Chain Simulation


Identifying the benefits of collaboration is still a big challenge for many
supply chains. Confusion around the optimum number of partners, investment
in collaboration and duration of partnership are some of the barriers of healthy
collaborative arrangements that should be surpassed.

Manufacturing Systems and Networks Planning and Control


Existing platforms do not tackle the numerous issues of manufacturing
network management in a holistic integrated manner. The results of
individual modules often contradict each other because they refer to manufacturing information and context that are not directly related (e.g. long-term
strategic scheduling vs. short term operational scheduling). The
harmonisation, both on the input/output level and on the actual contents of information, is often a neglected issue that hinders the applicability of tools to real-life manufacturing systems.

General Challenges
Apart from the gaps for each technological method / tool, general future
challenges and trends are discussed in the following section.
Firstly, developers of simulation tools are gradually introducing cloud-based
technologies in order to facilitate the mobility of the applications and the
interoperability between different partners. Currently, only few commercial
tools have integrated this function. Moreover, the even more complex
processes and products demand high performance simulations which require
powerful and expensive CPUs. In addition, efforts are being made towards
the creation of applications that run on multiple and mobile devices. The
extended use of open and cloud-based tools can address these problems and
result in high performance computing at a minimum cost.
Nowadays, simulation software tools usually offer only dedicated application
object libraries for developing fast and efficient models of common scenarios
and they can be characterised as limited concerning the broad field of
manufacturing. Another issue is that while there is a great variety of functions
and resources, the vast majority of tools are focused on only a small
percentage of them. There is also a lack of proper data exchange among
different domains and few or no common standards or integrated frameworks,
which cause difficulties in the interoperability and collaboration between
systems and partners. The use of incremental model building, on the one hand
allows in-process debugging and on the other hand increases the complexity
of the model. All these issues could be addressed with the development and
utilisation of multi-disciplinary and multi-domain integrated simulation tools.
As far as lifecycle simulation is concerned, the scarcity of adequate
modelling tools should be noted. Only few applications take into serious
consideration product life-cycle costs and environmental issues. In addition,
tools usually aim at the re-manufacturing of specific product types and they
are still insufficient for de-manufacturing of products. So, the researchers
should focus on the development of tools for the field of lifecycle
management.
Currently, object-oriented, hierarchical model of plants, encompassing
business, logistic and production processes exist but the direct integration of
modelling tools with CAD, DBMS (ORACLE, SQL Server, Access, etc.),
direct spreadsheet link in/out, XML save format, HTML reports is still
limited. As a result, simulation tools that will assure the multi-level
integration among them should be developed.
Gradually, enterprises are starting to adopt the concept and the models of the
virtual factory. But the technologies related to the virtual factory, especially those concerning data acquisition, control and monitoring, are still in their infancy: expensive, complicated and hard to apply. So, the research should move towards the direction of real-time factory control and monitoring, and applicable and affordable tools should be developed.
Effort is made in order to create smart, intelligent and self-learning tools. The current practice involves integrated neural networks and experiment handling, inbuilt algorithms for automated optimisation of system parameters and custom models. Moreover, there are applications based on empirical or past data and some knowledge-based advisory systems. Although satisfactory analytical simulation capabilities in continuous processing units can be noticed, research is required in order to develop more intelligent tools that will lead to autonomous and self-adapting systems.
The use of simulation for human-centered learning and training should be further developed and spread. Currently, it is used only to a limited extent, in the fields of aviation and automotive, because it is costly and time-consuming; but if affordable tools are developed, such training will become more effective in the long run.
Last but not least, the complexity of existing frameworks used in the design phases is high and requires high skill and long processing times, which, as a result, does not facilitate the use of crowdsourcing.
In conclusion, there is a significant evolution of simulation tools, but they are
still undoubtedly a fertile field of research.

Session 25
Building mathematical model of a plant
Content
Introduction ............................................................................................. 256
25.1 Types of Production Systems ...................................................... 256
25.1.1 Serial Production lines .......................................................... 256
25.1.2 Assembly systems ................................................................ 259
25.2 Structural Modeling ..................................................................... 260

Introduction

All methods of analysis, continuous improvement, and design described in


this textbook are model based, i.e., their application requires a mathematical
model of the production system under consideration. Therefore, the issue of
mathematical modeling is of central importance. The main difficulty here is
that no two production systems are identical. Even if they were designed
identically, numerous changes and adjustments, introduced in the course of
time by engineering and equipment maintenance personnel, force them to
evolve so that they become fundamentally different. Thus, there are,
practically speaking, infinitely many different production systems.
Nevertheless, it is possible to introduce a small set of standard models to
which every production system may be reduced, perhaps at the expense of
sacrificing some fidelity of the description. The purpose of this chapter is to
discuss these standard models and indicate how a given production system
can be reduced to one of them. The issue of parameter identification is also
addressed.

25.1 Types of Production Systems

25.1.1 Serial Production lines

Serial production line: a group of producing units, arranged in consecutive


order, and material handling devices that transport parts (or jobs) from one
producing unit to the next.
Figure 25.1 shows the block diagram of a serial production line where circles represent producing units and rectangles are material handling devices.

Figure 25.1 Serial production line

The producing units may be either individual machines or work cells, carrying
out machining, washing, heat treatment, and other operations. If assembly
operations are performed, the parts to be attached to the one being processed
are viewed as produced by another production system and, therefore, the line
is still serial (rather than an assembly system, to be considered in Subsection
25.1.2). The producing units may also be departments or shops of a
manufacturing plant. For instance, they may represent the body shop, paint
shop, and the general assembly of an automotive assembly plant. Finally, the
producing units may even be complete plants, representing various tiers of a
supply chain. However, since the emphasis of this book is on parts flow rather than on the technology of manufacturing, we refer to all producing units as machines.
The material handling devices may be boxes, or conveyors, or automated
guided vehicles, when the producing units are machines or work cells or
shops in a plant. They may be trucks, trains, etc., when the producing units
are plants. Whatever their physical implementation may be, we refer to them
as buffers, since the most important feature of material handling devices, from
the point of view of the issues addressed in this textbook, is their storing
capacity.
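To illustrate how machine reliability and buffer capacity interact, the following minimal sketch (a simplifying assumption for illustration, not a method prescribed in this session) simulates a two-machine line with one finite in-process buffer, where each machine is up in a given cycle with a fixed probability, and estimates the line throughput in parts per cycle.

import random

def serial_line_throughput(p1, p2, N, cycles=100_000, seed=0):
    # Two machines with per-cycle "up" probabilities p1, p2 and a buffer of capacity N.
    # Convention: machine 1 is never starved and machine 2 is never blocked (open line).
    random.seed(seed)
    buffer, produced = 0, 0
    for _ in range(cycles):
        m1_up = random.random() < p1
        m2_up = random.random() < p2
        if m2_up and buffer > 0:      # machine 2 takes a part from the buffer
            buffer -= 1
            produced += 1
        if m1_up and buffer < N:      # machine 1 adds a part if not blocked
            buffer += 1
    return produced / cycles

for N in (1, 2, 5, 10):               # throughput grows with buffer capacity
    print(N, round(serial_line_throughput(0.9, 0.9, N), 3))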
The buffers, discussed above, are called in-process buffers. In addition, serial production lines may have finished goods buffers (FGB). The purpose of
the latter is to filter out production randomness and, thereby, ensure reliable
satisfaction of customer demand by unreliable production systems. An
example of a serial line with a FGB is shown in Figure 25.2.

Figure 25.2 Serial production line with a finished goods buffer

In some cases, parts within a serial line are transported on carriers, sometimes referred to as pallets, skids, etc. Such lines are called closed with respect
to carriers (see Figure 25.3). Here, raw materials must be placed on a carrier,
and the finished parts must be removed from the carrier, returning the latter
to the empty carrier buffer. Thus, the performance of such lines may be
impeded, in comparison to the corresponding open lines, since the first
machine may be starved for carriers and the last machine may be blocked by
the empty carrier buffer. Too many carriers lead to frequent blockages of the
last machine; too few carriers lead to frequent starvations of the first machine.
Thus, an additional problem for closed lines is selecting a “just right” number of carriers.
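This trade-off can be explored numerically. The sketch below closes the two-machine example of the previous sketch by letting a fixed number of carriers K circulate between an empty-carrier buffer (capacity C) and the in-process buffer (capacity N); all parameters are assumptions, and the sweep shows how throughput varies with the number of carriers.

import random

def closed_line_throughput(p1, p2, N, C, K, cycles=100_000, seed=0):
    # Closed two-machine line: K carriers circulate between the empty-carrier
    # buffer (capacity C) and the in-process buffer (capacity N).
    random.seed(seed)
    loaded = min(K, N)                # carriers holding parts in the in-process buffer
    empty = K - loaded                # carriers waiting in the empty-carrier buffer
    produced = 0
    for _ in range(cycles):
        m1_up = random.random() < p1
        m2_up = random.random() < p2
        if m2_up and loaded > 0 and empty < C:   # last machine: needs a part and room for the carrier
            loaded -= 1
            empty += 1
            produced += 1
        if m1_up and empty > 0 and loaded < N:   # first machine: needs an empty carrier and buffer space
            empty -= 1
            loaded += 1
    return produced / cycles

for K in (1, 3, 5, 7, 9):             # too few carriers starve machine 1; too many block machine 2
    print(K, round(closed_line_throughput(0.9, 0.9, N=5, C=5, K=K), 3))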

Figure 25.3 Closed serial line

Along with producing units, serial lines may include inspection operations
intended to identify and remove defective parts produced in the system. Such
a line is shown in Figure 25.4 where the shaded circles are the machines,
which may produce defective parts, and the black circles are the inspection
machines; the arrows under the inspection machines indicate scrap removal.

Figure 25.4 Serial line with product quality inspection

Another variation of serial lines is production lines with rework. Here, if a


defective product is produced, it is repaired and returned to an appropriate
operation for subsequent re-processing. An example of a serial line with
rework is shown in Figure 25.5. Such lines are typical, for instance, in paint
shops of automotive assembly plants.
A generalization of lines with rework are the so-called re-entrant lines,
illustrated in Figure 25.6, where some of the machines are represented by
ovals to better indicate the flow of parts. Here, each part may visit the same
machine multiple times. Typically, this structure arises in semiconductor
manufacturing where, on the one hand, equipment costs are extremely high,
and, on the other hand, the products have a layered structure, which
necessitates/permits the utilization of the same equipment at various stages of
the production process. Clearly, these lines may have even more severe
problems with blockages and starvations and, therefore, their performance is
typically inferior to corresponding “untangled” serial lines. In addition, since each machine serves several buffers, priorities of service become an important
issue.
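A simple consequence of both rework and re-entrant flow, stated here under the assumption of a constant return fraction, is that if a fraction r of the parts leaving an operation is sent back to it, each part visits that operation 1/(1 - r) times on average, so the operation's effective workload is inflated by the same factor.

def effective_visits(r):
    # Expected number of visits to an operation when a fraction r of the parts
    # leaving it is returned for re-processing (geometric number of passes).
    return 1.0 / (1.0 - r)

print(round(effective_visits(0.10), 3))   # a 10 % rework rate inflates the workload by about 11 %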

Figure 25.5 Serial line with rework

Figure 25.6 Re-entrant line

The serial production line is a “work horse” of manufacturing. It is hardly


possible to find a production system, which would not include one or more
serial lines. Moreover, all other production systems may be broken down into
serial lines connected according to a certain topology. Thus, the study of serial lines is of fundamental importance in Production Systems Engineering, and it is a major component of this textbook (Parts II and III).
25.1.2 Assembly systems

Assembly system: two or more serial lines, referred to as component lines,


one or more merge operations, where the components are assembled, and,
perhaps, several subsequent processing operations performed on an
assembled part.
Figures 25.7 and 25.8 show the block diagrams of typical assembly systems
where, as before, the circles represent the machines and rectangles are the
buffers. Systems similar to that of Figure 25.8 are typical in automotive
engine plants where the horizontal line represents the general engine
assembly (with engine blocks as “raw materials”), while the vertical lines are various departments producing engine parts, such as crankshaft, camshaft,
etc.

Figure 25.7 Assembly system with a single merge operation

Figure 25.8 Assembly system with multiple merge operations

Clearly, assembly systems may be viewed as several serial production lines


connected through their finished goods buffers. Each of these component
lines may have all other variations described above, e.g., being closed with
respect to carriers or re-entrant. In this book, assembly systems are studied in
Part IV.
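One immediate observation, offered here as a rough bound rather than a result developed in this session, is that the merge operation can proceed only when every component buffer holds a part, so the assembly system's production rate cannot exceed the slowest component line or the merge machine itself; the rates below are hypothetical.

def assembly_rate_bound(component_line_rates, merge_rate):
    # Upper bound on the assembly system's production rate: the merge machine is
    # starved whenever any component line lags behind.
    return min(min(component_line_rates), merge_rate)

print(assembly_rate_bound([55, 48, 60], merge_rate=52))   # -> 48 parts/hour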

Figure 25.9 Complex production system

While it is highly desirable that a production system under consideration be


reduced to either a serial line or an assembly system, it is possible to carry out
some analyses (for instance, performance evaluation) for more complex
models, referred to as complex lines. Figure 25.9 shows an example of such a model.

25.2 Structural Modeling

It is quite seldom that production systems on the factory floor have exactly the
same structure as one of those shown in Figures 25.1 - 25.9. For instance, a
serial line may have multiple machines in some operations, as shown in
Figures 25.10 and 25.11. The situation in Figure 25.10 typically happens
because no machines of the desired capacity are available for some
technological operations. Figure 25.11 exemplifies the situations where a
machine performs several synchronous dependent operations in the sense that
all operations are down if at least one of them is down. In all these cases, the
production systems must be reduced to one of the standard types discussed
above (see Figure 25.12) in order to carry out their analysis and design using
the tools described in this book. We refer to this process as structural
modeling.

Figure 25.10 Serial production line with parallel machines

Figure 25.11 Serial production line with synchronous dependent machines

Figure 25.12 Structural model of serial production lines of Figures 25.10 and 25.11
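As a hedged preview of the reduction in Figure 25.12 (the exact aggregation formulas are the subject of Subsection 25.3.5), one common convention is sketched below: parallel machines performing the same operation are replaced by a single machine whose capacity is the sum of the individual capacities, while synchronous dependent machines, which are all down when any one of them is down, are replaced by a single machine whose efficiency is the product of the individual efficiencies, assuming independent failures.

def aggregate_parallel(capacities):
    # Parallel machines on the same operation: aggregate capacity is the sum.
    return sum(capacities)

def aggregate_synchronous(efficiencies):
    # Synchronous dependent machines: all are down if any one is down, so with
    # independent failures the aggregate efficiency is the product.
    agg = 1.0
    for e in efficiencies:
        agg *= e
    return agg

print(aggregate_parallel([30, 30, 25]))                      # -> 85 parts/hour
print(round(aggregate_synchronous([0.95, 0.90, 0.98]), 3))   # -> 0.838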

The general approach to structural modeling is based on the maxim attributed to Einstein: “The model should be as simple as possible, but not simpler.” The
last clause makes the process of modeling more an art than engineering and,
like the arts, must be taught through examples and experience. A few
examples described below illustrate how this process is carried out, while
Subsection 25.3.5 shows how the characteristics of the aggregated machines
of Figure 25.12 can be calculated. Case studies in Section 25.10 offer
additional examples.
Consider an automotive ignition module production system shown in Figure
25.13, which operates as follows: The raw materials for parts A1 and A2 are
loaded on conveyors at operations 1 and 9, respectively, and then transported
to other operations. At operation 8, parts A1 are unloaded into the buffer,
which is another conveyor, and which transports them to the mating (or
merge) operation 13, where the assembly of A1 and A2 takes place.
Operations 14 - 18 perform additional processing.
As it follows from this description, this system can be modeled as shown in
Figure 25.14, which is a standard assembly system.

Figure 25.13 Layout of automotive ignition module assembly system

Figure 25.14 Structural model of the automotive ignition module assembly system of Figure 25.13

The situation with the system of Figure 25.15 is more complex. Here, 13
injection molding machines produce seven different part types necessary for
the assembly. Which part is produced by a specific injection molding machine
depends on scheduling. A physical model of this system is shown in Figure
25.16. To simplify it, we note that from the point of view of the in-process
buffers, it is not important which particular machine is producing a specific part type at each time moment. What is important is the rate of parts flow into
each buffer. Therefore, it is possible to substitute the 13 real machines by 7
virtual machines (see Figure 25.17), each producing a specific part type. Also,
the additional processing operations can be aggregated into one assembly
machine. If it is possible to calculate the parameters of the virtual machines,
based on the parameters of the real machines and scheduling procedures
(which, in fact, can be done with a certain level of fidelity), then the
production system of Figure 25.16 is reduced to a standard assembly system,
shown in Figure 25.17.
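Under the assumption that the scheduling procedure can be summarised by the fraction of time each real injection molding machine spends on each part type, the rate of each virtual machine is simply the sum of the rate contributions allocated to that part type; the machine rates and time shares in the sketch below are hypothetical.

machine_rate = {"IM1": 40, "IM2": 40, "IM3": 35, "IM4": 50}   # parts/hour (hypothetical)
allocation = {                                                # time shares per machine, rows sum to 1
    "IM1": {"A": 0.5, "B": 0.5},
    "IM2": {"A": 1.0},
    "IM3": {"C": 0.6, "D": 0.4},
    "IM4": {"B": 0.3, "C": 0.7},
}

def virtual_machine_rates(machine_rate, allocation):
    # Flow rate into each part-type buffer = sum over real machines of
    # (machine rate) x (time share allocated to that part type).
    rates = {}
    for m, shares in allocation.items():
        for part, share in shares.items():
            rates[part] = rates.get(part, 0.0) + machine_rate[m] * share
    return rates

print(virtual_machine_rates(machine_rate, allocation))
# {'A': 60.0, 'B': 35.0, 'C': 56.0, 'D': 14.0}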

Figure 25.15 Layout of injection molding - assembly system
