
Unit III - COMPUTER CONTROLLED SYSTEMS

Syllabus
Basic building blocks of computer controlled systems – Data acquisition system –
Supervisory control – Direct digital control- SCADA:- Hardware and software, Remote
terminal units, Master Station and Communication architectures.
Introduction

Figure 1. Centralized control system


Centralized control (Figure 1) is used when several machines or processes are
controlled by one central controller. The layout uses a single, large control system to
control many diverse manufacturing processes and operations. Each individual step in the
manufacturing process is handled by the central controller, and no controller status or data
is exchanged with other controllers.
One disadvantage of centralized control is that, if the main controller fails, the whole
process stops. A centralized control system is especially useful in a large, interdependent process
plant where many different processes must be controlled for efficient use of facilities and raw
materials.

The Distributed Control System (Figure 2) differs from the centralized system in
that each machine is handled by a dedicated control system.

Figure 2. Distributed Control System


Each dedicated controller (PLC) is totally independent and could be removed from the
overall control scheme were it not for the manufacturing functions it performs. Distributed
control involves two or more computers communicating with each other to accomplish the
complete control task. This type of control typically employs local area networks (LANs), in
which several computers control different stages or processes locally while constantly
exchanging information and reporting the status of the process. Communication among the
computers takes place over coaxial cable or optical fibre at very high speed.
Distributed control drastically reduces field wiring and improves performance because
it places the controller and I/O close to the machine or process being controlled. Because of their
flexibility, distributed control systems have emerged as the system of choice for numerous
batch and continuous process automation requirements.
Basic Building Blocks of Computer Controlled System
The basic functions of a computer-aided process control system are:
 Measurement and acquisition.
 Data conversion with scaling and checking.
 Data accumulation and formatting.
 Visual display.
 Comparing with limits and alarm raising.
 Recording and monitoring of events, sequence and trends.
 Data logging and computation.
 Control action.

Figure 3. Schematic diagram of Computer aided process control system


As shown in Figure 3, the controlled variable (the output of the process) is measured, as
before, in continuous electrical (analog) form, and converted into a discrete-time signal
using a device called an analog-to-digital converter (ADC). The discrete value thus
produced is then compared with the discrete form of the set-point (desired value) inside the
digital computer to produce an error signal (e). An appropriate computer program representing
the controller, called the control algorithm, is executed to yield a discrete controller output.
This discrete signal is then converted back into a continuous electrical signal using a device called a
digital-to-analog converter (DAC), and the signal is fed to the final control element. This
control strategy is repeated at some predetermined frequency so as to achieve closed-loop
computer control of the process.
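The measure-compare-compute-actuate cycle described above can be sketched in a few lines of Python. Here `read_adc` and `write_dac` are hypothetical stand-ins for the real converter interfaces, and a proportional-only law is used as a placeholder control algorithm; a real system would run this inside a fixed-rate timer loop.

```python
# Sketch of one closed-loop computer control cycle.
# read_adc/write_dac are hypothetical stand-ins for real ADC/DAC drivers.

def read_adc():
    # placeholder: pretend the measured controlled variable is 48.0 units
    return 48.0

dac_output = []

def write_dac(value):
    # placeholder: record what would be sent to the final control element
    dac_output.append(value)

def control_cycle(set_point, kp):
    """One sampling instant: measure, compare, compute, actuate."""
    measured = read_adc()           # controlled variable via ADC
    error = set_point - measured    # compare with the set-point
    output = kp * error             # control algorithm (P-only placeholder)
    write_dac(output)               # correction to final control element via DAC
    return error, output

error, output = control_cycle(set_point=50.0, kp=2.0)
```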
Analog and Digital I/O Modules
Analog input signals are received from sensors and signal conditioners and represent the
values of measurands such as flow, position, displacement and temperature. The role of a sensor is
to measure the parameter for which it is constructed and present an equivalent electrical signal
as output. The signal conditioner takes the sensor output as input and suitably conditions it
to be acceptable to the real-time system. The signal may be amplified, filtered and/or
isolated in the signal conditioner, depending on the sensor type and its electrical characteristics.
Digital input signals refer to the ON-OFF states of various valves, limit switches, etc.
One digital input signal represents the status of a limit switch or valve and is represented by one
bit of information for the real-time system. Normally, digital input signals are compatible with real-
time systems and can be input directly.
Analog Input Module
The module continuously scans the analog input signals in a pre-defined order and at a pre-defined
frequency, converts them into digital form and then sends these values to the processor and memory
module for processing. The analog input module may also provide signal conditioning for some
standard transducers such as thermocouples, LVDTs and strain gauges; in such cases, analog input
modules may differ for different types of transducers.

Figure 4. Analog Input module


The analog input module (Figure 4) operates under command from the processor in the
following manner:
 The processor initiates the multiplexer by sending the address of an input channel.
 The multiplexer connects that channel to the ADC.
 The processor sends a start-convert signal to the ADC. The ADC converts the analog
signal to digital, places it at its output and issues an end-of-conversion signal.
 On receipt of the end-of-conversion signal, the processor reads the ADC output and
stores it in memory.
 The operation is repeated by the processor sending the address of the next channel to the
multiplexer.
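The scan sequence above can be modelled in software. The channel table, full-scale range and 8-bit resolution below are simulated assumptions, not the registers of any particular module; the point is the address-select, convert, read-and-store loop.

```python
# Simulated model of the processor/multiplexer/ADC scan sequence.
# Channel voltages, full-scale range and resolution are illustrative.

analog_channels = {0: 1.25, 1: 3.30, 2: 0.71}   # simulated sensor voltages (V)
FULL_SCALE = 5.0                                # assumed ADC input range (V)
BITS = 8

def adc_convert(voltage):
    """Simulated ADC: map a voltage onto an 8-bit code."""
    return round(voltage / FULL_SCALE * (2 ** BITS - 1))

def scan_channels(memory):
    for address in analog_channels:          # processor sends channel address
        voltage = analog_channels[address]   # multiplexer connects the channel
        code = adc_convert(voltage)          # start convert -> end of conversion
        memory[address] = code               # processor reads ADC output, stores it

memory = {}
scan_channels(memory)
```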
Digital Input Module

Figure 5. Digital Input module


Digital inputs can be accepted directly by the processor, so no analog-to-digital
converter is required in the digital input module. As Figure 5 shows, each digital input channel
consists of n bits which are transferred in parallel.
Analog Output module
The objective of the analog output module is to provide appropriate control signals to
different control valves. The structure of the analog output module, which is derived by reversing
the analog input module, is shown in Figure 6. The demultiplexer switches the digital output
received from the master processor to the output channel whose address is specified. The digital-
to-analog converter of the particular channel converts the digital value to an equivalent
analog signal, which drives the control valve, motor, etc.

Figure 6. Analog Output module


Timer/Counter Module
The timer/counter module basically consists of a number of timers/counters which may be
cascaded or used independently. Each timer/counter may be programmed in different modes, in
which case the timer/counter output will differ. The modes may be interrupt-on-zero-
count, rate generator, monoshot, etc. We shall not discuss the modes of the timer/counter here. The
structure of a timer/counter is shown in Figure 7.

Figure 7. Timer / Counter module


The processor loads the count in the form of data bytes/words. The clock may be derived
from the processor clock or provided externally. The gate signal is used to enable/disable
the counter operation. The processor may read the current counter value at any instant, either by
stopping the counter using the gate signal or by reading it on the fly, i.e., without stopping the counter.
The output may be used to interrupt the processor or in any other way, as programmed.
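A minimal software model of this behaviour, assuming a down-counter in interrupt-on-zero-count mode: the processor loads a count, clock ticks decrement it while the gate is enabled, the output asserts at zero, and the count can be read on the fly.

```python
# Minimal model of a down-counter in interrupt-on-zero-count mode.

class DownCounter:
    def __init__(self, count):
        self.count = count      # value loaded by the processor
        self.gate = True        # gate signal enables/disables counting
        self.output = False     # asserted on zero count (e.g. interrupt line)

    def clock_tick(self):
        if self.gate and self.count > 0:
            self.count -= 1
            if self.count == 0:
                self.output = True   # interrupt-on-zero-count

    def read_on_the_fly(self):
        return self.count            # read without stopping the counter

timer = DownCounter(3)
for _ in range(3):
    timer.clock_tick()               # counts 2, 1, 0 -> output asserted

current = DownCounter(5)
current.clock_tick()
snapshot = current.read_on_the_fly()  # read mid-count, counter keeps running
```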
Display Control Module
Display control module consists of following independent sub-modules.
 Manual entry sub-module
 CRT controller sub-module
 LED/LCD control sub-module
 Alarm annunciator sub-module
 Printer controller sub-module
Manual entry to the system may be via thumb-wheel switches, various ON-OFF
command switches and/or a keyboard. The keyboard may be a full ASCII keyboard or a
specialized numeric, task-oriented keyboard; each kind of manual entry sub-module has its own
advantages and disadvantages. Basically, manual entry sub-modules have a built-in
buffer to store the current setting of the thumb-wheel switches and the last command/data entered through
the keyboard. The processor may read these values on its own or may be interrupted whenever
new data or a new command is entered.
The CRT controller sub-module interfaces the main processor to a visual display unit, which is
used to show the status of the process by displaying transducer values, present set-points entered
through the manual entry sub-module, historical trends of various parameters, a mimic diagram of the
process, alarm status, etc. All the above display tasks are provided in the main processor through
software. The LED/LCD control sub-module interfaces an array of LEDs/LCDs to the main processor. This
sub-module accepts data bytes/words from the main processor and displays them. The alarm
annunciator controller sub-module generates an ON-OFF signal for each type of alarm. The
processor may send an alarm byte/word to the alarm annunciator controller sub-module, which
decodes the byte/word and sends ON-OFF signals for the various types of alarm. These alarm
modules are separate and require only a digital signal to light incandescent bulbs and/or sound audio
alarms. The printer controller sub-module is the printer interface to the main processor. Generally it has
local intelligence for printer control, a data buffer to store the data for printing, etc. It accepts
data bytes/words from the main processor and prints them for the benefit of the operator.
Interaction with the main processor may be through periodic direct memory access or under the
direct control of the main processor.
Role of Computers in Measurement and Control
Figure 8 shows the role of the computer in process control. After the technological
development of the digital computer, its use for measurement and control applications has
increased tremendously. The basic objective of computer-aided measurement and control is to
identify the information flow and to manipulate the material and energy flows of a given process
in a desired, optimal way. The requirements in terms of response time, computing power,
flexibility and fault tolerance are strict, since control is carried out in real time. Developments
in computer technology have also provided solutions to the problems of complexity, flexibility,
and geographical separation of process elements.
Digital computer control applications exist today in two major areas in the process
industries: passive and active applications. A passive application involves acquisition and
manipulation of process data, whereas an active application involves manipulation of the process as
well. Passive applications deal predominantly with monitoring, alarming and data
reduction. The process data, captured (measured) on-line, are sent to the data
acquisition computer through an interface module. Smart instruments, such as smart sensors,
smart transmitters and smart actuators (final control elements), are now available with a
microcomputer built into them. Smart instruments help the operator obtain real-time process
measurement information and automatically transmit it in the form required for further processing
by the process control computer. Smart instruments also ensure that the actuator, transmitter or
sensor functions according to design.
The major application of digital computers is in process control and plant optimization.
Computer control systems, once prohibitively expensive, can now be tailored to fit most
industrial applications on a competitive economic basis. Advances in the use of computer
control have motivated many and changed the concepts of operating industrial
processes. Video display terminals now provide the means for operators to supervise the whole
plant from a control room. Large panels of instruments, knobs and switches are replaced by a
few keyboards and screens. Control rooms are now much smaller, and fewer people are required
to supervise the plant.

Figure 8. Computer based control system


Data Acquisition System (DAQ)
A data acquisition system (DAS), shown in Figure 9, interfaces between the real
world of physical parameters, which are analog, and the artificial world of digital computation
and control. With the current emphasis on digital systems, this interfacing function has become an
important one; digital systems are used widely because complex circuits are low in cost, accurate,
and relatively simple to implement.
In addition, there is rapid growth in the use of microcomputers to perform difficult
digital control and measurement functions. Computerized feedback control systems are used
in many different industries today in order to achieve greater productivity in our modern
industrial societies. Industries that presently employ such automatic systems include steel
making, food processing, paper production, oil refining, chemical manufacturing, textile
production, cement manufacturing, and others.
Figure 9. Block diagram of Data Acquisition System
The devices that perform the interfacing function between analog and digital worlds are
analog-to-digital (A/D) and digital-to-analog (D/A) converters, which together are known as
data converters. Some of the specific applications in which data converters are used include
data telemetry systems, pulse code modulated communications, automatic test systems,
computer display systems, video signal processing systems, data logging systems, and sampled
data control systems. In addition, every laboratory digital multimeter or digital panel meter
contains an A/D converter.
Obtaining proper results from a PC-based DAQ system (in Figure 10) depends on each
of the following system elements.
• Personal computer
• Transducers
• Signal conditioning
• DAQ Hardware
• DAQ Software

Figure 10. Typical PC based DAQ system


Personal Computer
The computer used for your data acquisition system can drastically affect the maximum
speeds at which you are able to continuously acquire data. Today’s technology boasts Pentium
and Power PC class processors coupled with the higher performance PCI bus architecture as
well as the traditional ISA bus and USB. With the advent of PCMCIA, portable data acquisition
is rapidly becoming a more flexible alternative to desktop PC based data acquisition systems.
For remote data acquisition applications that use RS-232 or RS-485 serial communication, your
data throughput will usually be limited by the serial communication rates.
Transducers
Transducers sense physical phenomena and provide electrical signals that the DAQ
system can measure. For example, thermocouples, RTDs, thermistors, and IC sensors convert
temperature into an analog signal that an ADC can measure. Other examples include strain
gauges, flow transducers, and pressure transducers, which measure force, rate of flow, and
pressure, respectively. In each case, the electrical signals produced are proportional to the
physical parameters they are monitoring.
Signal Conditioning
The electrical signals generated by the transducers must be optimized for the input
range of the DAQ board. Signal conditioning accessories can amplify low-level signals, and
then isolate and filter them for more accurate measurements. In addition, some transducers
require voltage or current excitation to generate a voltage output. Signal conditioning has the
following applications:
i) Amplification, ii) Isolation, iii) Filtering, iv) Excitation and v) Linearization.
Amplification
The most common type of conditioning is amplification. Low-level thermocouple
signals, for example, should be amplified to increase the resolution and reduce noise. For the
highest possible accuracy, the signal should be amplified so that the maximum voltage range
of the conditioned signal equals the maximum input range of the analog-to-digital converter
(ADC). Very high resolution reduces the need for high amplification and provides wide
dynamic range.
Isolation
Another common application for signal conditioning is to isolate the transducer signals
from the computer for safety purposes. The system being monitored may contain high-voltage
transients that could damage the computer. An additional reason for needing isolation is to
make sure that the readings from the plug-in DAQ board are not affected by differences in
ground potentials or common-mode voltages. When the DAQ board input and the signal being
acquired are each referenced to “ground,” problems occur if there is a potential difference in
the two grounds. This difference can lead to what is known as a ground loop, which may cause
inaccurate representation of the acquired signal, or if too large, may damage the measurement
system. Using isolated signal conditioning modules will eliminate the ground loop and ensure
that the signals are accurately acquired.
Filtering
The purpose of a filter is to remove unwanted signals from the signal that you are trying
to measure. A noise filter is used on DC-class signals such as temperature to attenuate higher
frequency signals that can reduce the accuracy of your measurement. AC-class signals such as
vibration often require a different type of filter known as an antialiasing filter. Like the noise
filter, the antialiasing filter is also a low pass filter; however, it must have a very steep cut off
rate, so that it almost completely removes all frequencies of the signal that are higher than the
input bandwidth of the board. If the signals were not removed, they would erroneously appear
as signals within the input bandwidth of the board.
Excitation
Signal conditioning also generates excitation for some transducers. Strain gauges,
thermistors, and RTDs, for example, require external voltage or current excitation signals.
Signal conditioning modules for these transducers usually provide these signals. RTD
measurements are usually made with a current source that converts the variation in resistance
to a measurable voltage. Strain gauges, which are very low-resistance devices, typically are
used in a Wheatstone bridge configuration with a voltage excitation source.
Linearization
Another common signal conditioning function is linearization. Many transducers, such
as thermocouples, have a nonlinear response to changes in the phenomena being measured.
Therefore, application software includes linearization routines for thermocouples, strain
gauges, and RTDs.
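Linearization routines of this kind typically evaluate a calibration polynomial for the transducer. The sketch below shows the idea; the coefficients are illustrative placeholders, not real thermocouple calibration data.

```python
# Linearization sketch: converting a nonlinear transducer reading to an
# engineering value via a calibration polynomial.
# COEFFS are hypothetical, NOT real thermocouple coefficients.

COEFFS = [0.0, 25.0, -0.5]   # illustrative: T = 0 + 25*V - 0.5*V**2

def linearize(voltage_mv, coeffs=COEFFS):
    """Evaluate the calibration polynomial at the measured voltage."""
    value = 0.0
    for power, c in enumerate(coeffs):
        value += c * voltage_mv ** power
    return value

temperature = linearize(2.0)   # 25*2 - 0.5*4 = 48.0 (with these toy coefficients)
```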
DAQ Hardware
Data acquisition hardware includes the following functions.
Analog Inputs
The analog input specifications can give you information on both the capabilities and
the accuracy of the DAQ product. Basic specifications, which are available on most DAQ
products, tell you the number of channels, sampling rate, resolution, and input range. The
number of analog channel inputs will be specified for both single-ended and differential inputs
on boards that have both types of inputs. Single-ended inputs are all referenced to a common
ground point. These inputs are typically used when the input signals are high level (greater than
1 V), the leads from the signal source to the analog input hardware are short (less than 15 ft.),
and all input signals share a common ground reference. If the signals do not meet these criteria,
you should use differential inputs. With differential inputs, each input has its own ground
reference. Noise errors are reduced because the common-mode noise picked up by the leads is
cancelled out.
Sampling Rate
This parameter determines how often conversions can take place. A faster sampling
rate acquires more points in a given time and can therefore often form a better representation
of the original signal.

Figure 11. Effect of too low a Sampling Rate


For example, audio signals converted to electrical signals by a microphone commonly
have frequency components up to 20 kHz. To properly digitize this signal for analysis, the
Nyquist sampling theorem tells us that we must sample at more than twice the rate of the
maximum frequency component we want to detect. So, a board with a sampling rate greater
than 40 kS/s is needed to properly acquire this signal.
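The audio example reduces to a one-line check: the board's rate must strictly exceed twice the highest frequency of interest.

```python
# Nyquist criterion check from the audio example above.

def nyquist_ok(board_rate_hz, max_signal_hz):
    """True when the board samples faster than twice the signal bandwidth."""
    return board_rate_hz > 2 * max_signal_hz

audio_ok = nyquist_ok(44_100, 20_000)   # 44.1 kS/s suffices for 20 kHz audio
too_slow = nyquist_ok(40_000, 20_000)   # exactly twice is not "greater than"
```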
Multiplexing
A common technique for measuring several signals with a single ADC is multiplexing.
The ADC samples one channel, switches to the next channel, samples it, switches to the next
channel, and so on. Because the same ADC is sampling many channels instead of one, the
effective rate of each individual channel is inversely proportional to the number of channels
sampled.
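That inverse proportionality is easy to make concrete: the ADC's aggregate rate is divided among the channels it scans.

```python
# Effective per-channel rate under multiplexing: one ADC shared by N channels.

def per_channel_rate(adc_rate, num_channels):
    """Sampling rate seen by each channel when one ADC scans them all."""
    return adc_rate / num_channels

rate = per_channel_rate(100_000, 8)   # a 100 kS/s ADC scanning 8 channels
```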
Resolution
The number of bits that the ADC uses to represent the analog signal is the resolution.
The higher the resolution, the higher the number of divisions the range is broken into, and
therefore, the smaller the detectable voltage changes.
Figure 12. Digitized sine wave with 3-bit Resolution.
Figure 12 shows a sine wave and its corresponding digital image as obtained by an ideal
3-bit ADC. A 3-bit converter (seldom used in practice, but a convenient example) divides
the analog range into 2^3, or 8, divisions. Each division is represented by a binary code between
000 and 111. Clearly, the digital representation is not a good representation of the original
analog signal, because information has been lost in the conversion. By increasing the resolution
to 16 bits, however, the number of codes from the ADC increases from 8 to 65,536, and an
extremely accurate digital representation of the analog signal can be obtained, provided the rest
of the analog input circuitry is designed properly.
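The 3-bit example of Figure 12 can be reproduced numerically: a ±1 V range (an assumed range for illustration) is split into 2^3 = 8 codes, and each sample of a sine wave is mapped to the nearest code.

```python
# Quantizing a sine wave with an ideal 3-bit ADC, as in Figure 12.
# The +/-1 V input range is an illustrative assumption.

import math

BITS = 3
LEVELS = 2 ** BITS          # 8 divisions, codes 000..111
VMIN, VMAX = -1.0, 1.0      # assumed analog input range

def quantize(v):
    """Map an analog value onto one of the 8 digital codes."""
    step = (VMAX - VMIN) / LEVELS
    code = int((v - VMIN) / step)
    return min(code, LEVELS - 1)   # clamp full-scale input to code 111

samples = [math.sin(2 * math.pi * n / 16) for n in range(16)]
codes = [quantize(v) for v in samples]   # the coarse staircase of Figure 12
```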
DAQ Software
Data acquisition software transforms the PC and DAQ hardware into a complete DAQ,
analysis, and display system. DAQ hardware without software is useless, and DAQ hardware
with poor software is almost useless. Two types of software are used: i) Driver
Software and ii) Application Software.
Driver Software
The majority of DAQ applications use driver software. Driver software is the layer of
software that directly programs the registers of the DAQ hardware, managing its operation and
its integration with the computer resources, such as processor interrupts, DMA, and memory.
Driver software hides the low-level, complicated details of hardware programming, providing
the user with an easy-to-understand interface.
Driver functions for controlling DAQ hardware can be grouped into analog I/O, digital
I/O, and timing I/O. Although most drivers will have this basic functionality, you will want to
make sure that the driver can do more than simply get data on and off the board. Make sure the
driver has the functionality to:
• Acquire data at specified sampling rates.
• Acquire data in the background while processing in the foreground.
• Use programmed I/O, interrupts, and DMA to transfer data.
• Stream data to and from disk.
• Perform several functions simultaneously.
• Integrate more than one DAQ board.
• Integrate seamlessly with signal conditioning equipment.
Application Software
An additional way to program DAQ hardware is to use application software. But even
if you use application software, it is important to know the answers to the previous questions,
because the application software will use driver software to control the DAQ hardware.
Application software adds analysis and presentation capabilities to the driver software.
Application software also integrates instrument control such as GPIB (General Purpose
Interface Bus), RS-232, PXI (PCI eXtensions for Instrumentation),
and VXI (VMEbus eXtensions for Instrumentation) with data acquisition.
Supervisory Control
The advent of the microprocessor has changed the field of process control completely. Tasks
which were once performed by complex and costly minicomputers are now easily programmed
using microcomputers.
In the past, the computer was not directly connected to the process but was used for
supervision of analog controllers. The analog controllers were interfaced to the process directly
as well as through specialised controllers for dedicated functions, as shown in Figure 13.
The analog controllers and specialised controllers were called level 2 and level 1 control
respectively. The emergence of economical and fast microprocessors has made analog
controllers completely outdated, as the same functions can be performed by digital computers
in a more efficient and cost-effective way.

Figure 13. Supervisory Computer Control


Direct Digital Control
Direct Digital Control (DDC) is the automated control of a condition or process by a
digital device (computer). DDC takes a centralized, network-oriented approach. All
instrumentation data is gathered by various analog and digital converters, which use the network to
transport these signals to the central controller.
The centralized computer then follows all of its production rules (which may
incorporate sense points anywhere in the structure) and causes actions to be sent via the same
network to valves, actuators, and other HVAC components that can be adjusted.
Direct Digital Control (DDC) Structure
The DDC (Direct Digital Control) computer directly interfaces to the process for data acquisition
and control purposes. That is, it has the necessary hardware for direct interfacing (opto-isolator,
signal conditioner, ADC) and for reading data from the process. It should also have the memory and
arithmetic capability to execute the required P, P + I or P + I + D control strategy. At the same
time, the interface to the control valve should also be part of the DDC.
Figure 14. Direct Digital Control
Figure 14 shows the various functional blocks of a direct digital control system. These
functional blocks have been described in a number of books on microprocessors. The multiplexer
acts like a switch under microprocessor control; it switches and presents at its output the analog
signal from one sensor/transmitter. The analog-to-digital converter converts this analog signal to a
digital value.

The microprocessor performs the following tasks:


• It reads the various process variables from different transmitters through multiplexer
and ADC.
• It determines the error for each control loop and executes control strategy for each
loop.
• It outputs correction value to control valve through DAC.
Direct Digital Control (DDC) Software
The main part of the DDC software is the program for the control loops. There are two algorithms
for programming a three-mode PID control loop:
i) the position algorithm and ii) the velocity algorithm.
Position Algorithm
The three-mode controller can be represented by

Yn = KP.en + KD.(Δe/Δt) + KI.Σ(t=0 to n) e.Δt + Y0        ------ (1)

Where,
Yn - valve position at time tn
Y0 - median valve position
KP - proportional constant = 100/PB (where PB is the proportional band in per cent)
KI - integral constant = 1/TI (where TI is the integral time constant)
KD - derivative constant = TD (where TD is the derivative time constant)
en - error at instant tn = (S - Vn)
Vn - value of the controlled variable at instant tn
S - set-point
The PID control can be realised with a microprocessor-based system if the above
equation is implemented in software. However, it is very difficult to write software
implementing the above equation directly for a microprocessor-based system. The
equation can be modified so that its software implementation becomes easy. The
modifications are discussed in the following section.
The integral term at any given instant tn is equal to the algebraic sum of all the control
forces generated by the integral control action from the beginning to that instant. Thus the integral
term can be represented as

KI.Σ(t=0 to n) et.Δt
and the differential term, KD.Δe/Δt, at any instant tn is proportional to the rate of change of the
error. Thus, the differential term can be represented as

KD.(en - en-1)/Δt

where en is the current error and en-1 is the previous error, calculated at instant tn-1.

Figure 15. Flow chart of PID Control.


Thus, with these modifications, the three-mode controller equation becomes:

Yn = KP.en + KD.(en - en-1)/Δt + KI.Σ(t=0 to n) et.Δt + Y0        ------ (2)
The integral and the differential control forces depend upon the interval between
two consecutive errors. This interval is the inverse of the rate at which the value of the
controlled variable is measured, i.e. the sampling rate. Hence, provision for defining the
sampling rate should be made available in the software.
The flowchart for calculating the PID control output based on the above equation (Eq. 2) is
shown in Figure 15.
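The flowchart of Figure 15 translates almost line-for-line into code. Below is a direct transcription of Eq. 2, keeping a running sum for the integral term and the previous error for the differential term; the gain values in the example are illustrative, not tuned for any real loop.

```python
# Position algorithm (Eq. 2): Yn = KP*en + KD*(en - en-1)/dt
#                                  + KI*sum(e*dt) + Y0
# Gains and set-points below are illustrative values only.

class PositionPID:
    def __init__(self, kp, ki, kd, y0, dt):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.y0 = y0                 # Y0, median valve position
        self.dt = dt                 # sampling interval (inverse of sampling rate)
        self.integral = 0.0          # running sum of e * dt
        self.prev_error = 0.0        # e(n-1)

    def update(self, set_point, measurement):
        error = set_point - measurement             # en = S - Vn
        self.integral += error * self.dt            # accumulate integral term
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return (self.kp * error + self.kd * derivative
                + self.ki * self.integral + self.y0)  # absolute valve position Yn

pid = PositionPID(kp=2.0, ki=0.5, kd=0.1, y0=50.0, dt=1.0)
y1 = pid.update(set_point=10.0, measurement=8.0)
# en=2: Yn = 2*2 + 0.1*(2-0)/1 + 0.5*(2*1) + 50 = 55.2
```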
Velocity Algorithm
In a number of control loops, the final control element is a stepper motor or a stepper-motor-
driven valve. In such cases, the required computer output is a pulse train
specifying the change in valve position. The output of the position algorithm cannot be used, since
it gives the new position of the valve in absolute terms. In the velocity algorithm, the computer
calculates the required change in valve position. The output is a digital pulse train which can be
used directly when the valve is stepper-motor driven. For other valves, a stepper motor
combined with a slide-wire arrangement can be used. The same function can also be performed by an
integrating amplifier.
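A common incremental form, obtained by subtracting Yn-1 from Yn in Eq. 2, is sketched below. The controller then outputs only the change ΔYn = Yn - Yn-1, which maps naturally onto a step count for a stepper-motor-driven valve; the gains are again illustrative.

```python
# Velocity algorithm sketch: differencing Eq. 2 for two consecutive
# instants gives the change in valve position,
#   dYn = KP*(en - en-1) + KD*(en - 2*en-1 + en-2)/dt + KI*en*dt
# Gains below are illustrative values only.

class VelocityPID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.e1 = 0.0   # e(n-1)
        self.e2 = 0.0   # e(n-2)

    def update(self, set_point, measurement):
        e = set_point - measurement
        delta = (self.kp * (e - self.e1)
                 + self.kd * (e - 2 * self.e1 + self.e2) / self.dt
                 + self.ki * e * self.dt)
        self.e2, self.e1 = self.e1, e
        return delta    # change in valve position, not an absolute position

vpid = VelocityPID(kp=2.0, ki=0.5, kd=0.1, dt=1.0)
d1 = vpid.update(10.0, 8.0)
# en=2: dYn = 2*(2-0) + 0.1*(2-0+0)/1 + 0.5*2*1 = 5.2
```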
Supervisory Control And Data Acquisition (SCADA)
SCADA stands for Supervisory Control And Data Acquisition. A SCADA system is
based on hardware and software. The hardware includes the Master Terminal Unit (MTU),
Remote Terminal Units (RTUs), actuators and sensors. The software includes the Human
Machine Interface (HMI) and other user software that provides the communication interface
between the SCADA hardware and software.

Figure 16. General Architecture of SCADA


The Human Machine Interface (HMI) also provides the facility to visualize the entire SCADA
communication, including control and monitoring. The Master Terminal Unit (MTU) is located
at the control centre, performs the services of the control station and is connected to one or more
Remote Terminal Units (RTUs), which may be geographically distributed over remote sites (a wide
area network) or within a Local Area Network (LAN), using communication links/media such as
radio signals, telephone lines, cable connections, satellite and microwave media. Typically, the
physical environment is connected to actuators and/or sensors, and the actuators/sensors are
connected to the Remote Terminal Units (RTUs). The RTUs collect
data/information from the actuators/sensors and then pass it to the Master Terminal Unit (MTU)
for monitoring and controlling the entire SCADA system. Usually, a SCADA system is divided
into five main components/parts:
i) Master Terminal Unit (MTU),
ii) Remote Terminal Unit (RTU),
iii) Human Machine Interface (HMI),
iv) SCADA communication media or link, and
v) Field instrumentation
SCADA System
Having dealt with the basic hardware modules of a real-time system, let us first
concentrate on the Supervisory Control and Data Acquisition (SCADA) system, since it is the first
step towards automation. The basic functions carried out by a SCADA system are:
 Channel scanning
 Conversion into engineering units
 Data processing.

Figure 17. Supervisory control and data acquisition system


Figure 17 shows the block diagram of a SCADA system. Before considering other features of
SCADA, let us discuss the basic functions.
Channel Scanning
There are many ways in which the microprocessor can address the various channels and
read the data.
Polling
The microprocessor scans the channels to read the data; this process is called
polling. In polling, the action of selecting a channel and addressing it is the responsibility of the
processor. The channel selection may be sequential or in any particular order decided by the
designer. It is also possible to assign priority to some channels over others, i.e. some channels
can be scanned more frequently than others. It is also possible to offer this facility of selecting
the order of channel addressing and channel priorities at the operator level, i.e. to make these
facilities dynamic.
The channel scanning and reading of data requires the following actions to be taken:
 Sending channel address to multiplexer
 Sending start convert pulse to ADC
 Reading the digital data.
For reading the digital data at the ADC output, the end-of-conversion signal of the ADC chip
can be read by the processor; when it is 'ON', the digital data can be read. Alternatively, the
microprocessor can execute a group of instructions (which do not require this data) for a time
equal to or greater than the conversion time of the ADC and then read the ADC output. Another
modification of this approach connects the end-of-conversion line to one of the interrupt
request pins of the processor; in this case the interrupt service routine reads the ADC output
and stores it at a predefined memory location.
The channels can be polled sequentially, in which case the channel address in the first step
above increases by one every time, or they may be scanned in some other order. In the latter
case, a channel Scan Array can be maintained in memory. The scan array contains the addresses
of the channels in the order in which they should be addressed. The ASCN array has 9, 10, 1,
2 ... as entries in sequence; thus the first channel to be scanned will be channel 9, followed by
10, 1, 2, and so on. When the pointer reaches the last entry in the array, the first entry is again
taken up (i.e. channel 9 is scanned).
If a channel number is repeated in the array, then that particular channel will be scanned
repeatedly. Thus it is possible to scan some channels more often than others, which gives them
higher priority. In Figure 18, channel 9 is scanned 3 times and channel 2 is scanned 2 times,
while the other channels are scanned once during a cycle.
Figure 18. Channel scan array
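The cyclic, priority-weighted behaviour of the scan array can be illustrated as follows. The array contents beyond the 9, 10, 1, 2 given in the text are assumed for illustration, chosen so that channel 9 appears three times and channel 2 twice per cycle, as in Figure 18:

```python
from itertools import islice

def scan_sequence(scan_array):
    """Yield channel numbers endlessly in scan-array order; when the
    pointer reaches the last entry, the first entry is taken up again.
    Repeating a channel makes it scan more often (higher priority)."""
    while True:
        for ch in scan_array:
            yield ch

# Illustrative ASCN array: channel 9 three times, channel 2 twice per cycle.
ASCN = [9, 10, 1, 2, 9, 3, 2, 9, 4]
one_cycle = list(islice(scan_sequence(ASCN), len(ASCN)))
```

One pass through `one_cycle` corresponds to a single scan cycle of Figure 18.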
Figure 19 shows the flowchart for one scan cycle. The processor may scan the channels
continuously in the particular order illustrated by the flowchart, or the channels may be
scanned after every fixed time period. The second approach requires a timer/counter circuit
whose output is connected to an interrupt request input; the scan routine for one channel is
incorporated in the 'Interrupt Service Routine'. It is also possible to make the time gap between
two channels variable. This would require an n x 2 dimension scan array as shown in Figure 20.
The Interrupt Service Routine fetches the time gap value for the next channel, loads the
timer/counter with this value and initiates the timer/counter before returning to the main
program.
Figure 19. Scan cycle flow chart
The scan array may be decided at the design stage of SCADA and fused permanently in
Read Only Memory (ROM); the channels are then always scanned in that particular order.
However, it may be desirable to offer the facility of changing the sequence at the operator
level. The operator may take this action depending on the condition of the plant being
monitored. For example, if at any instant the operator finds that the transducer connected to
channel 9 has developed a fault, he might decide to bypass channel 9, as otherwise its data
will be taken into the analysis and produce incorrect results. In another situation, it may become
necessary to scan some channel more frequently for some time to observe its response to
modifications incorporated in the system. The operator should be able to insert a new scan
array at any time. This facility may be provided through a key switch connected to an interrupt
request input of the processor; the Interrupt Service Routine will then accept the new scan
array and store it in place of the old one.
Figure 20. Scan array with time gap
Interrupt Scanning
Another way of scanning the channels is to provide some primitive facility after the
transducer to check for violation of limits. It sends an interrupt request signal to the processor
when the analog signal from the transducer is not within the High and Low limit boundaries
set by the Analog High and Analog Low signals. This is also called Scanning by Exception.
When any parameter exceeds the limits, the limit checking circuit sends an interrupt
request to the microprocessor, which in turn monitors all parameters till the parameter values
come back within the pre-specified limits. This allows a detailed analysis of the system and its
problems by the SCADA system.
Figure 21. Interrupt request generation on limit violation
The limit checking circuit for one channel is shown in Figure 21. Two analog comparators
check whether the input signal is within the high and low limits. The outputs are ORed and the
final output is used as an interrupt request to the microprocessor. This limit checking should
not be construed as an alarm condition, but as the start of an abnormality which may grow into
an alarm condition or may be controlled by the system before any alarm condition is reached.
The system should therefore be watched closely during this time.
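The window-comparator logic described above can be restated in software for illustration; in hardware it is simply two comparators whose outputs are ORed to raise an interrupt request:

```python
def limit_violation(value, analog_low, analog_high):
    """True when the transducer signal leaves the [analog_low, analog_high]
    band, i.e. the condition on which the circuit raises an interrupt.
    The two comparisons mirror the two comparators; 'or' mirrors the OR gate."""
    return value < analog_low or value > analog_high
```

A scanning-by-exception system would call this only conceptually; the point of the hardware circuit is that the processor is not involved until a violation occurs.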
Conversion to Engineering Units
The data read from the output of the ADC should be converted to the equivalent
engineering units before any analysis is done or the data is sent for display or printing. For an
8-bit ADC working in unipolar mode the output ranges between 0 and 255. An ADC output
value corresponds to a particular engineering value based on the following parameters:
• Calibration of transmitters.
• ADC mode and digital output lines.
The transmitter output should be in the 0-5 V or 4-20 mA range. Depending on the input
range of the measured value for the transmitter, a calibration factor is determined. If a
transmitter is capable of measuring a parameter within the input range X1 to X2 and provides
a 0-5 V signal at its output, then the calibration factor is

1 Volt = (X2 - X1)/5 units

If we convert this signal to digital through an 8-bit ADC (input range 0-5 V) in unipolar
mode, then 5 V = 255 and 0 V = 0, i.e. 1 Volt = 255/5 counts. Thus an ADC output of 255/5
corresponds to (X2 - X1)/5 engineering units, and an ADC output of 1 corresponds to
(X2 - X1)/255 engineering units. If the ADC output is Y, then the corresponding value in
engineering units will be

Y(X2 - X1)/255

The conversion factor is therefore (X2 - X1)/255.
The conversion of the ADC output to engineering units therefore involves multiplication
by the conversion factor, which depends on the ADC type, mode and the transmitter range.
This multiplication can be achieved by the shift-and-add method on an 8-bit microprocessor;
on a 16-bit microprocessor, a single multiplication instruction will do the job.
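The conversion can be coded directly from the formula above. This sketch assumes an 8-bit unipolar ADC; note that the text's expression Y(X2 - X1)/255 gives the span above X1, so the lower range value x1 is added back here so that ranges not starting at zero also map correctly:

```python
def to_engineering_units(y, x1, x2):
    """Convert an 8-bit unipolar ADC count y (0-255) to engineering units
    for a transmitter spanning x1..x2 over 0-5 V.
    Conversion factor: (x2 - x1)/255 units per count; x1 is the offset."""
    return x1 + y * (x2 - x1) / 255
```

For example, a transmitter measuring 0-100 units would give 100.0 at full scale (y = 255) and 0.0 at y = 0.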
Data Processing
The data read from the ADC output for the various channels is processed by the
microprocessor to carry out limit checking and performance analysis. For limit checking, the
highest and lowest limits for each channel are stored in an array. When either of the two limits
is violated for any channel, appropriate action like alarm generation, printing, etc. is initiated.
The limit array shown in Figure 22 simplifies the limit checking routine. Through it, the
facility to dynamically change the limits for any channel may also be provided, on lines similar
to the scan array.
Figure 22. Limit array
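A limit-checking pass over the channel readings might look like the following sketch, where `limit_array` holds a (low, high) pair per channel as in Figure 22, and the returned channel numbers would trigger the alarm or print actions (the names are illustrative):

```python
def check_limits(readings, limit_array):
    """Compare each channel reading against its (low, high) pair in the
    limit array and return the channels whose limits are violated."""
    violated = []
    for ch, value in enumerate(readings):
        low, high = limit_array[ch]
        if value < low or value > high:
            violated.append(ch)   # alarm generation, printing, etc. follow
    return violated
```

Because the limits live in an array rather than in code, replacing an entry at run time changes a channel's limits dynamically, exactly as the text describes for the scan array.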
Master Terminal Unit (MTU)
The Master Terminal Unit (MTU) in a SCADA system is the device that issues commands
to the Remote Terminal Units (RTUs) located at places remote from the control center, gathers
the required data, stores and processes the information, and displays the information in the
form of pictures, curves and tables to the human interface, helping operators take control
decisions. This is the operation of the MTU located in the control center.
Communication between the MTU and RTUs is bidirectional; the major difference,
however, is that an RTU cannot initiate the conversation. The RTU simply collects data from
the field and stores it. Communication between the MTU and RTUs is initiated by programs
within the MTU, which are triggered either by operator instructions or automatically; about
99% of the instructions and messages from the MTU to the RTUs are automatically triggered.
When the MTU asks for the desired information, the RTU sends it. The MTU is therefore
considered the master and the RTU the slave. After receiving the required data, the MTU
communicates through the necessary protocols with the printers and CRTs which form the
operator interface. At this level the communication is peer-to-peer rather than master-slave,
unlike the communication between the MTU and RTUs. Thus, in a SCADA system the Master
Terminal Unit acts as the heart of the system.
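The master-slave relationship above can be sketched as follows; the class and method names are illustrative, not part of any real SCADA protocol:

```python
class RTU:
    """Slave: collects field data and stores it; replies only when polled."""
    def __init__(self, rtu_id):
        self.rtu_id = rtu_id
        self.store = {}

    def update(self, tag, value):
        self.store[tag] = value       # data gathered from the field

    def poll(self):
        return dict(self.store)       # answer an MTU request; never initiates


class MTU:
    """Master: initiates every exchange and aggregates the replies."""
    def __init__(self, rtus):
        self.rtus = rtus

    def scan(self):
        # The MTU addresses each RTU in turn; RTUs only respond.
        return {rtu.rtu_id: rtu.poll() for rtu in self.rtus}
```

Note that only `MTU.scan` starts a conversation; `RTU` exposes no call that pushes data unasked, mirroring the rule that the RTU cannot initiate communication.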
Remote Terminal Unit (RTU)
The Remote Terminal Units (RTUs) are basically distributed SCADA-based systems
used at remote locations in applications like oil pipelines, irrigation canals, oil drilling
platforms, etc. They are rugged and should be able to work unattended for long durations.
There are two modes in which Remote Terminal Units work:
1. Under command from the central computer
2. Standalone mode
Since these RTUs have to operate unattended for long durations, the basic
requirements are that they consume minimum power and have considerable self-
diagnostic facility. The main parts of a remote terminal unit are the Input/Output
modules and the Communication module.
Input / Output Modules
Input / Output modules contain analog input modules, analog output modules, digital
input modules and digital output modules.
Communication Module
The communication module is the most important portion of a remote terminal unit and
has an interface available for 2-wire/4-wire communication lines. Some RTUs may also
have built-in transceivers and modems. The following are the basic communication strategies
that an RTU may use depending on the application need:
Wireline communications
Wireline communication offers a number of options, which can be selected
depending upon the distance between the central computer and the RTU, or between two
RTUs. These options are listed here:
(i) Option 1 - RS-232C/RS-422. The RTU can support communication via standard RS-
232C/RS-422 ports. The I/O ports can select the signal levels as well as the baud rates.
(ii) Option 2 - Switched line modem. When the user wants to use existing telephone
lines for communication, a switched line modem can be effective. Such RTUs contain
facilities like auto answer, auto dial and auto select baud rates. The modem is ideal for data
networks configured with time or event reporting RTUs and for master station polling networks.
(iii) Option 3 - 2-wire or 4-wire communication. The modem resident in the RTU can
be configured for 2-wire or 4-wire communication on dedicated lines. The same communication
protocol is used for all devices, making the actual network configuration transparent to the user.
Terrestrial UHF/VHF radio with store-and-forward capability
The RTU may support a complete line of UHF/VHF terrestrial radios. The
communication protocol in these RTUs is transparent to the user and supports CRC
error checking and packet protocols for error-free data transmission. The store-and-
forward capability of the RTU minimizes the equipment required in large networks.
Satellite communications
In applications where wireline and terrestrial radio communications are impossible
or cost-prohibitive, satellite communication may be desirable. Some of the latest RTUs
provide the facility to be interfaced to one-way or two-way satellite communication using Very
Small Aperture Terminals (VSATs). These terminals use one-metre antennas and have data
rates from 50 to 60 Kbps (kilobits per second).
Fibre-optic communications
For applications where electromagnetic interference or hazardous electrical potentials
exist, the RTU can be networked using fibre-optic cables. The same communication protocols
and networking concepts available for wireline and radio are used for fibre-optic
communication.
Distributed SCADA System
In any application, if the number of channels is quite large, then in order to interface
them to the processor one has to use multiplexers at different levels.
Figure 23. 256 Channel SCADA with single microprocessor
Figure 23 shows the interfacing of 256 channels using 17 multiplexers of 16 channels
each. Eight address lines are used to address the 256 channels: the upper four lines select a
particular multiplexer and the lower four lines select a particular channel within that
multiplexer. The 8-bit channel address thus maps directly onto the channel number and can be
manipulated in any way. The other parts are the same as described earlier. This approach is
suitable for processes which are basically slow: even if a channel is scanned only once in every
scan cycle, it is only after the other 255 channels have been scanned, and limit checking and
analysis have been performed, that a particular channel will be addressed again. This is not
acceptable in many processes.
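The address split above can be expressed directly with shift and mask operations; this small sketch only illustrates the nibble decoding, not the analog hardware around it:

```python
def decode_address(addr):
    """Split an 8-bit channel address (0-255): the upper nibble selects
    one of 16 first-level multiplexers, the lower nibble selects a
    channel within that multiplexer."""
    mux = (addr >> 4) & 0x0F       # upper four address lines
    channel = addr & 0x0F          # lower four address lines
    return mux, channel
```

So address 0xA3 selects channel 3 of multiplexer 10, and address 255 selects the last channel of the last multiplexer.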
(a) Star configuration
(b) Daisy chain configuration
Figure 24. Distributed SCADA structure
Figures 24(a) and (b) show the interfacing of a number of SCADA systems with a central
computer in star and daisy chain configurations respectively. The SCADA systems directly
connected to transducers are called nodes and are the same as the systems described earlier.
They scan the channels using one of the techniques discussed earlier, convert the data into
engineering units, perform limit checking, generate an alarm if a data item crosses the limits,
and generate printouts. In addition to these functions, the data regarding the channels in each
node is transferred to the central computer, which analyses the system performance and
generates printouts. The printouts are generated by exception, i.e. unnecessary data is not
printed at any point. At the node level, the printout is required for the operators to run the
system; depending on the node performance, the operator may decide to monitor a channel
more frequently, change the limits, etc. The printout at the central node is required for the
managers to take long-term decisions to optimise performance. Details of channel performance
and limit violations are not required at this level; on the other hand, histograms of input and
output material flow, fuel consumption, etc. may be more helpful. The concept of local area
networks or microprocessor interconnections can be used very effectively in a distributed
SCADA system. Thus we conclude that distributed SCADA is the ultimate solution for complex
process plant monitoring.
Data Logger
A data logger (also datalogger or data recorder) is an electronic device that records
data over time or in relation to location, either with a built-in instrument or sensor or via
external instruments and sensors. Increasingly, but not entirely, they are based on a digital
processor (or computer). They are generally small, battery powered, portable, and equipped
with a microprocessor, internal memory for data storage, and sensors. Some data loggers
interface with a personal computer and use software to activate the data logger and to view
and analyze the collected data, while others have a local interface device (keypad, LCD) and
can be used as stand-alone devices.
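A minimal stand-alone logger along these lines might be sketched as follows; the names are illustrative and a real device would sample on a hardware timer rather than under program control:

```python
import time

class DataLogger:
    """Minimal logger: samples a sensor and keeps timestamped records
    in internal memory, like a stand-alone battery-powered unit."""
    def __init__(self, sensor):
        self.sensor = sensor          # callable returning one reading
        self.records = []             # (timestamp, value) pairs

    def log_once(self, timestamp=None):
        t = time.time() if timestamp is None else timestamp
        self.records.append((t, self.sensor()))
```

Calling `log_once` on a fixed schedule gives the unattended 24-hour recording described below; downloading `records` to a PC corresponds to the software-based readout.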
Figure 25. Data logger or Data recorder
Data loggers vary from general-purpose types for a range of measurement
applications to very specific devices for measuring in one environment or application type only.
It is common for general-purpose types to be programmable; however, many remain static
machines with only a limited number of changeable parameters, or none. Electronic data
loggers have replaced chart recorders in many applications.
One of the primary benefits of using data loggers is the ability to automatically collect
data on a 24-hour basis. Upon activation, data loggers are typically deployed and left
unattended to measure and record information for the duration of the monitoring period. This
allows for a comprehensive, accurate picture of the environmental conditions being monitored,
such as air temperature and relative humidity.
The cost of data loggers has been declining over the years as technology improves and
costs are reduced. Simple single-channel data loggers cost as little as $25, while more
complicated loggers may cost hundreds or thousands of dollars.
Applications
Applications of data logging include:
• Unattended weather station recording (such as wind speed/direction, temperature, relative
humidity, solar radiation).
• Unattended hydrographic recording (such as water level, water depth, water flow, water
pH, water conductivity).
• Unattended soil moisture level recording.
• Unattended gas pressure recording.
• Offshore buoys for recording a variety of environmental conditions.
• Road traffic counting.
• Measuring temperatures (humidity, etc.) of perishables during shipments: cold chain.
• Measuring variations in light intensity.
• Process monitoring for maintenance and troubleshooting applications.
• Process monitoring to verify warranty conditions.
• Wildlife research with pop-up archival tags.
• Measuring the vibration and handling shock (drop height) environment of distribution
packaging.
• Tank level monitoring.
• Deformation monitoring of any object with geodetic or geotechnical sensors controlled by
an automatic deformation monitoring system.
• Environmental monitoring.
• Vehicle testing (including crash testing).
• Motor racing.
• Monitoring of relay status in railway signalling.
• Science education, enabling 'measurement', 'scientific investigation' and an appreciation
of 'change'.
• Recording trend data at regular intervals in veterinary vital signs monitoring.
• Load profile recording for energy consumption management.
• Temperature, humidity and power use for heating and air conditioning efficiency studies.
• Water level monitoring for groundwater studies.
• Digital electronic bus sniffing for debug and validation.
Unit IV - DISTRIBUTED CONTROL SYSTEM
Syllabus
DCS – Various Architectures – Comparison – Local control unit – Process
interfacing issues – Communication facilities
Evolution of Industrial Control Technology
Figure 1. Evolution of Industrial Control Technology
The lines of technological development can be divided into two separate streams, as
shown in Figure 1. The upper stream, with its two branches, is the more traditional one, and
includes the evolution of analog controllers and other discrete devices such as relay logic and
motor controllers. The second stream is a more recent one that includes the use of large-scale
digital computers and their mini and micro descendants in industrial process control. These
streams have merged into the current main stream of distributed digital control systems.
Key Milestones in Control System Evolution:
• 1934 – Direct-connected pneumatic controls dominate market.
• 1958 – First computer monitoring in electric utility.
• 1959 – First supervisory computer in refinery.
• 1960 – First solid-state electronic controllers on market.
• 1963 – First Direct Digital Control (DDC) system installed.
• 1970 – First programmable logic controllers (PLC) on market.
• 1975 – First distributed digital control system on market.
Traditional Control System Developments
The early discrete-device control systems listed above were distributed around the plant.
Individual control devices, such as governors and mechanical controllers, were located at the
process equipment to be controlled. Local readouts of set points and control outputs were
available, and a means of changing the control mode from manual to automatic usually was
provided. It was up to the operators to coordinate the control of the many devices that made up
the total process. They did this by roaming around the plant and making corrections to the
control devices. This was a feasible approach to the control of early industrial processes
because the plants were small geographically and the processes were not too large or complex.
The same architecture was copied when direct-connected pneumatic controllers were
developed in the late 1920s. These controllers provided more flexibility in the selection and
adjustment of control algorithms, but all the control elements of the control loop were still
located in the field. There was no communication between the controllers other than that
provided by each operator to other operators in the plant using visual and vocal means. This
situation changed of necessity in the late 1930s due to growth in the size and complexity of the
processes to be controlled: it became more and more difficult to run a plant using the isolated-
loop control architecture described above.
The emphasis on improving overall plant operations led to a movement towards
centralized control and equipment rooms. This was made possible by the development of
transmitter-type pneumatic systems. In this architecture, measurements made at the process
were converted to pneumatic signals at standard levels, which were then transmitted to the
central location. The required control signals were computed at this location and then
transmitted back to the actuators at the process. The great advantage of this architecture
was that all of the process information was available to the operator at the central location.
In the late 1950s and early 1960s, the technology used to implement this architecture
started to shift from pneumatics to electronics. One of the key objectives of this shift was to
replace the long runs of tubing used in pneumatic systems with wires. This change reduced
the cost of installing the control systems and also eliminated the time lag inherent in
pneumatic systems.
Another consequence of the centralized control architecture was the development of the
split controller structure. In this type of controller, the operator display section is panel-
mounted in the control room and the computing section is located in a separate rack in an
adjoining equipment room. The split controller structure is especially appropriate for complex,
interactive control systems.
In the early 1970s a sophisticated device known as the programmable logic controller
(PLC) was developed to implement sequential logic systems. This device is significant because
it was one of the first special-purpose, computer-based devices that could be used by someone
who was not a computer specialist; it was designed to be programmed by a user familiar with
relay logic diagrams. All the versions of sequential logic systems have been implemented in
direct-connected distributed architectures as well as in centralized ones. In each case the logic
controller has been associated directly with the corresponding unit of process equipment, with
little or no communication between it and other logic controllers. In the late 1970s, PLCs and
computers started to be connected together in integrated systems for factory automation.
Computer-based Control System Developments
The first applications of computers to industrial processes were in the areas of plant
monitoring and supervisory control. In September 1958, the first industrial computer system
for plant monitoring was installed at an electric utility power generating station. This
innovation provided an automatic data acquisition capability. In 1959 and 1960, supervisory
computer control systems were installed in a refinery and in a chemical plant. In these
applications, analog controllers were still the primary means of control; the computer used the
available input data to calculate control set points. The next step in the evolution of computer
process control was the use of the computer in the primary loop itself, in a mode usually known
as direct digital control or DDC. The first DDC system was installed in 1963 in a petrochemical
plant. For security, a backup analog control system was provided to ensure that the process
could be run automatically in the event of a computer failure. The advantages of digital control
over analog control are that tuning parameters and set points can be changed conveniently,
complex control algorithms can be implemented to improve plant operation, and control
tuning parameters can be set adaptively to track changing operating conditions. As a result of
the developments described above, two industrial control system architectures came to
dominate the scene by the end of the 1970s. Typical examples of these architectures are shown
in the figures below.
Hybrid System Architecture
Figure 2. Hybrid System Architecture
Figure 2 shows the hybrid architecture, which makes use of a combination of discrete
control hardware and computer hardware in a central location to implement the required control
functions. In this approach, first-level or local control of the plant unit operations is
implemented using discrete analog and sequential logic controllers. Panel board
instrumentation connected to these controllers is used for operator interfacing and is located in
the central control room area. A supervisory computer and associated data acquisition system
are used to implement the plant management functions, including operating point optimization,
alarming, data logging, and historical data storage and retrieval. The computer is also used to
drive its own operator interface, usually consisting of one or more video display units (VDUs).
Central Computer System Architecture
In this architecture, shown in Figure 3, all system functions are implemented in high-
performance computer hardware in a central location. In general, redundant computers are
required so that the failure of a single computer does not shut the whole process down. Operator
interfacing for plant management functions is provided using computer-driven VDUs. Operator
interfacing for first-level continuous and sequential closed-loop control may also be
implemented using VDUs. Optionally, the computers can be interfaced to standard panel board
instrumentation so that the operator in charge of first-level control can use a more familiar set
of control and display hardware. Note that both of the above systems use computers; the main
difference between the two is where the first-level continuous and sequential logic control
functions are implemented.
Figure 3. Central Computer System Architecture
Distributed Control System (DCS) Architecture
Introduction: The biggest disadvantage of the centralized computer control
architecture is that the central processing unit (CPU) represents a single point of failure that
can stop the whole process if it is lost. Another problem with these computer-based systems
has been that the software required to implement all of the functions is extremely complex,
and requires a priesthood of computer experts to develop the system. The centralized system
is also limited in its capability to accommodate change and expansion.
The hybrid system architecture has its own deficiencies. It is composed of many
different subsystems, and starting them up and making them work as an integrated whole is
no less difficult a task. The hybrid approach is also functionally limited compared to the
central computer-based system.
The limitations of the centralized computer system and the hybrid system led to the
distributed control system shown in Figure 4. The devices in this DCS architecture are grouped
into three categories: those that interface directly to the process to be controlled or monitored,
those that perform high-level human interfacing and computing functions, and those that
provide the means of communication between the other devices. A brief definition of each
device is given below:
Local Control Unit (LCU): The smallest collection of hardware in the system that can
do closed-loop control. The LCU interfaces directly to the process.
Low Level Human Interface (LLHI): A device that allows the operator or instrument
engineer to interact with the LCU (e.g. to change set points, control modes, control
configurations, or tuning parameters) using a direct connection. LLHIs can also interface
directly to the process. Operator-oriented hardware at this level is called a Low Level Operator
Interface (LLOI); instrument engineer-oriented hardware is called a Low Level Engineering
Interface (LLEI).
Data Input / Output Unit (DI/OU): A device that interfaces to the process alone for the
purpose of acquiring or outputting data. It performs no control functions.
High Level Human Interface (HLHI): A collection of hardware that performs
functions similar to the LLHI but with increased capability and user friendliness. It interfaces
to other devices only over the shared communication facilities. Operator-oriented hardware at
this level is called a High Level Operator Interface (HLOI); instrument engineer-oriented
hardware is called a High Level Engineering Interface (HLEI).
High Level Computing Device (HLCD): A collection of microprocessor based
hardware that performs plant management functions traditionally performed by a plant
computer. It interfaces to other devices only over the shared communication facilities.
Figure 4. Generalized Distributed Control System
Computer Interface Device (CID): A collection of hardware that allows an external
general-purpose computer to interact with devices in the DCS using the shared communication
facilities.
Shared Communication Facilities: One or more levels of communication hardware
and associated software that allow the sharing of data among all devices in DCS. Shared
communication facilities do not include dedicated communication channels between hardware
elements within the device.
Comparison of Architectures
The goal in developing distributed control systems has been to retain the best features
of the central computer and hybrid architectures described above.
1. Scalability and expandability - Hybrid: Good, due to modularity. Central computer: Poor,
very limited range of system size. Distributed: Good, due to modularity.
2. Control capability - Hybrid: Limited; analog and sequential control hardware. Central
computer: Full digital control capability. Distributed: Full digital control capability.
3. Operator interfacing capability - Hybrid: Limited by panel board instrumentation. Central
computer: Digital hardware provides significant improvement for large systems. Distributed:
Digital hardware provides improvement for the full range of system sizes.
4. Integration of system functions - Hybrid: Poor, due to variety of products. Central
computer: All functions performed by the central computer. Distributed: Functions integrated
in a family of products.
5. Significance of single-point failure - Hybrid: Low, due to modularity. Central computer:
High. Distributed: Low, due to modularity.
6. Installation costs - Hybrid: High; discrete wiring, large volume of equipment. Central
computer: Medium; saves control and equipment room space, but uses discrete wiring.
Distributed: Low; savings in both wiring costs and equipment space.
7. Maintainability - Hybrid: Poor; many module types, few diagnostics. Central computer:
Medium; requires highly trained computer maintenance personnel. Distributed: Excellent;
automatic diagnostics and module replacement.
Local Control Unit (LCU)
The LCU is the smallest collection of hardware in the DCS that performs closed-loop
control. That is, it takes inputs from process-measuring devices and commands from the
operator, and computes the control outputs needed to make the process follow the commands.
It then sends the control outputs to actuators, which drive valves and other mechanical devices
that regulate the flows, temperatures, pressures, and other variables to be controlled in the
plant. An LCU malfunction can cause a condition that is hazardous to both people and
equipment, so its proper design is critical to the safe and efficient operation of the plant.
Basic Elements of a Microprocessor-Based Controller
The basic elements of an LCU are shown in Figure 5. The microprocessor along with the
associated clock comprises the central processing unit (CPU) of the controller. ROM is used
for permanent storage of controller programs; RAM is used for temporary storage of
information. Depending upon the type of microprocessor used, RAM and ROM can be located
on the microprocessor chip or on separate memory chips. The LCU must have I/O circuitry so
that it can communicate with the external world by reading in, or receiving, analog and digital
data as well as sending similar signals out. The CPU communicates with the other elements in
the LCU over an internal shared bus that carries addressing, control and status information in
addition to the data.
The controller structure shown in Figure 5 is the minimum required to perform basic
control functions. The control algorithms could be coded in assembly language and loaded into
ROM. After the controller is turned on, it reads inputs, executes the control algorithms,
and generates control outputs in a fixed cycle indefinitely. However, because the situation is not
this simple in industrial applications, the controller structure shown in the figure must be
enhanced to include the following:
 Flexibility of changing the control configuration.
 Ability to use the controller without being a computer expert.
 Ability to bypass the controller in case it fails so that the process can still be
controlled manually.
 Ability of the LCU to communicate with other LCUs and other elements in the
system.
Figure 5. Basic elements of a Local Control Unit
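The basic fixed cycle described above (read inputs, execute the control algorithm, write outputs) can be sketched in Python. This is an illustrative model only: the PI algorithm, the gains, and the I/O stubs are assumptions for the sketch, not any particular LCU's firmware.

```python
# Minimal sketch of an LCU's fixed execution cycle.
# The PI algorithm and I/O stubs are illustrative assumptions.

class PILoop:
    def __init__(self, kp, ki, dt):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.integral = 0.0

    def compute(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt        # accumulate the integral term
        return self.kp * error + self.ki * self.integral

def run_cycle(loop, setpoint, read_input, write_output, cycles):
    """Fixed cycle: read, compute, write. A real LCU repeats this
    indefinitely; here it is limited to a fixed number of cycles."""
    for _ in range(cycles):
        pv = read_input()                       # process-measuring device
        out = loop.compute(setpoint, pv)
        write_output(out)                       # drive valve / actuator
```

In a real controller this loop would be burned into ROM and triggered by the clock; the sketch only shows the order of operations within one scan.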
LCU Architecture
Three configurations of the LCU architecture are described:
i) Configuration A, ii) Configuration B, and iii) Configuration C.
Configuration A: In configuration A, the controller size is the minimum required to
perform a single loop of control, a single motor control function, or another simple sequencing
function. The LCU provides both analog and digital inputs and outputs and executes both
continuous and logic function blocks. All outputs are updated within 0.1 to 0.5 seconds at most.
Controller size: Number of functions needed for a single PID loop or motor controller.
Controller functionality: Uses both continuous and logic function blocks.
Controller scalability: High degree of scalability from small to large systems.
Controller performance: Requirements can be met with a simple and inexpensive set of
microprocessor-based hardware.
Communication channels: Needs inter-module communications for control.
Controller output security: The controller has single-loop integrity; usually only manual backup is
needed.
Figure 6. LCU Architecture Configurations
Configuration B: Configuration B represents an architecture in which two different types
of LCUs are used to provide the full range of required continuous and logic functions. Outputs
are updated in the range of 0.1 to 0.5 seconds.
Controller size: Includes the functions and I/O needed for eight control loops and a small logic
controller.
Controller functionality: Continuous and logic function blocks are split between the controller types.
Controller scalability: Requires both types even in small systems.
Controller performance: Because of the functional split, performance requirements are not
excessive. It is usually implemented using a high-performance 8-bit or an average-performance
16-bit microprocessor and matching memory components.
Communication channels: The functional separation requires a close interface between controller
types.
Controller output security: The lack of single-loop integrity requires redundancy in critical
applications.
Configuration C: Configuration C represents a multi-loop controller architecture in
which both continuous and logic functions are performed. Outputs are updated in 0.5 seconds
or less.
Controller size: System size is equivalent to a small DDC system.
Controller functionality: Uses both continuous and logic function blocks; can support high-level
languages.
Controller scalability: Not scalable to very small systems.
Controller performance: The hardware must be high-performance to execute a large number of
functions. It is usually implemented with one or more 16-bit or 32-bit microprocessors in conjunction
with support hardware such as arithmetic co-processors to attain the required speed.
Communication channels: Large communication requirement to the human interface; minimal
between controllers.
Controller output security: The size of the controller requires redundancy in all applications.
Comparison of LCU Architecture
Parameter | Configuration A (Single Loop) | Configuration B (2 LCU Types) | Configuration C (Multi-Loop)
1. Controller size | Single PID loop or motor controller | 8 control loops plus small logic controller | Small DDC system
2. Controller functionality | Both continuous and logic blocks | Split between controllers | Can support high-level languages
3. Controller scalability | Small to large systems | Both types even in small systems | Not very small systems
4. Controller performance | Simple and inexpensive microprocessor-based hardware | Requirements not excessive | High performance to execute large number of functions
5. Communication channels | Need inter-module communications for control; minimal need for human interface | Close interface between controller types | Large to human interface; minimal between controllers
6. Controller output security | Single-loop integrity; only manual backup | Lacks single-loop integrity; requires redundancy in critical applications | Requires redundancy in all applications
LCU-Process Interfacing Issues
Figure 7 shows a block diagram illustrating these other interfaces from the point of
view of the LCU. This figure expands on the basic LCU elements through the addition of
interfaces to external communication facilities and to a low-level human interface (LLHI) device. The
communication interfaces permit the LCU to interact with the rest of the distributed system to
accomplish several functions:
 To allow several LCUs to implement control strategies that are larger in scope than
is possible with a single LCU.
 To allow transmission of process data to higher-level elements.
 To allow these higher-level elements to transmit information requests and control
commands to the LCUs.
 To allow two or more LCUs to act together as redundant controllers to perform the
same control or computational functions.
 To augment the I/O capability of the LCUs with that of data input/output units in the system.
The LLHI device and its associated interface hardware allow several important human
interfacing functions to be accomplished through hardware that is connected directly to the
LCU rather than over the shared communication facilities. These functions include:
 Allowing the plant operator to select control set points and controller modes.
 Allowing the plant operator to override automatic equipment and control the process
manually in case of a controller hardware failure or other system malfunction.
 Allowing the plant instrumentation engineer to configure control system logic and later tune
control system parameters.
Figure 7. LCU Interfaces to Distributed System Elements
Security Design Issues for LCU
Security Requirements
The first priority of the user of any process control system is to keep the process running
under safe operating conditions. One way of designing a highly reliable control system is to
manufacture it using only the highest quality of components, conduct extensive burn-in testing
of the hardware, and implement other quality-control measures in the production process. The
security objectives necessary in designing a DCS form the following hierarchy:
 Maximize the availability of the automatic control functions of the system: failure of a single
control system element must not shut down all automatic control functions.
 Fall back from automatic to manual control if a portion of the control system fails.
 If both automatic and manual functions fail, the operator must be able to shut down the
process in an orderly and safe manner.
Overview of Security Design Approaches
There are three basic categories of security approaches currently in use.
i) Manual backup only.
ii) Hot standby redundant controller.
iii) Multiple active controllers.
i) Provide manual backup only: In this case, each LCU is designed to implement only
one or two control loops, and the operator takes over manual control in case of a failure of the
LCU. The control output is fed back to the manual backup station and to the computation
section of the controller so that the inactive element can synchronize its output with the active
element. This ensures that the output to the process will not be bumped when a switchover from
the active to the inactive device occurs.
Figure 8. Manual Backup Approach
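The output-tracking idea behind this bumpless switchover can be sketched as follows. This is an illustrative model; the Station class and its names are hypothetical, not part of any real backup station's design.

```python
# Sketch of output tracking for bumpless transfer: the inactive element
# continuously copies the output actually sent to the process, so a
# switchover does not bump the process.

class Station:
    """One output element (automatic controller or manual backup station)."""
    def __init__(self):
        self.output = 0.0

    def track(self, actual_output):
        # the inactive element synchronizes with the active output
        self.output = actual_output

def switchover(active, standby):
    """Transfer control; because the standby has been tracking, the first
    output it produces equals the last active output (bumpless)."""
    standby.track(active.output)
    return standby, active   # standby becomes active, and vice versa
```

In practice the tracking happens continuously through the feedback path shown in Figure 8, not only at the moment of transfer.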
ii) Provide a hot standby redundant controller: In this case, the LCU is backed up by
another LCU that takes over if the primary controller fails. In this way, full automatic control
is maintained even under failure conditions. The control output is fed back to both controllers
to allow bumpless transfer to be accomplished.
Figure 9. Hot Standby Redundancy Approach
iii) Provide multiple active controllers: In this case, several LCUs are active at the same
time in reading process inputs, calculating control algorithms, and producing control outputs
to the process. Since only one output can be used at a time, voting circuitry selects the valid
outputs. Failure of one of the controllers does not affect the automatic control function. The
selected control output is fed back so that each controller can compare its own output with the
output generated by the voting device.
Figure 10. Multiple Active Redundant Controllers
Secure Control Output Design
Techniques to improve the security of the control output circuitry:
 Minimum output from D/A converters.
 Outputs (both analog and digital) driven to a safe level when the LCU fails.
 Independent power supply for the control output and the rest of the LCU.
 Actual value of the output read back by the rest of the LCU.
 Minimum number of components and electrical connections between the control output
hardware and the field terminating point.
Multiplexed Control Output Configuration
Figure 11. Multiplexed Control Output Configuration
In this scheme (in Figure 11), a single D/A converter is used to produce several control
outputs by including an analog multiplexer in the circuitry. To generate each output, the
microprocessor writes the proper values to the output register shown, and the D/A converter
generates a corresponding analog voltage.
At effectively the same time, the processor instructs the multiplexer to switch the output
of the D/A converter to the proper hold circuit. This hold circuit is an analog memory that
stores the output value and causes the current driver to generate the appropriate output current,
usually in the 4-20mA range. Then the processor writes the next output value into the register
and directs the D/A converter output to the next hold circuit through the MUX. This process
occurs on a cyclic basis at least several times per second.
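A rough software model of this refresh cycle is sketched below. It is purely illustrative: the hold circuits are modelled as a simple list, and the droop rate is a hypothetical figure chosen only to show why the refresh must repeat several times per second.

```python
# Sketch of the multiplexed output scheme: one shared D/A converter feeds
# several sample-and-hold circuits through a multiplexer, cyclically.

def refresh_outputs(desired, holds):
    """One refresh cycle: for each channel the processor writes the value
    to the output register, the shared D/A converter generates the voltage,
    and the MUX routes it to that channel's hold circuit."""
    for channel, value in enumerate(desired):
        dac_voltage = value            # single shared D/A conversion
        holds[channel] = dac_voltage   # MUX switches to this hold circuit
    return holds

def droop(holds, rate=0.01):
    """Between refreshes each analog hold circuit drifts slightly (droop),
    which is why the cycle must repeat several times per second."""
    return [v * (1.0 - rate) for v in holds]
```

Each periodic refresh restores the held values before the droop becomes significant, which is the essential trade-off of sharing one D/A converter across channels.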
Secure Control Output Configuration
In this case (in Figure 12), the D/A converter is dedicated to generating a single control
output. Also provision is made to allow the processor to “read back” the value of the control
output. This is done by means of a current-to-voltage converter and an A/D converter.
The processor uses this capability to verify that the control output has been generated
correctly. In some systems, a known reference voltage is switched into the A/D converter and
the processor checks the output value of the converter.
The processor can then take into account and correct any errors that occur in the process
of reading the A/D converter output and generating the control output values to the D/A
converter.
Figure 12. Secure Control Output Configuration
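The read-back-and-correct idea can be sketched as follows. This is illustrative only: the offset error in the test path, the tolerance, and the function names are assumptions, not a description of any specific converter hardware.

```python
# Sketch of the read-back check: the processor writes the output, reads it
# back through the current-to-voltage and A/D path, and trims the value
# written to the D/A converter until the measured output matches the target.

def write_with_readback(target, dac_write, adc_read, max_iter=5, tol=0.01):
    """Write the output, read it back, and correct the written value until
    the measured output matches the target within tolerance."""
    value = target
    for _ in range(max_iter):
        dac_write(value)
        measured = adc_read()
        error = target - measured
        if abs(error) <= tol:
            break
        value += error          # correct gain/offset errors in the path
    return value
```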
Pulsed Control Output Configuration
Figure 13. Pulsed Control Output Configuration
In this case (in Figure 13), the LCU processor is directly involved in the output
generation process by generating raise and lower commands to an up/down counter in the output
channel.
This counter responds to the commands by incrementing or decrementing a digital value
in memory. This value is fed to the D/A converter, which generates a control output through
the current driver.
The processor keeps track of the output through the current-to-voltage converter and
A/D converter circuitry shown and manipulates the raise and lower commands until the output
reaches the desired value.
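A minimal sketch of the raise/lower scheme follows. It is illustrative: the step size, the identity read-back, and the function names are assumptions for the sketch.

```python
# Sketch of the pulsed output scheme: the processor issues raise/lower
# commands to an up/down counter until the read-back output reaches the
# desired value.

def drive_to(target, counter, read_back, step=1):
    """Issue raise/lower commands until the output matches the target."""
    while read_back(counter) < target:
        counter += step            # "raise" command increments the counter
    while read_back(counter) > target:
        counter -= step            # "lower" command decrements the counter
    return counter
```

In real hardware the comparison would use the A/D read-back path rather than the counter value itself, so calibration errors in the D/A converter are also corrected.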
Secure Digital Output Configuration
Figure 14. Secure Digital Output Configuration
An output read-back capability can also be added to this configuration if desired. The
failsafe output selection section is much simpler in the digital output case than in the analog one:
in the digital case, there are only two states (0 or 1), and selecting and generating the safe state
is a relatively straightforward process.
Redundant Controller Designs
Guidelines to follow when designing a redundant control system:
1. The redundant architecture should be as simple as possible. At some point, more hardware
reduces system reliability.
2. The architecture must minimize potential single points of failure. The redundant
hardware elements must be as independent as possible so that the failure of any one
does not bring the rest down as well.
3. The redundant nature of the controller configuration should be transparent to the user;
that is, the user should be able to deal with the redundant system in the same way as a
non-redundant one.
4. The process should not be bumped when a failure occurs.
5. After a control element has failed, the system should not rely on that element until it is
replaced.
6. Hot-spare replacement: failed elements can be replaced without shutting down.
Several approaches to designing a redundant LCU architecture:
i) CPU redundancy
ii) One-on-one redundancy
iii) One-on-many redundancy
iv) Multiple active redundancy
CPU Redundancy
In the configuration shown in figure 15, only the CPU portion of the LCU is redundant;
the I/O circuitry is not. The CPU is made redundant because its failure affects all of the
control outputs. Only one of the CPUs is active in reading inputs, performing control
computations, and generating control outputs at any one time. The user designates the primary
CPU through the priority arbitrator circuitry, using a switch setting or other mechanism. The
arbitrator monitors the operation of the primary CPU; if it detects a failure in the primary, it
transfers priority to the backup. During operation, the backup CPU periodically
updates its internal memory by reading the state of the primary CPU through the arbitrator.
Figure 15. CPU Redundancy Configuration
While both CPUs are connected to the plant communication system, only the primary
is active in transmitting and receiving messages over this link. The main operator and
engineering interface in this system is the high-level human interface, a CRT-based video
display unit (VDU) that interfaces with the LCU as if it were non-redundant. Only the primary CPU
will accept control commands or configuration and tuning changes transmitted by the VDU.
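The arbitrator's behaviour can be sketched as follows. This is an illustrative model only: the dictionaries standing in for CPU state and the health flag are assumptions, not a vendor's arbitrator design.

```python
# Sketch of a priority arbitrator for CPU redundancy: the backup
# periodically copies the primary's state, and the arbitrator transfers
# priority to the backup when it detects a primary failure.

class Arbitrator:
    def __init__(self, primary, backup):
        self.primary, self.backup = primary, backup

    def scan(self):
        """One arbitration scan; returns the currently active CPU."""
        if self.primary["healthy"]:
            # backup periodically updates its memory from the primary
            self.backup["state"] = dict(self.primary["state"])
        else:
            # failure detected: transfer priority to the backup
            self.primary, self.backup = self.backup, self.primary
        return self.primary
```

Because the backup's memory was synchronized on earlier scans, the takeover uses the last known good state, which is what makes the transfer transparent to the operator.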
One-on-One Backup Redundancy
Figure 16. One-on-One Backup Redundancy
The remaining three redundancy approaches provide for redundancy in the control
output circuitry as well as in the CPU hardware. Most of these architectures do not provide a
low-level operator interface for manual backup purposes. This approach provides a total backup
LCU for the primary LCU. The control output circuitry is duplicated in this case; an output
switching block must be included to transfer the outputs when the controller fails. As in the
first redundant configuration, a priority arbitrator designates the primary and backup LCUs and
activates the backup if a failure in the primary is detected. In this configuration, the arbitrator
has the additional responsibility of sending a command to the output switching circuitry, if the
primary LCU fails, causing the backup LCU to generate the control outputs. Communication
with the high-level human interface is handled in the same way as in the CPU-redundant
configuration.
The main advantages of the one-on-one configuration, compared to the previous CPU-redundant
approach, are that no manual backup is needed and that it eliminates any questions
that may arise with a partial-redundancy approach.
One-on-Many Backup Redundancy
This is a more cost-effective approach to redundancy: a single LCU is used as a hot standby to
back up any one of several primary LCUs. As in the other configurations, an arbitrator is required to
monitor the status of the primaries and switch in the backup when a failure occurs. In this case,
there is no way of knowing ahead of time which primary controller the backup will have to
replace, so a general switching matrix is necessary to transfer the I/O from the failed controller
to the backup. The control configuration is loaded into the backup LCU from the primary LCU only
after the primary has failed; this approach violates the second and fifth design guidelines listed above.
A better approach would be to store a copy of each primary LCU's control configuration in the
arbitrator. When an LCU failure occurs, the arbitrator could then load the proper configuration
into the backup LCU.
Figure 17. One-on-Many Backup Redundancy
Multiple Active Redundancy
Three or more redundant LCUs are used to perform the same control functions
performed by one LCU in the non-redundant configuration. In this approach, all of the redundant
controllers are active at the same time in reading process inputs, computing the control
calculations, and generating control outputs to the process. Each LCU has access to all of the
process inputs needed to implement the control configuration. An output voting device selects
one of the valid control outputs from the controllers and transmits it to the controlled process.
Figure 18. Multiple Active Redundancy
When a controller fails, it is designed to generate an output outside the normal range.
The output voting device will then discard this output as an invalid one. The voting device is
designed to select the signal generated by at least two out of the three controllers.
The main advantage of this approach is that, as long as the output voting device is
designed for high reliability, it significantly increases the reliability of the control system.
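The two-out-of-three selection can be sketched as a median vote over in-range signals. The 4-20 mA limits follow the output range mentioned earlier; everything else in the sketch is an assumption, not an actual voting-circuit design.

```python
# Sketch of the output voting device for three active controllers: outputs
# outside the normal range are discarded as invalid, and the median of the
# remaining (agreeing) signals is selected.

def vote(outputs, low=4.0, high=20.0):
    """Select a valid control output (4-20 mA) from redundant controllers."""
    valid = sorted(v for v in outputs if low <= v <= high)
    if not valid:
        raise RuntimeError("no valid controller output")
    return valid[len(valid) // 2]   # median of the valid outputs
```

A failed controller that drives its output out of range (e.g., to 0 mA) is automatically excluded, so the process keeps receiving one of the healthy outputs.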
Process Input / Output Design Issues
Input / Output Requirements
The first dimension is simply the large variety of input/output signals that the control
system must handle in order to interface with sensors, analyzers, transmitters, control actuators,
and other field-mounted equipment.
The second problem in providing cost-effective I/O hardware is the wide range of input
and output performance specifications that are imposed to facilitate interfacing with various
types of field equipment:
 Common and normal mode voltage rejection.
 Voltage isolation between terminal and system elements.
 Input impedance requirements.
 Ability to drive loads.
The third dimension to the I/O problem is the varying degree of I/O "hardening"
required in different applications:
 No hardening is required - the application calls for low-cost hardware.
 The application is in a hazardous environment - the I/O hardware must be designed to be
explosion-proof.
 Field-mounted equipment is subject to lightning strikes or large induced voltage spikes -
the I/O hardware must be designed to withstand high-level voltage surges.
 Field-mounted equipment is subject to radio frequency interference (RFI) - the I/O
hardware must be shielded, filtered, or isolated from the RFI environment to minimize or
eliminate errors due to this type of noise.
Input / Output Design Approaches
» Separate unit DI/DO
» Augment in LCU
Communication Facilities
Figure 19. Conventional Point-to-Point Wiring
Conventional Point-to-Point Wiring: In conventional non-distributed control systems,
point-to-point connections allow communication between the various system elements. Such a
system consists of a combination of continuous controllers, sequential controllers, data acquisition
hardware, panelboard instrumentation, and a computer system.
The controllers communicate with each other by means of point-to-point wiring. This
approach to interconnecting system elements has proven to be expensive to design and check
out, difficult to change, burdensome to document, and subject to errors.
• Advantages:
– In a conventional system, communication between system elements travels at the
speed of light with essentially zero delay.
– There is no risk of overloading a shared channel.
• Disadvantages:
– Expensive to design.
– Difficult to change.
– Subject to errors.
Communication Facility as a "Black Box": When DCSs were introduced in the late
1970s, the use of digital communications was extended to control-oriented systems as well.
The communication system began to be viewed as a facility that the various elements and
devices in the distributed network share as a "black box" that replaces the point-to-point wiring
and cabling.
Figure 20. Communication Facility as a “Black Box”
• Advantages:
– The cost of plant wiring is reduced because thousands of wires are replaced by the few
cables or buses used to implement the shared communication system.
– Flexibility in making changes increases because the configuration is in software.
– Less time is needed to implement a large system, since the wiring labor is nearly
eliminated and configuration errors are reduced.
– Control is more reliable due to the reduction in physical connections, and failures are
more easily identified.
• Disadvantage:
– Communication delays occur between system elements.
Communication System Requirements
• Minimize delay and maximize security of transmission.
• Transmission of process variables, control variables, and alarm status information from
the LCUs to the HLHI and LLHI in the system.
• Communication of special commands, operating modes, and control variables from the
HLHI to the LCUs for the purpose of supervisory control.
• Downloading of control system configurations, tuning parameters, and user programs
from the HLHI to the LCUs.
• Transmission of information from data input/output units to high-level computing
devices for the purpose of data acquisition.
• Transfer of large blocks of data (databases) or programs from one high-level computing
device to another high-level or low-level computing device.
• Synchronization of real time among all of the elements in the DCS.
Communication System Performance Requirements
• Maximum size of the system.
– Distance; number of devices.
• Maximum delay time through the system.
• Interaction between LCU architecture and communication facility.
• Rate of undetected errors occurring in the system.
• Sensitivity to traffic loading.
• System scalability.
• System fault tolerance.
• Interfacing requirements.
• Ease of application and maintenance.
• Environment specification.
Architectural Issues
Channel Structure
The first decision to make in evaluating or designing a communications facility is
whether to choose a parallel or serial link as the communication medium.
Parallel: Uses multiple conductors or wires to carry a combination of data and handshaking
signals. It provides a higher message throughput rate than does the serial approach.
Disadvantages:
– Higher cost.
– Data bits may arrive at different times (skew) if the distance between nodes becomes
large (e.g., the IEEE 488 bus).
Serial: Uses only a single coaxial cable, fibre-optic link, or pair of wires.
Advantages:
– Cable cost is lower.
– Supports long distances.
– Can use baseband signaling.
Levels of Subnetworks
A subnetwork is defined to be a self-contained communication system:
• It has its own address structure.
• It allows communications among elements connected to it using a specific protocol.
• It allows communication between elements directly connected to it and elements in other
subnetworks through an interface device that "translates" the message addresses and
protocols of the two subnetworks.
• Local subnetwork - located in the central control room; allows high-level devices to
intercommunicate.
• Plant-level subnetwork - interconnects the control room elements with the distributed
elements in the process.
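The address-translating interface device between two subnetworks can be sketched as follows. This is illustrative only: the address map, message format, and class name are hypothetical, not any standard gateway design.

```python
# Sketch of an interface device between two subnetworks: it "translates"
# message addresses from the local subnetwork's address structure to the
# plant-level one before forwarding.

class Gateway:
    def __init__(self, address_map):
        self.address_map = address_map   # local address -> plant address

    def forward(self, message):
        """Translate the destination address and forward the message."""
        local_dest = message["dest"]
        if local_dest not in self.address_map:
            raise KeyError(f"no route for {local_dest}")
        # rebuild the message with the translated address; data unchanged
        return {**message, "dest": self.address_map[local_dest]}
```

Protocol translation (reframing the message for the other subnetwork's rules) would sit alongside this address translation in a real interface device.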
Figure 21. Communication System Partitioning - Example 1
Figure 22. Communication System Partitioning - Example 2
Communication System Standards
• CAMAC-Computer Automated Measurement And Control.
• IEEE 488 BUS
• PROWAY-PROcess data highWAY.
• IEEE 802 Network.
Unit V - INTERFACES IN DCS
Syllabus:
Operator interfaces - Low level and high level operator interfaces – Displays – Engineering
interfaces – Low level and high level engineering interfaces – Factors to be considered in selecting
DCS – Case studies in DCS.
Operator Interfaces
Introduction
For automated equipment to be used in a safe and effective manner, it is absolutely
necessary to have a well-engineered human interface system that permits error-free interactions between the
humans and the automated system. Two distinct groups of plant personnel interact with the control
system on a regular basis:
• Instrumentation and control engineers: These people are responsible for setting up the control
system initially and adjusting and maintaining it from time to time afterwards;
• Plant operators: These people are responsible for monitoring, supervising and running the
process through the control system during startup, operation and shutdown conditions.
As the generalized distributed control system architecture shows, a human interface capability can be
provided at one or both of two levels:
• Low Level Human Interface: LLHI connected directly to local control unit (LCU) or data input
output unit (DI/DO) via dedicated cabling.
• High Level Human Interface: HLHI connected to LCU or DI/DO via shared communication
facility.
The low-level human interface equipment used in DCS usually resembles the panelboard instrumentation
(stations, indicators, and recorders), and tuning devices used in conventional electric analog control
systems. The HLHI equipment makes maximum use of the latest display technology (e.g., CRTs or flat-
panel displays) and peripheral devices (e.g., printers and magnetic storage) that are available on the
market. The LLHI generally is located geographically close to (within 100-200 feet of) the LCU or
DI/DO to which it is connected.
Some examples of typical installations and their corresponding equipment configurations are
shown in the below:
Figure 1. Stand-Alone Control Configuration
Figure 1 shows a relatively small and simple installation. A single LCU located in the plant
equipment room performs all of the required control functions. Low-level human interface units located
in the equipment room and the plant control room, provide the complete operator and instrument engineer
interface for the control system. This type of equipment configuration is typical of a stand-alone control
system for a small process or of a small digital control system installed in a plant controlled primarily
with conventional electrical analog or pneumatic equipment.
Figure 2. Geographically Centralized Control Configuration
A typical structure of a complete plant wide control system is shown in the figure 2. Several LCUs
are used to implement the functions required in controlling the process; therefore, the control is
functionally distributed. The LCUs are all located in a central equipment room area, and so it is not a
geographically distributed control system. Both high-level and low-level human interface devices are
located in the control room area for operational purposes. Most of the operator control functions are
performed using the high-level interface; the low-level interface is included in the configuration primarily
to serve as a backup in case the high-level interface fails. A high-level human interface is located in the
instrument engineer’s area so that control system monitoring and analysis can be done without disturbing
plant operations. This type of installation is typical of early distributed control system configurations in
which equipment location and operator interface design followed conventional practices.
Figure 3. Geographically Distributed Control Configuration
Another example shown in figure 3, is a fully distributed control system configuration. In this
case, each LCU is located in the plant area closest to the portion of the process that it controls. Associated
low-level human interface equipment is also located in this area. The control room and instrument
engineering areas contain high-level human interface units, which are used to perform all of the primary
operational and engineering functions. The low-level units are used only as manual backup controls in
case the high-level equipment fails or needs maintenance. This configuration takes advantage of two areas
of equipment savings that result from a totally distributed system architecture;
i) Reduction in control room size (by eliminating panelboard equipment),
ii) Reduction in field wiring costs (by placing LCUs near the process).
These examples of system configurations illustrate the point that human interface equipment in a
distributed control system must be designed to meet a wide range of applications:
 Large as well as small systems.
 Centralized equipment configuration (often used in retrofit installations made long after original
plant construction) as well as distributed ones (likely in “grass roots” installation made during
plant construction).
 Variety of human interface philosophies, ranging from accepting CRT only operation to requiring
panelboard instrumentation in at least a backup role.
Operator Interface requirements
The operator interface in a distributed control system must allow the operator to perform tasks
in the following traditional areas of responsibility: process monitoring, process control, process
diagnostics, and process record keeping.
1. Process monitoring: A basic function of the operator interface system is to allow the
operator to observe and monitor the current state of the process. This function includes the
following specific requirements:
– Current values of all process variables in the system.
– Variables identified by tag name rather than by address, with a descriptor associated with
each tag (e.g., TT075/B, "COLUMN TEMPERATURE 75 IN AREA B").
– Display of process variables in engineering units with their values.
– Variables that are functions of or combinations of the basic process variables (e.g., an
average of several temperatures, a maximum of several flows, or a computed enthalpy),
also available in engineering units.
 Alarming: Another monitoring function of the operator interface system.
– Alarm status identified by the control and computing devices for each variable in the
DCS must be reported in a clear manner.
– Similar alarm status must also be reported for computed variables.
– Alarms must be displayed along with the process variables or be easily accessible to the
operator.
– The operator must be alerted and must be able to acknowledge each alarm.
– When multiple alarms occur in a short time period, the operator must be informed of
them in priority order.
– For multivariable alarms (abnormal conditions that depend on several variables), an
appropriate mechanism must report them as well.
 Trend in time: Fast access to the recent history of selected process variables, which are
known as trended variables (TV). Some specific requirements in trending are:
– Group the trended variables by related process function.
– Trend graphs must be clearly labeled in engineering units.
– The operator must be able to obtain precise readings of both current and past values of
trended variables.
– Auxiliary information should also be shown to help evaluate status.
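Fast access to recent history can be sketched with a fixed-size ring buffer. This is illustrative only; the buffer size, sample format, and class name are assumptions, not part of any DCS product.

```python
# Sketch of short-term trending: a fixed-size ring buffer keeps the recent
# history of a trended variable so both current and past values can be
# read precisely. Oldest samples drop off automatically.

from collections import deque

class TrendBuffer:
    def __init__(self, size=100):
        self.samples = deque(maxlen=size)   # bounded recent history

    def record(self, timestamp, value):
        self.samples.append((timestamp, value))

    def current(self):
        """Most recent (timestamp, value) sample."""
        return self.samples[-1]

    def history(self):
        """All retained samples, oldest first, for drawing the trend graph."""
        return list(self.samples)
```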
2. Process control: The process monitoring capabilities just described provide the necessary
information for the operator's primary function, process control. The following specific operator
interface requirements come under the category of process control:
– The operator interface must allow the operator rapid access to all continuous control
loops and logic sequences.
– For each continuous control loop, the interface must allow the operator to change control
modes and set points and to monitor the results.
– The interface must allow the operator to perform logic control operations such as starting
and stopping pumps and opening and closing valves.
– In batch processes, the operator interface must allow the operator to observe the current
status and interact to initiate new steps or halt the sequence.
– In both continuous and sequential control cases, the interface system must allow the
operator to have access to and be able to manipulate the control outputs directly (e.g., in
case of a single-point failure).
3. Process diagnostics: Monitoring and controlling the process under normal operating conditions
are relatively simple functions compared to operation under abnormal conditions caused by plant
equipment or in the instrumentation and control system. The operator interface system must
provide enough information during these unusual conditions to allow the operator to identify the
equipment causing problem, take measures to correct it.
– Ongoing tests and reasonable checks on sensors and analyzers that measure the process
variables of interest.
– Ongoing self-tests on components and modules within the DCS itself: controllers,
communication elements, computing devices, and the human interface equipment itself.
Diagnosing problems within the process itself has been a manual function left to the operator. A
conventional alarming system, for example, may include the most immediate failure symptom but
provide few clues as to the original source of the alarm condition. As a result, diagnostic functions that
automatically detect process faults are now often required in a DCS. These functions may include:
– First-out alarming functions, which tell the operator which alarm in a sequence occurred
first.
– Priority alarming functions, which rank the current alarms by their importance to process
operation, allowing the operator to safely ignore the less important ones.
– More advanced diagnostic functions that use a combination of alarming information and
data on process variables to identify the item of failed process equipment and the mode of
failure.
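The first-out and priority alarming functions can be sketched in a few lines; the tag names and priority levels below are hypothetical, not from any vendor system:

```python
class AlarmManager:
    """Sketch of first-out and priority alarming (tags/priorities invented)."""
    def __init__(self, priorities):
        self.priorities = priorities   # tag -> 1 (critical) .. 3 (low)
        self.active = []               # active alarms, in order of occurrence

    def raise_alarm(self, tag):
        if tag not in self.active:
            self.active.append(tag)

    def first_out(self):
        """Which alarm in the current sequence occurred first."""
        return self.active[0] if self.active else None

    def by_priority(self):
        """Rank current alarms by their importance to process operation."""
        return sorted(self.active, key=lambda t: self.priorities.get(t, 99))

am = AlarmManager({"TRIP": 1, "HI-TEMP": 2, "LO-LEVEL": 3})
for tag in ("LO-LEVEL", "TRIP", "HI-TEMP"):   # alarms arrive in this order
    am.raise_alarm(tag)
```

Here `first_out()` reports `"LO-LEVEL"` (first to occur), while `by_priority()` lists `"TRIP"` first (most important), showing why the two views complement each other.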
4. Process record keeping: One of the more tedious tasks that operating people in a process plant
must perform has been to “walk the board”; that is, to take a pencil and clipboard and periodically
note and record the current values of all process variables in the plant. The record keeping burden
has increased significantly in recent years due to the amount of information that must be logged.
Record keeping was one of the first functions to be automated using conventional computer systems.
This function often can be implemented in the operator interface system without the use of a separate
computer. Specific record keeping requirements include the following:
– Recording of short-term trending information.
– Manual input of process data: The operator must be able to enter manually collected
process information into the system for record keeping purpose.
– Recording of alarms: These are logged on a printer, a data storage device, or both, as they
occur. Often, the return-to-normal status and operator acknowledgements must also be
logged. The information recorded includes the tag name of the process variable, the time
of alarm, and the type of alarm.
– Periodic records of process variable information: The values of selected variables are
logged on a printer, data storage devices, or both on a periodic basis (every few minutes or
hours).
– Long-term storage and retrieval of information: The alarm and periodic logs described
above are accessible for short periods, such as a single eight-hour shift or a single day. In
addition, the same information must be stored on a long-term basis (months or years).
– Recording of operator control actions: Some process plants require the actions of the
operator affecting control of the process to be recorded automatically.
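A hedged sketch of the alarm-recording format described above, capturing tag name, time, and alarm type along with acknowledgement and return-to-normal events (the tag, field names, and event codes are invented for illustration):

```python
import time

def log_alarm(log, tag, alarm_type, event="ALARM"):
    """Append one alarm record: tag name, timestamp, alarm type, and event
    (ALARM when it occurs, ACK on acknowledgement, RTN on return-to-normal)."""
    log.append({"tag": tag, "time": time.time(),
                "type": alarm_type, "event": event})

log = []
log_alarm(log, "PT-301", "HI")          # alarm occurs
log_alarm(log, "PT-301", "HI", "ACK")   # operator acknowledges
log_alarm(log, "PT-301", "HI", "RTN")   # variable returns to normal
```

In a real system each record would also be sent to a printer or data storage device as it occurs, per the requirement above.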
5. Guidelines for human factors design: Operator interfacing has often been designed more for the
convenience of the equipment vendor or architect-engineer than for the operator.
Guidelines for designing operator interface systems for industrial control include the following:
– Accommodate the full range of the expected operator population (male, female, large, small).
– Allow for minor disabilities (color blindness, near-sightedness).
– Design for operators, not computer programmers.
– Provide rapid access to all necessary controls and displays.
– Arrange equipment and displays from an operational point of view, in clusters.
– Use colors, symbols and labels to minimize operator confusion.
– Do not flood the operator with parallel information; present information prioritized, arranged and meaningful.
– Aid the operator with guides, menus, prompts or interactive sequences.
– Detect and filter erroneous operator input, and tell the operator what to do next.
– Ensure control room equipment is consistent through proper selection and design.
Low-Level Operator Interface
The Low-Level Operator Interface (LLOI) in a distributed control system is connected directly to
the LCU and is dedicated to controlling and monitoring that LCU. LLOIs are used in a variety of
applications, in some cases in conjunction with high-level operator interfaces (HLOIs) and in others in
place of them. There are a number of motivations for using an LLOI:
• It provides an interface that is familiar to operators trained to use panelboard instrumentation,
since it is usually designed to resemble that type of instrumentation.
• It is usually less expensive than an HLOI in small applications.
• It can provide manual backup in case the automatic control equipment or HLOI fails.
LLOI instrumentation usually includes the following devices: control stations, indicator stations, alarm
annunciators, and trend recorders.
Continuous Control Station: One type of panelboard instrumentation used in process control
systems is the manual/automatic station associated with a continuous control loop. The continuous
control stations are separate from the LCU.

Figure 4. Continuous Control Station


Figure 4 illustrates a typical version of a smart continuous control station in a DCS. Most
conventional control stations have bar graph indicators that display the process variable, the associated
set point (“SP”), and the control output as a percent of scale (“OUT”). In addition, the smart station includes
a shared digital display to provide a precise reading of each of these variables in engineering units. The
shared digital display also can be used to indicate the high and low alarm limits (“HI ALM” and “LO
ALM”) on the process variable when selected by the operator. Pushbuttons allow the operator to change
the mode of control and to ramp the set point (“SET”) or control output (“OUT”), depending on the
mode. To minimize requirements for spare parts, one basic control station should be used for all types of
control loops.
Manual Loader Station: A device is still needed to hold the 4-20mA control output signal if the
LCU fails or is taken off-line for maintenance or other reasons. A simple manual loader station is a low
cost alternative to the continuous control station for the purposes of backup. The manual loader station is
plugged in at the same point as the continuous control station but only allows the operator to run the loop
in manual mode. Both the continuous control station and the manual loader station should be powered
from a different supply than that used for the LCU, to ensure continuous backup in case of an LCU power
failure.
Indicator Station: The indicator station is similar to the control station in that it provides both bar
graph and digital numeric readouts of the process variables in engineering units. Unlike the control
station, an indicator station requires no control push buttons. However, it often provides alarming and
LCU diagnostic indications, as does the continuous control station.

Figure 5. Logic Control Station


Logic Station: Figure 5 shows a control station for a logic control or sequential control system. It
consists simply of a set of push buttons and indicating lights that are assigned different meanings
depending on the logic functions being implemented. This type of station is used to turn pumps on and
off, start automatic sequences, or provide permissive or other operator inputs to the logic system. In some
systems, the logic control station performs a manual backup function similar to that performed by the
continuous control station in case of a failure of an LCU. More often, the logic station acts simply as a
low cost operator interface.
Smart Annunciators: Alarm annunciators in distributed control systems are often microprocessor
based, providing a level of functionality beyond the capability of conventional hard-wired annunciator
systems. These smart annunciators can provide such functions as:
 Alarm prioritization: The annunciator differentiates between status annunciation and true alarms.
 Annunciation and acknowledgement mode options: The operator receives a variety of audible and
visible alarm annunciation signals, such as horns, buzzers, flashing lights, and voice messages.
 First-out annunciation: The annunciator displays the first alarm that appears within a selected
group.
 Alarm “cutout”: The annunciator suppresses an alarm condition if other specified status
conditions are fulfilled.
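The alarm "cutout" behavior can be illustrated with a hypothetical rule that suppresses a low-flow alarm while its pump is deliberately stopped (all tag names and the rule itself are invented for illustration):

```python
# Each rule maps an alarm tag to a predicate over plant status; when the
# predicate is true, the alarm is cut out (suppressed) rather than annunciated.
cutout_rules = {
    "LO-FLOW-101": lambda status: not status["PUMP-101-RUN"],
}

def annunciate(alarm, status):
    """Return the alarm to annunciate, or None if it is cut out."""
    rule = cutout_rules.get(alarm)
    if rule and rule(status):
        return None            # condition fulfilled: suppress the alarm
    return alarm

# Low-flow alarm is meaningless while the pump is deliberately stopped:
annunciate("LO-FLOW-101", {"PUMP-101-RUN": False})   # suppressed -> None
```

A cutout differs from acknowledgement: the operator never sees the nuisance alarm at all while the suppressing condition holds.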
Chart Recorders: Although conventional round chart or strip chart recorders are often used to
record process variables in a DCS, digital recorders which use microprocessors are becoming more cost
effective and popular. The digital recorder gathers trend data in its memory and displays the data to the
operator using a liquid crystal panel or other flat display device. Each process variable can be recorded
using a different symbol or color to allow the operator to distinguish between the variables easily.
Because of the flexibility of the printing mechanism and the memory capabilities of the recorder,
intermittent printing of the process variables can supplement the display output without losing any of the
stored information.
High-Level Operator Interface
The high-level operator interface in a DCS is a shared interface that is not dedicated to any
particular LCU. Rather, the HLOI is used to monitor and control the operation of the process through any
or all of the LCUs in the distributed system. Information passes between the HLOI and the LCUs by
means of the shared communications facility. While the LLOI hardware resembles conventional
panelboard instrumentation, HLOI hardware uses CRT or similar advanced display technology in console
configurations often called video display units (VDUs). The HLOI accepts operator inputs through
keyboards instead of the switches, push buttons, and potentiometers characteristic of conventional
operator interface panels.
In general, the use of microprocessor-based digital technology in the design of HLOI system
allows the development of a hardware configuration. This configuration provides the user with several
significant advantages over previous approaches.
 Control room space is reduced significantly; one or a few VDUs can replace panelboards from
several feet to 200 feet in length, saving floor space and equipment expense.
 One can design the operator interface for a specific process plant in a much more flexible manner,
since the interface is not constrained by fixed panelboard instrumentation.
 Using microprocessors permits cost effective implementation of functions that previously could
be accomplished only with expensive computers.
Architectural Alternatives
All HLOI units in distributed control systems consist of an operator display, a keyboard or other
input device, a main processor and memory, disk memory storage, an interface to the shared communication
facility, and hard copy devices and other peripherals. However, the architectures of the various HLOIs on
the market vary significantly depending on the way in which these common elements are structured.

Figure 6. Centralized HLOI Configuration


Figure 6 shows an architecture commonly used in computer-based control systems and early
distributed systems. In this architecture, there is a single central processing unit that performs all of the
calculations, database management and transfer operations, and CRT and keyboard interfacing functions
for the entire HLOI system. A separate communication controller interfaces the central processor with the
shared communications facility.
There are several advantages to this configuration. First, there is a single database of plant
information that is updated from the communication system. As a result, each of the CRTs has access to
any of the control loops or data points in the system. This is desirable, since it means that the CRTs are all
redundant and can be used to back each other up in case of a failure. Another advantage is that the
peripherals can be shared and need not be dedicated to any particular CRT/keyboard combination. This
can reduce the number of peripherals required in some situations.
The disadvantages of this configuration are similar to those of all centralized computer systems:
 It is an “all eggs in one basket” configuration, and so is vulnerable to single-point failures. In
some cases, redundant elements can be provided.
 Any single processor, single memory configuration has limitations on the number of loops and
data points it can handle before its throughput or memory capacity runs out.
 The centralized architecture is not easily scalable for cost effectiveness; if it is designed properly
to handle large systems, it may be too expensive for small ones.

Figure 7. Overlap in HLOI Scope of Control


Figure 7 illustrates a two-to-one overlap configuration, in which three HLOI units control and
monitor a 600 loop process. With this approach, the loss of any single HLOI unit does not affect the
capability of the operator interface system to control the process. This design approach results in a more
expensive version of an HLOI than one designed to handle a smaller number of points. However, in this
case each HLOI unit is capable of backing up any other unit in the system.
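The two-to-one overlap can be checked with a small sketch: three units, each sized for 400 loops, cover a 600-loop process so that every loop belongs to exactly two units. The particular loop assignment shown is one possible arrangement, not taken from the source:

```python
# Two-to-one overlap (Figure 7): three HLOI units, each sized for 400 of
# 600 loops, so every loop is covered by exactly two units.
loops = set(range(600))
coverage = {
    "HLOI-A": set(range(0, 400)),
    "HLOI-B": set(range(200, 600)),
    "HLOI-C": set(range(0, 200)) | set(range(400, 600)),
}

# Every loop is covered by exactly two units:
assert all(sum(l in s for s in coverage.values()) == 2 for l in loops)

# Loss of any single HLOI unit still leaves every loop covered:
for failed in coverage:
    remaining = set().union(*(s for u, s in coverage.items() if u != failed))
    assert remaining == loops
```

This makes the trade-off concrete: each unit must be built for 400 points rather than 200, which is why the overlapped design is more expensive than a non-redundant one.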
Figure 8 shows a fixed HLOI configuration: a single HLOI unit consisting of a
communications controller, main processor, CRT and keyboard, and associated mass storage. The only
option for the user was whether to include a printer or other hard copy device. Because of this fixed
configuration of elements, the scope of control and data acquisition of the HLOI unit also was fixed.
Figure 8. Fixed HLOI Configuration
Figure 9 shows one example of a modular HLOI configuration. The base set of hardware in this
case is a communications controller, main processor, single CRT and keyboard, and mass storage unit.

Figure 9. Modular HLOI Configuration


However, the system is designed to accommodate optional hardware such as:
 One or more additional CRTs to allow monitoring of a portion of the process while the primary
CRT is being used for control purposes.
 Additional keyboards for configuration or backup purposes.
 Hard copy devices such as printers or CRT screen copiers.
 Additional mass storage devices for long term data storage and retrieval.
 Interfaces to trend recorders, voice alarm systems, or other external hardware.
 Interface ports to any special communication systems such as back door networks to other HLOI
units or diagnostic equipment.
 Backups to critical HLOI elements such as the main processor, communications controller, or
shared memory.
The modular approach to HLOI unit design significantly improves configuration flexibility.

Figure 10. Detailed Modular HLOI Configuration


The figure 10 shows a more detailed block diagram of the modular HLOI configuration. Usually,
an internal bus is used to allow communication of information among the modular elements in the HLOI.
Note that this configuration uses a direct memory access (DMA) port to allow the communications
controller to transfer data directly into the shared memory of the HLOI. The other elements of the HLOI
then can obtain access to this information over the internal bus.
Hardware Elements in Operator Interface
Since the HLOI system is based on digital technology, many of the hardware elements that go into
the system are similar to those used in other portions of the DCS. However, the performance requirements
of the HLOI place special demands on its elements; also the on-line human interface functions performed
by the HLOI require display and input hardware that is unique to this subsystem.
Microprocessor and Memory components: The microprocessor hardware selected for use in the
HLOI in commercially available systems tends to have the following characteristics:
 It uses one or more standard 16 or 32 bit microprocessors available from multiple vendors; these
are not single sourced special components.
 Its processors are members of a family of standard processors and related support chips.
 It is designed to operate in a multiprocessor configuration, using a standard bus for
communication between processors and an efficient real time operating system as an environment
for application software.
The last characteristic is important, since to accomplish the desired computing and data control functions
at the speeds required by the real time operating environment, most HLOI configurations must use
multiple processors.
Operator Input and Output Devices: A great variety of operator input devices have been
considered for use in industrial control systems:
 Keyboard – There are two kinds of keyboard in use; the conventional electric typewriter type and
the flat panel type.
 Light pen – This device allows a person to “draw” electronically on a CRT screen. In industrial
control systems, the light pen is more often used by the operator to select among various options
displayed on a CRT screen.
 Cursor movement devices – These allow the user to move a cursor around on a CRT screen.
Types of devices used include joy sticks, track balls, and the mouse.
 Touch screens – There are CRTs with associated hardware that allows the user to select from
menus or to “draw” on the screen by directly pointing with a finger.
 Voice input devices – These devices allow a user to speak directly to the HLOI and be understood.
They are most useful for specific commands such as selecting displays or acknowledging alarms.
As with input devices, several types of display and output devices have been considered for use in HLOI
systems:
 CRTs – This still is the dominant technology in the area of operator displays; CRTs come in either
color or monochrome versions.
 Flat panel displays – These include gas plasma, vacuum fluorescent, liquid crystal, and
electroluminescent displays: most are monochrome only.
 Voice output devices – Usually these are used to generate alarm messages that the operator will
hear under selected plant conditions. The messages can be prerecorded on tape or computer
generated by a voice synthesizer.

Figure 11. Two Types of Display Construction


Figure 11 shows two approaches to display generation. The upper portion of the figure shows
typical characters that are defined and used in combination to create a complete display (the character-based
approach). The lower portion of the figure shows a segment of a display created using the bit-mapped approach.
Peripherals: In addition to the processing, memory, input, and display devices required to
perform the basic operator interface functions, the HLOI configuration must include the following
additional peripheral devices to implement the full range of functions:
 Fixed disk drives – This type of mass memory uses non-removable memory media to store large
amounts of information that must be accessed rapidly.
 Removable mass memory – This type of mass memory, typically the floppy disk, uses removable
memory disks for storage of information that is to be downloaded to other elements in the DCS.
 Printers and plotters – At least one printer is required to implement the logging and alarm
recording functions. The printer can also provide a hard copy of the CRT displays.
Operator Displays
Typical Display Hierarchy: A typical version of this hierarchy, illustrated in Figure 12, is
composed of displays at four levels:
1) Plant level – Displays at this level provide information concerning the entire plant, which can be
broken up into several areas of interest.
2) Area level – Displays at this level provide information concerning a portion of the plant
equipment that is related in some way, e.g., a process train in a refinery or a boiler-turbine-generator
set in a power plant.
3) Group level – Displays at this level deal with the control loops and data points relating to a single
process unit within a plant area, such as a distillation column or a cooling tower.
4) Loop level – Displays at this level deal with individual control loops, control sequences, and data
points.

Figure 12. Typical Display Hierarchy
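The four-level hierarchy might be modeled as a nested structure that the display system navigates; the plant, area, group, and tag names below are invented for illustration:

```python
# Hypothetical plant/area/group/loop display hierarchy. Selecting an
# entry at one level serves as a menu into the next level down.
hierarchy = {
    "PLANT": {
        "AREA-1": {
            "GROUP-11": ["TIC-101", "FIC-102"],   # loop-level displays
            "GROUP-12": ["LIC-103"],
        },
        "AREA-2": {
            "GROUP-21": ["PIC-201"],
        },
    },
}

def loops_in_area(h, area):
    """All loop-level displays reachable from one area-level display."""
    return [loop for group in h["PLANT"][area].values() for loop in group]

loops_in_area(hierarchy, "AREA-1")   # ['TIC-101', 'FIC-102', 'LIC-103']
```

The same structure supports alarm roll-up: an area display can flag a problem if any tag beneath it is in alarm, guiding the operator down to the loop level.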

Plant-Level Displays

Figure 13. Example of a Plant Status Display


This display summarizes the key information needed to provide the operator with the “big picture”
of current plant conditions. This example shows the overall production level at which the plant is
operating compared to full capacity. It also indicates how well the plant is running. In addition, some of
the key problem areas are displayed. A summary of the names of the various areas in the plant serves as a
main menu to the next level of displays. At the top of the plant status display is a status line of
information provided in all operating displays. This line shows the current day of the week, the date, and
the time of the day for display labeling purposes. In addition, it provides a summary of process alarms
and equipment diagnostic alarms by listing the numbers of the plant areas in which outstanding alarms
exist.
Area-Level Displays

Figure 14. Examples of Area Overview Display


After obtaining a summary of the plant status, the operator can move down to the next level to
look at the situation in a selected plant area. The figure shows a composite of four of these types. The top
line of the display is the system date and status line described previously. The upper left quadrant
illustrates an area display type known as deviation overview, which displays in bar graph form the
deviation of key process variables from their corresponding set points. The lower left quadrant of figure
shows a variation of this approach in which a bar graph indicates the absolute value of the process
variable instead of its deviation from set point. Some versions of this display also show the set point and
the high and low alarm limits on the process variable. When one of these limits is exceeded, the bar graph
changes color.
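The deviation-overview behavior described above can be sketched as follows; the numeric values, limits, and color choices are assumptions for illustration, not vendor specifics:

```python
def bar(pv, sp, hi_alm, lo_alm):
    """One bar of a deviation overview: bar height is the deviation of the
    process variable (pv) from its set point (sp); the bar changes color
    when a high or low alarm limit is exceeded."""
    deviation = pv - sp
    in_alarm = pv > hi_alm or pv < lo_alm
    color = "red" if in_alarm else "green"
    return deviation, color

bar(pv=108.0, sp=100.0, hi_alm=105.0, lo_alm=95.0)   # (8.0, 'red')
```

An absolute-value overview (the lower left quadrant of Figure 14) would simply plot `pv` instead of `deviation`, keeping the same color logic.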
The upper right hand quadrant of figure shows another approach to the area overview display.
Here the tag numbers of the various loop and process variables are arranged in clusters by group. If the
point associated with a particular tag is not in alarm, its tag number is displayed in a low-key color. If it
goes into alarm, it changes color and starts flashing to get the attention of the operator. The tag number
also can be underlined, so that a color-blind operator can still recognize the alarm. The lower right hand
quadrant shows a variation of this display. In addition to the tag number itself, the current value of the
process variable is displayed in engineering units to the right of the tag.
Group-Level Displays
To perform control operations, however, it is necessary to use the displays at the next lower level
in the hierarchy- the group level. Mimics of manual and automatic stations for continuous control loops
occupy the upper left hand corner of the display. The upper right section shows indicator stations that let
the operator view selected process variables. The bottom half of the display is devoted to plotting the
trends of one or more process variables as a function of time. Switching from one group display to
another is equivalent to the operator moving around a panelboard to accomplish the monitoring and
control functions.
Figure 15. Example of a Group Display
Loop-Level Displays
The operator uses a few types of displays dealing with single loops or data points for control and
analysis purposes.

Figure 16. Example of an X-Y Operating Display


Figure 16 shows one example, the X-Y operating display. Here one process variable is plotted as a
function of another to show the current operating point of the pair of variables. The operator then can
compare this operating point against an alarm limit curve or an operating limit curve. In the example, the
combination of temperature and pressure for a particular portion of the process may be critical to safety.
Figure 17. Example of a Tuning Display
An example of a tuning display is shown in Figure 17. The display includes several elements that make the
tuning function possible:
 A “fast” trend-plotting capability.
 A manual/automatic station to allow the operator to control the loop.
 A list of the tuning parameters (e.g., proportional band, reset rate, derivative rate) associated with
the loop.
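The tuning parameters listed above relate to the familiar PID gains. A sketch of the commonly used conversions follows; note that conventions vary by vendor, so these formulas are one typical interpretation rather than a universal standard:

```python
def pid_gains(prop_band_pct, reset_rate_rpm, deriv_rate_min):
    """Convert panelboard-style tuning parameters to standard PID gains.
    prop_band_pct  : proportional band in percent of span
    reset_rate_rpm : reset rate in repeats per minute
    deriv_rate_min : derivative rate in minutes"""
    kc = 100.0 / prop_band_pct     # controller gain from proportional band
    ti = 1.0 / reset_rate_rpm      # integral time (min) is 1 / reset rate
    td = deriv_rate_min            # derivative time is often given directly
    return kc, ti, td

pid_gains(50.0, 2.0, 0.25)   # (2.0, 0.5, 0.25)
```

Displaying both forms on a tuning display helps operators trained on either convention interpret the loop settings.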

Design Considerations for Displays
– Keep displays simple and uncluttered.
– Avoid flashy effects; use a proper background for the display.
– Reserve the top few lines to display common information.
– Follow the proper color conventions of the industry.
– For color-blind operators, use underlining or blinking in addition to color.
– Display-select commands
– Cursor-movement commands
– Control-input commands
– Data inputs.

Engineering Interfaces
Introduction
In addition to the human interfaces that allow the plant operator to monitor and control the process,
a DCS requires an engineering interface that is totally independent of the operator interface. As in the
case of the operator interface, vendors normally provide the user a choice between two levels of
engineering interface hardware:
1) Low-level, minimum-function devices that are inexpensive and justifiable for small systems.
2) High-level, full-function devices that are more powerful, but which are needed and justifiable only for
medium and large sized DCSs.
Engineering Interface Requirements
The instrument engineers had to select and procure the control and data acquisition modules,
mount them in cabinets, and do a significant amount of custom wiring between modules. Then the
engineers had to test and check out the entire system manually prior to field installation. Similarly, they
had to select operator interface instrumentation, mount it in panelboards, wire it up, and test it. They had
to prepare documentation for the entire configuration of control and operator interface hardware and the
corresponding control logic diagrams, usually from manually generated drawings.
A small number of general-purpose, microprocessor-based modules have replaced dozens of
dedicated-function hardware modules. The use of shared communication links has reduced or eliminated
custom intermodule wiring. CRT-based operator consoles have replaced huge panelboards filled with
arrays of stations, indicators, annunciators, and recorders.
This engineering interface hardware must perform functions in the following categories:
 System Configuration: Define hardware configuration and interconnections, as well as control
logic and computational algorithms.
 Operator Interface Configuration: Define equipment that the operator needs to perform his or
her functions in the system, and define the relationship of this equipment with the control system
itself.
 System Documentation: Provide a mechanism for documenting the system and the operator
interface configuration and for making changes quickly and easily.
 System Failure Diagnosis: Provide a mechanism to allow the instrument engineer to
determine the existence and location of failures in the system in order to make repairs quickly
and efficiently.
Low-Level Engineering Interface
The low-level engineering interface (LLEI) is designed to be a minimum function, inexpensive
device whose cost is justifiable for small distributed control systems. It also can be used as a local
maintenance and diagnostic tool in larger systems.

Figure 18. Examples of Low-Level Engineering Interfaces


The LLEI is usually a microprocessor based device designed either as an electronic module that
mounts in a rack or as a hand held portable device. To minimize cost, the device usually is designed with
a minimal keyboard and alphanumeric display so that the instrument engineer can read data from and
enter data into the device. Some versions of the LLEI must connect directly to and communicate with
only one local control unit or data input / output unit at a time. The LLEI can be connected or
disconnected while the LCU or DI/OU is powered and in operation; it is not necessary to shut down the
process. In general, the LLEI is a dedicated device that is not used for operational purposes. Therefore,
when it is not needed for engineering use, it can be disconnected from the DCS and locked up for safekeeping.
 System Configuration: When the system provides only an LLEI, the hardware in the system must be
selected and configured manually. To simplify this task, the vendor should provide a system
engineering guide or documentation. The primary purpose of the LLEI is to provide a tool for
configuring the algorithms in the system (i.e., entering control strategies, setting tuning parameters,
selecting alarm limits, and so forth). Some low-level interfaces include removable mass memory
devices, so the engineer can edit control strategies without the target controller present, needing
only a power supply. The LLEI does not support high-level language programs such as BASIC and FORTRAN.
 Operator Interface Configuration:
– Minimum devices – manually
– Assign tag names, labeling station
– Difficult, time consuming, prone to error
 Documentation
– Manual – with standard forms
 Diagnosis Of System Problems
– Not always connected
– Rely on Self-test of equipment
High-Level Engineering Interface
The high-level engineering interface allows a user to reap the full benefits of the flexibility and
control capabilities of a DCS while minimizing the system engineering costs associated with such a
system. Although more expensive than the low-level version, this device is extremely cost effective when
used in conjunction with medium to large scale systems because of the significant engineering cost
savings it can provide.
The HLEI is implemented in the form of a CRT-based console or VDU, similar to the high-level
operator interface unit. It offers flexibility in accommodating additional hardware, such as special
keyboards or printers, and the same elements used for the operator interface can be used for the
engineering interface. Like the VDU, the engineering console can interact with any other element in
the DCS through the shared communication facility.
Dual Console Functions: The engineering console is a specialized device that is dedicated to the
engineering function. However, the engineering console can also be used as an operator’s console; a key
lock on the console implements the switch between the two console “personalities”.
The first key position permits only operator functions like display selection, control operations
such as mode selection and set point modification, and trend graph selection. The second position allows
engineering functions such as control logic configuration, modification and tuning, and system
documentation as well as operator functions. Some systems provide a third key lock position that allows
the user to perform tuning operations but does not allow the user to modify the control logic structure.
Implementing dual console “personalities” in a single piece of hardware is very cost effective for the user,
since the engineering functions are needed only during initial system start-up and occasionally when
system modifications are made.
Special Hardware Required: The operator’s console uses a flat panel, dedicated function
keyboard for ruggedness and simplicity of operation. The engineer’s console requires a general purpose
keyboard to promote speed of data entry and to support a wider range of human interface functions. This
keyboard usually is a push-button type whose keys are laid out in either the QWERTY or the Dvorak
configuration. The vendor may supply additional special-purpose keys to allow the user to select special
characters, symbols, and colors employed in generating control configurations and displays. In addition,
some vendors also supply special color graphic printing or plotting devices, which allow more
sophisticated system documentation to be generated than is possible with the standard printer that comes
with an operator's console.
Portable Engineering Interface: Some DCSs offer a CRT-based engineering interface device that is a
compromise between the full-function engineering console and the minimum-function LLEI. Usually, this
is a portable unit that includes a bulk memory device, such as a floppy disk or cassette tape drive, for
storage of system configuration data. This unit generally is designed to plug into and interface with a
single LCU or cabinet. It can be a very useful and cost-effective device for certain system configurations.
System Configuration
• HLEI – major role in automating process
• Control structures, computational algorithm stored in HLEI. Following information
– Number, type and location of H/W in LCU
– Definition of any H/W selected on each module
– Define input point to H/W module
– Number, type and location of all operator and engineering consoles in the system
– Number, type and location of any other device that communicate using shared
communication facility(special logging or computing device)
Some of this information is entered manually; other items are gathered automatically using broadcast messages:
• Control and computational information
– Tags, descriptors, definitions, addresses
– Logic state descriptors for digital system
– Signal conditioning in DI/OUs
– Communication linkages
– High-level language computation algorithms in LCU
• Control and computational logic configuration
– Fill-in-the-blanks function; through sequence of prompts and responses
– Graphics capability engineer draws using light pen, similar to CAD
– Enter debug and check high level language routines
• Storage of configuration
– Use of mass memory to store configuration information
– Control logics without presence of target H/W
– Verify engineer input to LCU; comparison from mass storage
– Failed devices replaced with new one, configuration downloaded to it
– Upload configuration from device to interface
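The storage-and-verification steps above can be sketched as a simple comparison between the master configuration held in the engineering interface's mass storage and a configuration uploaded from the target LCU. All record names and fields below are hypothetical, not from any vendor's system.

```python
# Hypothetical sketch: verifying an LCU's configuration against the master
# copy held in the engineering interface's mass storage.

MASTER_CONFIG = {
    "FIC101": {"algorithm": "PID", "input": "AI-03", "output": "AO-01"},
    "TIC205": {"algorithm": "PID", "input": "AI-07", "output": "AO-02"},
}

def verify_lcu_config(uploaded):
    """Compare a configuration uploaded from the target LCU with the
    copy in mass storage; return a list of mismatched tags."""
    mismatches = []
    for tag, master_rec in MASTER_CONFIG.items():
        if uploaded.get(tag) != master_rec:
            mismatches.append(tag)
    return mismatches

# A failed LCU replaced with a new one first receives a download of
# MASTER_CONFIG, then an upload-and-compare verifies the transfer:
uploaded = {
    "FIC101": {"algorithm": "PID", "input": "AI-03", "output": "AO-01"},
    "TIC205": {"algorithm": "PI", "input": "AI-07", "output": "AO-02"},
}
print(verify_lcu_config(uploaded))  # ['TIC205']
```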
Operator interface configuration
• Configure or change display structures
– Number of areas in plant, identifying tags and descriptors
– Number of groups in each area
– Assignment of control loops and input points to group
– Types of display at each level (preformatted or custom)
– Linkages between displays
– Assignment of points in system for special function
• Layout of display
– Graphic symbols
– Static background elements
– Dynamic graphics elements
– Dynamic alphanumeric elements
– Control stations
– Poke points- touch screen
• Operator input mechanisms
– Special function definition
– Graphics drawing and editing
– Symbol modifications
– Macro symbol operations
– Display transportability
– Expanded display definition
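The area/group/loop display hierarchy described above can be represented as a nested mapping; the area, group, and tag names below are purely illustrative.

```python
# Illustrative sketch of the operator display hierarchy: plant areas contain
# groups, and groups contain the control loops assigned to them.
plant_displays = {
    "Boiler Area": {
        "Feedwater Group": ["FIC101", "LIC102"],
        "Combustion Group": ["TIC205", "AIC206"],
    },
    "Turbine Area": {
        "Lube Oil Group": ["PIC301"],
    },
}

def loops_in_area(displays, area):
    """Flatten all control loops assigned to the groups of one area."""
    return [tag for group in displays[area].values() for tag in group]

print(loops_in_area(plant_displays, "Boiler Area"))
# ['FIC101', 'LIC102', 'TIC205', 'AIC206']
```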
Documentation
• Following documentation automatically
– List of H/W modules and their location
– Control configuration and associated tuning parameters for each LCU
– Listing of tags, descriptors, H/W address of I/P or O/P modules
– Listing of special operator interface function with tag
– Operator display in system with drawing and display hierarchy
Diagnosis of system problems
• Most of the hardware devices are microprocessor based and intelligent enough to perform on-line self-diagnosis.
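A diagnostic hierarchy of this kind can be sketched as device-level self-tests rolling up to a single plant-level fault list; the device and module names below are hypothetical.

```python
# Hypothetical sketch of a diagnostic hierarchy: each device runs its own
# self-test, and the results roll up so a single top-level display can
# point the operator at the failed module.
self_test_results = {
    "LCU-1": {"cpu": "OK", "io_card_3": "FAIL", "comms": "OK"},
    "LCU-2": {"cpu": "OK", "io_card_1": "OK", "comms": "OK"},
}

def failed_modules(results):
    """Roll device-level self-tests up into a list of (device, module) faults."""
    return [(dev, mod)
            for dev, tests in results.items()
            for mod, status in tests.items()
            if status != "OK"]

print(failed_modules(self_test_results))  # [('LCU-1', 'io_card_3')]
```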

Figure 19. Example of Typical Diagnostic Hierarchy

Figure 20. Example of a Diagnostic Display


General Purpose Computers in DCS
The advantages in flexibility, modularity, and user friendliness of the distributed approach to
implementing high-level control and computing functions were noted. The general purpose computer may
well have a role to play in a DCS depending on the background of the user and the needs of the particular
application. Some of the issues that would motivate a user to include a general purpose computer in a
DCS are:
 Software Investment: The user already may have invested considerable time and resources in
software packages that run on a specific computer. The cost of converting these packages to a
form that would run in a vendor’s distributed computing environment may not be worth the
benefits.
 Specialized Language Requirements: The user’s application may be implemented most readily
by means of a software language that is not available in the system provided by the distributed
control vendors.
 Extensive Computational Requirements: The speed and memory requirements of a particular
application may preclude the use of microprocessor-based devices. Some examples of large
computing application of this type include modeling large dynamic systems, running large linear
and non-linear programs, and identifying the dynamics of high order systems.
 Personal Computer for Small Systems: Some digital control applications require some high-
level functions but are too small in scope to justify the purchase of any high-level computing or
human interface devices from the control system vendor. In many of these situations, a personal
computer can provide the functions required (typically, a few data acquisition, logging, and
computational functions).

Case Studies in DCS


Iron and steel plant
Steel making processes are highly energy intensive and comprise many complex unit
operations. Iron ore and coal need pre-processing before feeding into a reactor, and molten metal from
different reactors needs to be carefully drawn into a solid metal and then rolled into sheets. Each of these
operations has a stake in the quality of steel produced, and needs constant monitoring.
There are many systems monitoring and controlling each unit operation, but there are not many
system applications available that can be used for total integrated monitoring of plant operations that
includes energy optimization across plants, production scheduling and a plant-wide comprehensive
reporting system that users can view on computers, tablets or smart phones.
This paper discusses the actual realities and how integrated reporting can benefit steel
manufacturers.
Steel Manufacturing and Automation
Automation in steel manufacturing is complex and varied as there are many operations to be
monitored. Here is a brief description of the process flow and automation involved in steel manufacturing.
Steel Plant Operations
 Crushed iron, coal and limestone from respective mines are brought to the plant by wagons/ships.
 A stacker helps in piling the ore and the bucket wheel reclaimer reclaims the ore and puts it onto
conveyor belts that transport the ore to the plant area.
 Iron ore fines are agglomerated into lumps in a sinter plant
 As raw coal has poor crushing strength and contains volatile matter, the coal is baked in the absence of
air in coke oven batteries to produce coke
 Iron ore, coke and limestone are fed into a blast furnace, and hot air from the stoves reduces iron
ore to molten iron
 Molten iron is sent to a Basic Oxygen Furnace or LD furnace to reduce the carbon content by
treating with pure oxygen. Excess carbon goes out as carbon monoxide, and molten steel is born.
 Molten steel is slowly rolled and cooled to a solid slab in a slab caster or continuous caster
 Slabs are taken to the hot roll mill, where they are reheated to bring them to the correct temperature
for rolling. They are then rolled to smaller thickness by passing over a series of rolls, and finally
made into a rolled coil of steel. The coil is shipped or sent to a cold mill.
 The cold roll mill reduces the thickness further. Annealing and galvanizing is done on the steel to
meet the specifications of its intended use.
 Recycled steel or scrap steel is melted in an electric arc furnace, and joins the processes as molten
steel.

Figure 21. Steel plant


Challenges in Steel Plant Automation
As there are many control systems in play in a steel plant, getting a complete view of the entire plant
operations, from ore handling to the transportation of finished steel, requires a supervisory system such as a
Manufacturing Execution System (MES). In plants that do not have an MES, data consolidation is done
manually in some form, leading to errors and inconsistencies. The obvious disadvantages of this include:
 It becomes difficult for operators to collate information from all these different sources
 Operators lose track of productivity improvement measures and efficiency monitoring
 Key performance indicators across the plant are cumbersome to calculate, and many times are done
manually
 Synchronized archival and playback of past data from different units cannot be done for the
complete plant; moreover, not all individual systems support archival and playback features
Benefits of Using DCS
Data collected from different sources is time stamped and stored in a central database. The
Maqplex dashboard uses the data to present reports, plots and predictions. Some of the typical dashboard
presentations include:
 Real-time and archived graphical data plotting
 KPI indicators from sinter, blast furnace, basic oxygen furnace and mills
 Energy and mass balance for each unit operation
 Inventory management and waste tracking
 Condition monitoring of equipment
 Video analytics using thermal imaging
 Analytics with Hadoop
 Data/graphics on mobile platforms
Here is a brief description of these features:
a. Plant-Wide Data Collection
Maqplex has data adapters to collect data from PLC, control systems and even directly from
wireless devices or Ethernet networks. Data from the stacker reclaimer to the cold mill and wagon
tracking system can be archived on Maqplex.
b. Energy and Material Balance
As about 6 million kcal are used to produce 1 ton of steel, energy conservation plays a very important
role in steel manufacturing. As process data across the plant is collected, energy and material balances,
especially where gaseous fuels are used, can be computed for:
 Coke oven heating
 Blast furnace stoves
 Soaking pits
 Reheating furnace in the hot strip mill
The energy balance:
 Reveals actual energy consumption by allocating the actual use of energy to each of the unit
operations
 Identifies areas in which waste heat or gas is not harnessed
 Generates energy consumption forecasts that can be used when integrated with a smart grid
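The allocation step above can be sketched numerically, assuming the figure quoted in this section (about 6 million kcal per tonne of steel); the unit names and energy shares below are illustrative only.

```python
# Sketch of a per-unit energy allocation. The total specific energy is taken
# from the text; the per-unit shares are hypothetical.
TOTAL_KCAL_PER_TONNE = 6_000_000

unit_shares = {                    # fraction of total energy (illustrative)
    "coke oven heating": 0.30,
    "blast furnace stoves": 0.40,
    "soaking pits": 0.10,
    "reheating furnace": 0.20,
}

def energy_per_unit(tonnes_produced):
    """Allocate total energy use across unit operations for a production run."""
    total = TOTAL_KCAL_PER_TONNE * tonnes_produced
    return {unit: share * total for unit, share in unit_shares.items()}

alloc = energy_per_unit(100)           # 100 t of steel
print(alloc["blast furnace stoves"])   # 240000000.0 kcal
```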
c. Scheduling
Scheduling of production is challenging in a steel plant with many dependent unit operations. Steel
plant operation is a batch process – coke oven batteries operate in cycles, blast furnace and BOF furnaces
operate in batches. So it is important that the batch production cycle is optimized to ensure there is
continuous output of finished steel with minimal latency time.

The solution fills a critical gap between the company’s enterprise resource planning (ERP) system and
the plant’s real-time production control system (DCS) by optimizing the daily production process.
Maqplex is seamlessly interfaced with ‘R’, an open source statistical package that has over 3,000
built-in algorithms. The library of R algorithms gets updated periodically as they become proven.
Scheduling can be performed at three levels:
 Hot molten steel – Blast furnace, BOF and slab caster constraints are considered to guarantee
continuous supply of hot metal to slab casters.
 The hot strip mill scheduler considers the activities of the slab yard, reheating furnaces and rolling
mills to optimize the production schedule
 The finishing mills scheduler optimizes the resource utilization required for annealing and
galvanization to meet the output requirements of the end customer
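The hot-metal level of the scheduling problem reduces to a supply/demand check: the BOF must deliver heats fast enough to keep the slab caster continuously supplied. A back-of-the-envelope sketch, with all batch sizes and rates hypothetical:

```python
# Hypothetical numbers for a hot-metal scheduling feasibility check.
HEAT_SIZE_T = 150          # tonnes of steel per BOF heat
BOF_CYCLE_MIN = 40         # minutes per BOF batch cycle
CASTER_RATE_T_PER_MIN = 3  # continuous caster consumption rate

def heats_needed(shift_minutes):
    """Number of heats the caster consumes in a shift, rounded up."""
    demand_t = CASTER_RATE_T_PER_MIN * shift_minutes
    return -(-demand_t // HEAT_SIZE_T)   # ceiling division

def bof_keeps_up():
    """True if one BOF's delivery rate meets the caster's consumption rate."""
    return HEAT_SIZE_T / BOF_CYCLE_MIN >= CASTER_RATE_T_PER_MIN

print(heats_needed(480))   # 10 heats in an 8-hour shift
print(bof_keeps_up())      # True
```

A real scheduler would layer sequencing constraints (ladle availability, caster changeovers) on top of this balance, but the continuity condition is the core of the hot-metal level.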
d. Condition Monitoring of Equipment
Online vibration analysis can be performed on drive trains in hot strip and cold rolling mills.
Pattern recognition algorithms are used to detect any deviation from normal vibration levels and to alert
operators.
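A minimal sketch of such a deviation check, assuming the "deviation from normal vibration levels" reduces to comparing RMS vibration against a learned baseline (the signals and tolerance below are illustrative):

```python
import math

def rms(samples):
    """Root-mean-square amplitude of a vibration signal."""
    return math.sqrt(sum(x * x for x in samples) / len(samples))

def vibration_alert(samples, baseline_rms, tolerance=0.2):
    """Alert when RMS exceeds the baseline by more than the tolerance fraction."""
    return rms(samples) > baseline_rms * (1 + tolerance)

normal = [0.9, -1.1, 1.0, -0.95]   # healthy drive-train signature
worn = [2.0, -2.2, 1.9, -2.1]      # elevated vibration
base = rms(normal)
print(vibration_alert(normal, base))  # False
print(vibration_alert(worn, base))    # True
```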
e. Thermal Imaging
Maqplex can display thermal images sent from fixed thermal cameras and perform video analytics
to detect abnormalities. Typical locations for thermal imaging include:
 Steel from the slab caster to the hot strip mill is monitored on a thermal camera to compare the actual
temperature with the computed, pre-scheduled temperature at various points to optimise the process
 Exterior refractory inspections locate "hot spots" in furnaces and process vessels, indicating
thinning or missing refractory lining or insulation
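The refractory "hot spot" check above can be sketched on a thermal image represented as a plain 2-D grid of shell temperatures; the values and threshold are hypothetical.

```python
def hot_spots(image, threshold_c):
    """Return (row, col) pixels whose temperature exceeds the threshold."""
    return [(r, c)
            for r, row in enumerate(image)
            for c, t in enumerate(row)
            if t > threshold_c]

shell = [                   # shell temperatures in deg C (illustrative)
    [180, 185, 190],
    [182, 420, 188],        # the 420 C pixel suggests thinning refractory
    [179, 186, 184],
]
print(hot_spots(shell, 300))  # [(1, 1)]
```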
f. Analytics with Hadoop
Maqplex is integrated with Hadoop. Real-time data collected from hundreds of sensors can be
archived in distributed servers and effectively analysed, for example to predict potential breakdown of
equipment, or to pattern-match current plant data against past data and check how plant upsets were
handled. As Maqplex can be integrated with the cloud, performing data analytics with Hadoop and the cloud
becomes a very cost-effective and feature-rich solution.
g. Data on Mobile Platforms
Maqplex has a ready interface to mobile devices like tablets and smart phones. Graphics and plant
data can be viewed, and if need be, also controlled from mobile devices.

Cement Plant
Introduction
Cement is a component of infrastructure development and an important ingredient of civil
construction. The Indian cement industry is the second largest producer of cement in the world, ahead of
the United States and Japan. It is considered to be a core sector, accounting for approximately 1.5%
of GDP and employing a large work force. The cement market is increasing at about 20% a year due to
residential building construction.
The paper focuses on offering an overview of industrial automation that gives a framework
about the equipment, the programming used for automation, and the methodology involved in
implementing a modern automation system. This paper by Jianwei Xu (Guangzhou Shijin) summarizes the
recent developments within the cement industry and the implementation of computational technologies in
the automation of production processes. Many recently built or renovated medium and large-scale
cement industries have adopted Distributed Control Systems (DCS), but many of the smaller plants
depend heavily on traditional instrumentation. Some such plants monitor and control the process
manually.
The I-7000 series modules communicate using RS-485, so they can communicate over long
distances, and all of the modules in the system can be connected via twisted-pair wires. The
system has many advantages, such as fast communication speed, high sampling resolution, intelligent
operation, photoelectric isolation, strong protection against interference, and dual watchdog timers.
Having applied them to the vertical system, one can continue to use I-7000 modules to
implement the network in other subsystems, because this kind of system can meet the product
requirements and obtain the best price/performance ratio. On the whole, the implementation of the
system is an achievement in automation and management.
The objectives of the modernization of the technological flow are:
 Reduction of energy and specific fuel consumption
 Reduction of maintenance costs and unplanned shutdowns
 Improving the work conditions, management, work safety, cement quality and client satisfaction
 Renewing the technology by modernizing the great cooler, introducing the IKN cooling technology
for clinker
 Modernization of the shields and grinding balls at the cement mills
 Introduction of a new injector for all types of fuels (solid, liquid and gas) and commissioning of the
alternative fuel equipment for the burning of solid alternative fuels (ECO-Fuel)
The technological flow in the cement plant is controlled automatically by the ECS system. This
software enables the operator to control (using a computer network) the whole technological
flow, and continuously monitors the entire process.
Insufficient knowledge in process dynamics and interactions, and lack of adequate experience in
utilizing the existing control system are reasons for incorrect operations. The consequences of this may
result in low quality, interruption, equipment damage, and risk to human safety.
This paper by Vandna Kansal, Amrit Kaur presents the mamdani type fuzzy inference system
(FIS) for water flow rate control in a raw mill of cement industry. Fuzzy logic can be used for
imprecision and nonlinear problems. The fuzzy controller designed for flow rate control is two input and
one output system. It is essential to control water flow rate efficiently to produce high quality cement.
Raw mill is used to grind the raw materials which are used for manufacturing cement. In this paper, the
system is designed and simulated using MATLAB Fuzzy logic Toolbox. The experimental results of the
developed system are also shown.
In this paper, it is shown that a Mamdani-type FIS provides efficient control for the water flow rate
control system. The Mamdani FIS uses the centroid method to calculate the output value, which is
comparatively complex and time consuming, but the fuzzy logic system is still superior to conventional algorithms.
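The centroid defuzzification step mentioned above can be sketched in a few lines. This is a minimal Mamdani-style sketch; the membership functions, rule strengths, and output universe are illustrative, not taken from the cited paper.

```python
def tri(x, a, b, c):
    """Triangular membership function with feet at a, c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def centroid(universe, memberships):
    """Centroid (center of gravity) defuzzification."""
    num = sum(x * m for x, m in zip(universe, memberships))
    den = sum(memberships)
    return num / den if den else 0.0

# Aggregate two clipped output sets ("low" and "high" flow) and defuzzify.
xs = list(range(0, 101, 5))                          # flow rate, 0-100 %
low = [min(0.7, tri(x, 0, 25, 50)) for x in xs]      # rule 1 fired at 0.7
high = [min(0.3, tri(x, 50, 75, 100)) for x in xs]   # rule 2 fired at 0.3
agg = [max(l, h) for l, h in zip(low, high)]         # max aggregation
print(round(centroid(xs, agg), 1))                   # 42.9
```

The weight of the "low flow" rule pulls the crisp output below the midpoint, which is exactly the averaging behaviour that makes the centroid method smooth but computation-heavy.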
The paper by Moin Shaikh explores the changing roles of traditional distributed control systems
(DCS) and programmable logic controllers (PLC) used to automate cement manufacturing processes.
The two technologies initially served two different control requirements. However, improvements in
microprocessor-based controllers created conditions for two technologies to merge. The shift toward
commercial, off-the-shelf automation technology, software-based control versus hard control, and the use of
non-proprietary networks has created a new class of systems called hybrid process automation
systems.
The integration of energy and asset management information is now available in one central
location, increasing operability and streamlining the plant’s cost of operation. It is not a question of whether or
not cement plants will embrace new technologies; the plant economics will make it inevitable.
The review above is from Moin Shaikh (2009), Siemens Energy & Automation, Inc., “Cementing the
relationship between DCS and PLC: A review of emerging trends in plant control systems.”
The paper by E. Vinod Kumar discusses Process Automation as an important component of modern cement
production. ABB’s Distributed Control System (DCS) has been widely used across all industry segments
for many decades. ABB’s philosophy in this field has been a path of continuous evolution protecting
existing customer’s investments as well as bringing the latest IT technologies in a safe and secure
manner to the industrial world. The software engineering and visualization graphics is standardized by
using cement process control libraries.
ABB’s System 800xA DCS platform has many enabling hardware and software interfaces
and libraries to implement a tightly integrated modern cement plant automation system. The ABB
CPM solutions suite extends the plant automation capability to optimize the production process and to
provide seamless bidirectional connectivity with ERP systems for real time business decision making.
An efficient & informative control system like the ABB System 800xA provides great operational
advantage for modern cement plants.
In this paper ATUL, SNEHA GHOSH, and S. BALAMURUGAN [9] present a cement plant
simulator designed on LabVIEW platform. It generates process data and electrical parameters for all the
lines of workshops in the cement industry. The simulated design is universal and applicable to all kinds
of cement plants. Client-server architecture is established and the generated parameters are
communicated to Matrikon OPC client. Single Line Diagram of the electrical distribution of cement
plant has also been developed using MS Visio. An exclusive user interface has been designed to monitor
and analyze the electrical parameters in detail. The KWh/tonne consumption and percentage energy
consumed by the workshops have been observed and tabulated.
In the simulated design, all the process data for all the workshops and all the electrical
parameters for all the machines and equipments are generated. Thus, a plant manager can easily
monitor and analyze the entire plant and develop strategies to optimize the plant. Since Lab VIEW is
used, the graphical programming is very advantageous. The UI’s are very user friendly.
In this paper, Michael J. Gibbs, Peter Soyka and David describe carbon dioxide (CO2) as a by-
product of the chemical conversion process used in the production of clinker, a component of cement, in
which limestone (CaCO3) is converted to lime (CaO). CO2 is also emitted during cement production by
fossil fuel combustion, but the CO2 from fossil fuels is accounted for elsewhere in emission
estimates for fossil fuels. The Revised 1996 IPCC Guidelines for National Greenhouse Gas Inventories
(IPCC Guidelines) provide a general approach to estimating CO2 emissions from clinker production, in
which the amount of clinker produced is multiplied by a clinker emission factor.
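The IPCC-style estimate described above is a single multiplication. In the sketch below, the emission factor of roughly 0.51 t CO2 per tonne of clinker is an assumed illustrative value, not a figure from the cited guidelines.

```python
CLINKER_EF = 0.51  # t CO2 / t clinker (illustrative assumed default)

def clinker_co2(clinker_tonnes, ef=CLINKER_EF):
    """CO2 from calcination only; fuel-combustion CO2 is accounted elsewhere."""
    return clinker_tonnes * ef

print(clinker_co2(100))  # 51.0 t CO2 for 100 t of clinker
```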
In this paper A K Swain discusses the development of a novel raw material mix proportion
control algorithm for a cement plant raw mill, so as to maintain preset target mix proportion at the raw
mill outlet. This algorithm utilizes one of the most basic and important tools of numerical linear
algebra, the singular value decomposition (SVD), for calculation of raw mix proportion. The strength
of this algorithm has been verified by comparing the results of this method with the results of
the QCX software developed by F.L. Smidth of Denmark.
This article presents a novel raw mix proportion control system. To demonstrate the efficacy of
the method, the results are compared with those of FLS of Denmark. The distinct features of the control
algorithm are that it:
 works for any number of feeders
 is highly flexible and easily adaptable to new situations
 operates on a PID control strategy, giving faster response
 is easy to implement in any new plant or after any change
 takes care of the process dead-time delays
 takes into account the limiting values for feeder capacity and control action
 is highly economical
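The core SVD idea can be sketched as a least-squares solve: the oxide composition of each feeder's material forms the columns of a matrix A, and the feeder proportions x are chosen so that A x approximates the target raw-meal composition b. The pseudo-inverse (which NumPy computes via the SVD) gives the least-squares solution. All compositions below are hypothetical, not from the cited work.

```python
import numpy as np

# Rows: CaO, SiO2, Al2O3 fractions; columns: limestone, clay, iron-ore feeders
A = np.array([
    [0.52, 0.05, 0.02],
    [0.04, 0.55, 0.10],
    [0.01, 0.18, 0.05],
])
b = np.array([0.42, 0.15, 0.05])   # target raw-meal composition

x = np.linalg.pinv(A) @ b          # least-squares proportions via the SVD
x = np.clip(x, 0, None)            # feeders cannot run backwards
x = x / x.sum()                    # normalize to feed fractions
print(np.round(x, 2))
```

A production algorithm would add the feeder-capacity limits and dead-time handling listed above; this sketch only shows the proportion calculation at the heart of the method.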
In this paper, Eugene Boe, Steven J. McGarel, Tom Spaits and Tom Guiliani discuss a series
of software automation applications implemented at Capitol Cement’s plant in San Antonio, Texas. These
control and optimization applications are implemented on two finish mills and one dry kiln line
(including the pre-calciner and grate cooler). The kiln-cooler controller is designed to maximize
production and reduce fuel consumption. It reduces NOx emissions as part of a multivariable control
solution and improves clinker consistency. The project took about three months to execute, and has
resulted in sustained benefits to production, stability, and quality; the project paid for itself in
less than six months.
Production
A cement production plant consists of three processes:
1. Raw material preparation 2. Clinker burning 3. Finish grinding
The raw material preparation and clinker burning processes are classified as either the wet process or
the dry process.
Transfer of Raw Material from Quarry to Different Silos
Most of the raw materials used for making cement are extracted through mining and quarrying.
Those are lime (calcareous), silica (siliceous), alumina (argillaceous), and iron (ferriferous). Limestone
is the prime constituent of cement; the major cement plants are located near the good quality lime stone
quarries. First, the lime stones and quarry clays are fed to “primary crusher house” for raw crushing.
Then the materials are transferred to “secondary crusher house”. After that the crushed materials are fed
to the stock pile, inside the stock pile there is a stacker/reclaimer which segregates the raw
material into different stacks. The stacking and reclaiming systems operate independently. Four
additives, iron ore, bauxite, laterite and fluorspar, are added to the stock pile to get the required
composition of cement. Conveyor C4 brings the additives to the stock pile; then, as per the requirement,
limestone, iron ore, bauxite, laterite and fluorspar are transferred to different silos. Inside each silo
there are three level detectors which detect the level of material inside it. Figure 22 shows the process.

Figure 22. Raw Material Transfer from Quarry to Different Silos


In the wet process, raw materials other than plaster are crushed to a diameter of 20 mm by a
crusher and mixed in an appropriate ratio. Then, with water added, the mixture is further made finer
by a combined tube mill with a diameter of 2 to 3.5 m and a length of 10 to 14 m into slurry
with a water content of 40%. The slurry is transferred to a storage tank then mixed to be
homogenized and sent to a rotary kiln for clinker burning. In the wet process, the slurry can be easily
mixed but greater energy is consumed in clinker burning due to water evaporation.
For the dry process of manufacturing cement, crushed raw materials are dried in a cylindrical
rotary drier, mixed, ground and placed in storage tanks then sent to a rotary kiln for clinker burning.
For the wet process, plant construction cost is low and products are high quality; but the dry process
consumes less energy and its running cost is lower, and therefore it is preferred.
Actuators and Sensors
The sensors connected to the process can be as simple as a limit switch or as complicated as an on-line
X-ray analyzer. Examples of traditional discrete (switch) sensors include the limit switch, temperature
switch, pressure switch, flow switch, level switch, and photo eye, together with input devices such as
push buttons and pilot lights. Switches are digital devices that provide a simple contact indicating an
on or off state.
Instruments such as thermocouples or pressure transmitters are analog devices capable of providing a
continuous range of measurement readings from 0 to 100% of range and reporting their data with a
changing voltage or current output. As for outputs, motor starters and solenoids are generally digital
devices, opening or closing depending on whether they are fed with power from a system output.
These intelligent instruments have microprocessors that allow for functionality beyond the basic
measurement of the primary variable: they monitor and report additional process variables and support
multiple configurations. Motor starters now monitor various aspects of the operation of the motor.
They report current, phase voltage, motor power, motor power factor, and running hours, and can tell
exactly why a trip occurred (i.e., overload, ground fault, or short circuit).
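The analog-signal convention described above, a continuous 0 to 100 % reading reported as a changing current, is usually the 4-20 mA loop. A minimal scaling sketch (the transmitter range in the example is hypothetical):

```python
def ma_to_engineering(ma, lo, hi):
    """Convert a 4-20 mA reading to engineering units over the range lo..hi."""
    percent = (ma - 4.0) / 16.0          # 4 mA -> 0 %, 20 mA -> 100 %
    return lo + percent * (hi - lo)

# A pressure transmitter calibrated 0-10 bar, reading 12 mA:
print(ma_to_engineering(12.0, 0.0, 10.0))  # 5.0 bar
```

The live-zero at 4 mA is also a built-in diagnostic: a reading of 0 mA signals a broken wire rather than a zero measurement.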
Intelligent Devices and Subsystems
Intelligent subsystems are available with controller cards that will reside in the same card rack as
the PLC. These include scales and load cell controllers, radio frequency tag readers, motion controllers,
and various forms of communication interfaces, from basic RS-232 to many network protocols. Actuators with
small amounts of I/O are equipped with the ability to communicate over I/O networks; for example,
intelligent valve controllers can reside directly on the I/O network.
Automation of Process
To make this process fully automatic PLC is provided to take real time decision depending
upon the field level input signals from sensors placed in different critical points and sends the
instructions to the output devices. Conveyor C1 starts rolling and the bulk materials i.e. lime stones are
taken from quarry by conveyor C1 to primary crusher (PC). After some time delay, required for
primary crushing C2 starts running and the raw materials are transferred to secondary crusher (SC). This
time on delay is defined by timer TT2. The secondary crusher (SC) is started together with C2.
Conveyor C3 will start after a time on delay (TT3) of starting the secondary crusher to transfer the
material from the crusher house to the stock pile. The pushbutton switches PB3 and PB4 are provided inside the
stock pile. Now, if PB3 is closed manually, the stacker/reclaimer system (S/R) starts directly or after a
time on delay (TT4) of starting either conveyor C3 or C4. Pushbutton PB4 is provided to stop the
stacker/reclaimer system manually. For safe operation, each and every process should be turned off
sequentially; to achieve this, five off-delay timers are used, i.e. TT5, TT6, TT7, TT8, TT9. When conveyor
C1 is on, the timer TT5 is true. The Done bit of TT5 is latched with PC. When the primary crusher (PC)
is on, the timer TT6 is true. The Done bit of TT6 is latched with C2. TT7 is latched with the secondary
crusher (SC) and remains true till C2 remains on. Timer TT8 remains true till SC is running. The done
bit of TT8 is latched with C3. TT9 remains true till conveyor C3 is on. There is also an off-delay
timer TT10 which remains true till C4 is on. Now, if due to any fault an emergency plant shutdown is
required, the operator presses PB2, which stops conveyor C1.
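The start-up interlock chain above (C1, then the primary crusher, then C2 with the secondary crusher, then C3, each after its timer's on-delay) can be sketched as a simple schedule. The delay values below are hypothetical; in the real plant this logic lives in the PLC's timers.

```python
# Hypothetical on-delay values for the start-up sequence described above.
START_SEQUENCE = [     # (device, on-delay after the previous device, seconds)
    ("C1", 0),         # lead conveyor starts immediately
    ("PC", 5),         # primary crusher
    ("C2", 10),        # TT2 on-delay
    ("SC", 0),         # secondary crusher starts together with C2
    ("C3", 8),         # TT3 on-delay
]

def start_times(sequence):
    """Absolute start time of each device from pressing the start button."""
    t, out = 0, {}
    for device, delay in sequence:
        t += delay
        out[device] = t
    return out

print(start_times(START_SEQUENCE))
# {'C1': 0, 'PC': 5, 'C2': 15, 'SC': 15, 'C3': 23}
```

Shutdown runs the same chain in reverse with the off-delay timers, so each conveyor keeps running long enough to empty before its feeder stops.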
Material Transfer from Different Silos to Homogeneous Silo
Normally there are various types of limestone quarry, so the quality of limestone differs. In
order to get a homogeneous mixture, the four additives are mixed with the limestone in the required
percentages. This is done using an RBF placed in each silo and with the help of conveyor C15.
The RBFs and corresponding conveyors (i.e. C10, C11, C12, C13, C14) move at speeds that
maintain the user-defined ratio of the five components. C15 transfers the raw materials to
the vertical roller mill (VRM), as the raw materials should be finish-ground before being fed into the kiln
for clinkering.
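The ratio control for the five silo feeders reduces to scaling a user-defined recipe by the total feed rate demanded by the VRM. The recipe fractions and total rate below are illustrative only.

```python
recipe = {                 # user-defined mix fractions (sum to 1.0)
    "limestone": 0.80,
    "iron ore": 0.05,
    "bauxite": 0.06,
    "laterite": 0.05,
    "fluorspar": 0.04,
}

def feeder_rates(total_tph):
    """Tonnes/hour demanded from each silo feeder for a given total feed."""
    return {mat: frac * total_tph for mat, frac in recipe.items()}

rates = feeder_rates(200)     # 200 t/h to the vertical roller mill
print(rates["limestone"])     # 160.0 t/h
```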
This grinding is done in the VRM. The raw materials are simultaneously dried using hot air in
order to get good quality cement. The hot air comes from the conditioning tower or from the grate cooler.
Hot air along with the dust is introduced into the electrostatic precipitator (ESP), and the exhaust
air is then released through the chimney with the help of the ID fan. After grinding, the materials are
transferred to the homogeneous silo with the help of a bucket elevator (BE). There is a screen vibrator (SV)
between the BE and the homogeneous silo for proper blending and to sort out the large granules. The
large granules are fed back to the VRM by feedback conveyor C18. The process is shown in Figure 23.
The bucket elevator, screen vibrator and conveyor C16 are used to transfer the raw materials to the
homogeneous silo. The large granules are fed back to the VRM by conveyor C18. To remove the dust
particles, the exhaust gas produced during cement making is sent to the electrostatic precipitator (ESP). The
gas is sent to the co-generation power plant for captive power generation; the cleaned exhaust gas is then
removed by a fan through the chimney and the fine dust is fed back to the homogeneous silo.

Figure 23. Shows Raw Material Transfer from Silos


Pyroprocessing and Finish Milling
The raw mix is heated up to (1600-1700) °C to produce Portland cement clinkers. Clinkers are
created from the chemical reactions between the raw materials. The pyroprocessing takes place in a kiln.
It is basically a long cylindrical pipe which rotates in a horizontal position having (1600-1700) °C
temperature. The clinker is sent to the cooling section to cool to room temperature. The cooled
clinker is then transferred to the storage silo. The clinkers are hard, gray, spherical nodules with
diameters ranging from 0.32 to 5.0 cm. To reduce the size and to impart special characteristics to the
finished cement, the clinker is ground with other materials, such as 5% gypsum and other chemicals
which regulate flowability. The finished cement is finally transferred to the packaging section. The
process is shown in figure 3.
Export
The export of Indian cement has increased over the years, mostly after decontrol, and this has given a
boost to the industry. The demand for cement depends on industrial, real estate, and construction activity.
With growth taking place around the world, Indian export of cement is increasing. India has the
potential to take up cement markets in the Middle East and South East Asia.
Conclusions
Since the energy cost of cement production is large, energy conservation is important. The wet
process has a lower plant construction cost, but at the technical level of quality, productivity and energy
consumption, the dry process is superior. This automation approach is flexible and easily
adaptable. Automation provides monitoring capabilities and provisions for programmable
troubleshooting, and reduces downtime. The automation process provides better control. The
environment of a cement plant is harsh, with high temperature, dust and electromagnetic interference (EMI).
Process automation improves the environment for the workers and prevents direct contact with
the gases and dust that are emitted during cement manufacturing. As the PLC performs the operation
intelligently and has centralized control features, it also helps to reduce the manpower required and at
the same time reduces the workers’ load.

Pulp and Paper Plants


Today, the pulp and paper industry is under increasing pressure to harness the power of technology
and leverage advanced measurement and control capabilities in order to optimize raw material and energy
consumption, machine performance, product quality, and profitability. Mills also face greater demands in
the areas of safety, sustainability, and environmental compliance. With so much at stake in your
operation, you need a partner who understands the pulping and papermaking process, and supports your
strategic business objectives.
Producing high quality pulp and paper products in an efficient, reliable and cost-effective manner
requires a partner who can deliver the right instrumentation and analytical products, systems and services.
That’s Honeywell. With decades of pulp and paper know-how, Honeywell has the qualifications to help
enable your mill's prosperity. Our team of field sales engineers, sales partners, and technical support
specialists are available to serve you in key locations around the world. We back all of our products with
a comprehensive warranty and a support center staffed with engineers and technicians with solid product
knowledge.
Continuous technology improvement must be ongoing in pulp and paper production to obtain the best
possible performance and quality, and simultaneously ensure environmentally friendly operation. That’s
why leading paper companies worldwide choose Honeywell’s best-in-class field instruments for use in all
areas of the mill, including demanding flow, level, pressure, temperature, and liquid analysis applications.
Pulping
Reliable instrumentation is required for the mechanical and chemical processing of wood chips.
Sensor and analyzer information helps ensure the right amount of chemicals, energy and fiber are used
throughout the batch and continuous cooking, thermo mechanical pulping, and bleaching processes. The
pulping process requires robust field instruments able to cope with the harsh environment. Chemical
pulping is the most demanding process incorporating high temperatures, corrosive and abrasive
substances, and vibrations.
Honeywell’s products not only offer precise measurement for tight control of all flow rates,
temperatures, pressures and pH, but are well known for being the most reliable in the plant.
Bleaching
Depending on its density, chemical pulp is transported via pumps, flow distributors, or conveying spirals
to the bleaching tower. Bleaching then runs continuously at high temperatures, with chemicals being
added. Honeywell can help maintain a constant level in the bleaching tower, which is important for the
quality of the bleaching process. Our analytical products are ideal where pH and conductivity levels
fluctuate. And it is essential to measure pressure and flow rates with sensors in contact with
aggressive chemicals. Honeywell’s reliable products are designed with these requirements in mind.
Inventory Preparation
For continuous operation in a pulp and paper mill, large quantities of prepared
stock are necessary and can be held in storage towers and chests. In this application, Honeywell level
measurement delivers reliable information on the contents of the towers, while offering the process
compatibility required for minimal maintenance.

Paper Machine
When coating paper, a slurry of coating additives travels from the tanks to the
coating kitchen for additional processing. Honeywell’s flow measurement offerings provide accurate
monitoring to ensure a safe level and prevent pump damage. Conductivity and pressure are also key
measurements offered by Honeywell for these applications. In the headbox, pressure and speed must be
constant to ensure a uniform paper sheet is formed. Honeywell’s accurate pressure measurement helps
maintain fan pump speed control. During the drying process, a condensate film forms on the inside wall
of the drying cylinder, affecting the efficiency of heat transfer to the paper. By using Honeywell
products to measure the pressure difference between the inlet and outlet, the effect of the drying cylinder
is continuously monitored, so the production rate can be altered accordingly to obtain a perfectly dried sheet.
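The drying-cylinder monitoring described above can be sketched in code. This is a minimal illustration of the idea, assuming a simple proportional adjustment; the function names, target differential pressure, and gain are hypothetical, not Honeywell product behavior.

```python
# Hypothetical sketch of drying-cylinder monitoring: the pressure drop
# between the steam inlet and the condensate outlet indicates how much
# condensate film is impeding heat transfer, so the machine speed is
# trimmed accordingly. All names and numbers are illustrative.

def drying_dp(p_inlet_kpa: float, p_outlet_kpa: float) -> float:
    """Differential pressure across the drying cylinder in kPa."""
    return p_inlet_kpa - p_outlet_kpa

def adjust_production_rate(rate: float, dp_kpa: float,
                           dp_target: float = 30.0,
                           gain: float = 0.001) -> float:
    """Proportionally slow the machine when the differential pressure
    (and hence the condensate load) rises above target, and speed it
    up again when the pressure drop falls below target."""
    return rate * (1.0 - gain * (dp_kpa - dp_target))

new_rate = adjust_production_rate(rate=100.0, dp_kpa=40.0)
print(new_rate)  # a dp 10 kPa above target trims the rate by 1% -> 99.0
```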
Wet End
After bleaching, pulp from different sources is mixed as needed for various products and
mechanically refined prior to delivery to the headbox. Sizing agents, dyes, and additives must be added
under strict measurement conditions to promote the desired paper characteristics. These chemicals are
more effective under controlled pH conditions.
Honeywell instruments measuring variables such as pH, conductivity and flow increase the
effectiveness of the reactions and therefore the quality of the end product. Honeywell’s analytical
instruments require less calibration and fewer replacements than other offerings.
Waste Treatment
Accurate analytical measurements are also key to waste treatment in a combined pulp and paper mill.
Wastes coming together from different processes create a constantly changing waste stream that must be
carefully monitored so treatment and disposal can be made according to environmental regulations.
Honeywell’s pH and conductivity sensors and magnetic flowmeters are ideal for these regulated
requirements.
Utilities
There are various steam, boiler and recovery utilities in a pulp and paper mill that
need to be controlled. One of the most important is the recovery of spent cooking chemicals. Accurate
level measurement is critical in this lengthy process. Honeywell’s multivariable pressure and vortex flow
with integral pressure and temperature compensation offer effective and efficient measurements. In
addition, Honeywell’s control system offerings provide accurate and tight control of boiler applications
with minimal engineering required.
Honeywell offers industry-proven field instruments and control systems that set the standard for
performance and reliability, providing the safety, security and efficiency required by the most demanding
applications and environment. We have a proven track record of reducing risk, avoiding downtime, and
providing customers with long-term support and migration paths. Honeywell is known for its robust
distributed control systems (DCS), such as Experion PKS. But for those applications where a DCS is not
required, Honeywell offers an alternative solution.
MasterLogic PLC
Honeywell’s MasterLogic PLC brings power and robustness to logic, interlock
and sequencing applications that do not need all the features of a DCS. This compact,
modular PLC offers a redundant architecture at a lower cost than other major brands. Its advanced
technology enables higher speed processing and better control. MasterLogic PLC I/O is the standard I/O
platform for Honeywell’s PMD controller installation.
• Powerful and versatile processors
• IEC 61131-3 standard programming
• Vast library of standard function blocks
• Over 50 types of I/O modules
• Smart I/O modules (DIN rail) on open protocols
HC900 Hybrid Control System
The Honeywell HC900 is an advanced hybrid control system with a modular, scalable design.
It can serve as a versatile control solution for applications such as sheet and cast film extrusion.
The combination of precise control functionality, direct sensor inputs, and integrated logic processing
makes the HC900 ideal for temperature control. The system is available with options such as MD thickness
control (via line or screw speed) and CD bay control (for up to 96 bolts).
• Tightly integrated analog and logic control
• Open Ethernet network connectivity
• Automated loop tuning
• PC supervision and/or local flat panel HMI
• Setpoint schedulers with multiple ramp/soak outputs
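The interlock and sequencing logic that a PLC of this kind executes each scan cycle can be modeled in ordinary code. The sketch below is illustrative only: it mimics the read-evaluate-write scan of a PLC, but the signal names and the interlock conditions are hypothetical, not IEC 61131-3 syntax or actual MasterLogic configuration.

```python
# Illustrative model of a PLC-style interlock scan: every cycle the
# controller reads its inputs, evaluates the interlock logic, and
# writes the outputs. Signal names are hypothetical.

def scan(inputs: dict) -> dict:
    """One scan cycle: permit the pump only when the tank has stock,
    the discharge valve is open, and no emergency stop is active."""
    pump_permitted = (inputs["tank_level_ok"]
                      and inputs["valve_open"]
                      and not inputs["e_stop"])
    # The pump runs only when permitted AND a start command is present.
    return {"pump_run": pump_permitted and inputs["start_cmd"]}

outputs = scan({"tank_level_ok": True, "valve_open": True,
                "e_stop": False, "start_cmd": True})
print(outputs)  # {'pump_run': True}
```

In a real PLC the same logic would typically be written in ladder diagram or structured text and executed deterministically every few milliseconds; the redundant-processor architecture mentioned above keeps the scan running if one CPU fails.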

5.5.3 Chemical Plant


Chemical plant monitoring and control systems
A chemical plant must adjust and control process variables (temperature, pressure, flow rate,
and so on) to produce its products, connecting each facility with piping to realize regulated processes
(reaction, distillation, filtration, etc.).
This solution provides a large-scale plant monitoring and control system centered on a DCS (distributed
control system), together with a monitoring and control system for utility equipment such as power and
heat sources, matched to the scale of the plant.
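The monitoring side of such a system can be sketched as an alarm scanner that compares each process variable against its configured limits, as a DCS alarm subsystem would. This is a minimal sketch under assumed tag names and limits; none of these values come from a real plant.

```python
# Minimal sketch of DCS-style alarm scanning: compare each process
# variable reading against its configured low/high alarm limits.
# Tag names and limit values are hypothetical.

LIMITS = {
    "reactor_temp_C":   (150.0, 180.0),  # (low limit, high limit)
    "column_press_kPa": (90.0, 110.0),
    "feed_flow_m3h":    (5.0, 12.0),
}

def check_alarms(readings: dict) -> list:
    """Return (tag, state) pairs for every variable outside its limits."""
    alarms = []
    for tag, value in readings.items():
        lo, hi = LIMITS[tag]
        if value < lo:
            alarms.append((tag, "LOW"))
        elif value > hi:
            alarms.append((tag, "HIGH"))
    return alarms

print(check_alarms({"reactor_temp_C": 185.0,
                    "column_press_kPa": 100.0,
                    "feed_flow_m3h": 4.0}))
# [('reactor_temp_C', 'HIGH'), ('feed_flow_m3h', 'LOW')]
```

A real DCS would add alarm priorities, deadbands to suppress chattering near a limit, and operator acknowledgement, but the core comparison is the same.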
