
Advanced Control

Unleashed
Plant Performance Management
for Optimum Benefit

Terrence L. Blevins
Gregory K. McMillan
Willy K. Wojsznis
Michael W. Brown

ISA - The Instrumentation, Systems, and Automation Society
Notice
The information presented in this publication is for the general education of the reader. Because
neither the author nor the publisher have any control over the use of the information by the reader,
both the author and the publisher disclaim any and all liability of any kind arising out of such use.
The reader is expected to exercise sound professional judgment in using any of the information
presented in a particular application.
Additionally, neither the author nor the publisher have investigated or considered the effect of any
patents on the ability of the reader to use any of the information in a particular application. The
reader is responsible for reviewing any possible patents that may affect any particular use of the
information presented.
Any references to commercial products in the work are cited as examples only. Neither the author
nor the publisher endorse any referenced commercial product. Any trademarks or tradenames
referenced belong to the respective owner of the mark or name. Neither the author nor the publisher
make any representation regarding the availability of any referenced commercial product at any
time. The manufacturer's instructions on use of any commercial product must be followed at all
times, even if in conflict with the information in this publication.

Copyright © 2003 ISA - The Instrumentation, Systems, and Automation Society

All rights reserved.

Printed in the United States of America.


10 9 8 7 6 5 4 3

ISBN 1-55617-815-8

No part of this work may be reproduced, stored in a retrieval system, or transmitted in any form or
by any means, electronic, mechanical, photocopying, recording or otherwise, without the prior writ-
ten permission of the publisher.

ISA
67 Alexander Drive
P.O. Box 12277
Research Triangle Park, NC 27709

Library of Congress Cataloging-in-Publication Data is in process.


Dedication
This book is dedicated to Karen Blevins, Cathy MacDonell Brown,
Carol McMillan, and Susan Wojsznis who have provided encouragement
and support throughout our careers.
Acknowledgement

The authors wish to express their appreciation to Mark Nixon and Ron
Eddie from Emerson Process Management, for their enthusiastic support
and commitment of resources for this book, to Jim Hoffmaster, Bud Keyes,
Duncan Schleiss, John Berra, and Gil Pareja from Emerson Process Man-
agement for their inspiration and support in establishing the DeltaV
advanced control program, to Karl Astrom from Lund University, Tom
Edgar from the University of Texas at Austin, Dale Seborg from the
University of California, Santa Barbara and Tom McAvoy from the Univer-
sity of Maryland for their guidance in the pursuit of new technologies,
Mike Gray and Mark Mennen from Solutia Inc. for the initiation and
sustenance of advanced control applications and innovations, Ken
Schibler from Emerson Process Management for his help in setting the
direction of the book, Robert Cameron, Michael Mansy, Glenn Mertz, and
Gina Underwood from Solutia Inc. for their valuable comments, and
finally, Scott Weidemann from Washington University, and Jim Cahill,
Brenda Forsythe, and Cory Walton from Emerson Process Management
for their essential contributions to the videos and demos on the CD. The
authors extend their thanks to the developers of the advanced control
tools that were the inspiration for this book. This includes Vasiliki Tzovla,
Ron Ottenbacher, Dirk Thiele, Ashish Mehta, Yan Zhang, Peter Wojsznis,
John Gudaz, Ian Nadas and Mei Yang. Also, we would like to recognize
the valuable contribution of Tom Aneweer, Dennis Stevenson, Jay Colcla-
zier, Darrin Kuchle, Dick Seemann, Joe S. Qin, Steve Morrison, Mike Ott,
and Sai Ganesamoorthi.

The discipline of Process Control and Advanced Process Control is an
exciting, challenging and rewarding field of engineering. Some of us
moved into this discipline by chance, while others made a conscious deci-
sion to become Process Control Engineers. Regardless of our entry point or
motivation, we all appreciate the fact that engineers before us took the
time and effort to teach us the tools and techniques that allowed us to
achieve success. This book passes along the knowledge of many years and
many people and acknowledges the efforts of our engineering mentors.
We hope it will allow others not only to benefit from all the experience we
have benefited from so greatly but also to take the technology to the next
level.
About the Authors

Terrence L. Blevins has been actively involved in the application and
design of process control systems throughout his career. For over fifteen
years, he worked as a systems engineer and group manager in the design
and startup of advanced control solutions for the pulp and paper industry.
Terry was instrumental in the establishment of Emerson Process Manage-
ment's Advanced Control Program. He is the Fieldbus Foundation team
leader for the Function Block Specification. In this capacity, Terry is
involved in the movement of Fieldbus Foundation function block work
into international standards. Terry is the US expert to the IEC TC65 WG6
and SC65C WG7 function block committees. He wrote the fieldbus section
included in the Process/Industrial Instrumentation and Controls handbook.
Terry has eight patents and has written over forty papers on process con-
trol system design and applications. He received a BEE from the Univer-
sity of Louisville in 1971 and a Master of Science degree in Electrical
Engineering from Purdue University in 1973. Presently, Terry is a principal
technologist in DeltaV Product Engineering and the team leader for
DeltaV advanced control product development.
Phone: (512) 418-4628
E-mail: terry.blevins@EmersonProcess.com

Gregory K. McMillan retired as a Senior Fellow after a 33-year career with
Monsanto and Solutia Inc., where he specialized in improving loop perfor-
mance, controller tuning, valve dynamics, opportunity assessments,
dynamic simulation, fermentor control, pH control, and reactor control.
Greg is the author of numerous articles and books, his most recent book
being: Good Tuning - A Pocket Guide. He has contributed to several hand-
books and is the editor of the Process/Industrial Instrumentation and Controls
handbook and the coauthor of a monthly column titled "Control Talk".


Greg is an ISA Fellow and received the ISA "Kermit Fischer Environmen-
tal" Award for pH control in 1991, the Control Magazine "Engineer of the
Year" Award for the Process Industry in 1994, and was one of the first
inductees into the Control Magazine "Process Automation Hall of Fame"
in 2001. He received a B.S. from Kansas University in 1969 in Engineering
Physics and a M.S. from University of Missouri - Rolla in 1976 in Control
Theory. Presently, Greg is an affiliate Professor at Washington University
in Saint Louis, Missouri and is a consultant through EDP Contract Services
in Austin, Texas.
Cell Phone: (314) 703-9981
E-mail: gkmcmi@msn.com

Willy K. Wojsznis has been involved in developing advanced control
products over the last twelve years, focusing on model predictive control
and auto tuning. Over the nearly 25 preceding years of his career, he
developed computer control systems and applications in the cement, steel,
mining, and paper industries. His professional work has resulted in a
number of successful and innovative advanced control products, fourteen
patents, and more than thirty technical papers. He received a control
engineering degree (EE) from Kiev Technical University in 1964, an M.S. in
Applied Mathematics from Wroclaw University in 1972, and a Ph.D. from
Warsaw University of Technology in 1973. Presently, Willy is a part of the
DeltaV advanced control group. He conducts applied research in the areas
of optimization, adaptive
control and model predictive control.
Phone: (512) 418-7475
E-mail: willy.wojsznis@EmersonProcess.com

Michael W. Brown has spent his entire career in the application of
Advanced Process Control technologies in the continuous processing
industries. Over the previous 15 years of his career, he has served as an
Advanced Process Control Consultant, providing implementation exper-
tise and technology guidance for many operating companies. His exten-
sive knowledge and experience in the areas of model based predictive
control and real-time optimization have assisted many companies in cap-
turing the benefits of these performance improvement technologies. His
work has resulted in several technical papers, published in various control
journals. Michael is a chemical engineer who received a B.A.Sc. from the
University of Waterloo in 1987 and completed his Master of Applied Science
in 1989. Presently, Michael is the Applications Business Manager with
Matrikon Inc., where he continues to work with industry to bring
Advanced Control Technology to the next level.
Phone: (905) 282-9248
E-mail: mike.brown@matrikon.com
Foreword

There has been a dynamic development of control over the past 50 years.
Many new methods have appeared. The methods have traditionally been
presented in highly specialized books written for researchers or engineers
with advanced degrees in control theory. These books have been very use-
ful to advance the state of the art. They are, however, difficult for the
average engineer. The reasons are that it is necessary to read many books
to get good coverage of advanced control techniques and that the level of
mathematics used requires substantial preparation. This is a dilemma because
several of the advanced control techniques have indeed been very benefi-
cial in industry, and more engineers should be aware of them. Even if
many details of the new methods are complicated the basic underlying
ideas are often quite simple. Many methods have also been packaged so
that they are relatively easy to use. It is thus highly desirable to present the
industrially proven control methods to ordinary engineers working in
industry. This book is a first attempt to do this. The book provides a basis
for assessing the benefits of advanced control. It covers auto-tuning,
model predictive control, optimization, estimators, neural networks, fuzzy
control, simulators, expert systems, diagnostics, and performance assess-
ment. The book is written by four seasoned practitioners of control, hav-
ing jointly more than 100 years of real industrial experience in the
development and use of advanced control. The book is well positioned to
provide the bridge over the infamous Gap between Theory and Practice in
control.

Karl J. Astrom

Contents

ACKNOWLEDGEMENT xiii

ABOUT THE AUTHORS xv

FOREWORD xvii

Chapter 1 INTRODUCTION 1

Chapter 2 SETTING THE FOUNDATION 5


Practice, 5
Overview 5
Opportunity Assessment 12
Examples 15
Application, 20
General Procedure 20
Application Detail 26
Rules of Thumb 74
Theory, 76
Process Time Constants and Gains 76
Process Time Delay 79
Ultimate Gain and Period 80
Peak and Integrated Error 82
Feedforward Control 84
Dead Time from Valve Dead Band 84
Nomenclature, 85
References, 86


Chapter 3 APC PATHWAYS 89


Practice, 89
Overview 89
Opportunity Assessment 94
Examples 103
Application, 106
General Procedure 106
Application Detail 108
Rules of Thumb 115
References, 116

Chapter 4 EVALUATING SYSTEM PERFORMANCE 119


Practice, 119
Overview 119
Opportunity Assessment 121
Examples 125
Application, 129
General Procedure 129
Application Details 131
Rules of Thumb 143
Guided Tour 144
Theory, 147
Using Statistics for Control Performance Evaluation 150
Extending the Concept to the Multi-variable Environment 153
Addressing Advanced Control 154
Diagnostic Tools 156
References, 160

Chapter 5 ABNORMAL SITUATION MANAGEMENT 163


Practice, 163
Overview 163
Opportunity Assessment 165
Examples 166
Application, 168
General Procedure 168
Application Details 169
Rules of Thumb 171
Guided Tour 173
Theory, 177
Introduction to Expert Systems 177
Rules 178
Inference Engine 180
Facts 181
References, 182

Chapter 6 AUTOMATED TUNING 183


Practice, 183
Overview 183
Opportunity Assessment 185
Examples 187
Application, 197
General Procedure 197
Application Detail 200
Rules of Thumb 202
Guided Tour 206
Theory, 208
Introduction to Auto Tuners 208
Basics of Relay-Oscillation Tuning 210
Model Based Tuning 218
Robustness Based Tuning 221
Adaptive Control 225
References, 237

Chapter 7 FUZZY LOGIC CONTROL 239


Practice, 239
Overview 239
Opportunity Assessment 240
Examples 240
Application, 241
General Procedure 241
Rules of Thumb 242
Guided Tour 242
Theory, 244
Introduction to Fuzzy Logic Control 244
Building a Fuzzy Logic Controller 247
Fuzzy Logic PID Controller 251
Fuzzy Logic Control Nonlinear PI Relationship 254
FPID and PID Relations 257
Automation of Fuzzy Logic Controller Commissioning 258
References, 259

Chapter 8 PROPERTIES ESTIMATION 261


Practice, 261
Overview 261
Opportunity Assessment 263
Example — Dynamic Linear Estimator 265
Examples - Neural Networks 269
Application, 274
General Procedure 274
Application Detail 279
Rules of Thumb 289
Guided Tour 289
Theory, 294
Dynamic Linear Estimator 294
Neural Networks 296
References, 305

Chapter 9 MODEL PREDICTIVE CONTROL 307


Practice, 307
Overview 307
Opportunity Assessment 310
Examples 316
Application, 337
General Procedure 337
Application Detail 339
Rules of Thumb 353
Guided Tour 355
Theory, 362
The Basics of Process Modeling 364
Identifying the Process Model 367
Unconstrained Model Predictive Control 369
Integrating Constraints Handling, Optimization and Model Predictive Control 373
References, 381

Chapter 10 VIRTUAL PLANT 383


Practice, 383
Overview 383
Opportunity Assessment 386
Examples 387
Application, 389
General Procedure 389
Online Adaptation 393
Application Detail 395
Rules of Thumb 399
Guided Tour 400
Theory, 403
References, 408

Appendix A ADDITIONAL OPPORTUNITY ASSESSMENT QUESTIONS 409

Appendix B BATCH-TO-CONTINUOUS TRANSITION 415

Appendix C DEFINITIONS 419



Appendix D TOP 20 MISTAKES 425

INDEX 431
1
Introduction

The advent of powerful and friendly integrated software has moved
advanced process control (APC) from the realm of consultants into the
arena of the average process control engineer. The obstacles of infrastruc-
ture and special skill requirements have started to disappear and we are
poised for an accelerated application of APC.

It is well known that APC seeks to discover, incorporate, and exploit
knowledge about raw materials, process, product, equipment, instrumen-
tation, and final elements. What is not often recognized is the significant
increase in the knowledge base of both plant and field operations that
occurs as the APC system is developed. In fact, an appreciable portion of
the benefits are achieved by improvements made in operating procedures,
set points, sensors, and control valves as a result of the methodical analy-
sis, testing, modeling, and prototyping that are part of the best practices
used in the implementation of APC systems.

Until recently, most of this knowledge ended up with consultants, and the
success of the application often deteriorated once they departed. There is
now an opportunity for the engineers closest to the process and daily
operations to take a much more active role in the development and sup-
port of APC applications. It is a win-win situation in that the cost of APC
can be reduced by using consultants primarily in a higher-value-added
role of conceptual design and optimization. Even more importantly,
greater understanding, support, and involvement of onsite engineers can
increase the success rate, the on-stream time, and the longevity of an APC
application. This decrease in the cost and increase in the benefits will in
turn lead to a larger number of successful APC installations and a greater
interest in APC as a method of improving process efficiency and capacity.


However, much of the purpose and use of APC has been clouded in the-
ory. The theory is scattered among many books written for graduate
school programs in advanced process control. Application papers typi-
cally concentrate on the benefits of specific APC projects and serve more
as advertisements for particular consulting or software firms than as
implementation guides. Little if anything has been written for the practic-
ing engineer on how to select, design, configure, commission, and tune
APC systems. The purpose of this book is to demystify APC and make it
more accessible. To that end, the book focuses on practice and applications
backed up by enough theory to insure a deeper understanding.

Each chapter is organized to provide concise practical information that a
user can readily explore and reference to start and complete a successful
implementation. Each chapter has three major sections, entitled Practice,
Application, and Theory.

The PRACTICE section starts with an Overview that provides a concise
explanation of the technology and its importance. It provides the motiva-
tion and basis for investing more time in learning and pursuing the tech-
nology. Next is an Opportunity Assessment subsection that offers a simple
approach to determine whether the technology is applicable to a particular
unit operation. It consists of a set of concepts and questions to start the
thought process and discussions to find potential applications. The Exam-
ples subsection rounds out the section. The sample of applications pre-
sented here helps to instill a better practical understanding of the use of the
technology in the process industry.

The APPLICATION section starts with a General Procedure subsection that
presents a "go-to" checklist to introduce the user to the normal sequence
of events for a successful application. This list provides a good reference to
make sure all bases are covered and is useful for planning, scheduling,
estimating, and monitoring APC projects. Next there is an Application
Detail subsection that summarizes most of what a user needs to know. A
building-block approach is used wherever possible, starting with a basic
"bare-bones" application and adding successively more capabilities to end
up with a full complement of advanced features. Next is a series of Rules of
Thumb concisely phrased to be readily referenced and remembered. A
brief explanation with any notable exceptions follows each rule. The sec-
tion ends with a Guided Tour to give the reader a feel for how advances in
software have made implementation easy enough that the user can focus
on the opportunity offered by these APC tools to discover, incorporate,
and exploit plant knowledge.

The THEORY section presents the major facets of selected approaches to the
deployment of each APC technology as part of a state-of-the-art tool set.
For brevity, the section does not survey all the possible methodologies and
techniques, but focuses on those that are innovative and simple enough to
be integrated into a distributed control system.

This book covers a great deal of ground. Each of the technologies dis-
cussed here could easily fill a book in itself. However, users today don't
have the time or inclination to read a lot of material. Lists, hints, rules of
thumb, and concise explanations are employed to save the reader time and
to provide both a better perspective on the whole picture and an improved
ability to drill down to obtain specific implementation guidance. The book
concentrates on what is most important. Users can quickly get to the heart
of the matter without getting lost in the details associated with a specific
tool or suffering from information overload.

While a user can go directly to a given chapter to learn about a particular
technology, the authors recommend that Chapters 2 and 3 be read first.
They provide the necessary foundation on which to build an APC applica-
tion and the logic to select the most appropriate technology.

Included with the book is a compact disc that contains a set of examples of
the technologies discussed in the book. They demonstrate, by means of a
step-by-step procedure and a detailed dynamic process model, how to
configure, test, and run each APC application. Configuration and case files
use a virtual plant that has a complete scalable Distributed Control System
(DCS) with a suite of APC tools and a high-fidelity plant simulation.

A companion set of PowerPoint slides that illustrates all of the major fig-
ures, equations, tables, lists, and rules included in the book is on the CD.

These slides and the hands-on exercises make the book practical as a text-
book for courses on both basic and advanced process control. Chapters 2
and 6 receive the most extensive treatment because introductory courses
are most common. Also, students and users alike need to first concentrate
on getting the basic regulatory control system designed correctly and
tuned properly before moving on to more advanced topics. Most of the
material has been tested in an introductory course on process control for
junior and senior chemical engineers at Washington University in Saint
Louis. These students have demonstrated the ability to immediately apply
these APC tools to example problems after a brief tutorial, using their
computer skills and a powerful integrated Windows® environment. The
sound mathematical foundation of APC makes it easier to learn than basic
control, which is more heuristic.

The tutorials and presentations on the CD do not require any special soft-
ware or hardware beyond a PC with a media player, speakers, and a dis-
play with a screen area of at least 1024 by 768 pixels.

This book with its appendices and CDs should enable the average process
engineer to develop a good understanding of the representative principles
and techniques of APC. This knowledge will be helpful in setting objec-
tives, evaluating potential APC opportunities, and applying the most
appropriate APC technologies. Readers should feel free to contact the
authors at their e-mail addresses if they have any questions about the use
of the book, exercises, demos, slides, or APC tools described.

All royalties from this book will be given directly to universities, consortia,
and educational programs to promote and enhance the development and
use of advanced process control. A beneficiary of each year's royalties will
be chosen by the authors.
2
Setting the Foundation

Practice
Overview
The advanced control projects with the largest benefits usually have made
significant improvements in the basic regulatory control system. While
advanced process control (APC) techniques can partially compensate for
such limitations as missing measurements, excessive dead time, and poor
signal-to-noise ratios, a solid foundation will provide the lowest total cost,
greatest total benefit, and the longest lifecycle for the advanced control
system. Deficiencies in the measurement and the final element can
increase the time required for process testing and identification by a factor
of 5 or more and can significantly reduce the improvement in process
capacity and efficiency provided by APC.

The core of a solid foundation for advanced process control is good mea-
surements and final elements. The measurement is the window into the
process and must be able to provide an undistorted view of small changes
in the process. The final element is the means of affecting the process and
must be able to make small changes to the process. This overview pro-
vides a perspective of how these objectives are best met by reducing the
reproducibility error, noise, and interferences in the measurement and
decreasing the stick-slip and dead band in the control valve.

Measurement
Reproducibility is the closeness of agreement of an output for an input
approaching from either direction at the same operating conditions over a
period of time. Repeatability is the closeness of agreement of an output for
successive inputs approaching from the same direction at the same operat-
ing conditions. Reproducibility includes the repeatability as it deteriorates
over time plus drift, and is the better number for control. Another impor-
tant consideration is the interference from changes in process fluids and
operating conditions. Unfortunately, the specifications given by manufac-
turers for such measurements as accuracy, linearity, or rangeability are
extraneous if not misleading because they are either not as important as
reproducibility, drift, and interference or are generated under fixed labora-
tory conditions.

If the measurement is noisy or not reproducible, the controlled variable
will change when there is no change. If the change in the process variable
is less than the resolution limit of the sensor or its digital representation or
conversion, there is no change. Consequently, it is important to make sure
the measurement reproducibility error, resolution, and noise band total
less than 1/5 of the allowable control band, or 1/5 of the permissible vari-
ability about the set point. Measurement resolution without any sensor
limitations is 0.05% of span for a twelve bit digital conversion, so it is nor-
mally not an issue except when large spans associated with thermocouple
and resistance temperature detector (RTD) input cards are used in loops
that need to control the temperature within 0.5° Centigrade of set point. It
is important to make sure that measurement time delay and time con-
stants do not excessively slow down or attenuate the view of the actual
changes in the process variable. Of lesser concern is an offset and nonlin-
earity in a measurement, since these can be compensated for by a shift in
the set point or a simple zero adjustment in the transmitter and signal
characterization, respectively.
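
As a concrete check of the 1/5 rule above, the short sketch below (in
Python, with hypothetical numbers for a temperature loop on a 400°C span
input card) totals the reproducibility error, resolution, and noise band
and compares the sum to 1/5 of the allowable control band; none of the
values are from a real transmitter specification.

    # Hypothetical error budget for a temperature measurement (deg C).
    span = 400.0                    # thermocouple input card span
    resolution = 0.0005 * span      # 0.05% of span for a 12-bit conversion
    reproducibility = 0.15          # assumed from sensor/transmitter data
    noise_band = 0.10               # assumed observed noise band
    control_band = 1.0              # permissible variability about set point

    total_error = reproducibility + resolution + noise_band
    if total_error < control_band / 5.0:
        print("Measurement error budget is adequate:", total_error)
    else:
        print("Measurement will limit control:", total_error)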

Changes in process or ambient conditions show up as drift or slow noise.
For example, changes in a fluid's density will change the liquid head and
hence the level reading from a differential pressure measurement; changes
in a fluid's kinematic viscosity will change the meter coefficient and hence
the flow reading from a vortex meter; and changes in the activity coeffi-
cient of the hydrogen ion will change the glass electrode potential and
hence the pH reading. In this text, drift will be considered to be in effect a
long-term reproducibility error.
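
To illustrate how a process change masquerades as drift, the sketch below
(Python, illustrative values only) computes the level indicated by a
differential pressure transmitter calibrated for one liquid density when
the actual density shifts; the 2% density change appears as a 2% level
error with no change in level.

    # Level inferred from differential pressure: level = dp / (rho * g)
    g = 9.81                   # m/s^2
    rho_design = 1000.0        # kg/m^3 density assumed at calibration
    rho_actual = 980.0         # kg/m^3 after a process composition change
    true_level = 2.0           # m
    dp = rho_actual * g * true_level   # Pa actually produced

    level_indicated = dp / (rho_design * g)
    print(level_indicated)     # 1.96 m indicated for a true level of 2.0 m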

Final Element
The most common final element is the control valve. Controller outputs
also manipulate the speed of pumps and power to heaters. With final ele-
ments that are totally electronically set, there are no issues of stick and slip
as there are for control valves, and any dead band that exists is purposely
introduced and adjustable to reduce the response to noise. Also, the
response of the manipulated variable (flow for the pump and heat for the
heater) is linear with controller output. Variable-speed drives have essen-
tially no time delay or time constant and rate limiting is normally adjust-
able and not an issue except for surge control. Heaters are inherently slow,
but most temperature processes are also slow.

Usually, a control valve will not move on its own or when the controller
output is constant unless the actuator is undersized or the positioner is
unstable. Also, if the valve were to drift, the positioner and process con-
troller would correct for it. Thus, long-term reproducibility and noise are
not normally issues for control valves. While noise is not generated in the
valve stroke, noise in the process variable can be passed on as rapid
changes in the valve signal, which, if they exceed the resolution limit or
dead band of the control valve, can cause excessive wear and tear and pre-
mature failure of the packing.

If a control valve sticks, there is no change in flow until the accumulated
changes exceed the resolution of the valve. If the valve slips, the change in
flow is more discrete than continuous and is much larger than demanded
by the corresponding change in controller output [2.2] [2.3]. For dead band
or backlash, there is no change in flow when the controller reverses direc-
tion until the change exceeds the dead band as shown in Figure 2-1.

Stiction, a combination of the words "stick" and "friction," adds time
delay to a loop for any change in the controller output; dead band adds
time delay only for reversals in direction; and slip causes the process vari-
able to pass right by the set point. The dead time from stiction and back-
lash will increase the error from load upsets due to the additional time
delay but doesn't slow down the set point response as much if the changes
in set point are rather large. Studies that introduce white noise or set point
changes instead of load upsets will report a negligible increase in inte-
grated error from dead band [2.2]. Other studies that concentrate on the
effect of the additional time delay from dead band on load upsets show a
50% increase in the peak error and a 100% increase in the integrated error
from 10% dead band [2.4]. Oscillations in loops with dead band tend to
dissipate unless there are interactions or aggressive tuning. Slip, on the
other hand, will cause a limit cycle if there is any reset action, because the
process variable will not come to rest exactly at set point and when reset
works to eliminate the offset the eventual change in flow is too large. The
stroking time of the control valve is not as big an issue except for very fast
loops and very large actuators.

Thus, in the normal scheme of things, slip is worse than stick, and stick is
worse than dead band, and dead band is worse than stroking time. For
sliding stem valves, stick-slip and dead band go hand in hand since the
common cause is excessive packing friction. In fact, if the slip is equal to
the stick, it is effectively the same thing as the resolution limit. The resolu-
tion of sliding stem valves can be estimated as half of the dead band [2.4].
In other words, where you have excessive dead band, you tend also to
have excessive stick-slip. However, in rotary valves, there are different
sources of stick-slip and dead band. A rotary valve could have a large
dead band but little stick-slip.

Figure 2-1. Definition of Dead Band and Stick-Slip
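
The definitions in Figure 2-1 can be stated as simple signal models. The
sketch below (Python) is an idealized textbook model, not a vendor
algorithm: the backlash function holds the flow on a reversal until the
controller output traverses the dead band, and the stick_slip function
holds the valve until the accumulated signal change exceeds the resolution
and then slips; the parameter values are illustrative.

    # Idealized models of the valve defects defined in Figure 2-1.
    # All quantities are in % of span.

    def backlash(signal, dead_band=0.5):
        # Dead band: no change in flow on a reversal until the change
        # in controller output exceeds the dead band.
        y = signal[0]
        out = []
        for u in signal:
            if u - y > dead_band / 2.0:
                y = u - dead_band / 2.0
            elif y - u > dead_band / 2.0:
                y = u + dead_band / 2.0
            out.append(y)
        return out

    def stick_slip(signal, resolution=0.5):
        # Stick-slip: the valve holds until the accumulated change in
        # controller output exceeds its resolution, then slips.
        y = signal[0]
        out = []
        for u in signal:
            if abs(u - y) >= resolution:
                y = u
            out.append(y)
        return out

    ramp = [0.0, 0.2, 0.4, 0.6, 0.8, 1.0, 0.8, 0.6, 0.4, 0.2, 0.0]
    print(backlash(ramp))    # flow stops changing while the dead band is crossed
    print(stick_slip(ramp))  # flow changes in discrete jumps (a staircase)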

Until recent years, when you asked a control valve manufacturer to esti-
mate the dynamic response of a control valve, you were given the stroking
time of the actuator. Even now, if you ask for a response time that includes
the valve, it will be for a change of 10% in controller output at 50% posi-
tion so that the effect of stick-slip and dead band are largely removed [2.5].
In actual operation, the change in controller output per scan is typically
less than 0.5% and can occur at positions less than 20% where the friction
of the sealing surfaces increases the stick-slip. Tests done at these condi-
tions will unearth the real response problems. In valves, stick and slip go
together and can be identified while the loop is operating for fast measure-
ments, as shown in Figure 2-2. Here, stick is the amount of change in the
controller output where there is no change in the process variable and slip
is the rapid change in the process variable divided by the product of the
valve and process gain.

Figure 2-2. Identification of Stick and Slip in a Closed-loop Response
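
Following this description, stick and slip can be estimated from a trend
of the controller output and a fast process measurement. The sketch below
(Python, with fabricated trend values and assumed gains) applies the two
definitions directly.

    # Hypothetical closed-loop trend sampled while the output ramps.
    co = [50.0, 50.2, 50.4, 50.6, 50.8]   # controller output, %
    pv = [40.0, 40.0, 40.0, 40.0, 41.6]   # process variable, %
    valve_gain = 2.0      # % flow per % signal at the operating point
    process_gain = 1.0    # % PV per % flow

    # Stick: change in controller output over which the PV did not move.
    stick = co[4] - co[0]                                   # 0.8%

    # Slip: rapid change in PV divided by the product of the gains.
    slip = (pv[4] - pv[3]) / (valve_gain * process_gain)    # 0.8%
    print(stick, slip)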

The effect of nonlinearity in the control valve is often misunderstood. In
order to analyze the effect on the control loop, you must look at the prod-
uct of the valve gain and the process gain. Since the process gain is
inversely proportional to flow for temperature and composition control
[2.5], the equal percentage characteristic, where the valve gain is propor-
tional to flow, helps to compensate for process-gain nonlinearity. For con-
trol loops where the process gain is not inversely proportional to flow or
where the installed characteristic deviates from the ideal equal percentage
characteristic, the nonlinearity of the valve characteristic becomes impor-
tant and there is a range of permissible valve gain, as shown in Figures 2-
3a through 2-3c. Also, the operating-point nonlinearity from the installed
valve characteristic has a greater detrimental effect than the operating-
point nonlinearity associated with a process variable, because the operat-
ing point of the valve changes with load per the demands of the controller
whereas the operating point of the process variable is driven back to the
set point. Correspondingly, process nonlinearity in secondary loops in a
cascade or advanced control system becomes more important because the
set point is driven to new operating points to meet the demands of the
master, supervisory, or model predictive controller. The secondary loop
should be 5 times faster than the primary loop so that the primary loop
isn't affected by the nonlinearity inside the secondary loop.

Signal characterization of the controller output is a relatively quick fix for
unwanted valve nonlinearity because it is easy to implement in Fieldbus
functional blocks. What is not so easy to do is to calculate the actual shape
of the installed characteristic, unless the changes in the inlet and outlet
pressure to the control valve are measured. Also, the operator must be
made aware of the fact that the controller output now represents percent
desired flow instead of percent valve position. For maintenance and trou-
bleshooting, the characterizer output that represents the calculated valve
position target should be displayed along with any feedback of actual
valve position. In addition, signal characterization will increase and
decrease the effect of dead band or stiction for operating points on the
steep and flat portions of the installed characteristic, respectively. [2.5]
Thus, a greater improvement in loop performance is realized on the flat
tail of the installed characteristic curve associated with rotary valves in
Figures 2-3a and 2-3b. However, the user must remember that these curves
can become so flat above 60 degrees opening that there is essentially no
change in flow for a change in rotation and the valve gain approaches
zero.
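
In a Fieldbus signal characterizer block, the characterization amounts to a
piecewise-linear interpolation through the inverse of the installed
characteristic. A minimal sketch follows (Python, with a made-up
characteristic; real points must come from measured or calculated installed
flow data).

    # Installed characteristic as (valve position %, flow %) pairs.
    position_pts = [0, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100]
    flow_pts     = [0, 20, 45, 65, 78, 87, 93, 96, 98, 99, 100]

    def characterize(co_percent):
        # Controller output is percent desired flow; interpolate through
        # the inverse characteristic to get the valve position target.
        pairs = list(zip(flow_pts, position_pts))
        for (f0, p0), (f1, p1) in zip(pairs, pairs[1:]):
            if f0 <= co_percent <= f1:
                return p0 + (p1 - p0) * (co_percent - f0) / (f1 - f0)
        return 100.0

    print(characterize(50.0))   # valve position target for 50% flow demand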

To summarize, the numbers that traditionally have been cited by the man-
ufacturer for valve performance, such as leakage, stroking time, linearity,
and rangeability, do not provide the information needed to measure con-
trol loop performance. The user needs to know the stick-slip, dead band,
and sensitivity of the installed valve assembly at operating conditions.

Figure 2-3a. Installed Characteristic of a Butterfly Valve

Figure 2-3b. Installed Characteristic of a Ball Valve

Figure 2-3c. Installed Characteristic of a Sliding Stem Valve

Effect on APC
Advanced control tools such as feedforward control, online estimators,
and model predictive controllers can minimize to a significant degree the
effect of measurement deficiencies. Feedforward control can bypass the
irregularities and delay in the controlled variable but still must work
through the manipulated variable. Since the exact size of stick and dead
band is extremely variable, undercorrection is normal and the overall
improvement is minimal. Filters can reduce the effect of noise; and model
predictive control can reduce the adverse effect of noise, resolution, and
reproducibility by minimizing the error between a process vector created
from a model of the process and the set point vector [2.6]. However, its
model is based on the assumption that the control valve actually moved
for the recent past changes in the controller output. Thus, advanced con-
trol algorithms are more vulnerable to deficiencies in the control valve
than measurement. While a kicker algorithm can theoretically reduce the
effect of dead band and stiction, overcorrection will cause excessive move-
ment similar to slip [2.4]. There is no computational correction for valve
slip. The effect of slip is amplified by high valve sensitivity (valve gain)
and high process sensitivity (process gain). The only solution for slip, and
the best solution for stiction and dead band, is a change in the valve type,
assembly, and accessories or the use of a variable-speed drive.

To summarize, while it is desirable for the measurement to be linear and
accurate and the control valve's effect on the process to be linear and pre-
cise, it is most important that the signal-to-noise ratio, reproducibility,
time delay, and time lag of the measurement, and the stick-slip and dead
band of the control valve, be scrutinized in case they prevent the advanced
control system from doing its job. This scrutiny involves an analysis of the
type, location, and installation of the instrument and final element. Both
the degree to which deficiencies in the measurement can be compensated
for by advanced control techniques, and the permissible amount of stick-
slip and dead band, must be part of a cost-benefit analysis.

This chapter offers enough theory to provide a deeper understanding of
the origin of the relationships and rules presented, including the funda-
mentals necessary for graduate studies and to develop new tools for
advanced control. The Theory section covers the setup and direct solution
of the differential equations for material and energy balances to show the
fundamental relationships between process parameters and process time
constants and process gains. It also covers Frequency Response in enough
detail to show the source of the equations for the ultimate period and gain
and the basis of the first-order-plus-dead-time-approximation method for
loop analysis. The theory behind the various advanced control tools is dis-
cussed in succeeding chapters.

Opportunity Assessment
In this section, some questions are offered that could form an opportunity
assessment (OA) to find improvements in a basic control system. Question
(1) deals with the ability
to overdrive the manipulated variable on startup or for a major set point
change in a batch operation to reduce the amount of time it takes for the
controlled variable to reach set point. This question is also important to
the performance of an advanced control system because it helps reduce the time lag in
the manipulated variable for the model predictive controller. There is, of
course, a tradeoff between rise time and degree of permissible overshoot,
but in general, for temperature and composition control of volumes with
mixing and for the start of a continuous or batch process, the output
should initially be saturated high but backed off from this limit before the
controlled variable approaches set point. Various algorithms and tuning
methods are available to provide overdrive. The fraction that the startup
time or a batch cycle can be reduced is proportional to the ratio of the
missing area of overdrive to the total area of the manipulated variable dur-
ing the rise time.
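
As a rough numeric illustration of that fraction, the sketch below (Python,
fabricated trajectories sampled once per second, and assuming the "total
area" is that of the manipulated variable without overdrive) compares the
area the output would have had if saturated high against the area actually
delivered.

    # Manipulated variable (%) during the rise time (illustrative values).
    no_overdrive = [60, 60, 60, 60, 60, 60]     # settles directly at 60%
    overdrive    = [100, 100, 100, 80, 60, 60]  # saturated high, then backed off

    missing_area = sum(o - n for o, n in zip(overdrive, no_overdrive))
    total_area = sum(no_overdrive)
    print(missing_area / total_area)   # ~0.39 fraction of rise time recoverable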

Questions (2) through (4) determine whether a control loop is creating
variability instead of transferring the proper amount of variability from
controlled variables to manipulated variables. Questions (5) through (7)
involve looking at an application and installation and spotting some obvi-
ous flaws. The next three questions look for opportunities to apply cas-
cade and ratio control (also known as flow feedforward). These questions
generally apply to both batch and continuous unit operations. Questions
(11) through (14) look for opportunities to reduce batch cycle time. Appen-
dix B expands upon these questions to take a closer look at the methods to
reduce the time to reach a batch set point and eliminate operator attention
requests.

1. Could the valve position be initially driven beyond the resting
position when the controller settles out at set point, to reduce
continuous process startup time or batch process cycle time?
2. Is the variability less in the controlled variable of the loop when the
controller is in manual?
3. Is the variability less in other loops when the controller is in
manual?
4. Is the variability less in the process variable for an important
constraint if the controller gain or rate setting is decreased or
integral time is increased?
5. Would better reproducibility and less noise in measurement reduce
the variability in a process variable for an important constraint?
6. Have tight shutoff valves, high temperature packing, key lock
shafts, vane actuators, scotch yoke actuators, or valves without
digital positioners been used in control loops that affect important
constraints?
7. Have any of the top 20 mistakes been made in an important loop?
(See Appendix D for a list of the mistakes made every year for the
last forty years.)
8. Are there opportunities to linearize the manipulated variable for a
primary controller by creating a secondary loop that encloses the
nonlinearity?
9. Are there opportunities to attenuate a load upset to a primary loop
by creating a secondary loop that encloses the disturbance?
10. Are there flows that can be ratioed and used as a feedforward
signal to enforce a material balance for startup and to compensate
for changes in flow rate?
11. For batch operations, can phases be eliminated by going from
sequential to parallel actions, such as simultaneous heating, filling,
pressurization, and venting?
12. Can batch cycle time be reduced by a decrease in wait times, hold
periods, operator attention requests, manual actions, or lab sample
analysis time?
13. Can batch end points be automated by the use of a property
estimator, trajectory, or sustained rate of change?
14. Can batch cycle time be reduced by overdrive or an all-out run and
coast?
If the variability in a loop decreases when the loop is in manual, it indi-
cates that the loop was doing more harm than good, due to poor control
valve performance, inappropriate tuning, and/or interaction. If the valve
does not respond to small steps (e.g., 0.25% to 0.5%) in the controller out-
put, the oscillations are probably due to the control valve. If an increase in
the controller gain or a decrease in the integral time increases the variabil-
ity, it is mostly due to incorrect tuning. Lastly, if the variability in other
loops is less when a loop is put in manual, the variability is the result of
interaction.

If the variability in a loop increases when the loop is in manual, there are
load upsets that were being attenuated by the loop and it was doing some
good. If the variability stays the same, the fluctuations are mostly due to
noise or lack of measurement reproducibility.
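
These checks are easy to automate from historian data. The sketch below
(Python, with fabricated slices of the controlled variable) compares the
standard deviation with the loop in automatic and in manual; the 20% bands
used to call the result "about the same" are an arbitrary assumption.

    from statistics import stdev

    pv_auto = [50.3, 49.6, 50.5, 49.4, 50.6, 49.5]    # loop in automatic
    pv_manual = [50.1, 49.9, 50.2, 49.8, 50.1, 50.0]  # same loop in manual

    ratio = stdev(pv_manual) / stdev(pv_auto)
    if ratio < 0.8:
        print("Loop is creating variability: check valve, tuning, interaction")
    elif ratio > 1.2:
        print("Loop is attenuating load upsets")
    else:
        print("Fluctuations are mostly noise or poor reproducibility")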

There are also some obvious flaws that will stand out from some simple
tests. If there are significant non-uniform fluctuations in the measurement
regardless of the mode of the controller, then the selection and installation
of the transmitter are suspect. These problems are most often associated
with insufficient runs of straight pipe upstream or sensing line problems.
If the loop develops a sawtooth oscillation in the controller output when it
is moved out of manual, it is most likely due to excessive packing or seat-
ing friction, backlash in the linkage, shaft windup, and/or poor or missing
positioners. The selection and installation considerations that lead to these
problems will be discussed in the Application Detail section of this chap-
ter.

One of the last and most obvious opportunities is the use of cascade con-
trol and ratio control. The most common type of cascade control is a flow
loop that deals with the nonlinearity of the control characteristic and com-
pensates for pressure upsets so that the primary control loop can manipu-
late flow instead of valve position and not see the effect of pressure
swings. The next most common cascade control system uses a jacket or
coil inlet or outlet temperature secondary loop to insulate a primary
crystallizer or reactor temperature control loop from changes in coolant
temperature and the nonlinearity associated with the manipulation of
coolant makeup flow.

The largest and most frequent opportunities in basic control are summa-
rized in Table 2-1 and discussed in detail throughout the rest of Chapter 2.
Simple equations for the fundamental relationship between either the
standard deviation or the peak or integrated error for upsets can be used
for each type of improvement.

Table 2-1. Largest and Most Frequent Opportunities in Basic Control

• Decrease stick-slip and improve the sensitivity of the final element
(Standard deviation is the product of stick-slip, valve sensitivity (valve
gain), and process gain):
    Use properly tuned smart positioners, short shafts with tight connections,
    and low-friction packing and seating surfaces to decrease valve stick-slip
    and dead band.
    If Graphoil™ packing must be used, aggressively tune/monitor the smart
    positioner.
    Improve valve type/sizing or add signal characterization to increase valve
    sensitivity.
    Use variable speed drives where appropriate for the best precision and
    sensitivity.
• Improve the short- and long-term reproducibility and reduce the interference
and noise in the measurement (Standard deviation is proportional to
reproducibility and noise):
    Use vortex, magnetic, and coriolis mass flow meters to eliminate sensing
    lines.
    Use smart transmitters to reduce process and ambient temperature/pressure
    effects.
    Use RTDs and narrow-span temperature transmitters to decrease noise and
    drift.
• Reduce loop dead time (Integrated error is proportional to the dead time
squared):
    Decrease valve dead time (stick and dead band).
    Decrease transport (plug flow volume) and mixing delay (turnover time).
    Decrease measurement time constants (sensor lag, dampening, and filter
    time).
    Decrease discrete device delays (scan or update time).
• Tune the controllers (Integrated error is inversely proportional to the
controller gain and directly proportional to the controller integral time).
• Add cascade control (Standard deviation is proportional to the ratio of the
period of the secondary loop to the process time constant of the primary
loop).
• Add feedforward control (Standard deviation is proportional to the root mean
square (RMS) of the measurement, feedforward gain, and timing errors).
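
The dead time and tuning entries in Table 2-1 can be tied together with the
classic result that the integrated error of a PI loop for a step load upset
is the required change in controller output times the integral time divided
by the controller gain. The sketch below (Python) makes the scaling explicit
under assumed, not universal, tuning rules.

    # Integrated error (IE) of a PI loop for a load upset: IE = du * Ti / Kc.
    # With the assumed tuning Kc ~ tau / (Kp * dead_time) and Ti ~ dead_time,
    # IE grows with the square of the loop dead time.

    def integrated_error(du, kp, tau, dead_time):
        kc = tau / (kp * dead_time)   # assumed tuning rule
        ti = dead_time                # assumed integral time
        return du * ti / kc           # = du * kp * dead_time**2 / tau

    print(integrated_error(10.0, 1.0, 100.0, 10.0))  # 10.0
    print(integrated_error(10.0, 1.0, 100.0, 20.0))  # 40.0: 2x dead time, 4x IE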

Examples

Neutralization Process
Figure 2-4a shows a two-stage neutralization process. The economic vari-
able is yield. The optimum yield is for pH between 6 and 8. A byproduct is
formed that is 1% of the total product when the pH goes above 8. Of
greater concern is the fact that the reaction time increases from 2 minutes
by a factor of ten for each pH unit below 6 pH. The first stage is a static
mixer with a residence time of 2 seconds and the second stage is a well
mixed vessel with a residence time of 20 minutes. The titration curve has a
particularly steep slope between 6 and 8 pH (1 ΔpH per 0.0001 Δratio) and
will greatly amplify a valve stick-slip limit cycle, as shown in Figure 2-4b.

Figure 2-4a. Basic Neutralizer Control System Before Improvements

Figure 2-4b. Nonlinearity and Sensitivity of pH

The following improvements can be made to reduce pH variability and
improve the yield.

1. Change the reagent valves from ball valves without positioners to
sliding stem control valves with digital positioners; otherwise the
stick-slip multiplied by the valve gain will be a limit cycle that
causes a dramatic reduction in yield. Make the second-stage
reagent valve 10 times smaller than the first stage.
2. Close couple the second-stage reagent valve to the recirculation
line just before reentering the vessel to eliminate the huge reagent
delivery-time delay from a small reagent flow dribbling down a
dip tube backfilled with process fluid.
3. Add a secondary reagent flow control loop for the second stage to
create a pH-to-flow cascade control system that compensates for
upsets in reagent pressure and linearizes the manipulated variable
for the pH loop. Do not add a cascade control system to the first
stage because the secondary loop is not 5 times faster than the
primary loop and the equal percentage valve characteristic has a
valve gain that is proportional to flow that helps compensate for
the process gain that is inversely proportional to flow. The large
well mixed volume of the second stage has a time constant
inversely proportional to flow that cancels out the effect of the
process gain, which is also inversely proportional to flow (see
Equation 2-24); so the linearization by the addition of a flow loop is
desirable for the second stage.
4. Add input signal characterization to translate the controlled
variable from pH to % reagent demand to linearize the first stage
since it will operate over a wide portion of the pH titration curve.
Signal characterization is not necessary for the second stage since it
should be operating well within the control band of 7 to 9 pH.
Provide a faceplate for the operator that displays the actual pH
rather than the linearized controlled variable of reagent demand
for the first stage.
5. Add a feedforward signal to the second stage pH controller output
to handle changes in feed flow and startup. Use a feedforward
summer instead of a feedforward multiplier to the second-stage
pH controller so that a nonlinearity, which is proportional to flow,
is not introduced by multiplying the controller output by feed
flow. Set up the summer so that a controller output of 50% is no
correction and feedback corrections of -50% and +50% are
provided by the controller. Display the actual and desired ratio of
reagent flow to feed flow for the second stage and provide a
bumpless transfer to feedforward control.
6. Add a head start to the output of the first-stage pH controller to
help speed up the startup of the system. The head start is
calculated based on estimated ratio of reagent to feed flow
captured from the last run. The calculation must include the effect
of the nonlinear flow characteristic of the reagent valve. Since the
reagent flow is small, the pressure drop is relatively constant and
the inherent characteristic can be used as the installed
characteristic for the control valve. The head start is used to
initialize the controller output at startup. The pH controller
ultimate period should be less than 1 minute for the inline system.
7. Move the reagent flow meters upstream of the control valve for
both stages, to greatly improve the flow profile for the meter and to
reduce reagent delivery delay. If coriolis flow meters are not used,
make sure there are enough straight runs upstream and
downstream of the flow meter. Add an isolation valve close
coupled to the injection point that will automatically close just
before and reopen just after the reagent control valve closes and
opens, respectively, to minimize the backfill of process fluid into
the reagent piping.
8. Move the pH electrodes for the first stage 20 pipe diameters
downstream of the static mixer outlet to provide enough distance
for the streams divided by the mixer to recombine.
9. Move the pH electrodes for the second stage upstream of the heat
exchanger in the recirculation line to within 20 pipe diameters of
the discharge pump to minimize the transportation delay.
10. Use a middle-signal selector for each stage to choose the middle
reading from the three pH measurements, thus ignoring a single
signal failure of any type and reducing noise and spikes.
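
The middle-signal selection in improvement 10 takes only a line of code;
a minimal sketch (Python) follows.

    def middle_signal(ph_a, ph_b, ph_c):
        # Middle of three pH readings: ignores a single failure of any
        # type (high, low, or frozen) and rejects noise and spikes.
        return sorted([ph_a, ph_b, ph_c])[1]

    print(middle_signal(7.1, 7.3, 12.0))   # 7.3 despite one failed electrode
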
The benefits from the reduction in variability afforded by the listed
improvements will be estimated in Chapter 3. The improvements 1-9 are
illustrated in Figure 2-4c.

Figure 2-4c. Basic Neutralizer Control System After Improvements

Distillation Process
Figure 2-5a shows a distillation column, feed tank, and a storage tank for
the distillate product. The series of plots in Figure 2-5b are indicative of the
nonlinear relationship between tray temperature and both the distillate-to-
feed ratio (Fd/Ff) and the impurity in the product. The process gain seen
by the temperature loop is the slope of the plot versus Fd/Ff. The inverse
of the slope of the plot of temperature versus impurity concentration
expresses the sensitivity of the impurity concentration to measurement
error.

Figure 2-5a. Basic Column Control System Before Improvements

Figure 2-5b. Nonlinearity and Sensitivity of Tray Temperatures

Thermocouple cards with a 400°C span are used for the temperature mea-
surements. The distillate control valve has Graphoil™ packing and a
pneumatic positioner. The storage tank residence time is 4 hours and the
time delay in the temperature loop is 1 hour. The reflux-to-feed ratio is 10.
If concentration of impurities in the product in the storage tank exceeds
the spec by more than 0.1%, the product must be recycled. For every 0.1%
reduction in impurity the steam flow to the reboiler must be increased by
0.1%.

1. Add a digital positioner to the distillate control valve and tune it
aggressively to deal with the high friction from the Graphoil™
packing and reduce stick-slip. It is important to realize that settings
obtained from the instruction manual or from running a preset
automated step test are generally too sluggish.
2. Change the pairing of the loops at the top of the column so that the
temperature controller manipulates the distillate flow and the
distillate receiver level controller manipulates the reflux flow, since
the distillate flow is too small for level control.
3. Use tray 6 instead of tray 10 for temperature control because tray 6
has a larger and more symmetrical temperature response, per
Figure 2-5b, for a change in distillate flow. Also, the slope versus
impurity concentration is higher, which means the reproducibility
error and resolution of the measurement show up as a smaller
concentration error as shown in Figure 2-5b. This is generally more
important than the increase in the time delay associated with the
lower tray.
4. Move the location of the sensor down into the tray so it always is
immersed in the liquid rather than in the vapor, or even worse, a
splashing liquid.
5. Replace the thermocouple and its DCS input with a 3- or 4-wire
RTD with a smart temperature transmitter.
6. Tune the overhead distillate receiver level controller with a high
controller gain to insure that the effects of small changes in
distillate flow translate into changes in reflux flow and thus
changes in the column temperature.
7. Tune the feed tank level controller with a low controller gain to
smooth out the changes in feed to the column. Consider the use of
error squared control. For a batch-to-continuous transition in an
undersized feed tank, use an adapted velocity limited feedforward
per Appendix B for optimum smoothing.
8. Add signal characterization to the controlled variable to
compensate for the nonlinearity in the process variable depicted in
Figure 2-5b. Provide a faceplate for the operator that displays the
actual tray temperature rather than the linearized controlled
variable of distillate demand.
9. Add a secondary flow controller to each column loop to
compensate for the nonlinearity associated with the control valve
and to prepare the column loop for feedforward control.
10. Add a flow feedforward signal to the temperature and level
controller outputs and display the actual and desired ratio of
distillate to feed. Make sure the feedforward action is active when
the temperature controller is in manual and the operator can easily
go to flow ratio control and adjust the ratio for the startup of the
column.
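
The flow feedforward in improvement 10 works like the feedforward summer of
the neutralizer example: the ratioed feed flow sets the flow loop set point
and the primary controller output trims it, with 50% output meaning no
correction. A minimal sketch follows (Python, with assumed scaling and
made-up numbers).

    def feedforward_summer(feed_flow, ratio, controller_out, flow_scale):
        # Feedforward: desired flow = ratio * feed flow.
        # Feedback trim: 50% controller output adds no correction;
        # 0% and 100% give -50% and +50% of the flow scale.
        feedforward = ratio * feed_flow
        trim = (controller_out - 50.0) / 100.0 * flow_scale
        return feedforward + trim

    # Distillate flow set point from feed flow (illustrative numbers):
    print(feedforward_summer(feed_flow=100.0, ratio=0.4,
                             controller_out=55.0, flow_scale=50.0))  # 42.5
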
The benefits from the reduction in variability afforded by the listed
improvements will be estimated in Chapter 3. The improvements are illus-
trated in Figure 2-5c.

Figure 2-5c. Basic Column Control System After Improvements

Application

General Procedure
1. Track down and correct the source of sustained oscillations. A
power spectrum analyzer may be required to find the loops with
the common period of oscillation. Beware of a slow scan time of
the I/O and controller that will cause a slower than actual period
and a smaller than actual amplitude from aliasing. For trends or
data obtained from data historians, make sure the data highway
reporting and the time intervals between data points for historical
data are not too slow and the trigger for exception reporting and
compression is not set too high. Also, the data must be saved for at
least a month to catch different process conditions and modes of
operation.

The correction most often involves improving the responses of
control valves and the tuning of controllers. Stick-slip, dead band,
and time delay from the final element can be eliminated by the use
of a variable-speed drive. Wave form recognition, step tests, and an
understanding of the relative effects of valve response, controller
tuning, noise, resonance, interactions, and nonlinearities can pin-
point the root cause.

a. The most likely source of a sustained oscillation is a control
valve with stick-slip from excessive friction in either the
packing or sealing surfaces. This occurs most often in valves
with high-temperature or environmental packing, particularly
if it has been tightened. The next-greatest source of stick-slip is
high-performance valves designed to provide tight shutoff. It
can also occur in almost any control valve so severely
oversized that it rides the seat where the valve alternately
closes and then breaks free. In some cases, an aggressively
tuned digital positioner may be sufficient. It is important to
realize that these oscillations are always present because all
valves have some stick-slip (a finite resolution). It is just a
matter of degree. If the stick-slip is small enough, it is
smoothed out by signal filtering and data compression or
washed out by back-mixed volumes. It is not surprising that
hidden oscillations have been found in most of the loops of a
plant [2.2].
b. The second most likely source of oscillations is a level
controller that has a control valve with either stick-slip or dead
band. For integrating processes such as level, sustained
oscillations occur for valves with excessive dead band as well as
stick-slip, whereas stick-slip alone is the source of limit cycles in
processes with a self-regulating response. Dead band, also known as backlash,
can result from linkages and shaft connections that are not tight
and is thus more likely to occur in rotary valves because they
involve either a translation of vertical actuator motion to rotary
motion or the connection of a rotary actuator shaft to the shaft
of the ball or disc. The Application Detail section explains how
dead band increases the susceptibility of the level loop to a
reset cycle and how the integral time must be increased [2.3].
The best fix, outlined in the Application Detail section, can be
relatively expensive in that it requires a new control valve
designed to minimize backlash and friction or a variable-speed
drive. An alternative that can mitigate but not eliminate the
limit cycle is a level-to-flow cascade loop.
c. Controllers can create periodic upsets from noise if the reaction
to the noise causes the controller output to exceed the dead
band of the control valve from the gain or rate setting being too
large. This most often happens in level controllers, where
controller gains can be quite large.
d. Controllers can amplify periodic upsets whose period is close
to the natural period of the control loop. Resonance occurs
from the feedback action of the controller being in phase with
the disturbance oscillation. It most often occurs for control
loops in series that have similar loop time delays such as liquid
pressure and flow, and inline equipment in series (heat
exchangers, static mixers, and desuperheaters).
e. Interacting controllers can cause sustained oscillations. Here,
the output of a controller affects another controller and vice
versa. A steady state relative gain analysis (RGA) can reveal
the nature of the interaction. However, the dynamics must be
considered as well since the interaction is particularly severe if
the periods of oscillation of the loops are similar. The best
solution is a change in pairing of the control loops per the
RGA. If this is not feasible, model predictive control (MPC)
should be used.

However, beware of stiff or singular processes that may not even be
handled by MPC. A singular matrix occurs when the process gains for one
controlled variable are a multiple of the process gains for another
controlled variable, making it impossible to achieve independent set points
for these controlled variables (see Chapter 9 for more details). Such
processes are becoming more common as plants push their capacity limits
without making the necessary modifications to the process equipment.
f. A measurement resolution that approaches the size of the
control band can cause sustained oscillations. With the
disappearance of mechanical sensors, this primarily occurs for
temperature control loops that use thermocouple input cards
instead of narrow-range smart transmitters.
g. Although less common, sustained oscillations can also occur
from controllers tuned so aggressively that they bang back and
forth between output limits, and from nonlinear loops that
have a very high central gain region surrounded by
exceptionally low gain regions. This can occur for control
valves when an insufficient fraction of the system drop has
been allocated as the valve drop, for strong acid and base titration
curves, and for the temperature response of some monomer and
water distillations. For process nonlinearities, the addition of
signal characterization of the controlled variable and rate
action can eliminate or mitigate the limit cycle. For valve drop
problems, the size of the piping and/or the pump impeller
may need to be increased.
2. Track down the source of long settling times. Here, the oscillations
eventually die out but take too long or cause too much variability.
The most common cause is inappropriate controller tuning, such as
the use of too much reset action (too small an integral time) in
evaporator, reactor, or column temperature or concentration
controllers, or other loops dominated by a large time constant; too
much gain or rate action in level controllers on surge tanks; and too
much gain or rate action in liquid pressure, flow, inline
concentration (blending), or sheet gauge or moisture controllers or
in loops dominated by a large time delay (dead time dominant).
3. Check the sensor selection, installation, and location for
opportunities, per best practices, to improve the reliability,
reproducibility, rangeability, and resolution, and to reduce noise
and decrease loop time delay. Orifice meters and chromatographs
are some of the least reliable measurements and are the biggest
sources of excessively fast and slow noise. Chromatographs are
also the largest source of measurement time delay from sample
transportation and analysis cycle time. Look for ways to eliminate
sensing lines and sample lines by the use of sensors that mount
directly in the pipeline or on the vessel [2.7].
4. Look for ways to reduce the time delay in control loops by changes
in the design of the equipment, piping, instrumentation, final
elements, and the pairing of controlled variables with manipulated
variables.
5. Tune the controllers for the best compromise between robustness
(stability), performance, and smoothness. It is important to realize
that the tuning rules change with the ratio of time delay to time
constant and that all loops will see both load upsets and set point
changes. Methods that focus on set point changes (servo control)
and noise introduced into the measurement are applicable to
aerospace, web, and parts manufacturing but not to processes for
the chemical, petroleum, food, and drug industries and
environmental control. Make sure the tuning method takes into
account the relative degree of dead time and provides the proper
capability for load rejection. Beware of any control loop analysis
that concentrates solely on set point response, and the introduction
of noise or upsets downstream of the process and directly into the
measurement [2.8]. These methods were developed from control
programs in system science or electrical engineering and tend to
ignore the effect of the process and equipment dynamics,
characteristics, and objectives.
6. Find opportunities to employ cascade control. Wherever there is a
reliable flow measurement and a primary loop whose time delay
and time constant are more than five times slower than a flow
loop, a secondary flow controller should be created. There are
some cases where the dynamics are not appropriate for cascade
control. Examples of undesirable choices would be inline pH-to-
reagent flow and liquid pressure-to-flow cascade control because
the primary and secondary loops have about the same time delay.
7. Look for opportunities to add feedforward control, especially flow
feedforward where manipulated flows are ratioed to a feed flow.
Make sure the feedforward signal does not arrive too soon and
cause inverse response.
8. For improvements that cannot be covered by the maintenance
budget, the benefits from the reduction in variability can be
estimated by the calculations in Chapter 3 to justify the project.

Application Detail
This section will take a closer look at the methods to improve the response
of valves and measurements, reduce the total loop time delay, tune con-
trollers, employ cascade control, and add feedforward control.

Valve Selection
Control valves are often selected based on the lowest cost valve that has
the required materials of construction. Often tight shutoff is sought.
Nowhere in the valve specification is there a requirement that the control
valve move or respond to a change in signal. Consequently, rotary valves
are chosen because they are the least expensive and offer models with low
leakage rates. They are also thought to offer the highest rangeability. In
reality, the rotary valve has the least usable rangeability because the
installed characteristic gets too flat for small and large controller outputs.
Figure 2-3a shows how the characteristic is too flat below 5 degrees and
above 45 degrees for a butterfly valve. Figure 2-3b shows how the charac-
teristic is too flat below 10 degrees and above 60 degrees for a ball valve. If
you further take into account that the stick-slip significantly increases
when these valves are less than 15 degrees open, the actual usable range-
ability of these valves is less than 20:1, instead of the 200:1 and 400:1 stated
in the literature.
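
To see why the installed characteristic flattens, consider a valve that
shares the total system pressure drop with the piping. The following sketch
(all numbers hypothetical and for illustration only, not taken from the text)
computes the normalized installed flow versus travel for an assumed
equal-percentage trim:

```python
import math

# Installed characteristic of an equal-percentage valve sharing the system
# pressure drop with piping (all numbers are hypothetical).
R = 50.0         # assumed inherent rangeability of the trim
DP_RATIO = 0.25  # assumed fraction of system drop taken by the valve at full flow

def inherent_char(x):
    """Equal-percentage inherent characteristic; x is fractional travel (0 to 1)."""
    return R ** (x - 1.0)

def installed_flow(x):
    """Normalized installed flow when the remaining drop varies with flow squared."""
    c = inherent_char(x)
    return c / math.sqrt(DP_RATIO + (1.0 - DP_RATIO) * c ** 2)

for x in (0.05, 0.25, 0.50, 0.75, 0.95):
    print(f"travel {x:4.0%}  flow {installed_flow(x):4.0%}")
```

The installed gain (the slope of this curve) collapses at both ends of
travel, which is where the usable rangeability is lost.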

By contrast, the sliding stem valve installed characteristic doesn't get too
flat until it gets below 5% open or above 75% open as illustrated in Figure
2-3c. Also, its stick-slip is an order of magnitude or more less and usually
doesn't increase dramatically until the valve is less than 5% open. Also,
unlike the rotary valve, the trim movement of a sliding stem valve closely
matches the actuator shaft movement so that a digital positioner, whose
feedback is typically actuator-stem position, can, by aggressive tuning,
actually compensate for high packing friction. As a result, the real range-
ability of sliding stem valves with digital positioners is 40:1.

Of course, valve manufacturers who offer only rotary control valves will
develop clever ways of diverting attention from these issues or even pitch
the opposite by the use of labels like "high performance" and "high range-
ability" that ignore flat installed characteristics and stick-slip. The user
must realize that "high performance" indicates the ability of the valve to
provide tight shutoff and to withstand high temperatures. These same fea-
tures translate into excessive friction and low performance in terms of con-
trol. The use of a digital positioner cannot correct for the inherent stick slip
problems of rotary valves and can essentially deceive the user into think-
ing it is doing a great job by fancy plots and statistics of the step response
of the actuator stem position. Unfortunately, the ball or disc position does
not track the actuator shaft position, because of gaps in linkages, toler-
ances of shaft connections, and shaft windup. A sensitive, low-noise flow
measurement is needed to test the response online [2.9]. Otherwise, the
valve has to be removed and a sensitive travel gauge mounted on the ball
or disc to detect actual movement for small steps or a slow ramp in the
valve signal.

New designs of sliding stem (globe) valves, such as that shown in Figure
2-6, reduce the amount of metal used in the body and pockets and crevices
where process material can stagnate and accumulate. This makes the valve
more competitive with the rotary valve for exotic materials, large line
sizes, and fouling or slurry service. Above 6 inches in line size, the cost of
sliding stem (globe) valves can become large enough to warrant further
investigation. If the reduction in stick-slip and loop variability offered by a
sliding stem valve doesn't provide an acceptable rate of return on the
additional investment, the user should take a closer look at rotary valves,
but avoid any valves originally designed for isolation or interlocks. Sepa-
rate automated "high performance" or "on-off" valves should be used for
isolation or interlocks, and low friction valves for throttling service. Since
many of the rotary valves are flangeless (wafer bodies), the lifecycle cost
should include not only the cost of increased variability, but the increased
difficulty of proper installation and alignment and the increased risk of a
safety incident and reportable release of hazardous materials.

Figure 2-6. Sliding Stem Valve with Streamlined Passages and Less Metal

Rotary valves must also pass the checks on the maximum pressure drop.
The rotary valve must meet the maximum pressure drop rating at shutoff
and the maximum allowable pressure drop to avoid choked flow, flashing,
cavitation, and exceeding the noise limit. In general, sliding stem valves
offer higher pressure drop ratings, higher allowable pressure drops to pre-
vent cavitation, and noise reduction trim, and are thus the first choice for
high pressure, boiler-feed water, steam and condensate systems.

If a rotary valve is still the best choice, make sure the connection of the
actuator shaft to the ball or disc stem is a splined connection, as shown in
Figure 2-7, to minimize the tolerance and associated play in the connection
so that the backlash is less than 0.5%. Key lock connections can cause a
backlash of 8%. Also, the shaft diameter should be large and the shaft
length should be short so that shaft windup does not cause a stick-slip
greater than 0.5% [2.10].

Figure 2-7. Splined Shaft Connection

To summarize the performance differences, sliding stem valves with low
friction packing and digital positioners have a stick-slip that approaches
the analog-to-digital converter resolution of 0.05% and a rangeability that
approaches 50:1. Rotary valves exhibit a stick-slip and backlash from 0.5%
to 10% and a rangeability of 10:1 to 20:1, even with a digital positioner and
low friction packing, with the greatest deterioration found in designs that
employ keyed connections, long slender shafts, and high-friction sealing
surfaces for tight shutoff. To help avoid the many traps of creative
advertising, the user should keep in mind the popular myths listed in
Table 2-2.

Table 2-2. Popular Myths About Control Valves


1. Rotary valves have the greatest rangeability.
2. High Performance valves offer high control-loop performance.
3. A digital positioner will make any control valve perform well.
4. A fast actuator will make any valve fast.
5. A piston is the fastest and most reliable actuator.
6. Tight shutoff is desirable for a control valve.
7. The most cost-effective valve is a rotary valve.
8. Fast loops should use boosters instead of positioners.

In the old days of analog controllers, theoretical studies based on Nyquist
plots showed that, for fast loops such as flow and liquid pressure loops, a
booster would provide better control than a positioner. In real life,
unknown bench settings, excessive booster dead band, positive feedback
from high booster outlet port sensitivity, and the high breakaway torque of
tight shutoff valves make the omission of a positioner risky business
[2.10]. Compressor surge valves have slammed shut from this positive
feedback when a booster was used without a positioner [2.1]. In fact,
where large actuators need the high relay capacity of a booster, it should
be used on the outlet of a positioner and a bypass around the booster
adjusted for stability [2.1] [2.11]. In the chemical industry, positioners have
either improved, or at worst had a benign effect on, the performance of all
types of control loops [2.12]. The scan time and optional filter of a digital
process controller, and the tuning flexibility of a modern positioner, elimi-
nate the concern about the violation of the cascade rule where the second-
ary valve position loop is not sufficiently faster than the master process
loop. Actually, there never was much of a concern because flow loops
were tuned with mostly integral, rather than gain, action.

Pneumatic and electro-pneumatic positioners went out of calibration
within 6 months, the linkages were difficult to adjust, and remote indica-
tion of position required the mounting of a separate valve position trans-
mitter. Digital positioners hold their calibration, offer an indication of
actual valve position (AVP), and perform a wide spectrum of diagnostics
and tests that can be used for predictive rather than reactive maintenance.
The AVP can be monitored and compared to the controller output to deter-
mine dead band, stick-slip, and dead time for sliding stem valves. For
rotary valves, a low-noise-flow measurement is needed since the actuator
stem position does not reflect the actual ball or disc position.

A piston actuator can reduce the stroking time of large valves once a valve
starts to move. However, the design of most pistons exhibits poor resolu-
tion and dead band that will cause an exceptionally slow response to small
changes in valve position that can get worse if the cylinders are not prop-
erly lubricated. For small changes in valve position, a diaphragm actuator
is generally faster and more precise. Also, a diaphragm actuator does not
require lubrication or as much maintenance unless its temperature rating
is exceeded.

For concentrated slurries, particularly those with clumps of material or
rocks, such as waste treatment lime slurry, rotary valves may be the only
choice. In these cases, pulse interval modulation has been used to provide
an extended rangeability when the downstream back-mixed volume was
sufficient to attenuate the pulses [2.1].

Finally, pharmaceutical and food applications may require sanitary control
valves. Angle plug, ball, and diaphragm valves are used. The sliding O-
ring stem seal of an angle valve creates a difficult-to-clean void between
bonnet and stem, and does not insure product media will not leak past the
O-ring and into the environment. The rotary ball valve also has a void
between seals where product can accumulate, and has excessive operating
friction. By far the most popular choice is the Saunders diaphragm valve,
but both it and the pinch valve have a very limited rangeability of flow
and a quick-opening type of flow characteristic that is undesirable for con-
trol. A crevice-free rolling metal diaphragm control valve like that shown
in Figure 2-8 has an almost friction-free mechanism and sliding stem actu-
ation that affords exceptional rangeability and sensitivity [2.13].

Figure 2-8. Crevice-free Sanitary Valve with High Rangeability and Sensitivity

In all valves, there is a valve prestroke time delay: the time it takes for
enough air to move into or out of the actuator to change the air pressure
enough to start to move the actuator stem. The stroking time is the time
required to complete its transition to the new stem position after the actua-
tor starts to move. The tests to document the prestroke time delay and the
stroking time typically consisted of 10% or larger steps, done with the
valve disconnected. The results depended solely on the type and size of
actuator and the type and flow capacity of the actuator connections and
accessories; they did not include the effect of valve dead band or stick slip.

Until recently, valve manufacturers reported the actuator prestroke dead
time and stroking time when you asked how fast a valve would respond.
These days, a valve manufacturer will perform actual step tests of the
complete valve assembly but will choose a step size of 10% and an initial
valve position of about 50%, in order to show the fastest response.

To get a realistic picture of valve response under actual loop conditions,
the smallest and largest step size at the minimum expected throttle posi-
tion should be used. For large valves used for compressor surge or pres-
sure control, step sizes of 10 to 50% are appropriate and the prestroke time
delay and stroking time dictate the speed of response. For most other con-
trol valves, step sizes of 0.5% or less should be used and the valve dead
band and stick-slip will dominate the response.

A ramp (a series of small steps held for the loop scan time) would better
duplicate the actual valve response for closed-loop control. The use of a
ramp is particularly important for pneumatic positioners and boosters
because they exhibit a drastic increase in response time as you approach
the resolution limit of the linkages and flapper nozzle assembly. To quan-
tify dead band, stick, and slip, a series of steps is used, first for a reversal
of position and then in the same direction. Each step is held for the
prestroke dead time and stroking time identified from a 10% step. The
steps are continued until movement occurs. The actual valve movement in
excess of the size of the last step is indicative of the amount of slip.
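
As a rough illustration of the test sequence just described, the sketch
below (with a hypothetical step size and hold time, not the authors' test
script) generates the series of held steps, one reversal first and then
same-direction steps:

```python
# Step sequence for quantifying dead band and stick-slip (hypothetical numbers).
STEP = 0.2      # %, a small step near the expected resolution limit
HOLD = 15.0     # s, prestroke dead time plus stroking time from a 10% step
N_STEPS = 10
START = 50.0    # %, initial controller output

signal = START
print(f"t =   0.0 s  signal = {signal:.2f}%")
# One reversal first (to exercise the dead band), then steps in the same
# direction (to exercise stiction), each held for the full HOLD time.
for i, direction in enumerate([-1] + [+1] * (N_STEPS - 1), start=1):
    signal += direction * STEP
    print(f"t = {i * HOLD:5.1f} s  signal = {signal:.2f}%")
# The recorded travel (or a low-noise flow) is then compared to this signal;
# travel in excess of the last step size indicates the amount of slip.
```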

Figure 2-9 shows how the response time changes as a function of the type
of control valve, shaft connections, actuator, and positioner. Diaphragm
actuators, sliding stem valves, and digital positioners have the fastest
response by far to a range of small step sizes, which is the goal of 99% of
all control valves. The combination of an electrical or hydraulic actuator
and a sliding stem valve can have an even better resolution. The only dead
band is what is introduced in the setup of the positioner to eliminate
dither. The prestroke dead time is essentially zero but the stroking time
increases proportionally to the step size and can become quite large for the
electrical actuator. Hydraulic actuators provide the fastest response for
small and large step sizes but are complex and expensive.

Valve Installation
The installation and location requirements for a control valve are generally
less demanding than those for a sensor. Ideally, control valves should have
the same straight
run of pipe upstream and downstream as a differential head flow meter,
since both constitute a variable orifice [2.9]. Adherence to the straight-run
requirements rarely occurs in industry but is common in the flow test labs
used to establish flow characteristics and flow coefficients [2.9]. The repro-
ducibility error from an erratic flow profile is not as important for a final
element as it is for a measurement because the control loop will drive the
manipulated variable as necessary to reach set point. However, if the flow
is going to be computed through the control valve based on valve position
and pressure drop, the reproducibility of the resulting flow measurement,
and hence the straight-run requirements, become more important.

Figure 2-9. Response Times for Different Types of Final Elements (Valves,
Actuators and Positioners)

If the control valve is upstream of equipment with large pressure drops,
the minimum pressure in the valve is more likely to stay above the vapor
pressure that starts flashing. A location that offers a lower temperature,
besides a higher pressure, can prevent flashing. Cavitation can be pre-
vented by a staged pressure drop or a low-recovery control valve, such as
a sliding stem valve, so that the bubbles do not implode from the rise of
the discharge pressure above the vapor pressure. The staged pressure
drops can be done internally by special trim, two control valves in series,
or a restriction orifice [2.9]. If cavitation is unavoidable, the damage to pip-
ing can be avoided if the valve is mounted on a vessel nozzle so that the
bubbles implode inside the vessel vapor space [2.9].

A huge source of time delay occurs in concentration control loops when-
ever the volume between a control valve and the injection point into
equipment or piping is either partially filled or large compared to the flow
rate.

For partially filled lines, there is an excessive time delay even when the
valve stays open. A change in valve position causes a crest or valley in the
wave to travel down the pipe or channel. The transportation delay is the
distance divided by the velocity of the wave but the velocity is difficult to
estimate. For a very low flow down a vertical pipe, the velocity of a falling
film can be computed [2.14]. Whenever the control valve closes, manipu-
lated fluid in the downstream piping and injectors or dip tubes slowly
migrates into the equipment or destination and the process fluid backfills
the same volume. The result is a long delay between the closure of the
valve and the end of manipulated fluid flow and a similarly long delay
between the opening of the control valve and the start of the manipulated
fluid flow into the equipment or destination.

For a pressurized, completely full pipeline, dip tube, and injector, the time
delay can be estimated as the volume between the valve and the injection
point divided by flow of the manipulated fluid. For pH control systems
where the manipulated fluid is a reagent, the flow can be as low as one
gallon per hour and just one gallon of volume downstream of the valve
can result in one hour of time delay every time the reagent control valve
closes [2.14].
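
The arithmetic is simple enough to sanity-check during design. A minimal
sketch, using the one-gallon, one-gallon-per-hour example above:

```python
# Transportation delay of a full, pressurized reagent line:
# delay = trapped volume downstream of the valve / manipulated flow.
def transport_delay_hr(volume_gal, flow_gph):
    return volume_gal / flow_gph

print(transport_delay_hr(1.0, 1.0))   # 1 gal at 1 gph -> 1 hour, as cited above
print(transport_delay_hr(0.2, 1.0))   # shrinking the trapped volume to 0.2 gal
```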

The control valve should have block and drain valves so that it can be
removed safely and easily. For large continuous processes, it is desirable to
have a manual bypass valve to keep the unit online while the valve is
tested or repaired. Also, the outlet isolation valve can be closed and the
bypass valve, shown in Figure 2-10, opened for inline testing and tuning
of the digital positioner at process pressures and temperatures. For slurry
service, rotary valves can be mounted in a vertical pipe with flow up to pro-
mote self-draining and prevent solids buildup. For sliding stem valves, a
vertical mounting will cause additional wear of the packing from the
weight of the actuator.

Figure 2-10. Installation Requirements for a Flow Meter and Control Valve

The packing should be consistently tightened with a torque wrench to a
specification of less than 90 foot-lbs. A packing that is tightened exces-
sively, to supposedly prevent leakage, will cause a limit cycle and prema-
ture failure of the packing that can cause a release. For high temperatures
and highly hazardous materials, extension bonnets and bellows seals
should be considered instead of high-friction packing.

The Variable-speed Drive Alternative
The cost of the variable-speed drive (VSD), also known as the variable-fre-
quency drive (VFD), has dropped to the point where it may now be an
attractive final element wherever there is a motor-driven pump or fan.
There is no stick-slip and the only dead band is that introduced by the user
in the electronics. In fact, there is also no measurable time delay, so that for
pressure and flow loops, the loop dead time is largely set by the controller
scan time [2.1]. For furnace pressure control, the use of a VSD on the
induced draft fan and a scan rate fast enough to be equivalent to an analog
controller can lead to much tighter pressure control and fewer pressure
trips [2.15].

There are common misconceptions, inadvertently promoted by a manu-
facturer, that the main advantage of a VSD is energy savings and that a
VSD is slow and lacks rangeability.

The time from minimum to maximum speed is adjustable within the limits
imposed by the impeller inertia and the motor horsepower. The factory
setting is conservative. The speed response is a velocity-limited ramp rate
with no time delay or lag. Consequently, the response for small speed
changes is fast. For example, if it takes a drive 9 seconds to go from 10% to
100% speed, it will take only 0.1 second to change the speed by 1%.
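
This velocity-limited behavior is easy to quantify. A tiny sketch
reproducing the arithmetic above (assuming the 9-second, 90% speed change
cited as the example):

```python
# Velocity-limited ramp: the time for a speed change scales with its size.
def speed_change_time_s(step_pct, full_range_s=9.0, full_range_pct=90.0):
    return full_range_s * (step_pct / full_range_pct)

print(speed_change_time_s(90.0))  # 9.0 s from 10% to 100% speed
print(speed_change_time_s(1.0))   # 0.1 s for a 1% speed change
```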

The minimum speed is also adjustable. The factory setting is especially
conservative. Cogging is not a problem for today's inverters, and over-
heating is not a problem for variable-torque loads typical for a pump or a
fan. Tests have shown that the rangeability of a VSD is greater than the
rangeability of a magnetic flow meter (25:1), instead of the 5:1 suggested
by the manufacturer. However, the minimum speed must provide a total
discharge pressure that is greater than the static pressure to prevent
reverse flow.

To summarize, the VSD provides a consistent and rapid response. In fact,
the response can be so fast that it scares operators because the flow can
reach a set point in less than a second. The controller output and set point
limits must be set to prevent excessively high or low flow rates and the
process variable filter must be set higher to prevent reaction to noise since
there is no dead band. Set point velocity limits may also be required. An
isolation valve may need to be added for shutoff and fail-safe action. The
inverter must be mounted in a climate-controlled room and be maintained
by technicians with special skills for troubleshooting complex electronics.
In addition, it may be necessary to increase the motor enclosure size to
meet the area classification because of the reduced cooling capacity of the
motor's fans at low speeds. Most motor vendors will only rate the motor
for the area classification if they supply the motor and VSD as a package.

Finally, isolation transformers and best practices must be used to prevent
the introduction of noise from the inverter into the electrical ground sys-
tem. Consequently, the installed cost and the life-cycle cost of a VSD may
be more than a control valve if isolation valves must be added, the plant is
not set up for VSD maintenance, and the benefits from faster or more pre-
cise manipulation of flow are relatively small.

For strong acid and base pH systems, the requirement for precise adjust-
ment would best be met by a VSD, but the flow rates are often too small
for a centrifugal pump and the location of the pump on the ground creates
a huge reagent delivery transportation delay. Instead, an electronically set
metering pump is used with the piping designed to stay full.

Measurement Selection
The number of measurement devices is too large for a comprehensive
treatment. This chapter offers an important perspective, highlighting the
most desirable alternatives and the relative performance advantages and
limitations of the most common types for the most general types of appli-
cations. There are exceptions to every rule. The user should consult hand-
books, manuals, and the manufacturer before proceeding with an
implementation.

The objective is to select a measurement with the best reliability, reproduc-
ibility, rangeability and resolution, and the smallest amount of noise by
means of digital devices and the elimination of process lines, process and
ambient effects, and mechanical components. Smart transmitters should
always be used because they provide the best accuracy and rangeability,
compensate for extraneous effects, and offer online diagnostics. However,
the device with the inherently best principle of operation should be
favored over the one with the fanciest diagnostics that will constantly
remind you of a bad choice.

For flow measurements, inline meters such as coriolis mass flow meters,
magnetic flow meters, vortex shedding meters, and thermal mass flow
meters should be considered because these meters eliminate sensing lines,
external connections, and small holes that are the largest source of errors,
failures, leaks, and maintenance. To reduce the effect of flow profiles and
changes in process fluid, the preferred order of selection is the order listed.

Coriolis mass flow meters require no straight runs, are not affected by Rey-
nolds Number or fluid properties, and have by far the best reliability,
reproducibility, rangeability, and resolution for the measurement of both
mass flow and density. The coriolis meter is the only true mass flow meter.
Magnetic and vortex flow meters are velocity-based volumetric devices and can
be used to compute mass flow only for a fixed and known composition by
the measurement of temperature and pressure. This is also true for pitot
tubes with differential pressure, pressure, and temperature transmitters
that are advertised as mass flow meters. The thermal mass flow meter is
not a volumetric meter but its reading will depend upon the heat capacity
and thus the composition of the fluid.

Coriolis flow meters above 2 inches get expensive, but still may be justi-
fied where the ability to measure and control a mass balance is important.
Often overlooked are the benefits of an accurate density measurement and
an approximate temperature measurement (the temperature sensor is on
the outside surface of the tube and is not in contact with the fluid) to create
online estimators of fluid composition. Unfortunately, the fluid may be
too corrosive for the materials of construction offered, the fluid tempera-
ture or concentration of particles may be too high, or the pipe size too
large.

Coriolis flow meters are so accurate that they can be used to replace load
cells or weigh tanks for batch charges. Also, for flat titration curves and
constant composition feeds and reagents, simple mass ratio control with
coriolis flow meters has proven to be more accurate and more reliable than
a pH control. The hardware cost of a coriolis flow meter is high compared
to a differential head meter, but the installation cost may be less since there
are no straight-run requirements or additional measurements to compen-
sate for pressure and temperature. The project cost may be still higher, but
the life-cycle cost is often significantly better for the coriolis meter, as
shown in Figure 2-11, because the coriolis meter requires less maintenance
and accumulates benefits from tighter control.

Figure 2-11. Life-cycle Cost Comparison of a Coriolis and Orifice Meter


Next to the coriolis mass flow meter, the magnetic flow meter has the fewest
interferences since it is not affected by either Reynolds Number or viscos-
ity and is relatively insensitive to flow profile. The main limitation is that
the fluid conductivity must be greater than 1 micromho/cm (0.1 for spe-
cial units). For erosive service, ceramic linings are offered.

The flow profile is a big factor for the vortex meter, particularly near the
low end of the meter's range. If the velocity drops below 1 fps, or Rey-
nolds Number is less than 20,000, or the viscosity is greater than 30 centi-
poises, or the concentration of particles is above 2%, the vortices are not
shed uniformly and the vortex frequency deviates from a proportional
relationship to flow. At low velocities the reading can become very erratic.

An orifice meter may have a good short-term repeatability specification
under laboratory test conditions. However, the effects of Reynolds Num-
ber, flow profile, orifice wear, and sensing lines significantly deteriorate
the long-term reproducibility. Measurement noise and the square-root
relationship also lead to a rangeability of just 4:1. Smart transmitters can
double this if the straight run of pipe upstream is sufficient. If a large line
size requires the use of a differential head meter, a large-bore direct
mounted smart transmitter should be used as shown in Figure 2-12a. For
less sensitivity to Reynolds Number, flow profile, and process conditions,
use a purged averaging pitot tube or a large-slot Annubar with a direct
integral-mounted smart transmitter with pressure and temperature com-
pensation. The purge is used to prevent the plugging of the small sensing
holes of a pitot tube.

Table 2-3 summarizes the relative performance capability of each of the
most common types of flow measurement. The rangeability and accuracy
of the measurement can be improved generally by a factor of two by the
use of a smart transmitter. The orifice, vortex, and magnetic flow meters
are volumetric meters, so flow ranges in pounds per hour are based on an
assumed velocity profile and density or concentration if temperature-cor-
rected. The numbers in Table 2-3 are largely from Reference 2.16, except
for the reproducibility percentages. They are based on the author's experi-
ence and reflect the effect of wear and the change in meter coefficient with
operating conditions such as Reynolds Number. The manufacturer should
be consulted for an estimated reproducibility for the actual range of oper-
ating conditions for the application. The drift from wear will be greatest
for slurries or corrosive service and high velocities.

Table 2-3. Comparison of Common Flow Measurements [2.16]

Type       Sizes   Range  Piping A/B**  Interferences                   Reproducibility
Coriolis   ¼-6"    80:1   1/1           solids, alignment, vibration    0.1% of rate
Magmeter   ¼-78"   25:1   5/1           conductivity, electrical noise  0.5% of rate
Vortex     ½-12"   9:1*   10/5          profile, viscosity, hydraulics  1.0% of span
Orifice    ¼-78"   4:1    10/5          profile, Reynolds Number        5.0% of span

* Assumes a minimum and maximum velocity of 1 and 9 fps, respectively.
** A is the number of upstream pipe diameters and B is the number of downstream
pipe diameters of straight run required (see Figure 2-10).

The hardware and setup costs of the radar level measurement have decreased
and many of the calibration complexities have been automated to the
point where the lifecycle cost is often less than the differential pressure
(d/p) method of level measurement. It is the most accurate of the common
types of level measurement. The main limitation of radar is that the fluid
must have a dielectric constant greater than 2. For tall narrow vessels, the
minimum 8-degree angle of divergence of the beam may result in the
gauge not being able to discern the bottom of the vessel. The gauge must be pro-
grammed to ignore any obstruction, including dip tubes and agitator
blades, and typically requires an empty tank at some point during the cal-
ibration procedure. Pulsed lasers that are not adversely affected by dust or
vapor can potentially become an even more accurate method of level mea-
surement [2.17]. The angle of divergence is less than a degree and there is
no dielectric requirement. However, a relatively clean sight glass window
is required.
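
The divergence problem can be checked with simple geometry. A short
sketch (assuming the 8-degree full angle of divergence given above)
estimates the beam footprint at a given distance from the gauge; in a tall,
narrow vessel the footprint soon exceeds the vessel diameter:

```python
import math

# Beam footprint of a radar gauge with an 8-degree full angle of divergence.
def beam_diameter_ft(distance_ft, divergence_deg=8.0):
    return 2.0 * distance_ft * math.tan(math.radians(divergence_deg / 2.0))

for d in (10.0, 20.0, 40.0):
    print(f"{d:4.0f} ft below the gauge: beam ~ {beam_diameter_ft(d):.1f} ft across")
```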

The differential pressure level measurement depends upon density of the fluid
and the condition of the sensing and equalization lines. A second d/p
with both connections below the minimum level can be added to provide
a representative measurement of the density if the vessel is well mixed,
although the accuracy is usually good to only two significant digits. The
sensing and equalization lines can be eliminated by the use of capillary
systems or transmitters mounted directly on vessel flanges for both the
bottom total pressure and top equalization pressure. However, the error
introduced by capillary systems can be significant if there are bubbles in
the fill, or differences in the temperature or length of the capillary. The
communication of signals for the computation of level from dual transmit-
ters is best done digitally to eliminate digital-to-analog (D/A) and analog-
to-digital (A/D) converter errors. Even so, the error increases as the vessel
pressure increases and can become unacceptable because the bottom d/p
measures both liquid head and vessel pressure relative to atmospheric
pressure.
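
A minimal sketch of the dual-transmitter computation described above,
assuming hypothetical readings in inches of water and a second transmitter
pair for density; the function names and numbers are illustrative only:

```python
# Level and density from direct-mounted transmitters (hypothetical readings,
# all pressures in inches of water column).
def density_sg(p1_inwc, p2_inwc, dz_in):
    """Specific gravity from two taps, both below minimum level, dz_in apart."""
    return (p1_inwc - p2_inwc) / dz_in

def level_pct(p_bottom_inwc, p_top_inwc, sg, span_in):
    """Level in % of span from bottom total and top equalization pressures."""
    head_in = (p_bottom_inwc - p_top_inwc) / sg   # inches of process fluid
    return 100.0 * head_in / span_in

sg = density_sg(180.0, 156.0, 24.0)               # -> 1.0 sg
print(level_pct(400.0, 40.0, sg, span_in=480.0))  # -> 75.0 % level
```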

The admittance-probe type of level measurement depends upon the dielectric
constant of the fluid. A second probe can be added to provide a represen-
tative measurement of the dielectric constant if the vessel is well mixed.
Since the dielectric constant is rarely known accurately, the vessel level
must be varied a known amount during the calibration. For hazardous
materials, there is usually no way to visually check the level and another
method of level measurement is needed.

The ultrasonic level measurement device beam is scattered by foam or dust
and is affected by anything that affects the speed of sound. Thus, readings
are affected by changes in the pressure, temperature, and composition of
the vapor space.

Nuclear level measurements are completely isolated from the process, but are
affected by density unless a second device is added. Strip sources are rec-
ommended to eliminate the need for compensation of the difference in
radiation path length from a point source to the strip detector. The license
procedure is considered a hassle and anything nuclear tends to scare peo-
ple even though the exposure is less than what they receive from the sun.

For temperature measurement, a 3- or 4-wire resistance temperature detector
(RTD) is preferred over a thermocouple because it has much less drift and
a much better repeatability and resolution, per Table 2-4. For very high
temperatures, however, a thermocouple may be the only option if an opti-
cal pyrometer is not feasible. A smart transmitter should be used for con-
trol loops instead of thermocouple or RTD input cards in the DCS. The
speed of response of an RTD is slower than that of a thermocouple only for
a bare-element installation of both, because otherwise the thermowell and
its air gap are the largest source of measurement lag. Bare-element installations are rare
and even then the difference is only a few seconds, which is negligible
compared to the other thermal lags in most processes.

Table 2-4. Comparison of Common Temperature Measurements [2.19]

Type          Range (°F)     Drift (°F/yr)  Repeatability (°F)  Resolution (°F)  Power (watts)
RTD           -200 to 1600   0.02 to 0.2    0.05 to 1           0.002            4x10^-2
Thermocouple  -300 to 3100   2 to 40        2 to 15             0.1              2x10^-7

For analytical measurements, inline meters and probes that do not require
sample systems will pay off by the elimination of sample transportation
delays and the reduction in the life-cycle cost of a sample system. If a den-
sity measurement is sufficient, the coriolis meter offers the fastest and
most accurate and reliable response. For simple water mixtures, inline
microwave meters should be investigated (microwave can handle simple 2- or
3-component water mixtures), while nuclear magnetic resonance (NMR) is
suited to complex general mixtures, both aqueous and non-aqueous. For coal, oil and
mineral slurries, the prompt gamma neutron activation analyzer can
deliver rapid, sample-free, elemental analysis on large top-size, bulk mate-
rials moving at meters per second [2.18]. For covalent chemical bonds,
Raman scattering emission spectrophotometers can provide fast analysis
of multiple components in multiple streams by the use of fiber-optic
probes [2.23]. The hardware and setup costs of these analyzers have
decreased with advances in computational capability for complex spec-
trum analysis. Once the system is successfully calibrated and commis-
sioned, the maintainability and reliability, not to mention the speed, are
much better than what can be achieved with chromatographs. However,
the extensive experience base of analyzer specialists (largely chemists)
with chromatographs, and a lack of appreciation for the value of a tradeoff
of some accuracy for better availability and elimination of dead time, have
slowed down the introduction of new inline meter and probe technolo-
gies.

Unfortunately, nothing has proven to be reliable and accurate enough to
replace the glass measurement electrode. The accuracy of iridium oxide
sensors is too adversely affected by oxidation reduction potentials, and the
Field Effect Transistor (FET) has not yet been made rugged enough to be a
viable general-purpose replacement for the glass electrode. Significant
improvements have been made in the reference electrode by the use of a
solid polymer with a large active area and an embedded electrolyte, instead
of a tiny porous junction that can plug and an internal liquid or gel fill that
can be contaminated with process material. However, the greatest accuracy
and speed of acclimation requires the use of a pressurized flowing refer-
ence junction [2.18].

A three-probe installation with the middle reading selected as the con-
trolled variable offers the greatest accuracy and availability. This has been
the best practice for important pH loops in Monsanto and Solutia since
1986. Middle selection will inherently ignore a single failure or erroneous
reading of any type, will provide online diagnostics, and will reduce the
noise band without the addition of a lag. Two probes are quite common
but introduce more questions than answers since the electrodes rarely
agree. However, if the probe life is short due to chemical attack of the
glass, the preferred solution is then a piston actuated retractable assembly
with an automated immersion, wash, calibration, and soak cycle [2.18].
The probes should be immersed only long enough (e.g., 2 minutes) to get a
valid reading to reduce exposure time and any drift as the reference seeks
to reach equilibrium with the process. The automated calibration check
can be done on a shift or batch basis to reduce the time between successive
insertions to less than 5 minutes.
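
Middle-value selection itself is trivial to implement, which is part of its
appeal. A one-line sketch:

```python
def middle_of_three(a, b, c):
    """Median selection: a single failed or drifting probe is inherently ignored."""
    return sorted((a, b, c))[1]

print(middle_of_three(6.95, 7.02, 9.80))  # -> 7.02; the failed-high probe is ignored
```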

Measurement Installation
Sensing lines should be eliminated wherever possible by direct mounting
a differential pressure (d/p) transmitter for flow measurement or pressure
measurement, as shown in Figure 2-12a and 2-12b, to the pipe connection.
Isolation, flush, drain, and equalization valves are necessary to minimize
the exposure to chemicals during the removal of the transmitter. An equal-
ization valve is used when both the high and low sides are connected to
the process to offer the opportunity for a zero adjustment and to protect
the d/p from "over-range" when it would otherwise see only the upstream pressure.

Figure 2-12a. A Direct-mounted d/p Transmitter for a Flow Measurement

Figure 2-12b. A Direct-mounted d/p Transmitter for a Pressure Measurement

The flow meter should be mounted upstream of the control valve as
shown in Figure 2-10 to minimize the distortion of the flow profile and to
provide a more constant pressure. For liquid flow, the upstream location
helps prevent a partially filled meter, besides reducing exposure to flash-
ing and cavitation. Bubbles adversely affect the accuracy of all flow meters
and the implosion of bubbles can cause serious damage in addition to
erratic readings. Isolation, flush, and drain valves are again recommended
for safe removal of an inline meter, although some practices for high haz-
ardous materials seek to minimize the number of connections and leak
points. A bypass valve allows the plant to run on manual while the instru-
ment is repaired. The isolation valves upstream of the flow meter shown
must be wide open when the flow meter is in service for the same reasons
that a control valve is undesirable upstream of a flow meter. If there are
solids, the meter can be installed in vertical pipe with flow up to help
drain the piping. For coriolis meters, a single straight tube is desirable to
eliminate erosion at the bends of a U-tube and unequal distribution of sol-
ids in a dual tube. The meter size should be chosen to provide the opti-
mum velocity to minimize the effect of solids concentration on accuracy.

Figure 2-10 and Table 2-3 show the relative straight run requirements for
different types of flow meters (the A and B values are in Table 2-3). An ori-
fice with a large beta ratio (high orifice bore to inside pipe diameter ratio)
has the greatest upstream straight-run requirement, followed by the vor-
tex meter operating at a low fluid velocity. The upstream requirements
also increase for multiple fittings in different planes or valves upstream
that are not completely open or are partially plugged. The upstream
straight-run requirement can be dramatically reduced by the use of
straightening vanes. The manufacturer should be consulted for actual
requirements based on your piping details and process conditions. The
coriolis flow meter has no upstream or downstream straight-run require-
ments.

Sample lines should be eliminated wherever possible for all types of elec-
trodes by the use of insertion or injector assemblies. Injector electrode
holders with manual or automatic retraction are now offered. Even though
these assemblies have built-in isolation, flush, and drain capability, the
user may choose to have the piping set up as shown in Figure 2-13a and 2-
13b to provide additional protection for hazardous materials. For three
electrodes, a series arrangement, as shown in Figure 2-13b, is favored to
help keep the velocity and concentration the same for all three electrodes. The
electrodes must be pointed down at a 30- to 60-degree angle, as shown in
Figure 2-13c, to prevent a bubble from becoming lodged in the tip or at an
internal electrode. The first electrode should be at least 20 pipe diameters
from the discharge of a pump or static mixer to reduce pressure pulsation
and facilitate some mixing. The electrodes should also be separated by 10
pipe diameters to help establish a more uniform velocity and they should
be inserted far enough into the line or vessel to get a representative read-
ing. The mounting of electrodes in a pipeline with a 5 to 9 fps velocity is
preferred to a vessel because the higher velocity in the pipe makes the
electrodes respond faster and keeps them cleaner. The bulk fluid velocity
in even highly agitated vessels rarely exceeds 1 fps except near the agitator
Chapter 2 - Setting the Foundation 43

blade tip. For solids and high caustic or temperature service, there is a
compromise to keep the velocity low to decrease erosion and chemical
attack and yet high enough to reduce coatings. The slot in the protective
shroud of the electrode tip should be oriented to shield the electrode from
abrasion and chemical attack but provide a sweeping action to decrease
fouling. Finally, the transportation delay between the vessel or the point of
reagent addition and the electrodes should not exceed 5 seconds.
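
The 5-second guideline is easy to check from the piping layout, since the
transportation delay is just distance divided by velocity. A minimal sketch
with hypothetical numbers:

```python
# Transportation delay from the reagent addition (or vessel) to the electrodes.
def transport_delay_s(distance_ft, velocity_fps):
    return distance_ft / velocity_fps

print(transport_delay_s(30.0, 7.0))  # ~4.3 s at a typical 7 fps line velocity
```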

Figure 2-13a. Parallel Installation of pH Electrodes

Figure 2-13b. Series Installation of pH Electrodes

Figure 2-13c. Orientation and Insertion of pH Electrodes

For temperature sensors, insertion length should be maximized in the cen-
ter of the pipe so that the tip temperature is close to the actual process tem-
perature despite the conduction of heat between the tip and the sensor's
external connection. This is particularly important for an RTD since the
resistance of the entire element responds to temperature, and for polymers
because of a greater temperature difference between the wall and the cen-
terline of the pipe and a lower convective heat transfer coefficient. How-
ever, an excessively long sensor may vibrate and fail. Computer programs
can calculate the conduction error and maximum length to prevent vibra-
tion for a given thermowell and sensor construction and process condi-
tions. The use of an elbow connection with the sensor or thermowell
facing into the flow as shown in Figure 2-14 provides the greatest accu-
racy. The least desirable choice is option 4, particularly for small diameter
pipes.

Figure 2-14. Orientation and Insertion of Thermowells

At the outlet of a static mixer, desuperheater, or multiple-pass heat
exchanger there should be about 20 pipe diameters between the equip-
ment outlet and the sensor location, as shown in Figure 2-15, for the
streams to recombine to provide a more representative temperature. Also,
it is desirable for the velocity to be between 5 and 11 fps to maximize the
convective heat transfer coefficient at the tip from a higher velocity and to
maintain a cleaner surface. The higher coefficient not only improves the
speed of response but also reduces the heat conduction error. For the static
mixer, the thermowell lag is the largest lag in the loop. For non-hazardous
and non-corrosive fluids at safe temperatures and pressures, a bare ele-
ment with a protective shroud could be considered for tighter temperature
control of blending operations by elimination of the thermowell lag.

Figure 2-15. Location of Temperature Sensor

For d/p measurements of level, either the sensing or equalization line
should be purged as shown in Figure 2-16a. Normally a liquid such as
water is used for the sensing line and a gas such as nitrogen for the equal-
ization line. If nitrogen is used for the bottom connection, a slotted bubbler
tube is used. To prevent drying out the tip of bubbler, which can lead to
solidification of material and plugging, water is sometimes purged along
with nitrogen. The purge flow rates are normally regulated and indicated,
and the water purge and the nitrogen purge lines must each have a check
valve before being combined. The sens-
ing and equalization lines for low to moderate vessel pressures can be
eliminated by the use of separate d/ps for the total pressure and equaliza-
tion, direct mounted on the bottom and top nozzles as shown in Figure 2-
16b. A third d/p can be direct mounted at an intermediate nozzle to com-
pute fluid density, and a temperature sensor can be used to compensate
for the changes in the dimensions of the vessel, to provide a more accurate
level measurement. Flush connections or extended diaphragms are used
for the lower nozzles to help keep the diaphragm clean. The signals are
communicated digitally to a computer or a programmable electronic con-
trol system.

Figure 2-16a. Purged Sensing and Equalization Lines for Level Measurement

Figure 2-16b. Direct-mounted Transmitters for Level Measurement

Open Loop Response
If a controller is put in manual, and a step change is made in the controller
output, the trend recording will show the open loop response. In indus-
trial processes, the open loop response is rarely oscillatory but follows a
path of a self-regulating, integrating, or a runaway process as shown in
Figure 2-17. If oscillations occur, the root cause is hunted down and cor-
rected. Sometimes oscillations occur in the open loop response of a pri-
mary controller of a cascade control system, but this is easily fixed by
retuning the secondary controller to provide a smoother response. Indus-
trial practice now is to eliminate oscillations even in the closed-loop
response (the response when the controller is in automatic). The quarter
amplitude decay for closed-loop control in the literature is a benchmark
for minimum peak error and is only seen as part of a closed-loop tuning
procedure that then seeks to suppress the oscillation as described in the
next section on controller tuning.

Thus, for purposes of this chapter, the open loop response will be a first-
order response that can be characterized by a total time delay, a negative
or positive feedback time constant, and a steady state gain. The time it
takes the process variable to get out of the noise band after a step change
in the controller output (CO), is the observed time delay (τd), or dead time.
It excludes any time delay due to valve dead band or stiction. The Theory
section shows how to estimate the additional time delay from valve dead
band. The time it takes after the time delay for the response of the process
variable (PV) to reach 63% of the final change for a self-regulating
response is the negative feedback open loop time constant (τo), or time lag.
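
A minimal sketch (a hypothetical helper, not the book's procedure verbatim)
of extracting these three parameters from recorded step-test data; the 63%
criterion and the noise-band dead time estimate follow the description above:

```python
import numpy as np

def fopdt_from_step(t, pv, co_step_pct, noise_band):
    """Estimate dead time, time constant, and gain from an open-loop step test."""
    pv = np.asarray(pv, dtype=float)
    dpv = pv - pv[0]
    final = dpv[-1]
    # Dead time: first time the PV gets out of the noise band.
    t_dead = t[np.argmax(np.abs(dpv) > noise_band)]
    # Time constant: time after the dead time to reach 63% of the final change.
    tau = t[np.argmax(np.abs(dpv) >= 0.63 * abs(final))] - t_dead
    gain = final / co_step_pct  # open loop (static) gain, %/%
    return t_dead, tau, gain

# Synthetic response: 5 min dead time, 10 min time constant, gain of 2 %/%.
t = np.linspace(0.0, 50.0, 501)
pv = 2.0 * (t > 5.0) * (1.0 - np.exp(-np.clip(t - 5.0, 0.0, None) / 10.0))
print(fopdt_from_step(t, pv, co_step_pct=1.0, noise_band=0.05))
```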

Alternatively, a tangent to the inflection point can be visualized on the
trend. The intersections of the tangent with the initial and final values of
the PV mark the end of the total time delay and the open loop time con-
stant, respectively. The final change in the process variable in percent
divided by the step change in the controller output in percent is the open
loop steady state (static) gain, more commonly known as the process gain
even though it is the product of the valve, process, and measurement static
gains as shown by Equation 2-1 and Figure 2-18. Included in the process
gain for temperature and composition control loops is a flow-ratio gain
that is the inverse of the feed flow as shown in Figure 2-18 for pH. All flow,
liquid pressure, most gas pressure, temperature, and composition loops
have a self-regulating response.

For an integrating or a runaway response, the process variable does not
line out at a new steady state, but continues to ramp or accelerate, respec-
tively, until a physical limit is hit. For an integrating response, there is an
integrator gain that is the ramp rate in percent per minute divided by the
step change in controller output in percent, per Equation 2-2. It can be
approximated as a large open loop time constant and gain. In fact, pro-
cesses with a large time constant look like a ramp in the control region and
are called pseudo-integrators. For a runaway process, there is a positive
feedback time constant that is the time, after the time delay and any nega-
tive feedback time constant, for the open loop response to reach 172% of
the self-regulating response as shown in Figure 2-17.
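The three response shapes are easy to reproduce numerically. The sketch
below uses simple Euler integration of the model forms developed in the
Theory section; all gains, time constants, and the step size are illustrative
assumptions.

    # Sketch: self-regulating, integrating, and runaway open loop
    # responses to a step in controller output (dco, in percent).
    def open_loop_response(kind, Ko=2.0, tau=10.0, Ki=0.2, dco=1.0,
                           dt=0.1, t_end=40.0):
        y, trend = 0.0, []
        for i in range(int(t_end / dt)):
            if kind == "self-regulating":
                dy = (Ko * dco - y) / tau  # negative feedback: lines out
            elif kind == "integrating":
                dy = Ki * dco              # no feedback: ramps
            else:                          # "runaway"
                dy = (Ko * dco + y) / tau  # positive feedback: accelerates
            y += dy * dt
            trend.append((i * dt, y))
        return trend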

The open loop response of each major component of the plant (control
valve, process, and measurement) in Figure 2-19 has a first-order-plus-
dead-time approximation. The controller also contributes a time delay
from the scan time and time constants from the signal filters. Material and
energy balances are used in the Theory section to show origin of the pro-
cess time constants and gains and how they change with operating condi-
tions. Equations in the Theory section also estimate the dead time from
mixing, transportation delay, and valve dead band.

Figure 2-17. First Order-plus-Dead Time Approximation for an Open Loop Response

Figure 2-18. Steady-State (Static) Gains for a pH Loop

Figure 2-19. Block Diagram for a First Order-plus-Dead Time Approximation

Equations

Ko = Kmv*Kpv*Kcv (2-1)

Ki = (ΔPV/Δt) / ΔCO (2-2)

where:

Ko = open loop gain (%/%)
Ki = integrator gain (%/minute/%)
Kmv = steady-state gain of the manipulated variable (valve gain)
Kpv = steady-state gain of the process variable (process gain)
Kcv = steady-state gain of the controlled variable (measurement gain)
τo = open loop time constant (minutes)

The total loop time delay (dead time) is the most important of the three
key variables that describe the open loop response of a control loop. It
delays the ability of a controller to see or react to a disturbance. The
minimum peak error is the maximum excursion of the process variable
during this time delay. The oscillation period is also proportional to the
time delay. Thus the integrated error is proportional to the time delay
squared for self-regulating processes with a large time constant that limits
the excursion within the time delay to less than the full change of the
process variable. The equations to approximate these relationships are
developed in the Theory section.

Perfect control is theoretically possible if the total time delay is zero and
there is just a single process time constant. However, in industrial pro­
cesses the total loop time delay is never negligible because even if the pro­
cess time delay is negligible, the addition of a measurement, valve, and
digital controller adds time delay. For flow, pressure, and level control,
most of the time delay in a control loop comes from the automation system
[2.1]. While the time delay cannot be zero, the objective, particularly for
loops with operating point nonlinearities, like the pH and column temper­
ature control examples, is to reduce the total time delay. This is because the
extent of the excursion on the titration curve or tray temperature curve,
and hence the effect of the nonlinearity, is decreased by a reduction in loop
dead time.

Pure dead times come from transportation delays (pipes, sample lines,
static mixers, coils, jackets, conveyors, sheet lines, and textile fiber lines),
valves (prestroke dead time, dead band and stiction), and anything digital
or with a cycle time (microprocessors and analyzers). Equivalent dead
time comes from time constants in series from instrumentation (sensor
time lags, thermowell time lags, and transmitter filter times and dampen­
ing adjustments), analog input cards, (analog filters), and process variable
filter times (digital filters). The exact values are not important, just the rel­
ative sizes. The engineer or technician should work on the largest, most
cost-effective sources of dead time [it is not just the job of the control engi­
neer but also the process engineer who is responsible for equipment
design and the selection of instruments for small automation projects and
the technician who often determines the sensor location and loop scan
times. Some plants don't have a control engineer].

A time constant is beneficial if it is located in the process after the entry
point of load upsets, because it slows down the excursion of the process
variable from the disturbance. If a time constant is between the controller
output and the entry point of the load upset, it slows down the correction
for the disturbance from the controller. If a time constant is between the
process output and the controller input, it affects the recognition of the
disturbance. In the block diagram in Figure 2-19, the only beneficial time
constant is the second process time constant (τp2).

The largest time constant does not have to be in the process. For processes
not dominated by time delay, an increase in the time constant, no matter
where it appears in the loop, will allow an increase in controller gain, even
though the final effect is not beneficial. For example, a large time constant
in the measurement, such as a large thermowell lag or process variable fil­
ter time setting, will allow a higher controller gain and give the illusion of
better control because the controlled variable is an attenuated version of
the real process variable. Equation 3-2 can be used to estimate the effect.

All time constants much smaller than the largest time constant are con­
verted to equivalent dead time. While the fraction converted to dead time
depends upon the relative size of the small to the large time constant, very
small time constants can be summed up as totally converted to dead time
because it is difficult to find and estimate all the small time constants and
sources of dead time. Thus, the total time delay for a control loop is the
sum of all the pure time delays and the small time constants. Dead time
compensators and model predictive controllers can account for the effect
of time delay on the response to changes in the controller output, but the
minimum peak error for unmeasured load upsets and the initial delay
before the start of the set point response is still fixed by the total time
delay.

The open loop time constant is approximately the largest of the time con­
stants plus the portion of all of the small time constants not converted to
time delay. If each of the small time constants is less than 10% of the larg­
est time constant, so that each is essentially converted to equivalent time
delay, the largest time constant can be considered to be the open loop time
constant. The purpose here is to show the relative sizes and sources of
time delay. There are too many unknowns to calculate an exact value. In
industry, the open loop time constant, total time delay, and open loop gain
cannot be accurately calculated, except possibly for flow and level, and
must be obtained by plant tests.

The open loop gain (sensitivity) can be too low or too high. If the valve
gain is too low (valve sensitivity is too low), the controller has little effect
on the process and the controlled variable will wander and be at the mercy
of upsets [2.5]. If the process or measurement gain (sensitivity) is too low,
the controlled variable is not representative of the process performance. If
the valve gain (sensitivity) is too high, the effect of stick-slip is excessive
and just the act of putting a controller in automatic can cause unacceptable
oscillations. If the process or measurement gain (sensitivity) is too high,
the nearly full-scale oscillations will scare most people even if the actual
performance of the process is acceptable. The classic example of this prob-
lem is the pH control of a strong acid and base system with a static mixer.
Try explaining to the operator that the actual change in hydrogen ion con-
centration is tiny for a system that oscillates between 2 and 12 pH.

The total loop time delay, open loop time constant, and open loop gain are
rarely constant but rather a function of operating conditions. For example,
the process gain for composition and temperature control is inversely pro-
portional to feed rate, and the process time constant is inversely propor-
tional to feed flow for back-mixed volumes, whereas the time delay is
inversely proportional to feed flow for plug flow. The Theory section
shows the source of these process nonlinearities.

PID Controller Tuning


Proportional-Integral-Derivative (PID) controllers in the basic process-
control system must be tuned for both servo response (set point changes)
and regulation (load rejection). While either of these may be more impor-
tant than the other to a given application, loop tuning methods that focus
on one to the exclusion of the other are not seeing the whole picture.

Increasingly, the control loop set point is changed by either a master loop
for cascade control or a model predictive controller for advanced control,
or by a unit operation for an automated startup sequence, product transi-
tion, and batch operation. Even for those loops whose set point is not
changed from a remote source, the local set point is a handle used for star-
tup, sweet spots, and to relieve boredom. Operators are notorious for
moving set points despite claims to the contrary. Plus, the operator has to
start up the unit, which may consist of a series of set point changes as he
walks the unit up to operating conditions.

On the other hand, if there were no process upsets, you wouldn't need a
controller: You could find and manually set an output to a final element
that would be good indefinitely.

The traditional performance index has been integrated absolute error


(IAE), that is, the integral of the absolute error between the set point and
controlled variable. The integrated error (Ei) can be estimated from the
tuning settings and is equivalent to the IAE if the response is not oscilla-
tory. Of increasing importance is the settling time (Ts), which is the time it
takes for a loop to stay within a specified band around the set point after a
set point change or load upset, and which also serves to detect sustained
oscillations. Since pro-
cesses and equipment have limits that can trigger interlocks, violate envi-
ronmental constraints, or initiate side reactions, the overshoot (E1) for set
point changes and the peak error for load upsets (Ex) are also important.
The decay ratio is the amplitude of the first peak (E1 or Ex) divided by the
second peak (E2). Finally, the rise time (Tr) (time it takes the controlled
variable to first reach a specified band around the set point) is important
for reducing batch cycle, startup, and transition time and decreasing the
open loop response time (T98) (time to reach 98% of the final value) for
master and model predictive controllers. Figures 2-20a and 2-20b show the
closed-loop performance indices for a set point change and a load upset,
respectively. In the Theory section, equations are developed to estimate Ei
and Ex for load upsets.
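As a sketch of how these indices might be pulled from a recorded trend
(the 1% band and the array handling are assumptions, not from the text):

    # Sketch: closed loop performance indices for a set point response.
    import numpy as np

    def performance_indices(t, pv, sp, band=1.0):
        err = sp - pv
        iae = np.trapz(np.abs(err), t)    # integrated absolute error (IAE)
        ei = np.trapz(err, t)             # integrated error (Ei)
        e1 = max(0.0, pv.max() - sp)      # overshoot E1 for a rising response
        inside = np.abs(err) <= band
        tr = t[np.argmax(inside)]         # rise time Tr: first entry into band
        out = np.nonzero(~inside)[0]
        ts = t[out[-1]] if out.size else t[0]  # settling time Ts: last exit
        return iae, ei, e1, tr, ts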

Figure 2-20a. Closed Loop Performance Indices for a 10% Setpoint Change

Figure 2-20b. Closed Loop Performance Indices for a 40% Load Upset

The proportional mode provides a contribution to the controller output that
is the product of the controller gain (Kc) and the control error, which is the
difference between the set point and the controlled variable. In Figure 2-
21, the controller is not actually connected to a process so the controller
response can be isolated from the process response. Figure 2-21 shows that
the response of the proportional mode to a step change in the measure-
ment is a step change in the controller output. Consequently, any abrupt
change in the error will be passed on to the output. For a persistent control
error, the proportional mode does nothing more after the initial reaction.
The proportional mode promotes an immediate response, but will not
eliminate an offset. The size of the offset is inversely proportional to con-
troller gain, so for high gains the offset is small. However, digital displays
promote unrealistic expectations and some operators will get concerned
over insignificant persistent errors and ask for the controller to be tuned to
get rid of an offset. The direction of the change in controller output
depends upon the sign of the change in error, so the proportional mode
has some sense of direction of approach to set point.

An increase in the controller gain will greatly reduce the peak error, the
rise time, and the return time, but may increase the overshoot and the set-
tling time. High controller gains will amplify noise, increase interaction,
and pass on more variability from the controlled variable to the manipu-
lated variable. Figures 2-22a and 2-22b show the effect of the controller
gain setting on the response of an effectively proportional-only controller
to a load upset and a set point change, for a process with a time constant
that is 5 times larger than the time delay. An increase in controller gain
dramatically speeds up the initial rate of approach to set point by over-
driving the controller output. It will also start to back off the controller
output from the output limit as soon as the controlled variable comes
within the proportional band, which is the percent change in the control
error (difference between the measurement and set point) necessary to
cause a full-scale change (100%) in the controller output. For a propor-
tional-only controller with zero bias and a 50% set point, the proportional
band is centered on the set point. High controller gains by themselves do
not cause a classic overshoot but instead an oscillatory approach to set
point. It is high controller gain combined with integral action that causes
overshoot.

Figure 2-21. Contribution of the Proportional, Integral, and Derivative Modes

Figure 2-22a. Response of a Proportional-only Controller to a 10% Set point Change

Figure 2-22b. Response of a Proportional-only Controller to a 40% Load Upset

The integral mode's contribution to controller output is the product of the
controller gain (Kc), the inverse of the integral time (Ti), and an integral of
the control error. Figure 2-21 shows that the response of the integral mode
to a step change in the measurement is a ramp. The time it takes to repeat
the contribution of the proportional mode is the integral time in minutes
or seconds per repeat. The integral mode is not satisfied with any offset. It
provides a gradual but driving action. The direction of the change in con-
troller output depends solely upon the sign of the error, so the integral
mode has no sense of direction or rate of approach to set point.

A decrease in integral (reset) time, which is an increase in reset action, has
a negligible effect on the peak error for a step load upset and the initial
approach to set point, but will decrease the rise or return time. Unfortu-
nately, reset action increases the overshoot, the period of oscillation, and
settling time, unless the controller gain is reduced or derivative (rate)
action is added. Figures 2-23a and 2-23b show the effect of the integral
time setting on the response of an effectively integral-only controller to a
load upset and a set point change, for a process with a time constant that is
5 times larger than the time delay. The controller output gradually
approaches the output limit and does not reverse direction until well after
the controlled variable has crossed the set point despite anti-reset windup
protection. The integral mode does not amplify noise or increase interac-
tion but will increase the response time of a master or model predictive
controller.

Figure 2-23a. Response of an Integral-only Controller to a 10% Set point Change

Figure 2-23b. Response of an Integral-only Controller to a 40% Load Upset

The derivative mode's contribution to controller output is the product of the
controller gain (Kc), the derivative (rate) time (Td), and the rate of change
of the controlled variable. Figure 2-21 shows that the response of the deriv-
ative mode to a step change in the measurement is a bump instead of a
spike because of a built-in filter whose time constant is typically 1/8 of the
derivative time. The contribution will decay to zero since the offset has a
slope of zero. The direction of the change in controller output depends
upon the sign of the change or the acceleration of the error, so the deriva-
tive mode has a definite sense of direction and rate of approach to set
point.
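A minimal discrete sketch of the three contributions of Figure 2-21 is
given below. The 1/8 derivative filter follows the text; the rest of the
structure is a generic textbook form, not the book's own algorithm.

    # Sketch: discrete PID with proportional step, integral ramp, and
    # filtered derivative bump.
    class PID:
        def __init__(self, Kc, Ti, Td, dt, d_on_error=True):
            self.Kc, self.Ti, self.Td, self.dt = Kc, Ti, Td, dt
            self.d_on_error = d_on_error
            self.integral, self.d_filt, self.prev = 0.0, 0.0, None

        def update(self, sp, pv):
            e = sp - pv
            self.integral += e * self.dt / self.Ti  # ramps on any offset
            x = e if self.d_on_error else -pv       # derivative source
            if self.prev is None:
                self.prev = x
            tf = max(self.Td / 8.0, self.dt)        # built-in 1/8 filter
            self.d_filt += ((x - self.prev) / self.dt - self.d_filt) \
                * self.dt / tf
            self.prev = x
            return self.Kc * (e + self.integral + self.Td * self.d_filt)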

The PID algorithm can have the derivative action on either the control
error or on the controlled variable. The latter method was developed to
reduce the bumps to the controller output from rapid set point changes
made by the operator. Unfortunately, for set point changes made by
sequences, cascade, and advanced control systems, the derivative mode
works against the change requested because it only knows that the mea-
surement is starting to move and that any movement is undesirable. For
this reason, derivative action on control error is preferred; it is beneficial if
the loop is dominated by a time constant and set point velocity limits are
readily available. An increase in derivative time
will decrease the peak error and reduce overshoot and the period of oscil-
lation. Too much derivative action can increase the rise, or return, time and
the settling time. The derivative mode can respectively speed up or slow
down the initial approach for a set point change by acting on the control
error or the controlled variable.

Figures 2-24a and 2-24b show the effect of the derivative time setting on
the response of a proportional-plus-derivative controller to a load upset
and a set point change, for a process with a time constant that is 4 times
larger than the time delay. The derivative mode is even more likely than
the gain mode to amplify noise, increase interaction, and transfer variabil-
ity to the manipulated variable. It can decrease or increase the response
time of master or model predictive controllers depending on how it is
used. It should not be used on dead time-dominant systems or any system
with abrupt or erratic changes. It works best on processes with large time
constants, low noise, and good measurement repeatability and resolution
so that the response of the controlled variable is smooth.

Flow and liquid pressure loops normally use proportional-integral (PI)
controllers because the loop's response is too fast and abrupt for derivative
action. Level and gas pressure loops can use a high controller gain for tight
control. In most cases, the upper limit on controller gain depends upon the
degree of measurement noise and on how much variability should be
passed on to the manipulated flow. Derivative action is undesirable except
for those unusual level loops with large time lags because the combination
of a high gain and derivative action greatly amplifies noise. For level con-
trol of surge volumes, derivative action would be counterproductive as it
would pass on more variability from the level to the manipulated flow.

Figure 2-24a. Response of a Proportional-plus-Derivative Controller to a 10% Set point Change

Figure 2-24b. Response of a Proportional-plus-Derivative Controller to a 40% Load Upset

Temperature and composition loops are the prime candidates for PID con-
trollers.

Nearly all temperature controllers should use PID controllers because the
derivative mode provides a phase lead that compensates for the phase lag
of the thermal lags in the thermowell and process. Wherever the process
variable can accelerate, whether due to positive feedback or nonlinearity,
derivative is helpful since it reacts to a change in the rate of change. In the
distillation-column example, the control point is on the knee of a plot of
tray temperature versus distillate-to-feed ratio. An increase in feed can
cause the drop in temperature to accelerate on the steep slope. However, if
thermocouple or RTD input cards are used instead of a smart transmitter,
the steps from hitting the resolution limit seriously reduce the amount of
derivative action that can be used. A temporary fix is to add a filter that
smooths out the steps. If there is an inverse response, where the initial
response is opposite to the final response, the derivative mode cannot be
used. This can occur in furnace temperature control where the controller
output is the firing demand that works within a cross limit to make air
lead fuel on a load increase.

Composition control loops on back-mixed volumes should use PID con-
trollers because the derivative mode provides anticipatory action impor-
tant for the slow response. In the pH control example, derivative action
should be used in the pH loop on the neutralizer but not on the static
mixer. However, many analyzers have an erratic response and analyzers
with an analysis cycle time, such as chromatographs, have staircase
responses that preclude the use of derivative action.

For tight control of processes with a time constant much larger than the
dead time, there is benefit in aggressive preemptive and anticipatory
action. Thus, these processes should maximize the gain and derivative set-
ting and overdrive the output to reduce the rise and return time. The inte-
gral time is increased (reset action decreased) since it has no sense of
direction and increases overshoot. If the controller gain is larger than 5,
there is enough muscle from the proportional mode, and the derivative
setting can be small or zero. Derivative mode is a necessity regardless of
gain setting for a process whose control variable can significantly acceler-
ate, whether due to a nonlinearity (pseudo-runaway) or positive feedback
(real runaway), such as a polymerization reactor or a fermentor in the
exponential growth phase. For these processes, the integral time setting is
increased to about 10 times the ultimate period so that gain and rate action
dominate the response.

Conversely, for tight control of processes with a time delay much larger
than the largest time constant, gain and rate action must be minimized
and integral (reset) action maximized, which means the integral (reset)
time must be minimized. The integral mode adds smoothness not inherent
in the process. The integral time factor is decreased and can be as small as
1/8 of the ultimate period for a pure dead time process.

Figures 2-25a through 2-25f show the effect of the three mode settings on a
PID controller for a process with a time lag 4 times larger than the loop
time delay. Note that a gain setting too large increases overshoot due to
the presence of reset action. An integral (reset) time setting that is too
small causes a greater overshoot and increases the period of oscillation.
Too much derivative action causes an oscillatory response and a shorter
period but no real overshoot. Figures 2-26a through 2-26d show the effect
of two mode settings on a PID controller for a process with a time lag 4
times smaller than the loop time delay. The doubling of the gain setting is
much more disruptive.

Figure 2-25a. Effect of Gain on Set point Response of PID for a Large Time
Lag-to-Time Delay Ratio

A starting point is needed on tuning settings, particularly for new plants
or existing plants with upgraded instrumentation and control valves. In
Table 2-5, the first number outside the parentheses provides a tuning, scan
time, and filter time setting suitable for a download. Within the parenthe-
ses is a range of typical settings [2.5]. An auto tuner should be run as soon
as the plant is up to normal operating conditions. If the calculated settings
are outside the range noted in Table 2-5, the auto tuner test should be run
again. If the results are still outside of the range, it means either that it is an
unusual loop or that there is a problem with the measurement or control
valve. Whenever a loop is commissioned or tuned, it should be closely
watched for several days for a variety of operating conditions to make
sure the loop is stable and the performance is acceptable.

Figure 2-25b. Effect of Gain on 40% Load Upset to PID for a Large Time Lag-
to-Time Delay Ratio

Figure 2-25c. Effect of Reset on Set point Response of PID for a Large Time
Lag-to-Time Delay Ratio

Figure 2-25d. Effect of Reset on 40% Load Upset to PID for a Large Time Lag-
to-Time Delay Ratio

Figure 2-25e. Effect of Rate on Set point Response of PID for a Large Time
Lag-to-Time Delay Ratio

Some loops will fail during an auto tuner pretest because the loop
response is too small or too large within the allowable time frame of the
pretest, or because the valve has too much stick-slip. Also, fast runaway
and integrating loops cannot be safely taken out of the auto mode. For
these loops, a closed-loop tuning method is best because it is the fastest
method for a large time constant, it keeps the controller in service with
maximum gain (safest for processes that can get into trouble quickly or
have a non-self-regulating response), and it includes the effect of poor
valve response in the tuning [2.29].

Figure 2-25f. Effect of Rate on 40% Load Upset to PID for a Large Time Lag-to-
Time Delay Ratio

Figure 2-26a. Effect of Gain on Set point Response of PID for a Small Time
Lag-to-Time Delay Ratio

Figure 2-26b. Effect of Gain on a 20% Load Upset to PID for a Small Time Lag-
to-Time Delay Ratio

Figure 2-26c. Effect of Reset on Set point Response of PID for a Small Time
Lag-to-Time Delay Ratio

Figure 2-26d. Effect of Reset on a 20% Load Upset to PID for a Small Time
Lag-to-Time Delay Ratio

Table 2-5. Typical Tuning Settings [2.5]

Application Type      Scan (sec)    Gain            Reset (sec)     Rate (sec)   Method
Liquid Flow/Press     1 (0.2-2)     0.3 (0.2-0.8)   6 (1-12)        0 (0-2)      λ
Tight Liquid Level    5 (1.0-30)    5.0 (0.5-25)*   600 (120-6000)  0 (0-60)     CLM
Gas Pressure (psig)   0.2 (0.02-1)  5.0 (0.5-20)    300 (60-600)    3 (0-30)     CLM
Reactor pH            2 (1.0-5)     1.0 (0.001-50)  120 (60-600)    30 (6-30)    SCM
Neutralizer pH        2 (1.0-5)     0.1 (0.001-10)  300 (60-600)    70 (6-120)   SCM
Inline pH             1 (0.2-2)     0.2 (0.1-0.3)   30 (15-60)      0 (0-3)      λ
Reactor Temperature   5 (2.0-15)    5.0 (1.0-15)    300 (300-3000)  70 (30-300)  CLM
Inline Temperature    2 (1.0-5)     0.5 (0.2-2.0)   60 (12-120)     12 (12-40)   λ
Column Temperature    10 (2.0-30)   0.5 (0.1-10)    300 (300-3000)  70 (30-600)  SCM

* An error square algorithm or gain scheduling should be used for gains < 5
Methods: λ - Lambda, CLM - Closed-loop Method, SCM - Shortcut Method

Table 2-6. Closed-loop Tuning Method


1. Put the controller in automatic at its normal set point. If it is important not to make
big changes in the manipulated variable (analog output), narrow the controller out-
put limits.
2. Decrease reset action (increase reset time) by a factor of 10 if possible and trend
record the process variable (PV) and controller output (CO).
3. Add a PV filter to keep output fluctuations from noise within the dead band of the
control valve. Set the scan time per Table 2-5 based on the loop type.
4. Bump the set point in both directions and increase controller gain if necessary to
get a slight oscillation. Note the time delay between the bump and first excursion
out of the noise band.
5. Stop when loop has about a quarter amplitude oscillation or the gain has
reached your comfort limit, and note the period. For gain settings greater
than one, the oscillation will be more recognizable in the controller output
(CO). Make sure CO stays on scale within the valve's good throttle range.
6. Reduce the gain until the oscillation just disappears so that recovery is as smooth
as desired and the overshoot and settling time are acceptable.
7. For a temperature loop with a smooth response (no chatter, inverse response,
square wave, or interaction), use rate action. If the gain is larger than 10, reset and
rate action are not needed. If the manipulated flow will upset other loops, decrease
the gain or use error squared control. If a high gain is used, add set point velocity
limits, and configure the set point to track the PV in manual and remote output
(ROUT) so the loop can restart.
8. If rate is used, add set point velocity limits, set the rate equal to 1/10 of period, and
make sure noise in the output does not cause the valve to dither. Make another set
point change and adjust the gain to get a smooth response.
9. Use Equations 2-3 and 2-4 to calculate the integral (reset) time setting from the
noted time delay and period of oscillation for quarter amplitude decay. After entering
the integral time, make another set point change and make sure the rise time, over-
shoot, and settling time are acceptable.
10. If gains less than 5 are used for level, make sure the integral (reset) time is greater
than 10 minutes, and add error squared control and/or a velocity limited feedfor-
ward as defined in Appendix B.
11. Return the output limits to their proper values if narrowed for testing.

Set point changes are used because they are more likely to cause an oscilla-
tion than a change in the controller output: A step change in a set point is a
step change in the error seen by the controller, whereas a step change in
the controller output is smoothed out by the time constants in the loop.
After the new tuning settings are entered, the loop should be checked for a
step change in the controller output and the load rejection capability of the
new settings monitored and compared to historical data for the old set-
tings.

Equations
Tu = 0.7*To (2-3)

Ti = To / [0.1 + 2*((4*τd)/Tu)^2] (2-4)


where:

Ti = integral (reset) time
To = oscillation period for quarter amplitude decay
Tu = ultimate period
τd = time delay

Equation 2-4 extends the utility of the closed-loop method by compensating
for large and small time delay-to-time constant ratios. It estimates an
integral (reset) time of 1/8 times the oscillation period for a self-regulating
process with a pure time delay, and 1/2 times the oscillation period for one
with a large time constant. For integrating and runaway processes, Equation 2-4
yields an integral (reset) time that approaches 10 times the ultimate period
as the negative-feedback time constant becomes large compared to the
time delay or positive-feedback time constant. For pure time delays, the
Lambda tuning method provides a more accurate calculation of the tuning
settings for a desired closed-loop time constant. For very slow loops, the
shortcut tuning method, whereby the user only needs to see the time delay
and the initial rate of change of the PV, can be used to save time, as
detailed in Reference 2.4.
The Theory section shows that a self-regulating process with a pure time
delay and a large time constant will oscillate at 2 and 4 times the time
delay, respectively. See Reference 2.1 for equations to estimate how the
ultimate period increases from 4 times the time delay for integrating and
runaway processes.
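A quick numerical check of Equations 2-3 and 2-4 (the minute values are
hypothetical):

    # Equations 2-3 and 2-4 with hypothetical numbers (minutes).
    def integral_time(To, td):
        Tu = 0.7 * To                                     # Equation 2-3
        return To / (0.1 + 2.0 * ((4.0 * td) / Tu) ** 2)  # Equation 2-4

    # Lag-dominant loop: To = 8, td = 1.4 (Tu = 5.6 = 4*td) -> Ti ~ To/2
    print(integral_time(8.0, 1.4))  # ~3.8
    # Dead time-dominant loop: To = 8, td = 2.8 (Tu = 5.6 = 2*td) -> Ti ~ To/8
    print(integral_time(8.0, 2.8))  # ~1.0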

Cascade Control
A cascade control system consists of a primary controller that manipulates
the set point of a secondary controller that in turn manipulates a final ele­
ment. The secondary controller can be operated in the automatic mode
with a local set point or in the cascade mode with a remote set point.
Industrial systems are designed to make sure the remote set point of the
secondary controller (output of the primary controller) is equal to the local
set point when the secondary controller is switched from the automatic to
cascade mode so that there is a bumpless transfer.
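Structurally, a cascade is two controllers wired in series. A sketch using
the PID class from the earlier tuning sketch (the tuning numbers are
illustrative, and scaling of the primary output to the secondary's range is
omitted):

    # Sketch: cascade wiring; the primary output becomes the remote set
    # point of the secondary controller.
    primary = PID(Kc=2.0, Ti=300.0, Td=30.0, dt=1.0)   # e.g., reactor temp
    secondary = PID(Kc=5.0, Ti=60.0, Td=0.0, dt=1.0)   # e.g., jacket temp

    def cascade_scan(primary_sp, primary_pv, secondary_pv):
        remote_sp = primary.update(primary_sp, primary_pv)
        return secondary.update(remote_sp, secondary_pv)  # to the valve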

For optimum performance, the time delay and time constant for the sec­
ondary controller should be 5 times faster than the respective values for
the open loop response of the primary controller. If this is not possible and
the periods start to approach each other in value, the loops will fight and
may resonate. The interaction can be reduced by artificially slowing down
the primary controller by an increase in its process variable filter and scan
time or by a decrease in its gain. Unfortunately, this also slows down the
ability of the primary controller to react to load upsets that originate in the
primary loop. It may still be worthwhile to go to cascade control, however,
because most of the upsets are in the secondary loop and/or the lineariza-
tion of the primary loop is beneficial.

If the time delay of the secondary (inner) loop is small, its ultimate period
is small and any oscillation is effectively attenuated, per Equation 3-2, by
the primary (outer) loop time constant. If the inner open loop time con-
stant is large compared to its time delay, the secondary controller gain can
be relatively high, which gives this controller muscle to rapidly correct for
inner loop upsets. Also, the maximum excursion (peak error) for the inner
loop is reduced. Furthermore, by going to cascade control, one of the two
largest time constants that created equivalent time delay and was detri-
mental to performance of the single loop is now a beneficial term for the
inner loop [2.30]. In this scenario, the inner loop can correct for inner loop
upsets before they are even seen by the outer loop, so it doesn't matter that
the primary controller may need to be detuned if the inner loop time con-
stant is not much smaller than the outer loop time constant. The plot of
simulation results in Figure 2-27 shows how the ratio of the peak error for
a cascade loop to the peak error for a single loop decreases as the inner-to-
outer loop ratio of time delays decreases and the ratio of time constants
increases [2.30]. The results shown in the figure are for self-regulating
inner and outer loops. The improvement is greater for integrating and
runaway loops.

Figure 2-27. Reduction in Peak Error for Inner Loop Upsets by Cascade Control

One of the most beneficial uses of cascade control is in reactor control, as
shown in Figure 2-28, where the reactor temperature primary controller
output is the remote set point of a secondary coolant temperature control-
ler. The secondary loop can correct for changes in coolant temperature or
pressure or changes in the heat-transfer coefficient of the coils or jacket.
Also, the steady-state gains for the primary loop are now constant and the
secondary loop takes responsibility for the nonlinearity of the process and
control valve gains for the coolant system.

Figure 2-28. Reactor Temperature to Coolant Temperature Cascade Control

An important benefit of cascade control is the linearization of the manipu-
lated variable, which facilitates the addition of flow feedforward. The tem-
perature and composition control of distillation columns has been greatly
improved by ratioing reflux, distillate, or steam flow to the feed flow.
Often the column is started up on ratio control until the column reaches
operating conditions and the temperature sensors are completely
immersed and representative of tray composition. At this point, the tem-
perature controller goes on auto and corrects the feedforward. If the tem-
perature-to-flow cascade were not used and the feedforward signal went
directly to a control valve, the nonlinearity of the control valve would
need to be included in the ratio or feedforward gain by a calculation of
flow versus valve position. It is difficult to compute the actual flow
through the control valve because the pressure drop available to the valve
changes and the lack of straight runs results in poor repeatability of any
calculation. While it is true that the nonlinearity of an equal percentage flow
characteristic compensates for the inverse relationship between process
gain and feed flow for a temperature loop, this compensation is far from
exact and can be better done by output signal characterization or a feed-
forward multiplier instead of a summer.

Cascade control can do more harm than good if the signal-to-noise ratio,
rangeability, or reliability of the secondary measurement is poor or the sec-
ondary loop is actually slower than the primary loop. This is the case for a
static mixer pH-to-reagent flow cascade loop, particularly when the
reagent flow measurement is an orifice meter. If the reagent flow loop
used a coriolis flow meter, and its scan time was small, the stiction and
dead band of the valve were small, and the digital positioner and flow con-
troller were tuned for a fast response, it would be a different story; cascade
control might be beneficial and facilitate reagent-to-feed ratio control.

The secondary controller should be tuned for fast response. If the inner
loop is much faster than the outer loop, it is permissible for the closed-
loop response of the secondary controller to be quite oscillatory because
the amplitude of the oscillations is effectively attenuated by the time con-
stant of the outer loop. Also, an offset in the inner loop is theoretically of
little consequence to the outer loop since the inner loop only exists for the
purpose of the outer loop. This implies that the secondary controller
should use mostly gain and rate action. This is true for valve positioners.
These are secondary controllers whose remote set point (desired valve
position) is the output of the primary process controller. Pneumatic posi-
tioners were high-gain proportional-only controllers. Digital positioners
often use some form of proportional-plus-rate algorithm. Reset is never
used in a positioner because it would cause overshoot and the offset from
proportional-only control is small due to the high gain action. The second-
ary coolant temperature controller in Figure 2-28 should be tuned with
mostly gain and rate action. However, in many situations reset is used in
the secondary controller because operators get concerned about offsets,
secondary set point limits may need to be enforced, and the ratioing of
flows needs to be exact, especially for inline blending. Reset action is used
in flow controllers to not only eliminate offsets but to also help make the
set point response of the flow loops match up for better timing.

Feedforward Control
In feedforward control, a controller output is calculated to compensate for
a measured disturbance. It provides a preemptive action to enforce a mate-
rial or energy balance. The block diagrams in Figures 2-29a and 2-29b
show the use of feedforward, with and without cascade control. The feed-
forward signal must arrive at the same point in the process simulta-
neously with the load upset and must be equal but opposite in sign to the
load upset. Feedforward control is never perfect and should be corrected
by feedback control if a suitable feedback measurement or estimator is
available. The total error can be approximated as the root mean square of
the errors in the feedforward measurement, feedforward gain, and feed-
forward timing. In general, the feedforward measurement goes through a
dead time block and a lead-lag block for proper timing and is finally
biased by a process controller output for feedback correction. The feedfor-
ward gain can be the gain in the lead-lag block or a separate gain. If the
controller output goes directly to a control valve, the feedforward calcula-
tion must also go through a signal characterizer that computes the valve
position for a desired flow from the installed valve characteristic. If the
valve nonlinearity is beneficial for feedback control, the characterization is
done before the bias from the feedback controller is added to the feedfor-
ward signal; otherwise it is done after the feedback correction as shown in
Figure 2-29a. It is vastly preferred that the controller output be the remote
set point of a flow controller, as depicted in Figure 2-29b, so that valve
nonlinearity, dead band, and pressure upsets are not issues.
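A sketch of that signal path (dead time block, lead-lag block, feedback
bias) follows; the discrete lead-lag form and all tuning values are
assumptions for illustration.

    # Sketch: feedforward path of Figure 2-29b. The measured load passes
    # through a dead time block and a lead-lag block, is scaled by the
    # feedforward gain, and is biased by the feedback controller output.
    from collections import deque

    class Feedforward:
        def __init__(self, gain, dead_time, lead, lag, dt):
            self.gain, self.lead, self.lag, self.dt = gain, lead, lag, dt
            self.pipe = deque([0.0] * max(1, int(dead_time / dt)))
            self.xf = 0.0  # lag filter state

        def update(self, load, feedback_bias):
            self.pipe.append(load)
            x = self.pipe.popleft()  # dead time block
            self.xf += (x - self.xf) * self.dt / self.lag
            ll = self.xf + (self.lead / self.lag) * (x - self.xf)  # lead-lag
            return self.gain * ll + feedback_bias  # remote flow set point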

Figure 2-29a. Feedforward Control Block Diagram

Figure 2-29b. Feedforward and Cascade Control Block Diagram

The most common form of feedforward control is flow feedforward, where
the load upset is a feed flow. The feedforward gain is simply the ratio of
the manipulated flow to the feed flow. Sometimes, the feed set point is
used instead of the feed-flow measurement to eliminate noise. The ratio is
found from steady-state simulations or calculations. In the Theory section,
the ratio of coolant flow to feed flow and bypass flow to feed flow are
computed from an energy balance for a heat exchanger. Usually, the ratio
is entered by the operator but there is an opportunity to put the energy or
material balance calculation online if there are reliable temperature or
composition measurements.

A common feedforward control system for sheets, webs, films, and con-
veyors is speed ratio control, where the roll speed is ratioed to another roll
speed or an extruder speed. The ratio is corrected by a controller of aver-
age gauge thickness. The timing requirement is very tight: Speed ratio
control of one roll to another must be done within milliseconds.

In model predictive control, extensive online testing is done to establish
the dynamics of the feedforward signal and the correct timing is built into
the algorithm solution. When feedforward control is used in the basic con-
trol system, it is up to the user to adjust the dead time and lead-lag blocks.
The consequences of poor timing are illustrated in Figure 2-30. If the feed-
forward correction arrives too soon, feedforward creates inverse response
and does more harm than good. If it arrives too late, it creates a secondary
disturbance. If the lead-lag gain is too high, the whole response goes in the
opposite direction of the effect of the load upset.

Normally, the feedforward time delay and lag are adjusted to make sure
the feedforward doesn't arrive too early due to a time delay or time lag in
the path of the load upset. The lead is adjusted to compensate for a time
lag in the path of the manipulated variable. The lag is increased as neces-
sary to make sure that noise from the feedforward measurement doesn't
show up as dither in the final element.

Figure 2-30. Effect of Feedforward Timing and Gain

The greatest benefits of feedforward control occur in loops with large time
delays and large time constants because the integrated error and the tim-
ing window are large. Distillation columns are the prime candidates, fol-
lowed by reactors, crystallizers, and evaporators. About the toughest
application in which to get the timing right is liquid pressure control,
because the process time constant is so small. Feedforward control is
essential for relatively fast periodic upsets, although the better solution is
to eliminate the root cause, which is typically another control loop. If the
period of the disturbance is less than twice the ultimate period of the loop,
the feedback controller cannot correct for the upset within the settling time
and may amplify the upset and do more harm than good.

When feedforward control is used, there are some important best practices
to consider that are outlined in Table 2-7.

There are opportunities to adapt the feedforward signal to make it
smarter. Appendix B details the adaptation of a velocity limit for a batch-
to-continuous transition for optimum smoothing of surge tank level con-
trol. Since the feedforward gain or ratio is based on parameters that
change with time and on measurements that have errors, there is an
opportunity to home in and make corrections using an integral-only con-
troller whose controlled variable is the feedback correction and whose out-
put is a trim of the feedforward gain or ratio.
Table 2-7. Best Practices for Feedforward Control


1. Display the feedforward signal, feedforward gain, and feedback correction. For flow
feedforward, display the desired ratio and actual ratio of flows.
2. Make the feedforward gain adjustable by the operator. For flow or speed feedfor-
ward, allow the operator to set the desired ratio.
3. Allow the feedforward signal to be active in manual operation. For flow or speed
feedforward, allow the operator to start up on flow or speed ratio control or to go to
flow or speed ratio control for feedback measurement problems and maintenance.
4. Display any feedforward timing errors, particularly when the feedforward time delay
is too short, and automatically calculate and correct for any known transportation
delays for sheet lines and pipelines.
5. Make sure the transfer from feedforward to feedback control or vice versa is bump-
less.
6. For dead time-dominant processes, such as sheet lines and pipelines, make sure
the feedforward timing is accurate. For flow or speed ratio control, make sure the
manipulated variable response is in unison with the disturbance variable response.

The integral-only controller can slowly optimize the feedforward gain to
account for drift and unknown parameters.
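A sketch of such a trim (the reset time, sign convention, and unit scaling
are assumptions that depend on the loop):

    # Sketch: integral-only trim of the feedforward gain or ratio. The
    # controlled variable is the long-term feedback correction, which is
    # slowly driven toward zero.
    class GainTrim:
        def __init__(self, gain, Ti=3600.0, scale=1.0, dt=1.0):
            self.gain, self.Ti, self.scale, self.dt = gain, Ti, scale, dt

        def update(self, feedback_bias):
            # A persistent positive bias means the feedforward is doing
            # too little, so the gain is raised very slowly (the scale
            # factor normalizes the bias to the ratio units).
            self.gain += self.scale * feedback_bias * self.dt / self.Ti
            return self.gain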

If there are unmeasured load upsets, the feedforward can ask for the
manipulated variable to go in the wrong direction. For example, if the col-
umn temperature makes a sharp turn downward below the set point due
to an unmeasured load upset, an increase in feed that would increase
reflux flow by flow feedforward would make the unmeasured upset
worse. Experienced operators will decrease the reflux flow. Just one of
these situations is enough for the operator to lose confidence in the feed-
forward. An adaptive strategy could compensate for this feedforward mis-
take by looking at trajectories of feedback and feedforward measurements.
It would be easier to do this with model predictive control because a better
feedforward trajectory is available, although a trajectory of the bias correc-
tion would need to be generated to indicate the path of the unmeasured
load upset.

Rules of Thumb
Rule 2.1. — The largest opportunity for final elements is to eliminate stick-slip
and dead band. The effect of slip is worse than stick and stick is worse than
dead band.

Rule 2.2. — The control valve with the best response is a sliding stem control
valve with a diaphragm actuator and a digital positioner. The dead band and
rangeability limits in the variable-speed drive must be relaxed. If the slid-
ing stem valve has high temperature or environment packing, the digital
positioner must be aggressively tuned. If a rotary valve must be used,
make sure it has a splined connection between the disc and actuator shaft
and short large-diameter shaft. For some extremely fast critical applica-
tions, such as polymer pipeline and incinerator pressure control, a vari-
able-speed drive is essential.

Rule 2.3. — The largest opportunity for measurements is the selection and instal-
lation of a sensor for better reproducibility, less noise, and minimal interference.
The reproducibility can be estimated as the repeatability and drift. The
need for reliability and resolution is a given and less of a problem today
than reproducibility and noise.

Rule 2.4. — The best flow measurement is the coriolis flow meter. The main lim-
itations are that the coriolis meter may not be offered in suitable materials
or size, or may not have the temperature rating needed.

Rule 2.5. — The best temperature measurement is a smart transmitter connected
to a 3- or 4-wire RTD sensor installed in a piping elbow. The main limitation is
that the RTD may not be suitable for very high temperatures.

Rule 2.6. — The best level measurement is a radar gauge. The main limitations
are a fluid with a dielectric constant of less than 2.0 and a tall, narrow ves-
sel.

Rule 2.7. — Check the life-cycle cost, including the cost of variability, before
choosing a less expensive control valve or measurement. The hardware cost is
generally a small part of the life-cycle cost.

Rule 2.8. — Use smart transmitters. The improved accuracy and diagnostics
are well worth the extra cost.

Rule 2.9. — Use Fieldbus for major upgrades and new installations. The
reduced cost of commissioning and wiring, the expanded diagnostics, and
improved accuracy from the elimination of A/D error are significant.

Rule 2.10. — Use a closed-loop method if an auto tuner pretest fails or is not safe.
The closed-loop method keeps the loop in auto and includes the effects of
valve stick-slip and dead band.

Rule 2.11. — For a process with a large time constant, use more gain and rate
action to overdrive the manipulated variable, to decrease rise time, peak error, and
return time. If the measurement is smooth, you can use rate action to
reduce overshoot.

Rule 2.12. — For dead time-dominant loops, significantly decrease the integral
(reset) time setting. It can be as small as 1/4 of the time delay or 1/8 of the
ultimate period for a pure dead time process.

Rule 2.13. — For integrating and non-self-regulating processes, greatly increase
the integral (reset) time. It may need to be as much as 10 times the ultimate
period for slow integrators and runaway reactors.

Rule 2.14. — Go for the largest and least expensive ways to reduce loop dead
time. The automation system is the largest source of time delay for flow,
level, and pressure loops.

Rule 2.15. — Use cascade control to correct for secondary loop disturbances
before they affect a primary loop, or to linearize the manipulated variable for feed­
forward or ratio control. If the secondary loop is not 5 times faster than the
primary loop, the scan time or filter time must be increased or the gain
decreased for the primary controller to slow down the primary loop.

Rule 2.16. — Use feedforward control for loops with a large time delay, time lag,
or periodic disturbance, to provide a preemptive correction for load upsets. The
timing and gain must both be right and the feedforward signal must not
arrive too soon.

Theory

Process Time Constants and Gains


If the differential equations for a material or energy balance are set up in
the following form, the process gain (Kp) and time constant (τp) terms, and
the relative effects of process design and operating conditions, can be
identified for an input (Xi), which is a manipulated or disturbance variable,
and an output (Y), which is the process variable.

For a negative-feedback or self-regulating process, the differential equa-
tion is: [2.1]

τp*dY/dt = Kp*Xi - Y (negative feedback) (2-5a)


For a positive feedback or runaway process, the differential equation is:
[2.1]

τp*dY/dt = Kp*Xi + Y (positive feedback) (2-5b)

If there is no process feedback, we have an integrator gain Ki. [2.1]



dY/dt = Ki*Xi (2-6)


The general equation for energy or material balance with generation and
consumption terms for a reaction is:

rate of accumulation = input - output + generation - consumption (2-7)

To illustrate the methodology, the differential equations will be set up and
converted to the proper form for a reactor and its coolant system. The rate
of accumulation of energy in a reactor is the sensible heat of the flow in
minus the flow out of the reactor plus the heat of reaction and minus the
heat loss to the jacket [2.1] [2.20].

Mr*Cr*dTr/dt = Ff*Cf*Tf - Fo*Cr*Tr + (dQr/dTr)*Tr - U*A*(Tr - Tj) (2-8)

If we combine terms, we can isolate the coefficient of the process feedback.

Mr*Cr*dTr/dt = Ff*Cf*Tf - (Fo*Cr + U*A - dQr/dTr)*Tr + U*A*Tj (2-9)

If we divide through by the coefficient of Tr we get the process time
constant and gain for reactor temperature control.

τp = (Mr*Cr) / (Fo*Cr + U*A - dQr/dTr) (2-10a)

If dQr/dTr < Fo*Cr + U*A, we have a negative-feedback time constant; if
dQr/dTr > Fo*Cr + U*A, a positive-feedback time constant. The time
constant is not constant but is proportional to reaction mass and inversely
proportional to the outlet flow and the product of the heat transfer
coefficient and area for the jacket.

For a batch reactor (no outlet flow) with a negligible heat release (dQr/dTr
= 0), we have the general form of the equation to approximate the thermal
time lag of a closed volume. It can also be used for a thermowell by substi-
tution of the proper parameter values.

τp = (Mr*Cr) / (U*A) (2-10b)

The process gain depends upon the input under consideration. If the
manipulated or disturbance variable is feed flow:

Kp = (Cf*Tf) / (Fo*Cr + U*A - dQr/dTr) (2-11)

If the manipulated or disturbance variable is jacket temperature:

Kp = (U*A) / (Fo*Cr + U*A - dQr/dTr) (2-12)



In either case, we see that the process gain is not constant and is inversely
proportional to the outlet flow and the product of the heat transfer
coefficient and area for the jacket.
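Plugging hypothetical numbers (consistent units assumed) into Equations
2-10a and 2-12 shows the effect of flow:

    # Equations 2-10a and 2-12 with hypothetical values.
    def reactor_dynamics(Mr, Cr, Fo, U, A, dQr_dTr):
        denom = Fo * Cr + U * A - dQr_dTr
        return (Mr * Cr) / denom, (U * A) / denom  # tau_p, Kp (jacket input)

    print(reactor_dynamics(Mr=10000, Cr=0.5, Fo=100, U=75, A=40, dQr_dTr=500))
    print(reactor_dynamics(Mr=10000, Cr=0.5, Fo=200, U=75, A=40, dQr_dTr=500))
    # Doubling the outlet flow lowers both the time constant and the gain.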

For a jacket coolant system with recirculation where there is equal volume
displacement of coolant return flow by coolant makeup flow, so that jacket
flow is constant, we have the following equation for the temperature of a
mixture of makeup and recirculation flow:

Tm = To - (To - Tc)*(Fc/Fj) (2-13)

The process gain (Kp) for the control of the jacket inlet temperature (Ti) by
manipulation of coolant makeup flow (Fc) in a secondary loop is the
partial derivative dTm/dFc:

dTm/dFc = (To - Tc)/Fj (2-14)

We see that the process gain is proportional to the temperature difference
between the jacket outlet and the makeup coolant and is inversely
proportional to the jacket flow. The process time constant for this
secondary loop is negligible and the largest time constant becomes the
thermal time lag of the thermowell.

If we were to choose the outlet jacket temperature as the controlled vari-
able of the secondary loop, and the jacket is sufficiently mixed to do an
energy balance on the entire coolant mass in the jacket, we would end up
with:

Mj*Cj*dTj/dt = Fj*Cj*Ti - Fj*Cj*Tj + U*A*(Tr - Tj) (2-15)

If we combine terms, and divide through by the coefficient of the process
feedback, we can identify the process time constant and gain for jacket
temperature control.

τp = (Mj*Cj) / (Fj*Cj + U*A) (2-16)

Kp = [(Fj*Cj) / (Fj*Cj + U*A)] * dTi/dFc (2-17)

If we substitute in Equation 2-14, we end up with the process gain
proportional to the temperature difference and inversely proportional to
the jacket flow.

Kp = [(To - Tc)*Cj] / (Fj*Cj + U*A) (2-18)



For a total mass balance, the rate of accumulation of mass is equal to the
mass flow in minus the mass flow out:

ρ*Ar*dL/dt = Fi - Fo (2-19)

Here we see that the integrator gain (Ki) for the level response is inversely
proportional to the product of the fluid density (ρ) and the cross sectional
area of the reactor (Ar).

To derive the material balance for component A for a first-order reaction,
the rate of accumulation of the mass of component A is equal to the mass
flow in minus the mass flow out of component A minus the consumption
of the mass of component A by the reaction. The concentration term (CA) is
the mass fraction of component A.

Mr*dCAo/dt = CAf*Ff - CAo*Fo - Mr*k*CAo (2-20)

If we combine terms, and divide through by the coefficient of the process
feedback, we can identify the process time constant and gain for reactor
concentration control.

τp = Mr / (Fo + Mr*k) (2-21)

Kp = CAf / (Fo + Mr*k) (2-22)

Similar to what we saw for reactor temperature control, both the process
time constant and gain are inversely proportional to the outlet flow and
the reaction rate.

Process Time Delay


For sheet lines, webs, fibers, and conveyors, the time delay is simply the
distance between the manipulated variable and the process variable
divided by the speed. For example, the transportation delay for average
thickness control of a sheet is the distance between the die bolts and the
thickness sensor divided by the sheet speed.

For plug flow, the entire residence time is a transportation delay. The time
delay is the volume divided by the flow. The flow in pipelines, sample
lines, static mixers, coils, and heat exchanger tubes can be considered to be
essentially plug flow.

For perfect mixing, the entire residence time is a process time constant. In
well-mixed volumes of proper geometry and with baffles, the portion of
the residence time that shows up as time delay can be estimated as half the
turnover time, as shown in Equations 2-23 and 2-24, and most of the resi-
dence time becomes a process time constant [2.14].

τd = (0.5*ρ*V)/(Fo + Fa) (2-23)


τp = [(ρ*V)/Fo] - τd (2-24)
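
A minimal Python sketch, again with assumed values, evaluates Equations 2-23 and 2-24 for a well-mixed volume:

    # Hypothetical evaluation of Equations 2-23 and 2-24; all values assumed.
    rho = 60.0      # liquid density (lb/ft3)
    V = 500.0       # well-mixed volume (ft3)
    Fo = 40000.0    # outlet flow (lb/hr)
    Fa = 200000.0   # agitator pumping rate (lb/hr)

    tau_d = (0.5 * rho * V) / (Fo + Fa)   # Eq. 2-23: half the turnover time (hr)
    tau_p = (rho * V) / Fo - tau_d        # Eq. 2-24: rest of the residence time (hr)
    print(f"time delay = {tau_d:.3f} hr, time constant = {tau_p:.3f} hr")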

Lastly, time constants in series create time delay. When the flow of
material or energy can reverse direction depending upon the sign of the
driving force, the time constants are interactive. Conductive heat transfer,
gas flow in pipelines, and the tray response in columns all have interactive
time constants. As the number of equal interactive and non-interactive
time constants in series increases, the time delay increases, from 0.02 to
0.16 and from 0.14 to 0.88 times the sum of the time constants, respectively.
The portion of the sum not converted to time delay is the process time constant
for a first-order-plus-dead-time approximation. Thus, interactive time
constants do not create much time delay, and the time delay-to-time
constant ratio remains relatively favorable. A large number of non-interactive
time constants creates an extremely difficult-to-control, dead-time-
dominant system.

Ultimate Gain and Period


The ultimate gain (Ku) is the controller gain that causes equal amplitude
oscillations. It is equal to the inverse of the product of the open loop
(static) gain (Ko) and the amplitude ratio at a phase shift of -180 degrees
(AR180) [2.20].

Ku = 1/(Ko*AR180) (2-25)

The amplitude ratio for a sine wave applied to a self-regulating process
with an open loop negative feedback time constant (τo) is: [2.20]

AR = 1/[1 + (τo*ωn)^2]^0.5 (2-26)

If Equation 2-26 is substituted into Equation 2-25, we have the ultimate
gain as a function of the open loop negative feedback time constant (τo)
and the natural frequency (ωn).

Ku = [1 + (τo*ωn)^2]^0.5/Ko (2-27)

Since the natural frequency in radians per hour (ωn) is 2π divided by the
ultimate period (Tu), we can express the ultimate gain (Ku) as a function of
the ultimate period.

Ku = [1 + (2*π*τo/Tu)^2]^0.5/Ko (2-28)

For a time constant much larger than the time delay (τo >> τd), the ultimate
gain is:

Ku ≈ (2*π*τo)/(Ko*Tu) (2-29)

Since for this case the ultimate period is about 4 times the time delay (Tu ≈
4*τd), the ultimate gain can be simplified to a ratio of the time constant to
time delay.

Ku ≈ (π/2)*τo/(Ko*τd) (2-30)

Since the controller gain is a factor of the ultimate gain (Kc = 0.25*Ku), the
controller gain is proportional to the time constant and inversely
proportional to the time delay and the open loop gain.

Kc ≈ 0.4*τo/(Ko*τd) (2-31)

If the time delay is much larger than the time constant (τd >> τo), it can be
shown that Equation 2-27 reduces to the ultimate gain being the inverse of
the open loop gain. This relationship can also be realized from the
amplitude ratio being 1 for a pure time delay.

Ku = 1/Ko (2-32)

The phase shift (φ) from a time delay is: [2.20]

φ = -360*fn*τd (2-33)



If we substitute in the relationship between natural frequency in cycles per
hour and the ultimate period (fn = 1/Tu), we have:

Tu = (-360/φ)*τd (2-34)

For a time constant much larger than the time delay there is a -90 degree
phase shift from the time constant, which leaves only -90 degrees of phase
shift (φ) needed from the time delay to reach the -180 degree total phase
shift; the ultimate period becomes simply 4 times the time delay.

For τo >> τd:
Tu = 4*τd (2-35)

If, on the other hand, the time constant is so much smaller than the time
delay that essentially all of the -180 degree phase shift (φ) comes from the
time delay, the ultimate period approaches 2 times the time delay.

For τd >> τo:
Tu = 2*τd (2-36)

The following curve fit shows how the ultimate period changes from 2 to 4
times the time delay as a function of the relative sizes of the time constant
and time delay.

(2-37)
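
A minimal Python sketch, assuming hypothetical loop parameters, chains Equations 2-35 and 2-27 for a lag-dominant loop; the result agrees with the simplified Equation 2-30.

    import math

    # Hypothetical evaluation of Eqs. 2-35 and 2-27 for a lag-dominant loop;
    # all values assumed.
    Ko = 1.0       # open loop static gain
    tau_o = 10.0   # open loop time constant (hr)
    tau_d = 0.5    # time delay (hr)

    Tu = 4.0 * tau_d                                # Eq. 2-35: ultimate period (hr)
    wn = 2.0 * math.pi / Tu                         # natural frequency (radians/hr)
    Ku = math.sqrt(1.0 + (tau_o * wn) ** 2) / Ko    # Eq. 2-27: ultimate gain
    Kc = 0.25 * Ku                                  # controller gain as a factor of Ku
    print(f"Tu = {Tu:.2f} hr, Ku = {Ku:.1f}, Kc = {Kc:.1f}")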

Peak and Integrated Error


Since a controller can neither sense nor compensate for an upset until after
one dead time, the minimum peak error (Ex) is the exponential response
after one dead time.

Ex = [1 - e^(-τd/τo)]*Eo (2-38)

If the time delay is less than the time constant, we can simplify the
relationship.

For τd < τo:

Ex ≈ [τd/(τd + τo)]*Eo (2-39)

If the time delay is much less than the time constant, we end up with the
ratio of the time delay to the time constant.

For τd << τo:

Ex ≈ (τd/τo)*Eo (2-40)

The minimum integrated error (Ei) is approximately the peak error (Ex)
multiplied by the time delay and is thus proportional to the time delay
squared.

For τd << τo:

Ei ≈ Ex*τd ≈ (τd^2/τo)*Eo (2-41)

If we use Equation 2-31, we see the peak error (Ex) is inversely
proportional to the controller gain (Kc):

For τd << τo:

Ex ≈ 0.4*Eo/(Kc*Ko) (2-42)

If we further realize that the integral time (Ti) is a factor of the ultimate
period that is a multiple of the time delay, we have a relationship where
the integrated error (Ei) is proportional to the integral time and inversely
proportional to the controller gain.

For τd << τo:

Ei ≈ (Ti*Eo)/(Kc*Ko) (2-43)

For dead time-dominant loops, the peak error approaches the open loop
error (Eo), and the integrated error approaches the product of the open
loop error and the integral time.

For τd >> τo:

Ex ≈ Eo and Ei ≈ Eo*Ti (2-44)
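
A minimal Python sketch with assumed values evaluates Equation 2-38 and the lag-dominant approximations given above:

    import math

    # Hypothetical evaluation of Eq. 2-38 and the lag-dominant
    # approximations (Eqs. 2-40 and 2-41); all values assumed.
    Eo = 10.0      # open loop error (e.u.)
    tau_d = 0.5    # time delay (hr)
    tau_o = 10.0   # open loop time constant (hr)

    Ex = (1.0 - math.exp(-tau_d / tau_o)) * Eo   # Eq. 2-38: minimum peak error
    Ex_lag = (tau_d / tau_o) * Eo                # Eq. 2-40: for tau_d << tau_o
    Ei = Ex_lag * tau_d                          # Eq. 2-41: minimum integrated error
    print(f"Ex = {Ex:.3f} e.u., approx = {Ex_lag:.3f} e.u., Ei = {Ei:.3f} e.u.*hr")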

Feedforward Control
The equations for feedforward control are derived from the steady-state
material or energy balance at the point of control. For a heat exchanger
where the bypass is throttled for temperature control, the feedforward
equation for the hot bypass flow (Fhb) is: [2.19]

Fhb = [(Tsp - Tho)/(Thi - Tho)]*Fhi (2-45)

Similarly, a steady state energy balance can be used to develop the
following feedforward equation for the cold inlet flow (Fd) for a heat
exchanger without a bypass. [2.19]

Fd = [Fho*Ch*(Thi - Tsp)]/[Cc*(Tco - Tci)] (2-46)
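
A minimal Python sketch, using assumed exchanger conditions, evaluates Equation 2-46:

    # Hypothetical evaluation of Equation 2-46; all values assumed.
    Fho = 50000.0  # hot fluid outlet flow (lb/hr)
    Ch = 0.6       # heat capacity of hot fluid (btu/lb*degF)
    Cc = 1.0       # heat capacity of cold fluid (btu/lb*degF)
    Thi, Tsp = 300.0, 150.0   # hot inlet temperature and outlet set point (degF)
    Tci, Tco = 70.0, 180.0    # cold inlet and outlet temperatures (degF)

    # Eq. 2-46: cold flow needed to absorb the heat given up by the hot fluid
    Fd = (Fho * Ch * (Thi - Tsp)) / (Cc * (Tco - Tci))
    print(f"feedforward cold inlet flow = {Fd:,.0f} lb/hr")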

Dead Time from Valve Dead Band


The time delay (τdv) from a controller output that must traverse the
dead band after a change in direction is simply the valve dead band (DB)
divided by the rate of change of the controller output (ΔCO/Δt): [2.19]

τdv = DB/(ΔCO/Δt) (2-47)

The rate of change of controller output depends upon the controller tuning
and the error. If we consider the effect of just controller gain (Kc) for a loop
dominated by a large time lag so that the amount of reset action used is
small [2.19]:

ΔCO / Δt = Kc * (ΔCV / Δt) (2-48)

If we use Equation 2-31 for the controller gain with a detuning factor (Kx)
and realize that the rate of change of the controlled variable (ΔCV/Δt) is
simply the pseudo-integrator gain (Ki = Ko/τo) for the large open loop
time lag (τo) multiplied by the change in the actual valve position (ΔAVP),
we have: [2.19]

(2-49)

If we cancel out the integrator gains and approximate the change in actual
valve position on the average as the controller output minus one half of
the dead band (ΔAVP = ΔCO - 0.5*DB), we end up with an expression to
estimate the valve time delay from the valve dead band and the observed
loop time delay for a step change in controller output. [2.19]

(2-50)
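
A minimal Python sketch with assumed values combines Equations 2-47 and 2-48 to estimate the time delay created by the dead band:

    # Hypothetical evaluation of Equations 2-47 and 2-48; all values assumed.
    DB = 0.5        # valve dead band (%)
    Kc = 2.0        # controller gain
    dCV_dt = 0.1    # rate of change of the controlled variable (%/min)

    dCO_dt = Kc * dCV_dt   # Eq. 2-48: output rate for a lag-dominated loop (%/min)
    tau_dv = DB / dCO_dt   # Eq. 2-47: time to traverse the dead band (min)
    print(f"valve dead-band time delay = {tau_dv:.1f} min")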

Nomenclature
A = heat transfer area (ft2)
Ar = cross sectional area of reactor (ft2)
AVP = actual valve position (%)
AR180 = amplitude ratio at a phase shift of -180 degrees
CAo = mass fraction of component A in reactor outlet
CAf = mass fraction of component A in reactor feed
Cc = heat capacity of cold fluid (btu/lb*°F)
Cf = heat capacity of feed to reactor (btu/lb*°F)
Ch = heat capacity of hot fluid (btu/lb*°F)
Cr = heat capacity of liquid in reactor (btu/lb*°F)
Cj = heat capacity of liquid in jacket (btu/lb*°F)
CO = controller output (%)
CV = controlled variable (%)
DB = dead band (%)
Ei = integrated error (e.u.)
Eo = open loop error (e.u.)
Ex = peak error (e.u.)
fn = natural frequency (cycles/hr)
Fa = agitator pumping rate (lb/hr)
Fc = coolant makeup flow (lb/hr)
Fd = cold fluid inlet flow to exchanger (lb/hr)
Ff = feed flow to reactor (lb/hr)
Fhi = hot fluid inlet flow to exchanger (lb/hr)
Fho = hot fluid outlet flow from exchanger (lb/hr)
Fhb = hot fluid bypass flow around exchanger (lb/hr)
Fj = jacket coolant flow (lb/hr)
Fo = outlet flow from reactor (lb/hr)
k = reaction rate constant (1/hr)
Kc = controller gain
Kp = process gain (e.u./e.u.)
Ki = integrator gain (e.u./hr)
Ko = open loop static gain
Ku = ultimate gain
L = liquid level in reactor (ft)
Mr = mass of liquid in reactor (lbs)
Mj = mass of liquid in jacket (lbs)
MV = manipulated variable (e.u.)
PV = process variable (e.u.)
ρ = liquid density in reactor (lb/ft3)
φ = phase shift from time delay (degrees)
τd = time delay (hr)
τdm = measurement time delay (hr)
τdo = observed time delay from a step test (hr)
τdp = process time delay (hr)
τdv = valve time delay (hr)
τp = process time constant (hr)
τo = open loop time constant (hr)
ωn = natural frequency (radians/hr)
Tr = temperature of liquid in reactor (°F)
Tc = temperature of coolant supply (°F)
Tci = temperature of cold fluid at inlet to exchanger (°F)
Tco = temperature of cold fluid at outlet of exchanger (°F)
Tf = temperature of feed to reactor (°F)
Thi = temperature of hot fluid at inlet to exchanger (°F)
Tho = temperature of hot fluid at outlet of exchanger (°F)
Tj = temperature of liquid in jacket (°F)
Ti = integral time (hr/repeat)
Tm = temperature of coolant mixture at jacket inlet (°F)
To = temperature of coolant at jacket outlet (°F)
Tsp = temperature set point of hot fluid from the exchanger (°F)
Tu = ultimate period (hr)
Qr = heat release from reaction (btu/hr)
U = heat-transfer coefficient (btu/hr*°F*ft2)
V = volume of reactor liquid (ft3)
Xi = process input i (manipulated or disturbance variable) (e.u.)
Y = process output (process variable) (e.u.)

References
1. McMillan, Gregory K., Tuning and Control Loop Performance, 3rd Edition, ISA,
1994.
2. Ruel, Michael, "Stiction: The Hidden Menace," Control, November 2000, pp.
71-76.
3. Ruel, Michael, "Control Valve Health Certificate," Chemical Engineering,
November 2001, pp. 62-65.
4. McMillan, Gregory, "Valve Response Control, Improvement," Encyclopedia of
Chemical Processing and Design, Vol. 61, 1997, pp. 99-113.
5. McMillan, Gregory, Good Tuning: A Pocket Guide, ISA, 2000.
6. McMillan, Gregory, "A New Era for Model Predictive Control," Control, May
2000, pp. 45-50.
7. McMillan, Gregory, Mertz, Glenn E., and Trevathan, Vernon L., "Trouble Free
Instrumentation," Chemical Engineering, November 1998, pp. 82-94.
8. Shinskey, Francis G., "Process Control: As Taught versus As Practiced,"
Advances in Instrumentation and Control, November 2001.
9. Coughran, Mark T., "Valves: Testing for Peak Performance," InTech,
November 1994, pp. 58-61.
10. McMillan, Gregory K. and Weiner, Stanley, et al., How to Become an Instrument
Engineer - Part 1.523, ISA, 1994.
11. Mamzic, C.L., "Improving the Dynamics of Pneumatic Positioners," ISA
Journal, 1958, pp. 38-43.
12. Arant, James B., "Control Valve Notes and Positioner Myths," Control,
November 1993, pp. 34-40.
13. Ciancarelli, Joseph D., "Sanitary Control Valves for the Biotech Industry,"
InTech, May 1991.
14. McMillan, Gregory K., pH Measurement and Control, 2nd Edition, ISA, 1994.
15. McMillan, Gregory K., "Pressure Control; Without Dead Time I might be Out
of a Job," A Funny Thing Happened on the Way to the Control Room, ISA, 1989.
16. Battikha, N.E., The Condensed Handbook of Measurement, ISA, 1997, pp 27-31.
17. Vujicic, Michael, and Ortengren, Mike, "Laser on Level," InTech, March 2000,
pp 56-58.
18. McMillan, Gregory K., editor, Process/Industrial Instruments and Controls
Handbook, 5th Edition, McGraw Hill, 1999, pp. 6.23-6.55 and 10.107-10.134.
19. McMillan, Gregory K., Advanced Temperature Measurement and Control, ISA,
1999.
20. Hariott, Peter, Process Control, McGraw Hill, 1964.
3
APC Pathways

Practice

Overview
The objectives for this chapter are to help the user identify advanced pro-
cess control (APC) opportunities, estimate benefits, select the best technol-
ogy, sustain the solution, and track the benefits. Industry is driven by cost
and benefits analysis and is generally not interested in a great technology
looking for an application. This chapter combines the methods that have
been used extensively to identify opportunities for APC in the chemical
and the pulp and paper industries [3.1] [3.2] [3.3] [3.4] [3.5].

There is an interaction between steady-state design and controllability. As
the thermodynamic efficiency is increased, the controllability is decreased
[3.6]. The driving forces, such as pressure drops and temperature differ-
ences, available for manipulation by the control system are minimized.
The result is an increased variability and an optimum that is more difficult
to hold. Also, the existing plants in the petroleum and chemical industry
are pushed to run at twice the original plant nameplate capacity. The result
is shifting bottlenecks and unforeseen equipment constraints and operat-
ing conditions.

Yet, process design as taught and practiced focuses on steady-state opera-
tion. Engineering companies design exclusively for the steady state and
have essentially no experience with variability or dynamics. The emphasis
on minimizing changes in design to minimize risk and maintain the bud-
get and schedule often results in a plant built just like the last one. Not sur-
prisingly, plant variability audits show that most plants have the same
problems. On top of this, people in the plants are resistant to change [3.5].

The strongest advocates of maintaining the status quo are often the most
experienced people in operations. For batch operations this corresponds to
fixed feed rates, hold times, and valve positions.

Process control deals with change. If there were no variations in raw mate-
rials, utilities, process and ambient conditions, equipment performance,
production rates, or desired operating points, there would be little need
for control loops. In actual plant operations nothing is completely con-
stant. Thus, most of the opportunities for process control deal with
change.

A control loop moves variability in a process variable and a set point to a
manipulated variable. Advanced control also initiates change to optimize
plant operation. The knowledge of the root cause and route of undesirable
change and the ability to transfer and generate change is essential to get
the greatest benefits from advanced control.

Capital expenditures for most plants have come under increased scrutiny.
In order to make the improvements that provide the biggest bang for the
buck, the engineer needs to know what is important despite an over-
whelming number of choices of control technologies and an explosion of
information and data. This chapter provides a perspective on how to
choose the most appropriate technology.

The time frame, pattern, and sequence of changes are important for track-
ing down the sources and assessing the impact of variability. The faster the
change, the more difficult it is for a controller to correct. However, changes
with a short period are more effectively attenuated by process variable fil-
ters and back-mixed volumes. Table 3-1 summarizes the most common
sources of change, and their relative speeds.

Changes in raw materials, and in recycle, catalyst, and equipment perfor-
mance tend to result in a shift in actual and desired operating conditions
that occurs over a period of hours to days. Changes in ambient conditions
tend to be periodic and can be moderately slow (day to night), fast (rain
storms and fronts) and very slow (season to season). Sequences, batch
operations, and parallel trains starting up and shutting down tend to
cause step changes but with a long period, whereas valve stick-slip and
on-off control cause square waves that range from fast to moderately slow.
Split range points are discontinuous operating points as one control valve
starts to open as another closes or goes wide open. The stick-slip and
change in valve gain as a controller output crosses a split range point often
initiate a limit cycle back and forth across the split range point even for the
best of control valves.

Noise and repeatability errors in a measurement are fast and are passed on
to the manipulated variable. Analyzers are a notable exception as they are
a source of slow frequency variability from a long sample transportation
delay and processing (cycle) time.

Controllers with inappropriate tuning or feedforward timing and interac-
tions will cause oscillations with a period proportional to the loop time
delay. If the period of an upset approaches the period of a control loop, the
controller will amplify the oscillation and the control loop does more harm
than good. One of the largest and most frequent sources of variability is a
level controller on a surge volume or feed tank that passes on too much
variability to a manipulated flow instead of allowing the level to swing.

Market changes are relatively slow, but the implementation is fast. Once a
rate change is decided upon, operators tend to make large step changes in
feed set points that are not simultaneous or coordinated. Transitions are
accomplished even more quickly to minimize the cross contamination of
products. As inventories decrease, market volatility increases, and the
demand for more flexible manufacturing increases, the frequency of rate
and product changes will increase to the point where steady-state opera-
tion may be a distant memory.

Operators tend to have favorite operating points or "sweet spots." Shift
changes are often a major point of disruption. Operators also tend to get
bored and actually love being in charge of inventory control where they
have to change flow set points to keep woefully undersized surge or feed
tank volumes from overflowing or running dry. Unfortunately, the set
point changes made by operators are fast, inconsistent, impatient, and do
not take into account the effect of time delays, time lags, and interactions.

Figure 3-1 shows the pyramid of technologies for advanced control. The
base is the solid foundation of basic process control discussed in Chapter
2. The next layers of loop and process performance monitoring provide
the tools to quantify the opportunities, sustain the performance, and track
the benefits of APC. Loop monitors can identify when and where an auto
tuner needs to be run again. Rules can be added on top of the
performance-monitoring layers to provide better explanations and
automatic corrections. Expert systems can be developed to deal with
abnormal situation management, such as equipment and automation
system failures and degradation.

Next, a layer of property estimators should be added to provide faster and
more reliable measurements than online or lab analysis. A plant should
take full advantage of opportunities for estimators before the implementa-
tion of higher levels of advanced control so that the impact on composition
and quality can be identified.
Table 3-1. Sources and Speed of Plant Upsets


1. Process
a. Raw Materials (slow)
b. Recycle (slow)
c. Catalyst (slow)
d. Steam (fast)
e. Coolant (fast)
2. Equipment
a. Fouling (slow)
b. Failures (fast)
3. Environmental
a. Day to night (slow)
b. Rain storms and fronts (fast)
c. Season to season (slow)
4. Market
a. Rate changes (fast)
b. Product changes (fast)
5. Operators
a. Manual operation (fast)
b. Sweet spots (fast)
c. Inventory control (fast)
6. Discrete
a. On-off control (fast)
b. Sequences (fast)
c. Batch operations (fast)
d. Startup and shutdown (fast)
e. Interlocks (fast)
7. Valves
a. Stick-slip (fast)
b. Split range (fast)
c. Failures (fast)
8. Measurements
a. Noise (fast)
b. Reproducibility (fast)
c. Failures (fast)
9. Controllers
a. Tuning (fast)
b. Timing (fast)
c. Interaction (fast)
d. Failures (fast)

Similarly, a layer of model predictive control
should be running before the layers of optimization are added so that
moves to operating points consider the projected effects on constraints.
Constraint pushers can easily be added to perform a simple optimization
of single variables, such as the maximization of feed flow or the minimiza-
tion of a reflux or steam flow. Linear programs can be set up to find the
optimum intersection of constraints for a more general optimum.

Finally, real-time optimization models must be in place and reconciled or
adapted to match the plant to find complex optimums that are highly
dependent upon process relationships and to make sure a business solu-
tion is feasible before tactical schedulers are put online to reduce invento-
ries and automatically adjust to market demands. For maximum
sustainable benefits, the pyramid should be built from the bottom up.

Figure 3-1. Pyramid of Advanced Control Technologies

The advanced control technology with the best track record to date for
increasing plant efficiency and capacity is the model predictive controller
[3.7]. The built-in knowledge of the process dynamics and constraints
from extensive plant testing, the ability to readily control the amount of
variability transferred to the manipulated variables by move suppression,
plus the ability to readily add some optimization, are the features respon-
sible for its success. The objective of any advanced control program should
be to get at least to the level of model predictive control with a constraint
pusher for a simple optimization of a single variable. It is expected that as
process modeling tools become easier and less expensive to use and main-
tain, real-time optimization will start to deliver more consistent benefits
and applications that vie for the top of the pyramid will be more common
[3.8].

Real-time optimization (RTO) technology has proven itself on a technical
basis and can deliver benefits often greater than the MPC layer. However,
many RTO applications are not delivering the promised benefits, mostly
because of the requirements for highly skilled and significant mainte-
nance, a very real challenge given today's levels of resources in manufac-
turing plants. No matter the promise of a technology, if it cannot be
supported and maintained within the resource limitations of a plant, it
will fall into disuse and eventually be shelved. Understanding and accept-
ing the resource requirements of any technology is a key element in APC
technology selection.

Opportunity Assessment
Figures 3-2a and 3-2b show how reducing the variability in a process vari-
able associated with a constraint translates to the opportunity to move the
variable closer to the constraint. The traditional way of depicting the
improvement is to show the current peak at the optimum location for the
given degree of variability, as illustrated by Figure 3-2a. The benefits are
then based on how much the peak can be moved if the standard deviation
is reduced and still provide the same small number of violations of the
constraint. In practice, the peak is typically set by the operator much more
conservatively so that even if the variability is not reduced, just a better
understanding and automation of the positioning of the peak relative to
the constraint affords an additional opportunity, as shown in Figure 3-2b.
Manual set points are chosen based on opinions and war stories rather
than data. The operator is usually the largest and most active constraint to
optimum operation.

Figure 3-2a. Effect of Variability on Allowable Proximity of Process to Constraint

The variability in a process variable must be translated to an economic
variability, such as product profit, utility cost, or raw material cost. This
can be done by calculation or by a cross-correlation analysis to identify the
change in the economic variable for a change in the process variable.

Figure 3-2b. Effect of Operator Conservatism on Proximity of Process to Constraint

The tools and procedures for identification of this economic gain are similar to
those used for identification of the process gain for property estimators
(Chapter 8). The process must start with an online measurement or calcu-
lation of capacity, yield, and utilities to provide the economic variables. In
some cases, missing or inaccurate measurements will be found in the pro-
cess of putting the economic variables online. It also provides a head start
for true performance monitoring (Chapter 4) and real-time optimization
(Chapter 10).

Any opportunity assessment (OA) must start with an opportunity sizing,
to determine the gaps between average and peak performance for the last
most representative year of operation based on discussions with opera-
tions and a detailed review of the cost sheets, data historian, and simula-
tion results. The peak performance can be the best batch or period of
operation from the cost sheet or data historian or the best operating condi-
tions or cycle time found by simulation or Multivariate Principal Compo-
nent Analysis (PCA), as discussed in Chapter 4. The advantage of a high-
fidelity process simulation is that the model can go where the plant has
not traditionally operated to find and explore new process opportunities.
Of course, the simulation must be validated against present and proposed
plant operation. The process simulation should be put online to compare
the predicted versus the measured values of key operating variables.
Chapter 10 will show how to automatically adapt the parameters of an
online model to match plant operation.

The gap can also be quantified by a statistical approach. The fractional
reduction in the standard deviation can be approximated based on the cur-
rent standard deviation and the minimum possible standard deviation
from the process control improvement. This reduction in the standard
deviation is multiplied by amplitude ratios and process gain factors and
then translated to a shift in the mean to a more optimum operating point,
or a decrease in the fraction of data points that exceed constraints or occur
at less optimum conditions. The shift in the mean or the decrease in the
fraction of deviant data points is multiplied by a production rate and a
conversion and economic gain factor to establish a theoretical gap for the
opportunity sizing.

After confirmation of the opportunity sizing, a detailed review of the pro-
cess and control systems is done over a period of two days with all ques-
tions oriented toward improvements that could eliminate part of the
identified gaps. The list of improvements generated from the questions
and ensuing discussions is compiled. The OA concludes with the assign-
ment of a percentage of gap elimination to each improvement and a work
plan.

Some questions are offered to help an OA to find improvements in a basic
control system. Questions (1) through (20) ascertain how important it is to
reduce variability in an operating limit (constraint) and what process vari-
ables are involved. The questions generally apply to both batch and con-
tinuous unit operations. Appendix A presents the more extensive list of
questions that were used by Monsanto and Solutia for opportunity assess-
ments in the last decade in all of the major processes, OAs that led to pro-
cess control improvements worth $60 million a year in benefits. ICI has
cited 2% to 6% of operating costs as the benefits achieved through
improved process control. Besides the cost savings, ICI is convinced it has
increased the safety and reduced the environmental impact of its plants
[3.9].

1. Could the yield in a reactor be increased by operation closer to a
minimum constraint for excess reactants?
2. Could steam flow to an evaporator or a column be decreased by
operation closer to a product quality constraint?
3. Could steam or fuel flow to a dryer be decreased by operation
closer to a high moisture constraint or low differential temperature
constraint?
4. Could fuel flow to a calciner or kiln be reduced by operation closer
to a high pressure, low oxygen, or low differential temperature
constraint?
5. Could feed rates or batch sizes for reactors, evaporators, and
crystallizers be increased by operation closer to high level and
pressure constraints for vent system loading and entrainment?
pressure constraints for vent system loading and entrainment?
6. Could feed rates or batch sizes for fermentors be increased by
operation closer to the high level constraint for entrainment and
CO2 stripping rate?
7. Could feed rates or batch cycle times for fermentors be reduced by
operation closer to the high agitation or pressure limit associated
with excessive cell fracture and CO2 solubility, respectively?
8. Could feed rates be increased (residence time decreased) or batch
cycle times reduced by operation closer to the minimum
conversion time constraint?
9. Could feed rates to hydroclones be increased by an operation
closer to the low outlet pressure limits?
10. Could feed rates or batch sizes to centrifuges be increased by
operation closer to the high basket loading limit?
11. Could feed rates to batch and continuous reactors, evaporators,
and crystallizers be increased by operation closer to a utility or
heat transfer surface constraint?
12. Could reaction rates be increased by operation closer to a high
temperature or pressure magnitude or rate-of-change constraint
for quality or safety?
13. Could purge rates and yield loss be reduced by operation closer to
the high limit on impurities or inerts?
14. Could feed rates, run times, or number of batches between the
cleanout of crystallizers be increased by operation closer to the
mixing and high differential temperature constraints for the rate of
formation of fines and coating of heat-transfer surfaces?
15. Could the reagent use to a neutralizer be reduced by operation
closer to a low or high pH constraint?
16. Could fuel be reduced to a boiler or furnace by operation closer to
a low oxygen constraint?
17. Could energy be reduced to compressors, refrigeration units, and
heaters by the operation of feed or temperature control valves
closer to the high position constraint associated with the loss of
valve sensitivity and range of operation?
18. Could the waste of reagent from the cross neutralization of acids
and bases, or the waste of energy from alternating use of coolants
and steam, be reduced by the minimization of the crossing of the
split range point?
19. Could the amount of scrap be reduced in a sheet line by operation
closer to the cross and machine direction constraints on gauge?
20. Could the roll speed be increased on a sheet line by operation
closer to the sheet moisture and extruder speed constraints?
21. Could the amount of sheet giveaway to the customer be reduced
by tighter control of average thickness and profile near the edges?
22. Can the production rates of unit operations be coordinated to take
maximum advantage of surge volumes to increase plant capacity?
23. Could a reflux flow be reduced to decrease the steam flow to a
reboiler?
24. Could a recycle flow be reduced to decrease utility use and
pressure drops and to increase residence time and surge volume
capacity?
Most reactors operate with an excess of one or more reactants because the
downside of a deficiency due to less than ideal mixing or variability in the
feeds or the composition measurement is low reaction rates and the forma-
tion of byproducts that can lead to waste treatment problems and hazard-
ous operation, besides a loss in production. The amount of excess reactant
is often based on operating practice and how many times an operator has
had actual or perceived problems from operating too close to the con-
straint.

In order to estimate the benefits from reducing feed variability, the reduc-
tion in variability in the excess component of interest must be computed
from an online analyzer, estimator, or attenuation calculation. The online
estimator must include the time constant associated with mixed volume to
show the smoothing effect. It does not need to have the time delay associ-
ated with an analyzer. The cycle time of chromatographs and lab analysis
is too slow and will alias the relatively fast reactor concentration oscilla-
tions from feed variability.

Alternatively, an attenuation calculation can be done. The instantaneous
excess at the reactor inlet based on feed rates can be calculated and the
inlet amplitude (Ai) and period (To) of the oscillation noted. The outlet
amplitude (Ao) of the oscillation can be estimated from the equation for
the amplitude ratio from the frequency response for a first-order lag. The
time constant can be estimated as the residence time of the portion of the
volume that is well mixed minus the turnover time. This calculation or an
estimator can be put online to help monitor the sustainability of any
improvements made to the basic control system.

fo = 1/To (3-1)

ω = 2*π*fo (3-2)

AR = 1/[1 + (ω*τ)^2]^0.5 (3-3)

Ao = AR*Ai (3-4)

where:
Ai = amplitude of the oscillations at the inlet to the volume (e.u.)
Ao = amplitude of the oscillations at the outlet of the volume (e.u.)
AR = amplitude ratio
fo = frequency of oscillation (cycles/minute)
ω = frequency of oscillation (radians/minute)
To = period of oscillation (minutes)
τ = time constant for back-mixed volume (minutes)
The attenuation of variability in utilities from the volume of a unit
operation can also be estimated from Equation 3-4, although it may
require an energy balance to compute the effect of the utility on process
temperature. For processing with negligible back mixing, such as static
mixers, desuperheaters, plug flow reactors, heat exchangers, extruders,
fiber spin lines, and sheet or web lines, there is no appreciable attenuation
of variability. If the product is a solid that is not chopped or cut into small
pieces, such as a sheet or roll of yarn, variability that enters anywhere into
the process is captured in the product. For liquids, gases, and chopped
solids, the product can be blended in storage vessels. Equation 3-4 can be
used to estimate attenuation from the blending. For storage tanks, there
may be sufficient mixing from eductors or the entrance of streams that a
portion of the volume can be considered to be back mixed. If the retention
time is long enough, the contents will be mixed by diffusion (net movement
of molecules from high to low concentrations). Thus, large
storage tanks provide a significant degree of blending and attenuation of
oscillations despite the absence of an agitator.
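
A minimal Python sketch, with assumed amplitude, period, and time constant, chains Equations 3-1 through 3-4 to estimate the attenuation from a back-mixed volume:

    import math

    # Hypothetical evaluation of Equations 3-1 through 3-4; all values assumed.
    Ai = 2.0     # inlet oscillation amplitude (e.u.)
    To = 1.0     # period of oscillation (min)
    tau = 20.0   # time constant of the back-mixed volume (min)

    fo = 1.0 / To                                # Eq. 3-1 (cycles/min)
    w = 2.0 * math.pi * fo                       # Eq. 3-2 (radians/min)
    AR = 1.0 / math.sqrt(1.0 + (w * tau) ** 2)   # Eq. 3-3: amplitude ratio
    Ao = AR * Ai                                 # Eq. 3-4: outlet amplitude
    print(f"AR = {AR:.4f}, outlet amplitude = {Ao:.4f} e.u.")

For a fast oscillation relative to the time constant, the amplitude ratio is small and nearly all of the variability is filtered by the volume.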

Variability is reduced by the use of feedback and feedforward control.
Chapter 4 details the equations to calculate the best possible standard
deviation for feedback control (Sfbc) from the process capability (Scap). The
standard way of computing the process capability involves successive dif-
ferences separated by the scan time. If the scan time is small compared to
the loop dead time, this index provides an indication of uncontrollable
noise. Since feedback control cannot make a correction in less than the
total loop dead time, a more accurate indication of the process capability
would include the effect of dead time. The calculation interval could be set
equal to the time delay [3.4].

Unfortunately, the user does not usually know the time delay and it is
rarely if ever constant. The time delay can be measured online as the time
it takes for the controlled variable to get beyond the noise band whenever
there is a change in the set point or manual output. Alternatively, the dead
time can be captured whenever an auto tuner is run on the loop. For flow
and liquid pressure, the time delay can be estimated as one half of the loop
scan time and the valve time delay, which is the time between a change in
the controller output and the actual valve position as measured by a smart
positioner for a sliding stem valve. In practice to date, accurate estimates
and identification of the time delay have not normally been available. The
key point is that the calculation interval should be slowed down for the
process capability calculation to be more representative of the actual capa-
bility of feedback control.

Once the standard deviation for feedback control is calculated, Equation
3-5 can be used to compute the shift in the mean value as equal to the frac-
tional improvement in the standard deviation multiplied by the distance
between the limit and the mean value. This shift provides the same distri-
bution of data points beyond the limit. In most cases, the mean value is
more than two standard deviations away from the limit so that the actual
number of data points beyond the limit is quite small.

The improvement from feedforward control for a large measured distur-
bance is usually greater than the improvement from feedback control
alone. The minimum possible standard deviation for feedforward control
(Sffc) can be estimated as the standard deviation observed when the mea-
sured disturbance is constant. It can also be estimated via Equation 3-6 as
the root mean square of the standard deviation of the feedforward mea-
surement, gain, and timing errors if the measured disturbance is by far the
largest upset. Since feedback control will work on the error left over from
feedforward control, the effects are multiplicative as shown in Equation
3-7.
The result is a much more representative standard deviation for advanced
process control (Sapc) that includes the combined improvement from both
feedback and feedforward control. Equation 3-8 computes the shift in the
mean value that is possible from advanced process control. Figure 3-3a
shows how this new set point translates to benefits as seen on a trend
recording. If the limit is associated with a manipulated variable, the
expression can be multiplied by a process gain to convert to the shift in the
controlled variable as shown in Equation 3-9. Figure 3-3b illustrates how
operation closer to a limit for a manipulated variable, such as feed rate,
that is a process input translates to benefits in production above what is
set by the operator.

Equations 3-10 and 3-11 express the mean and shift in terms of an old and
new set point. It is important to realize that these are estimates for the low-
est possible variability in the controlled variable. In some cases, it may be
undesirable to transfer all this variability from the controlled to the manip-
ulated variable. Model predictive control excels at determining how much
variability is transferred and getting the most out of both feedback and
feedforward control. Thus, the full benefit offered by these calculations is
only approached by the application of model predictive control built on a
solid foundation of good valves and measurements.

Figure 3-3a. Benefits from Operation Closer to a Controlled Variable Limit

Equations

ΔCVm = [(Stot - Sfbc)/Stot]*(CVL - CVM) (3-5)

Sffc = (Sffm^2 + Sffg^2 + Sffd^2)^0.5 (3-6)

Sapc = (Sfbc/Stot)*Sffc (3-7)

Figure 3-3b. Benefits from Operation Closer to a Manipulated Variable Limit

ΔCVm = [(Stot - Sapc)/Stot]*(CVL - CVM) (3-8)

ΔCVm = Kp*[(Stot - Sapc)/Stot]*(MVL - MVM) (3-9)

(3-10)

(3-11)

where:
Kp = process gain (ΔCV/ΔMV)
SPnew = new set point
SPold = old set point
Sapc = standard deviation possible for advanced process control
Sfbc = standard deviation possible for feedback control
Sffc = standard deviation for feedforward control
Sffd = standard deviation of feedforward dynamics error
Sffg = standard deviation of feedforward gain error
Sffm = standard deviation of feedforward measurement error
Stot = total standard deviation
ΔCVm = shift in mean value of controlled variable
CVM = mean value of controlled variable
CVL = limit (constraint) for controlled variable
MVM = mean value of manipulated variable
MVL = limit (constraint) for manipulated variable
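
A minimal Python sketch, assuming the forms of Equations 3-6 through 3-8 given above and hypothetical standard deviations and limits, estimates the shift in the mean possible from advanced process control:

    import math

    # Hypothetical evaluation of Equations 3-6 through 3-8; all values assumed.
    Stot = 2.0                         # total standard deviation (e.u.)
    Sfbc = 1.0                         # best possible with feedback only (e.u.)
    Sffm, Sffg, Sffd = 0.3, 0.2, 0.2   # feedforward error components (e.u.)
    CVL, CVM = 100.0, 94.0             # limit and current mean (e.u.)

    Sffc = math.sqrt(Sffm**2 + Sffg**2 + Sffd**2)   # Eq. 3-6: feedforward limit
    Sapc = (Sfbc / Stot) * Sffc                     # Eq. 3-7: multiplicative effect
    dCVm = ((Stot - Sapc) / Stot) * (CVL - CVM)     # Eq. 3-8: shift in the mean
    print(f"Sffc = {Sffc:.3f}, Sapc = {Sapc:.3f}, shift = {dCVm:.2f} e.u.")
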
For some unit operations, beneficial operation closer to limits may not be
easy or feasible. For example, for fermentors, the growth rate of the cells depends on the
availability of oxygen and the removal of carbon dioxide. An increase in
pressure can increase the partial pressure of oxygen and the availability of
oxygen for the cells, but it increases the solubility of carbon dioxide, which
also decreases the pH of the broth. An increase in mixing can reduce areas
of stratification of cells and improve liquid and gas contact, but it fractures
the cells. Thus there are many interactions between dissolved oxygen
control, pH control, and dissolved carbon dioxide control, and the
variability in one loop can cause excessive variability in other loops.
Normally the cells are very sensitive to their local environment and the
mixing is minimal, so yields can easily be adversely affected by variability.

The questions listed do not cover any reduction in down time or penalties
associated with exceeding equipment and environmental limits. For exam-
ple, the life of glass linings and the rupture disks depend upon the number
of temperature and pressure cycles. Rupture discs can cause reportable
emissions and a brief excursion in an effluent stream below 2 pH or above
12 pH can classify a whole volume as hazardous waste.

In some cases, the opportunities are not so obvious or directly calculated.
Consider the case of a solvent recovery distillation tower at the back end
of a process. It performs a final cleanup of a recycled solvent and often
exhibits flooding conditions. When the tower floods, it consumes a signifi-
cant amount of the operator's attention and it can take several hours to
unload the tower. Efforts were often made at justifying an APC applica-
tion on the tower, but the economics associated with simply the variability
in the recycled solvent would not merit the expenditure. A second look at
the data across the whole plant, as opposed to only the solvent tower,
clearly showed that when the tower was flooded, the operator was
focused on the flooding problems and not maximizing the feed rate and
reactor yield to the plant. Since the plant was in a sold-out position, free-
ing up the operator to maximize the primary economic drivers delivered
significant benefits and easily justified the APC application on the solvent
tower, with a payback on the order of a few weeks. When computing the
benefits of an application, consider the impact of the application on other
parts of the plant—good or bad.

Examples

Global Production Maximum


Many examples of a process parameter that has a global production maxi-
mum can often be found in a plant. For example, in a plant that is produc-
ing ammonia, the inlet temperature to the synthesis converter has a major
impact on plant production for a given feedstock input. As illustrated in

Figure 3-4, any variation in this temperature from the best operating point will
result in less production at a given feedstock input. The slope is the pro-
cess gain used to calculate the benefits as you approach the optimum
operating point.

Mean (DegC)   Variation (DegC)   Production Loss (%)
415           0                  0
415           25                 0.103
415           50                 0.415
460           25                 1.102
460           50                 1.412

Figure 3-4. Impact of Synthesis Bed Temperature on Ammonia Production

Production Limited by Operating Constraint


Continuing with the ammonia plant example, there is a maximum pres-
sure at which the synthesis process may be operated based on the physical
construction of the process. The pressure is maintained through the adjust-
ment of purge gas flow. If the control of synthesis pressure is poor, then
the set point must be maintained far from the maximum pressure. By
decreasing the variation in the pressure control, it is possible to increase
production for a given feedstock rate, as illustrated in Figure 3-5.

Shifting Bottlenecks
Stressed-out plants with old equipment, difficult processes, and recycle
streams often have a shifting and confusing bottleneck. The proper APC
pathway can solve and demystify the problem. Often overlooked are the
significant side benefits of process knowledge gained from the implemen-
tation and day-to-day operation of an APC system.

Figure 3-6 shows the major unit operations for the production of a nylon
intermediate chemical. Since the solids content of the streams is high, the
plant is over 40 years old, and the plant is sold out and running at more
than four times the original nameplate capacity, one or more pipelines,
reactors, evaporators, crystallizers, and centrifuges are always shut down
for washout or maintenance. Centrifuges periodically trip on vibration or
spill over solids (slop) into the recycle. The equipment and operators are

Mean (ATM)   Variation (ATM)   Production Loss (%)
150          0                 0
149          0.5               0.457
148          1.0               0.944
146          3.0               2.038

Figure 3-5. Impact of Synthesis Loop Pressure on Ammonia Production

so stressed that the control room is always in crisis mode with incessant
alarms and flipping of screens. Most of the controller outputs are satu-
rated high. War stories rule. Engineer burnout and turnover are high. The
young engineers thrown into the fray defer to operator opinion rather
than applying chemical engineering principles. Management is afraid to
do anything because capacity has actually decreased after each debottle-
necking project.

A large amount of water is introduced into the process from washout con-
nections but is largely not metered. The equipment must be cleaned and
started up manually, so the operator is naturally reluctant to push capac-
ity. The surge volumes that are used as feed and recycle tanks are seriously
undersized for the present production rates, the additional water load,
and centrifuge slopping. The operators continually adjust feed rates or
add water to keep tanks from running dry. Sometimes the recycle tanks
overflow from the additional water load and product is lost to the sewer.

The feed rate to each of the final-stage crystallizers is set by the operator
based on a visual inspection of a sample from the recirculation line for the
amount of solids. The interpretation of the concentration and particle size
is subjective and the main goal of the operator is to reduce how often the
crystallizer must be taken down and the crystal buildup on the walls and
heat-transfer surfaces removed. A supervisory control system is periodi-
cally turned on that finds the lowest production rate and drags the whole
plant down to match it. Since the feed to the last stage of crystallizers is
typically set low from the margin of operator error and lack of confidence,
the tail wags the dog, and the plant capacity decreases each year despite
capital projects to replace and add equipment to reduce bottlenecks.

The solution is to remove concentration, production rate, and inventory
control from the operator and automate the washout and startup of the
unit operations. This gets operations and technical support out of the fire-
fighting mode and into a higher and more value-added supervisory and
advisory role.

Some pipeline and pump sizes are increased so control valves are not wide
open, and some on-off valves and magnetic flow meters must be installed
to measure and control water addition. Coriolis mass flow meters are
installed not only to provide more constant solids loading of the feed to
the unit operations but also to provide estimators of solids concentration
and crystal buildup on walls. Nuclear gauges are added to each centrifuge
to measure the cake mass in the basket. The combination of mass feed flow
and basket solids load control improves the production rate and on-stream
time of the centrifuges. The scheduling and the sequencing of valves for
automated cleaning and startup makes the performance of the equipment
more reproducible and predictable. The use of variable speed drives to
throttle the feed to the hydroclones increases the pressure drop available
for separation of solids in the hydroclones and eliminates significant slip-
stick since sliding stem valves are not suitable for the high solids concen-
tration. Model predictive control is used to control the solids concentration
in each reactor, evaporator, and crystallizer and to manage the overall
inventory control and purge rate. Rampers and pushers are used to maxi-
mize feed rates without a projected violation of a level or process con-
straint. Finally, real-time optimization is used to track bottlenecks,
coordinate feeds, optimize recycle and purge flows, reach an understand-
ing of the root causes, separate fact from fiction, and provide data to lead
to successful and viable projects.

Application

General Procedure
1. Install a control loop and process performance monitoring
systems.
2. Ask marketing to identify the key business drivers.
3. Develop management sponsors, onsite advocates, and resources.
4. Determine the percentage of time sold out, scheduled and
unscheduled downtime, profit per pound, variable costs per
pound (raw materials, utilities, waste treatment), fixed costs, and
product with less or no revenue (scrap, recycle, and giveaway).

Figure 3-6. Stressed-out Plant with Shifting Bottlenecks
5. Use best demonstrated periods of operation, statistical analysis
and/or high-fidelity dynamic simulations to determine target
6. Compute the gap between target and actual values and multiply
by the lost revenue or cost factors to complete an opportunity
sizing.
7. Review trend plots and performance of continuous and batch
operations to determine fact and fiction.
8. Meet with technical support and operations and discuss possible
advanced control solutions for each of the gaps.
9. Prioritize the solutions with a weighted rating based on benefits,
capital requirements, speed of implementation, ease of
implementation, and risk.
10. Benchmark the existing operation.
11. Select the top technology and supporting technologies.
12. Train technical support and operations in application of
technologies.

13. Implement the technologies.
14. Measure the results online.
15. Benchmark the results.
16. Look for new improvements.

Application Detail

Neutralizer
The benefits from the reduction in variability afforded by the improve­
ments to the basic control system can be estimated for the pH example in
Chapter 2. The largest sources of variability in the existing system are the
ball valves with excessive stick-slip caused by the excessive capacity and
friction and a missing positioner. The limit cycle from the reagent valves in
terms of a reagent-to-feed ratio is translated to a pH limit cycle as illus­
trated in Figure 2-4b. However, the limit cycle from the first-stage valve is
first multiplied by the amplitude ratio based on the period of oscillation of
the first stage and the residence time of the second stage as shown in
Equation 3-12.

The loop period for quarter-amplitude oscillations is approximately 6
times the total loop time delay for the first stage, and the time constant of
the second stage is approximately the residence time. Since almost all of
the reagent is added to the first stage, the increase in the amount of unre-
acted feed from the increase in conversion time is estimated for the limit
cycle in the first valve as it propagates to the second stage. Note that the
residence time in the first stage is so short that there is little time for con­
version and most of the effects from pH exceeding the desired control
band occur in the second stage.

Equation 3-13b is a curve fit for the distribution of data points in neutral­
izer pH for a limit cycle. The standard deviation of the limit cycle for the
new precise control valves is estimated and the calculations are repeated.
To see the improvement in yield by other improvements, the decrease in
the fractional integrated error (ΔXp) can be estimated for a typical set of
feed rate changes. For the benefits in terms of less reagent use for a more
optimum pH set point, Equations 3-15 and 3-16 can be used to estimate the
shift in the mean (ΔXm) from reduced variability. Whether it is best to not
move the pH set point and reduce the fraction of product below the low
constraint, or to lower the pH set point to the point where there is no
change in the fraction of product below the low constraint, depends upon
the values of ΔBp and ΔBm.

Sni = (Smi/100%)*(Fr/Ff)*Pn*ARn (3-12)



zi = |(XL - XM)/Sni| (3-13a)

Φi = -1.4743*zi^3 + 14.488*zi^2 - 46.888*zi + 50 (3-13b)

ΔXp = (Φi - Φj)/100% (3-13c)

ΔBp = ΔXp * ΔYp * Ct * Ff * Ef (3-14)

ΔXm = ΔSni (3-15)

ΔBm = (ΔXm / Pn) * Ct * Ff * Er (3-16)


where:
ARn = amplitude ratio for neutralizer
ΔBp = delta benefits from less product below low constraint ($/hr)
ΔBm = delta benefits from shift of mean closer to low constraint ($/hr)
ΔYp = yield loss from distribution below low constraint
Ct = conversion factor for time (24 hr/day)
Ef = economic factor for cost of feed ($/lb)
Er = economic factor for cost of reagent ($/lb)
Ff = feed flow (lb/hr)
Fr = raw material (reagent) flow (lb/hr)
Φi = percent of data points below low constraint for limit cycle i (%)
Pn = process gain in neutralizer from titration curve slope (ΔpH/Δratio)
Smi = standard deviation in mixer reagent valve position for limit cycle i (%)
Sni = standard deviation in neutralizer pH for limit cycle i (pH)
ΔXp = delta fractional product below low limit
ΔXm = shift in mean (pH)
XM = mean (pH)
XL = low constraint (pH)
zi = number of standard deviations from the mean to the low constraint (see Eq. 3-13a)
By the use of a sliding stem valve with a digital positioner, the standard
deviation of a limit cycle in the first reagent valve is reduced from 10%
(Sm1) to 0.1% (Sm2). The process gain is 10,000 pH per flow ratio (Pn), the
flow ratio at set point is 0.1 (Fr/Ff), the set point is currently 7 pH (XM), the
feed flow is 100 kpph (Ff), the limit cycle period is 1 minute (To), the
residence time in the well-mixed neutralizer is 20 minutes (τ), the cost of
feed and reagent are both $0.1/lb (Ef and Er), and the conversion decreases
linearly from 98% at 6 pH (XL) to 88% at 5 pH.

The benefits in reduced feed costs from a better conversion for a 7 pH set
point can be estimated by the use of Equations 3-4 and 3-12 through 3-16
for the application details shown in Equations 3-17 through 3-25:

For Sm1 = 10%:

ARn = 1/(2*π*20) = 0.008 (3-17)

Sn1 = (10%/100%)*(0.1 ratio)*(10000 ΔpH/Δratio)*(0.008) = 0.8 pH (3-18)

z1 = |(6.0 pH - 7.0 pH)/0.8 pH| = 1.25 (3-19)

Φ1 = -1.4743*(1.25)^3 + 14.488*(1.25)^2 - 46.888*1.25 + 50 = 11% (3-20)

For Sm2 = 0.1%:

Sn2 = (0.1%/100%)*(0.1 ratio)*(10000 ΔpH/Δratio)*(0.008) = 0.008 pH (3-21)

z2 = |(6.0 pH - 7.0 pH)/0.008 pH| = 125 (3-22)

Φ2 = 0% (3-23)

ΔXp = (11% - 0%)/100% = 0.11 (3-24)

ΔBp = 0.11 * 0.04 * 24 hr/day * 100000 pph * 0.1 $/lb = $1,056/day (3-25)

The benefits in reduced reagent costs from a shift of the set point toward
the low constraint can be estimated as follows with Equations 3-26 and
3-27:

ΔXm = 0.8 pH (3-26)


ΔBm = (0.8 pH / 10,000 ΔpH/Δratio) * 24 hr/day * 100000 pph * 0.1 $/lb = $20/day (3-27)

From the above analysis, it is clear that it is better to leave the set point at
7 pH and reduce the fraction of product that is below 6 pH.
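
The same arithmetic can be packaged as a short Python sketch; the helper names (phi, sigma_pH) are ours for illustration only, and the result matches the worked numbers above to within rounding.

    import math

    # Sketch reproducing the neutralizer estimate (Eqs. 3-17 through 3-25).
    Pn = 10000.0           # process gain (delta pH per delta ratio)
    ratio = 0.1            # reagent-to-feed flow ratio at set point
    To, tau = 1.0, 20.0    # limit cycle period and residence time (min)
    Ff, Ef = 100000.0, 0.1 # feed flow (lb/hr) and cost of feed ($/lb)

    ARn = 1.0 / math.sqrt(1.0 + (2.0 * math.pi * tau / To) ** 2)  # Eq. 3-3, ~0.008

    def phi(z):
        # Eq. 3-13b: percent of data points below the low constraint
        return -1.4743 * z**3 + 14.488 * z**2 - 46.888 * z + 50.0

    def sigma_pH(Sm):
        return (Sm / 100.0) * ratio * Pn * ARn   # Eq. 3-12

    z1 = abs((6.0 - 7.0) / sigma_pH(10.0))   # old valve, Sm1 = 10%
    dXp = (phi(z1) - 0.0) / 100.0            # Eq. 3-24; phi for the new valve ~ 0
    dBp = dXp * 0.04 * 24.0 * Ff * Ef        # Eq. 3-25, about $1,056/day
    print(f"z1 = {z1:.2f}, benefit = ${dBp:,.0f}/day")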

Distillation Column
The benefits from the reduction in variability afforded by the improve-
ments to the basic control system can be estimated for the distillation
example in Chapter 2 with equations similar to those given for the pH
example. The limit cycle before and after the addition of the aggressively
tuned digital positioner is multiplied by an amplitude ratio to determine
the attenuation from the storage tank volume. The period of the limit cycle
is about 6 times the temperature loop time delay, and the time constant of
the storage tank with just an eductor for mixing is about half the residence
time. The attenuated limit cycles are next translated from an oscillation in
the distillate-to-feed ratio to temperature oscillation as illustrated in Fig­
ure 3-5c. The amplitudes of the temperature oscillations are then multi­
plied by the process gain to translate from temperature to impurity
concentration and finally by the economic gain to translate from impurity
concentration to the cost of extra recycle and steam.

The benefits from better measurement resolution and reproducibility
afforded by the use of an RTD and a smart transmitter at a better control
point, and the reduction in noise and loop time delay from the better sen­
sor location, can be estimated by use of the resolution and half of the peak-
to-peak noise as the amplitude of a temperature oscillation. The oscillation
amplitude is multiplied by the amplitude ratio, process gain, and eco­
nomic gain to find the value of the improvements. The use of high control­
ler gain in the distillate receiver reduces the loop time delay. The time
delay introduced by the level loop is approximately the reflux valve dead
band divided by the level ramp rate (integrator gain) multiplied by the
level controller gain. For a horizontal receiver, the improvement is signifi­
cant. The change in total temperature loop dead time affects the loop
period and hence the amplitude ratio.

The value of flow feedforward and better feed tank level controller tuning
can be estimated by the reduction in fractional integrated error (ΔXp) in
product impurity concentration for a typical set of flow upsets. Case stud­
ies have shown a reduction of 2:1 from better control valves and a 5:1
reduction from an improved control scheme in product variability for a
distillation process [3.10].

The value of reduced valve stick-slip, although real and significant, is one
of the most difficult improvements to quantify. The estimation of benefits
from advanced control is usually much simpler because higher value-
added variables are more directly affected or optimized.

Paper Machine
The benefits from the use of model predictive control for the basis weight
of a paper machine shown in Figure 3-7a can be rather easily estimated by
Equations 3-28 and 3-29. Figure 3-7b illustrates how a decrease in the 2
sigma standard deviation can result in a shift of the mean much closer to
the optimum.

Figure 3-7a. Basis Weight Control in a Paper Machine

Figure 3-7b. Shift of Basis Weight Closer to Optimum

ΔXm = ΔS2 / XM (3-28)

ΔBm = ΔXm * Ss * Cw * Cr * Ct * Es (3-29)

where:
ΔBm = delta benefits from shift of mean closer to low constraint ($/day)
Cr = conversion factor for ream (ream/3000 ft2)
Ct = conversion factor for time (1440 min/day)
Cw = conversion factor for width (ft)
Es = economic factor for cost of stock ($/ream)
Ss = sheet speed (ft/min)
ΔS2 = delta reduction in two standard deviations (lbs)
ΔXm = fractional shift in basis weight mean
XM = basis weight mean (lbs)

Through the implementation of model predictive control, the 2S standard
deviation in basis weight was reduced from 0.8 to 0.4 for the production of
30-pound paper. The sheet speed is 2400 ft/min and the sheet width is 30
feet. If the cost of stock is $2.50 per ream, the benefit from a reduction in
excess basis weight is $1,125/day by the use of Equations 3-30 and 3-31 as
follows:

ΔXm = 0.4/30.8 = 0.013 (3-30)

(The mean of 30.8 lbs is the 30-lb target plus the original 2S of 0.8 lbs.)


ΔBm = 0.013 * (2400 ft/min) * (30 ft) * (ream/3000 ft2) * (1440 min/day) * ($2.50/ream) (3-31)

ΔBm = $1,125/day (3-32)
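The same arithmetic in script form, as a minimal Python sketch with our own variable names and the numbers from this example:

```python
# Sketch of the basis weight benefit estimate (Eqs. 3-28 through 3-32).
S2_before, S2_after = 0.8, 0.4   # two-sigma basis weight variation (lbs)
X_M = 30.0 + S2_before           # mean runs 2S above the 30-lb target
Ss, Cw = 2400.0, 30.0            # sheet speed (ft/min) and sheet width (ft)
Cr, Ct, Es = 1.0 / 3000.0, 1440.0, 2.50   # ream/ft2, min/day, $/ream

dXm = (S2_before - S2_after) / X_M        # Eq. 3-28 / 3-30
dBm = dXm * Ss * Cw * Cr * Ct * Es        # Eq. 3-29 / 3-31
print(f"fractional shift: {dXm:.3f}; benefit: ${dBm:,.0f}/day")   # ~$1,122/day
```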

The implications of variability in the kraft, pulp, newsprint, fine paper,
and linerboard sectors of the industry have been quantified in audits of
digesters, bleach plants, stock preparation areas, and paper machines. A
new index has been developed that classifies variability by frequency
decades to help predict end-use problems of pulp bales and paper rolls
[3.11].

Reactor
A key operation in a plant is the catalytic reaction system. In this example,
the catalyst concentration is critical to maximize the conversion of raw
material to product, to improve yield and to minimize costs. Every pound
of the product is sold so it is also important to avoid operating conditions
that could lead to downtime and lost production [3.12].

The yield is correlated to catalyst concentration (Xc) by Equation 3-33,
where yield (Yp) is defined as the pounds of raw material converted to
product per pound of raw material fed to the reactor times 100% [3.12]:

Yp = 80% + 9 * Xc (3-33)


The business driver is to maximize catalyst concentration subject to a con­
straint (XH) where 0.55% or higher concentrations cause excessive plug­
ging and downtime. The average (mean) catalyst concentration is
presently 0.42% (XM), the total standard deviation is 0.12% (Stot), and the
capability standard deviation that includes the effect of dead time is 0.06%
(Scap). The average raw material flow rate is 100 kpph. The average uptime
is 8000 hr/yr and the raw material cost is $0.2/lb [3.12]. The benefits from
advanced control can be estimated by calculating the minimum standard
deviation possible from feedback control (Sfbc) and then the percent
improvement from a shift of the average (mean) concentration closer to
the high constraint using Equations 3-34 through 3-38, documented in
Chapter 4 for performance monitoring.

Sfbc = Scap * [2 - (Scap/Stot)^2]^0.5 (3-34)

Sfbc = 0.06% * [2 - (0.06%/0.12%)^2]^0.5 = 0.08%

ΔStot/Stot = 1 - (Sfbc/Stot) (3-35)

ΔStot/Stot = 1 - (0.08%/0.12%) = 33%

ΔXm = [1 - (Sfbc/Stot)] * [XH - XM] (3-36)

ΔXm = [1 - (0.08%/0.12%)] * [0.55% - 0.42%] = 0.043%

ΔYm = 9 * ΔXm (3-37)

ΔYm = 9 * 0.043% = 0.387%

ΔBm = (ΔYm/100%) * Ff * Ct * Ef (3-38)

ΔBm = (0.387%/100%) * 100,000 lb/hr * 24 hr/day * $0.2/lb = $1,858/day
where:
ΔBm = delta benefits from shift of mean closer to high constraint ($/day)
ΔYm = yield improvement from higher mean concentration of catalyst (%)
Ct = conversion factor for time (24 hr/day)
Ef = economic factor for cost of feed ($/lb)
Ff = feed flow (lb/hr)
Scap = capability standard deviation in catalyst concentration (%)
Stot = total standard deviation in catalyst concentration (%)
ΔXm = shift in mean of catalyst concentration (%)
XM = mean catalyst concentration (%)
Sfbc = minimum standard deviation possible from feedback control (%)
XH = high constraint for catalyst concentration (%)
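As a quick check of Equations 3-34 through 3-38, the arithmetic can be scripted. The following minimal Python sketch uses the numbers from the example; the variable names are ours:

```python
# Sketch of the reactor benefit estimate (Eqs. 3-34 through 3-38).
S_cap, S_tot = 0.06, 0.12      # capability and total standard deviation (%)
X_H, X_M = 0.55, 0.42          # high constraint and current mean (%)
Ff, Ct, Ef = 100000.0, 24.0, 0.2   # feed (lb/hr), hr/day, $/lb

S_fbc = S_cap * (2.0 - (S_cap / S_tot) ** 2) ** 0.5   # Eq. 3-34 (~0.08%)
improvement = 1.0 - S_fbc / S_tot                     # Eq. 3-35 (~33%)
dXm = improvement * (X_H - X_M)                       # Eq. 3-36 (~0.044%)
dYm = 9.0 * dXm                                       # Eq. 3-37 (~0.40%)
dBm = (dYm / 100.0) * Ff * Ct * Ef                    # Eq. 3-38
print(f"shift: {dXm:.3f}%, yield gain: {dYm:.3f}%, benefit: ${dBm:,.0f}/day")
```

Carrying the unrounded Sfbc through gives about $1,900/day; the $1,858/day in the text reflects rounding Sfbc to 0.08%.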

Popular Excuses
The following is a compilation of popular excuses used by people who
want to maintain the status quo and stay in their comfort zones. Most
reflect a process-design and steady-state viewpoint and a lack of under-
standing of dynamics and the advanced control goal of building and
incorporating process knowledge by extensive process monitoring, test-
ing, and modeling. In case these excuses are used as a means to delay an
APC opportunity, the APC counterpoint is given.

1. Excuse: It is an equipment or process problem, not a control
problem. Reality: If there were no equipment or process limitations,
there would be no need for advanced control.
2. Excuse: The bottleneck keeps shifting. Reality: Advanced control
can find it.
3. Excuse: We don't know which constraint is more important. Reality:
Advanced control can determine relative importance.
4. Excuse: We don't know the relationships for the constraints. Reality:
Advanced control will identify and model them.
5. Excuse: Advanced control covers up equipment problems. Reality:
Advanced control will quantify how much of the time and to what
degree each constraint actively limits production. Process
monitoring, testing, and modeling will expose the problems.
6. Excuse: With APC, my operators will forget how to run the process.
Reality: With a properly designed APC interface, they will become
better operators.
7. Excuse: You can't optimize this process; there are too many
constraints. Reality: The greater the number of constraints, the
greater the economic opportunity, and the greater the need for
APC.

Rules of Thumb
Rule 3.1. — Measurements of present variability and estimates of reduced vari-
ability, attenuation factors for back-mixed volumes, conversion factors, and an
economic gain factor can be used to provide a more accurate estimate of the bene-
fits from advanced control. The gains and factors must trace the path from
the controlled variable to the process variable in the product that leaves
the plant. Dynamic property estimators may be useful to find process
gains and put the calculations online.

Rule 3.2. — Find the best practical production rate or yield from the best periods
of operation or batches from cost sheets and the best theoretical rate or yield from
steady-state and dynamic simulation of new operating conditions. The opportu-
nity assessment questions should all be oriented to find how to eliminate
the gap between the actual and practical or theoretical performance.

Rule 3.3. — Loop and process performance monitoring systems and online plant
economic performance calculations are essential to maximize and sustain benefits.
Without these calculations in place, the benefits will get lost in the noise or
attributed to other activities.

Rule 3.4. — If the benefits are not documented and reported, advanced process
control will not get the recognition needed to insure resource availability and com-
mitment. After any initial enthusiasm, the effort will rapidly fade. Due to
downsizing and information explosion, the average user is faced with an
overwhelming number of initiatives and supposedly neat ideas.

Rule 3.5. — If online analyzers have long sample-transportation delays, long
processing delays, long cycle times, poor signal-to-noise ratio, or an insufficient
on-stream time, develop and install dynamic estimators before process testing.
Model predictive controllers should use these estimators.

Rule 3.6. — Insure that at least a model predictive control (MPC) with some
degree of optimization is implemented. MPC has the best track record for ben-
efits.

Rule 3.7. — APC technologies must be employed in closed loop control to achieve
most of the benefits. Advanced control improvements that stay in an advi-
sory mode achieve a small fraction of the benefits possible, because of
operator inattention and bias.

Rule 3.8. — A simple maximization or minimization of feed, steam, reflux, or
recycle flow can be done with a ramper or pusher. A linear program is often
back-engineered to achieve what can be better tuned and maintained with
a ramper or pusher. In some cases, a pusher may be included in the MPC
capability.

Rule 3.9. — A linear combination of economic objectives with a linear relation-
ship to constraints is best optimized by a linear program (LP). The LP will find
the optimum intersection of constraints.

Rule 3.10. — Plants with changing economic objectives, complex recycle effects,
shifting bottlenecks, and complex nonlinear relationships in several unit opera-
tions need a real-time optimization. High-fidelity steady-state models are
used for the real-time optimization of continuous constant conversion pro-
cesses, whereas dynamic models are needed for batch operations and for
processes where reaction and polymerization rates are important for opti-
mization.

References
1. Bialkowski, William L., "Process Control Audits Have Major Impact on
Uniformity," American Papermaker, September 1990, pp. 50-57.
2. McMillan, Gregory K., "Benchmarking Studies in Process Control," InTech,
November 1992, pp. 44-46.
3. Shunta, Joseph P., "Use Statistics to Identify Process Control Opportunities,"
Chemical Engineering Progress, October 1996, pp. 47-49.
4. Shunta, Joseph P., Achieving World Class Manufacturing Through Process
Control, Prentice Hall, Inc., 1995.
5. Bialkowski, William L., "Process Analysis and Diagnostics," Fisher-
Rosemount Systems Users Group Meeting, November 1996.
6. Luyben, Michael L. and Luyben, William L., Essentials of Process Control,
McGraw Hill Chemical Engineering Series, 1997.
7. Chia, T. L., and Lefkowitz, I., "Add Efficiency with Multivariable Control,"
InTech, September 1997, pp. 85-88.
8. Mansy, Michael M., McMillan, Gregory K., and Sowell, Mark S., "Step Into the
Virtual Plant," Chemical Engineering Progress (CEP), February 2002, pp. 56-61.
9. Tinham, Brian, "Control in the Chemical Industry," Control and
Instrumentation, January 1993, pp. 34-35.
10. Beal, James F., "Process Control Analysis, Improvements and Results," ISA
Expo/Conference, Houston, September 10-13, 2002.
11. Bialkowski, William L., "Process Control Related Variability and the Link to
End Use Performance," Control Systems Conference, Helsinki, September 17-
20, 1990.
12. Shunta, Joseph P., "Assessing & Implementing Control Improvement
Opportunities," ISA Short Course SC05(Du), 1996, Instructor's Notes.
4
Evaluating System
Performance

Practice
Overview
Target product quality and production levels can be maintained by contin-
uously evaluating the performance of the process and the control system.
The maximum throughput and operating efficiency of a plant are ulti-
mately determined by the process design and equipment selection. How-
ever, in many cases a plant's operation is far from achieving the ultimate
capability inherent in the plant design and equipment. For example,
numerous studies done in the pulp and paper industry show that loop uti-
lization ranged from 55% to 76%, depending on production area.

The reasons for process variation and poor control utilization can be attrib-
uted to one or more of the following:

• Field measurement or final control element: analyzer and process
measurement failures, noise, and repeatability errors; incorrect
ranges and scan times; improper sensor and valve locations;
damper or valve stick-slip; nonlinear installed damper and valve
characteristics; and incorrect valve sizes
• Process or control design: insufficient capacity, process changes,
undersized units, and lack of lab feedback
• Process interaction and mechanical problems
• Process, lab, and raw material problems (more so for chemical
than pulp and paper)


One of the main reasons such conditions exist is that the downsizing of
support services in many plants has resulted in plants operating with a
minimal staff for process control and instrumentation maintenance. Often
there is only enough manpower available to fix the critical problems found
today that are limiting production and affecting product quality. Under
these circumstances, there is little time to study the process operation to
determine if an abnormal condition exists that could soon be a major
source of process disturbance if not addressed. A problem may not
become visible until the situation has deteriorated to the point where it
affects product quality or production. Operating in this "firefighting"
mode may lead to variations in operation that result in less than maximum
production in a sold-out market, or to operating at less than best efficiency,
or to wider variation in product quality, regardless of production rate.

The benefits from early detection of control system performance decay
have been well documented by Bill Bialkowski [4.1] and others [4.2], [4.3].
The early detection of control system performance decay, of changes in
process dynamics and the detection of new sources of variability, is abso-
lutely necessary. This is an ongoing process that should focus on detection
through continuous analysis and correction through diagnostics to mini-
mize impact on plant operation.

One of the key factors is that the performance of control loops decays with
time as a result of the wearing of control valves, loss of calibration of trans-
mitters, or changes in the operation of the process. Some plants have real-
ized the impact that control and field devices are having on their
operation.

Efforts to achieve best plant performance must address both the areas of
analysis and diagnostics.

• Process Analysis is the examination of a process, its elements and
their relationships.
• Process Diagnostics is the investigation of the cause or nature of a
process condition or problem.

The relationship between process analysis and process diagnostics is
illustrated in Figure 4-1.

In some cases, additional staff has been added and a performance moni-
toring tool has been purchased to periodically evaluate control loop and
field device performance. However, this solution is costly and often hard
to justify in the short term. Tools layered on top of a traditional Distributed
Control System (DCS) to detect abnormal operation have had limited suc-
cess in the detection of problems in instrumentation and fast processes
because of limitations in the speed with which information may be
accessed — limitations that are inherent in the design of most DCS and
Programmable Logic Controller (PLC) systems in use today.

Figure 4-1. The Reduction of Variability Is an Ongoing Process

However, some manufacturers have introduced a whole new generation
of field-based, scalable process control systems and Fieldbus field devices.
The core technology used in some of these systems provides a solid foun-
dation that allows the continuous monitoring and detection of abnormal
control and field device operation.

In this chapter, the basis for automatic detection of abnormal operating
conditions using this new technology is examined. The chapter covers per-
formance monitoring tools for online analysis and monitoring of control
loops. It also introduces the user to the use of multivariate Principal Com-
ponent Analysis (PCA) and Partial Least Squares (PLS) for the online anal-
ysis and monitoring of a unit operation in batch or continuous processes.
A multivariate regression technique, such as PCA and PLS, can be
designed to handle cases with a large number of correlated inputs and can
extract contributions of individual inputs and identify breakdown in input
correlation structure.

Opportunity Assessment
The justification for investment in tools and people for process and control
analysis and diagnostics is the reduction in process variability and the
associated improvement in plant profitability. A process parameter's effect
on a particular plant's production depends greatly on that plant's limita-
tions and operating conditions. If the answer is yes to some of the following
questions, there is enough of an economic incentive to implement a perfor-
mance monitoring system in a plant.

1. Are loops that affect plant capacity or efficiency always in their
highest mode?
2. Do loops that affect plant capacity or efficiency have too much
variability?
3. Is equipment and instrument maintenance more reactive than
predictive?
4. Are there batch operations where the quality or yield is affected by
the variability or limits in feeds, process variables, or manipulated
variables?
5. Are there batch operations where the cycle time is affected by the
variability or limits in feeds, process variables, or manipulated
variables?
If there is a large variability in variables that adversely affect production or
product quality, then by identifying these areas of variation and address-
ing the source of variation, it is often possible to reduce variability and
thus increase production, increase operating efficiency, or improve prod-
uct quality.

The built-in diagnostic and analysis tools of modern scalable control sys-
tems provide the unprecedented capability to automatically identify pro-
cess variation caused by under-performing loops. This is done by
continuously calculating the improvement in control that is possible and
comparing this to the baseline loop performance. This approach to perfor-
mance monitoring automatically examines input-output and control
blocks in the controller for abnormal operating conditions that may indi-
cate maintenance is required.

By automating the task of monitoring measurements, actuators and con-
trol performance, it is possible to virtually eliminate the need for plant
audits with portable PC-based tools and thus allow plant maintenance
resources to focus on resolving the problems that affect plant operation
most. Since these systems utilize the function-block architecture and
model established for Fieldbus, they can use the input and output parame-
ter status and block mode to determine abnormal behavior. In particular,
this information is used to automatically determine the percent time a
block was not in its designed mode of operation, or an input status was
"bad," "uncertain" or "limited," or a downstream condition limited con-
trol action. Blocks having an abnormal status or mode condition may be
listed based on the percent time the condition existed. Plant control is
summarized by the Performance and Utilization values. Such metrics can
be used to judge overall control performance and whether control is being
fully utilized.

Once a problem is detected, finding the solution to the problem may be
straightforward for certain conditions. For example, a high variability
indication may in some cases only require that the PID control block be re-
tuned to correct a change in operating condition. An uncertain indication
on a transmitter may sometimes be quickly resolved using the online diag-
nostics provided by the manufacturer of the device.

Unfortunately, in some cases the source of the problem may not be appar-
ent and further analysis is required. In particular, resolving the source of
high variability may not be as easy as just re-tuning the loop. Even though
the loop may be correctly tuned, valve stick-slip may cause the loop to
continually oscillate, as discussed in Chapter 2 and illustrated in Figure
4-2.

Figure 4-2. Limit Cycle from Valve Stick-Slip

Tools for high-speed trend, histograms, and power-spectrum and cross-
correlation analysis allow quick diagnosis of such problems. In a modern
control system, such tools are integrated into the system design and allow
such analysis to be done automatically.

Manufacturing companies are judged by the quality, cost, and availability
of the products they make and sell. To achieve world-class performance in
these areas, the manufacturing operation must develop specific metrics to
assess how well they are performing day to day and guide them towards
improvement. In order to improve control, the data must be taken fre-
quently enough to show inertial effects. Also, statistical metrics should be
established to evaluate process and control system performance. With
such online tools in place, it is relatively easy to detect the presence of a
new point source of variability, as individual signals will start to exhibit
markedly different behavior.

Process monitoring can take many forms, from a purely mechanistic
approach guided by first principles to a purely statistical approach. In
today's world, data drives the decision-making process and the ability to
make use of the vast amounts of process data is essential. In many plants,
the rate at which we are accumulating this data surpasses our ability to
extract knowledge from data and use it for better decision making. Multi-
variate statistical analysis is a technique that allows us to analyze plant
data in order to extract underlying themes in the behavior of the data, and
to then use these themes to monitor the state of the process.

Traditionally, the task of monitoring plant data has fallen into the statisti-
cal process control (SPC) or the univariate statistical analysis world. The
manufacturing industries have made great progress by focusing on key
quality variables and monitoring these variables with univariate analysis
techniques such as SPC. But are these techniques still relevant now that
the volumes of data being extracted by our automation systems have
increased by orders of magnitude? The answer is, Not always! In previous
years we had 10 engineers per quality parameter being monitored; today a
single engineer is asked to monitor 10 quality parameters. The drive to
understand quality coupled with the vast amounts of data can present a
formidable task, one well suited to the application of multivariate tech-
niques.

The multivariate techniques that are showing success today are called
Principal Component Analysis (PCA) and Partial Least Squares or Projec-
tion to Latent Structures (PLS). Multivariate statistical analysis is not a
new technology, but the application of PCA and PLS has gained attention
due mainly to the data explosion. The manufacturing and process indus-
tries have invested heavily in real-time data acquisition systems or Process
Information Management Systems (PIMS) and there is now a strong desire
by manufacturing directors to see this accumulation of data put to use to
improve plant operation; hence the strong interest in multivariate moni-
toring techniques. Other multivariate monitoring techniques include Fac-
tor Analysis, Eigen-vector Analysis and Singular Value Decomposition.

The field that uses multivariate statistical analysis tools for real-time anal-
ysis of process data is called Multivariate Statistical Process Control
(MSPC). The attraction of MSPC is in the ability to rapidly develop behav-
ioral models of your process, akin to fingerprinting normal process behav-
ior, and then to continuously compare current plant behavior to the
normal fingerprint. If a deviation from normal plant behavior is identified,
MSPC will allow you to identify which plant variables are the major con-
tributors to the cause of the deviation.

The ability to monitor, identify and diagnose the cause of process variabil-
ity is a task every engineer will recognize as important and one that will
improve the performance of our manufacturing facilities. The technologies
of today are focusing on these monitoring challenges, while the monitor-
ing technologies of tomorrow will be targeted more at relating variability
to the bottom-line profitability of our plants—an important link still not
well defined in most manufacturing environments.

Examples

Control Utilization and Its Impact


Control utilization is an indicator that can be used to quickly determine if
control and measurement problems exist within a control system. It is rela-
tively easy, using most control systems, to take a snapshot of control oper-
ation and to then calculate and document the control utilization. For
example, using the documentation features of most control systems, it is
possible to print the operation screens and later to compare the mode of
the control blocks to the design mode for the control. In some cases, a low
value of control utilization can be attributed to insufficient operator train-
ing on the control. However, surveys of many plants indicate that the pri-
mary reasons for control not being fully utilized fall into two areas:

• Field measurement or control element
• Process or control design

An immediate improvement in control utilization will be achieved by
addressing these problems. To achieve full utilization, improved
communications between maintenance and operations is important.

The automatic collection of control-utilization statistics is a major benefit
in identifying problems in measurement, control or process design. In
plants that do not have a control system that supports this capability,
problems may go unrecognized that have a direct impact on the plant
operation and efficiency of production. For example, at one pulp and
paper plant a snapshot of the control utilization was collected to allow the
state of the process control to be quantified. This survey revealed the fol-
lowing:

Plant Area      Total Loops   Loops in Normal Mode   Percent Utilization
Bleach Plant         78               60                     76%
Power House         185              130                     70%
Pulp Mill           174              116                     66%
Paper Mill          236              134                     56%

When the utilization was presented to the head of maintenance, he was
surprised to find it so low. To address this problem, a number of things
were put in place at this plant.

Automatic calculation of control utilization was implemented using fea-
tures of their historian.

An instrumentation team was formed to investigate loops that were not
running in their normal design mode. This team was responsible for mak-
ing sure measurement, control valve, and process problems were
addressed in a timely fashion.

The reduction in variability provided by these two steps led to significant
improvements in plant throughput and product quality. Long after the
team was formed, it continues to provide significant, quantifiable results
to the plant through constant evaluation and improvement of control per-
formance.
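As a rough illustration of the arithmetic behind such a survey, here is a minimal Python sketch; the snapshot records and names are ours, standing in for data pulled from a historian or a control system snapshot:

```python
# Percent utilization = loops running in their design mode / total loops.
from collections import defaultdict

# (area, designed mode, actual mode) records -- illustrative only
snapshot = [
    ("Bleach Plant", "CAS", "CAS"),
    ("Bleach Plant", "AUTO", "MAN"),
    ("Paper Mill", "AUTO", "AUTO"),
    ("Paper Mill", "CAS", "AUTO"),
]

totals, in_normal = defaultdict(int), defaultdict(int)
for area, designed, actual in snapshot:
    totals[area] += 1
    in_normal[area] += (actual == designed)

for area in totals:
    pct = 100.0 * in_normal[area] / totals[area]
    print(f"{area}: {in_normal[area]}/{totals[area]} in normal mode ({pct:.0f}%)")
```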

Control Variability
The response of a control loop in its designed mode of operation may not
be adequate to compensate for process disturbance and changes in set
point. The root cause of poor control performance may be traced to the
control design, tuning, measurement or actuator performance. Many mod-
ern control systems provide a means to automatically quantify the varia-
tion seen while on automatic control. Using this embedded feature, it is
possible to quickly identify control loops that require maintenance. If such
capability is not available in the control system, it may be possible to uti-
lize the statistical calculations included in a plant historian or to add loop
performance-monitoring packages that connect directly to the DCS or to
the plant historian through OPC connectivity. Both methods will monitor
control loop variability and provide insight into changes in control perfor-
mance.

By comparing the current variation over a shift or day to that previously
seen for the same operating condition and timeframe, it is possible to iden-
tify loops whose performance is degraded. Through the investigation of
problems that cause increased variability, it is often possible to find the
root cause and significantly improve plant operation. As discussed in
Chapter 3, this reduction in variation often makes it possible to shift oper-
ating points closer to operating constraints or to maintain the best operat-
ing set point, and thus provide greater throughput or improved quality.

Quality Variable Monitoring


We can use an SPC univariate chart to easily detect where the process is
going out of control. Consider the analysis of the two variables X1 and X2
in Figure 4-3. Although neither variable appears to demonstrate any sig-
nificant deviation from the norm, a scatter plot of the data clearly points to
an outlier. This irregularity in the operation of the process could only be
detected by revealing how this point differs from the normal correlation
structure of the process data. X1 and X2 could not independently pinpoint
the fault, but the relationship (correlation) of X1 to X2 clearly identifies a
change in the process. Thus, multivariate statistical analysis models the
correlation structure of the process data in time and detects abnormal
behavior by pinpointing deviations from the normal correlation structure.
The simple example of monitoring the correlation's structure in the two
variables can easily be extended to monitoring hundreds of process vari-
ables.

Figure 4-3. Scatter Plot Reveals an Outlier

Caster Monitoring
The next sample industrial process we will consider for MSPC is a caster
monitoring application in a steel mill. The process is a continuous, straight
mold, Demag slab caster, shown in Figure 4-4. Liquid steel enters the
caster process where it is partially solidified in a water-cooled copper
mould. The steel exits the mould, where the steel is contained within a
thin solidified shell or skin, and cools as it moves along the process line
towards the run-out. Upon leaving the mould, there is potential for the
loss of containment of the molten steel. This loss of containment and
release of molten liquid steel is called a breakout. A casting breakout is
an extremely hazardous occurrence that causes equipment damage and
loss of production. An MSPC monitoring system was installed to provide
real-time monitoring of the casting process variables in order to predict
potential breakout conditions and provide operators with information to
help diagnose casting process problems.

Figure 4-4. Slab Caster Process

The MSPC casting application has a few multivariate SPC control charts
representing the stable mould operation and alarms on detection of any
significant anomalies in the behavior of the process variables. At any time,
the operator can view how the measured plant values are contributing to
these alarms, to help diagnose process problems. The final PCA model of
the process contained over 240 inputs, collected selectively over several
months of operation in order to cover many different operating scenarios.
The PCA models were further summarized into control charts, using
Hotelling's T2 statistic, to be monitored along with the prediction-error
control chart. T2 statistics provide an indication of variability. The T2 is
named for Harold Hotelling, a pioneer in multivariate statistical analysis.

These statistical control charts were again summarized in a composite set
of alarms for operator notification. This multivariate SPC information is
displayed on the Caster Main control screen, along with other pertinent
mould data, as shown in Figure 4-5.

These three charts form the basis of the real-time MSPC system, providing
continuous information to the operator on the stability of the mould oper-
ation. This example of an MSPC system, implemented in a steel plant,
delivered benefits in the areas of increased production, improved safety
and much improved process understanding—the ultimate goal of any
monitoring system [4.12].

Figure 4-5. Display Interface for Slab Caster Example (Courtesy of Dofasco,
Inc.) [4.12]

Application

General Procedure
1. Decide on the objectives of performance monitoring in terms of the
relative importance of diagnosing and predicting the following:
a. Measurement problems
b. Rotating equipment problems
c. Hydraulic resonance and shock waves
d. Pressure regulator and steam trap problems
e. Interlock and alarm system sequence of events
f. Control valve problems
g. Improper controller tuning
h. Improper controller modes
i. Interaction between control loops
j. Feedforward and disturbance analysis
k. Weeping and flooding of column trays
l. Distribution and mixing of feeds, components, phases,
polymers, and particles in fluidized beds, columns, fermentors,
reactors, and crystallizers
m. Analyzer and sample system problems
n. Lab sample and analysis process problems
o. Ambient temperature and humidity effects
p. Raw material problems
q. Fouling and coating (heat-transfer surfaces, polymer pipelines
and reactors, crystallizers, sensors, electrodes, valve trim,
column trays, particle filters, ion exchange beds, catalyst)
r. Activity evaluation (beds, catalyst, initiators)
s. Batch sequence and profile problems
2. Select the proper scale, exception reporting, filter time, and scan
time so that information is not lost due to poor resolution or
missing data points.
3. Activate process monitoring embedded in your control system or,
if not available, install a performance monitoring application that
meets your objectives.
4. Establish a best period of operation for each plant area or piece of
process equipment and use this as the baseline for relative
performance indices.
5. Turn off and flag the performance analysis of loops that are
intentionally out of service due to the shutdown or sequencing of a
unit operation.
6. Compute the economic performance of your system on line
(efficiency and capacity, preferably in terms of cost of goods and
revenue).
7. Set the deviation alert and alarm limits from the baseline wide
enough to avoid an initial barrage of abnormal conditions when
the system is first turned on, but narrow enough to flag important
issues for the more critical process variables and control valves.
8. Work on down the list of alerts or alarms, prioritized by economic
importance and relative performance, to fix automation and
process deficiencies.
9. Reset the baseline and narrow the deviation limits as you progress
and the performance improves.
10. Establish procedures and best practices for continual
improvement.
11. For constrained model predictive control systems:
a. Display the constraints that are currently governing, the
direction of the change in the affected controlled variables
demanded by these constraints, the percent of time each
constraint has been active, and the percent of time that each
constraint has been violated.
b. Provide trending and statistics of the bias correction to each
trajectory.
c. Provide trending and statistics of predicted values for each
trajectory.
d. Provide trending and statistics of optimization set points (set
points from rampers and pushers).

Application Details
With a modern control system, all control loops are automatically moni-
tored on a continuous basis and any degradation in loop performance or
the detection of an abnormal condition in a measurement, actuator, or con-
trol block is automatically flagged. These systems may thus identify prob-
lem areas sooner than can audits done with portable PC-based tools. By
using automated system performance monitoring and built-in diagnostic
tools, the typically limited resources of plant maintenance can be used to
advantage to resolve measurement, actuator and control problems. As a
result, maintenance costs may be reduced or a higher overall level of sys-
tem performance may be maintained and process variability reduced. Any
reduction in variability can lead directly to greater plant throughput,
greater operating efficiency and/or improved product quality.

Continuous Control
In evaluating the performance of a continuous process, control systems
may calculate indices that quantify loop utilization, measurements with a
"bad," "uncertain," or "limited" status, limitations in control action, and
process variability. In addition, for control loops, the systems show the
potential improvement possible in control loop performance. The follow-
ing are a few practical scenarios that show how such information might be
used to determine an operating problem.

Loop Not in Normal Mode


A critical process input flow loop is designed to normally run in cascade
mode with its set point determined by a composition loop. However, the
midnight shift operator has to change this flow loop to Auto because of
wide fluctuations in the set point. The next morning, the process engineer
for the plant sees that the system flagged the actual mode as being incor-
rect for more than 1% of the time. This information allows the engineer to
quickly spot that there was a problem in the primary loop tuning and thus
minimize any off-spec product that resulted from manual adjustment of
this key flow.

Limited Output
An instrument technician for the power plant notices that the system
shows the oil-flow control loop is limited in operation. Having been
alerted to this problem, the technician determines that the set point for the
oil header pressure control has been lowered below the design pressure,
forcing the oil valve to go fully open under heavy load conditions. When
the pressure control is readjusted to its designed target, the oil valve can
meet its set point without going fully open.

Bad I/O
A key temperature measurement is flagged by the system as having been
"bad" more than 1% of the time over the last day. Having been alerted to
the fact, the instrument technician re-examines the transmitter calibration
and finds that the device has been calibrated for a temperature range that
is too low. Re-calibrating the transmitter restores the accuracy of the mea-
surement and improves the operation of the process.

High Variability
During normal operation of the plant, the plant engineer sets all variabil-
ity limits to the current value plus 5 percent. After a few weeks, he notices
that the system has flagged a critical flow loop as having excessive vari-
ability. Upon further investigation, he discovers that the valve positioner
connection to the valve stem is loose, causing the control loop to cycle
severely. Fixing the valve positioner returns the variability index to its nor-
mal value.

All these cases are quite typical. Individually they may seem to have mini-
mal impact on plant operation. However, the net effect of these problems
could be significant if they were not addressed in a timely manner. An
automated analysis system to evaluate performance can detect abnormal
situations as they happen and thus play a key role in preventing produc-
tion losses and product quality problems.

Built-in diagnostic tools may be used to quickly determine the source of a
problem. Typical diagnostic tool sets will provide support for the follow-
ing:

1. Trends
A high-speed trend provides the ability to trace any parameter
associated with measurement or control actions at the same rate at
which these values change. A trend resolution of 100 msec is usu-
ally sufficient to analyze most measurement and control problems.

A high-speed trend requires over-sampling at the input card and
access at the function-block level.

Experienced control specialists can readily spot a control valve that
has excessive slip-stick from a trend recording of the process vari-
able and controller output. Until waveform recognition systems
are developed to reliably provide this analysis online, there are
some general patterns to look for to help track down the causes of
oscillations.

For example, a rounded sine wave and sawtooth oscillation with a
period much larger than the integral (reset) time are symptomatic
of a reset-induced cycle for a loop dominated by a time constant
and time delay, respectively. If the PV oscillation has a flatter top
(not as rounded for the sine wave or as sharp for the sawtooth), it
is an indication that the control valve has excessive dead band
(backlash).

If the PV oscillation becomes more like a square wave, then the
control valve has appreciable stick-slip. The CO oscillation will
resemble a sawtooth. For stick-slip, the oscillation is a limit cycle
rather than a reset cycle. Any reset action at all will initiate the
cycle and the cycle period will become proportional to the integral
time. A trend of a low-noise flow measurement is the best way to
see these oscillations. For sliding stem valves, the actual valve
position from a smart positioner can be trended. In both instances,
the unfiltered measurements are fast and the patterns for a dead
time-dominant response should be observed on the trend.

2. Histograms
A histogram allows the distribution of the variation in a measure-
ment value or actuator position to be analyzed. A bell shaped dis-
tribution indicates that the source of variation is random in nature.

3. Power Spectrums
A power spectrum is a frequency distribution of the components
that make up a measurement or actuator signal over a selected
period of time. Such information may be helpful in determining
the magnitude and frequency of process noise or the frequency of
disruptions caused by loop interaction or upstream processes.

4. Cross Correlations
The influence of other control loops and upstream conditions on
the variability of a control loop may be determined through cross
correlation.
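A minimal NumPy sketch of the last two tools; the signals here are synthetic and the names are ours, but the same periodogram and cross-correlation arithmetic applies to logged loop data:

```python
import numpy as np

# Synthetic data: a 0.05 Hz oscillation buried in noise, plus a copy
# delayed by 20 samples to mimic a downstream loop seeing an upstream
# disturbance.
fs = 1.0                                   # sample rate (Hz)
t = np.arange(0, 600, 1.0 / fs)
rng = np.random.default_rng(1)
upstream = np.sin(2 * np.pi * 0.05 * t) + 0.3 * rng.standard_normal(t.size)
loop_pv = np.roll(upstream, 20) + 0.3 * rng.standard_normal(t.size)

# Power spectrum: the dominant peak reveals the oscillation frequency.
freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)
psd = np.abs(np.fft.rfft(loop_pv - loop_pv.mean())) ** 2 / t.size
print(f"dominant frequency: {freqs[np.argmax(psd[1:]) + 1]:.3f} Hz")

# Cross-correlation: the lag of the peak shows which signal leads.
a = (upstream - upstream.mean()) / upstream.std()
b = (loop_pv - loop_pv.mean()) / loop_pv.std()
lag = np.argmax(np.correlate(b, a, mode="full")) - (t.size - 1)
print(f"loop PV lags the upstream signal by {lag} samples")
```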

Diagnostic tools for process and control analysis are available as embed-
ded features in some control systems. These tools may be used to analyze
both fast and slow processes. If an integrated tool set is not available, diag-
nostic applications can be layered on the control system for diagnostics,
although they are often limited to analysis of slow processes. Figure 4-6
illustrates the diagnostic information that is presented to the user by a typ-
ical performance monitoring system.

Figure 4-6. Process and Control Diagnostic

Batch Control
The application of online performance monitoring is most often associated
with continuous processes. However, the percentage improvement from
its application to batch control could be larger because the state of the con-
trol loops typically receives less attention than the sequence of the batch
operation. The emphasis to date in batch control has been on event sched-
uling rather than on process control. The benefits of monitoring can be
especially significant when you consider the higher profit margin and
value-added opportunities for specialty chemical and pharmaceutical
products, which are almost always batch processes. In such cases, the
answers to Opportunity Assessment questions (4) or (5) are frequently
Yes.

Figure 4-7 shows a batch reactor for a sold-out high profit-margin spe-
cialty chemical plant. One of the feeds and a byproduct can be recovered
from the vent system if there is sufficient condensing to reflux unreacted
feed back to the reactor and there is no liquid carryover from high level or
foaming. Periodically there are interruptions due to high reactor pressure,
level, and exchanger temperature. The cooling water pressure is some-
times too low and its temperature too high. The feed profile is scheduled
to minimize the activation of interlocks and excessive riding of the con-
denser, vent system, and exchanger limits. The efficiency and safety of the
operation depends upon a uniform quality, flow rate, and distribution of
the feeds within the batch. There is no online composition measurement of
the product. The yield varies significantly with each batch.

Figure 4-7. Batch Reactor Control System

A performance monitoring system is installed that provides indices on
variability, the detection of limits, power spectrums, and cross correla-
tions. The results are as follows:

The standard deviation and variability indices are high and the control
output blocks are limited at times during the batch for the reactant B and
antifoam feed loops, the condenser and exchanger temperature loops, and
the reactor pressure loop.

The low-limit flag for the output function block of the reactant B flow con-
troller is periodically activated throughout the batch. After some investi-
gation, it is discovered that the original sliding stem control valve was
replaced with a ball valve that has too much capacity and too much fric-
tion near the closed position. The reactant B flow controller ramps up its
output but there is no appreciable increase in flow above the leakage flow
until its output reaches 6 per cent. Then there is a burst of flow so high
above the set point that the controller must close the valve. This square
wave of flow from the stick-slip of the oversized control valve continues
through most of the batch.

The antifoam controller output during some batches ramps up to its high-
output limit. A correlation analysis shows that this coincides with the
simultaneous demand for antifoam by other batch operations and a dip in
supply pressure. An undersized antifoam pump is pinpointed as the cul-
prit. A further look at other variables shows there is a significant cross cor-
relation between high antifoam flow controller output and high reactor
pressure controller output and high condenser temperature controller out-
put. This indicates the possibility of carryover into the vent system that
can result in contamination of the byproduct and a corresponding loss in
yield of the main product.

A trend of the condenser temperature variability indices and limit flags
shows that the condenser performance is often worse during hot summer
afternoons. There appears to be no correlation with the sequence of other
batch operations. A review of the cooling-water system shows that the
cooling-water valves for other unit operations are left wide open all the
time and that the reactor condenser is at the highest elevation in the plant.
Furthermore, the cooling tower water temperature on hot days reduces
the Log Mean temperature difference for the exchanger by more than 20%.
This, combined with the inability to increase cooling water flow due to the
low pressure at the condenser elevation, causes the condenser control
valve to go wide open and the condenser temperature to float above the
set point. The high condenser temperature decreases the reactor yield and
increases the variability in the composition of the byproduct and recycle
recovered by the vent system. The installation of a higher head cooling
tower water pump and the commissioning of temperature control loops
for the other batch processes is a much lower-cost solution than the addi-
tion of a new cooling tower as favored by operations.

A power spectrum analysis of the eductor pressure loop in the vacuum
system and the reactor pressure controller shows that there are large peaks
at the same frequency and that these peaks get larger during hot summer
months. Upon inspection, the integral time of the eductor pressure loop is
much less than the period of oscillation that corresponds to the common
frequency with significant power. This is a clue that there is a reset cycle
from excessive integral action from the eductor pressure loop. During hot
summer days the process gain and time constant are less, making the reset
cycle more severe.

The above analysis reveals the importance of extending the scope of per-
formance monitoring to other unit operations, utility systems, vent sys-
tems, and ambient conditions. It also shows the value of being able to
trend performance indices, status flags, events, and variables to look for
coincident events. It is important that the trend be fast enough to deter-
mine what happened first and that the oscillation period or waveform not
be distorted by aliasing. Finally, there is obviously a need for some basic
understanding of the system to determine the actual cause-and-effect rela-
tionships.

The knowledge gained from performance monitoring systems leads not
only to immediate improvements in valve and pump sizes and controller
tuning, but also to the implementation of advanced control techniques. In
the chemical plant example, the identification of the limits can be used to
provide constraints for model predictive control. In this case, the con-
straints are condenser, exchanger, and vent valve position; the variable to
be optimized is reactor feed. A model predictive controller could be set up
to maximize reactor feed without violating utility and vent system limits.

Once the basic system has been improved, the use of performance moni-
toring can be extended to include online multivariate principal component
analysis (PCA). A PCA worm plot can provide rapid recognition of
batches that start to trend away from normal operation to warn of abnor-
mal conditions before they cause significant problems of reduced yield or
capacity. It can also lead to Partial Least Squares (PLS), Neural Network,
and dynamic linear estimators of batch end time and product concentra-
tion based on various peaks during the batch cycle. For the above reactor,
it turned out that the amount of fresh feed added to reach the first peak in
temperature was an indication of the composition of the recycled feed and
a sustained low condenser and vent valve position could be used to pre-
dict the batch end point.

While the above analysis was for a specialty chemical, the opportunities
for fermentors could be even greater due to the need for tight dissolved-
oxygen and pH control despite interactions and interferences and the
advantages of having online estimators to detect deviations of batch con-
ditions and cell and product concentration. Similarly, the analysis should
be extended to substrate and nutrient feed systems, antifoam systems,
vent systems, and utility systems. Figure 4-8 shows the main control loops
for a batch fermentor.

Figure 4-8. Batch Fermentor Control System

Multivariate Analysis
Although the methods for monitoring plant data have traditionally been
limited to statistical process control (SPC) or univariate statistical analysis,
manufacturing industries have made good use of these methods to iden-
tify and monitor key quality variables. Today, the volume of data being
extracted by automation systems has increased by orders of magnitude.
The drive to understand quality coupled with the vast amounts of data
can present a formidable task.

What are the factors that make process data analysis a challenge?

• Correlation: Although we may be monitoring hundreds of
variables, there are not hundreds of independent events occurring.
The significant correlation in the variables, with respect to each
other, points to only a few underlying themes.
• Dimensionality: The average chemical plant now records and
stores thousands of variables every minute in a plant historian. This
volume of data is difficult to handle, forcing people to look at only a
small subset of variables, and thus only a fraction of the overall
information.

And what are the elements that make PCA and PLS good multivariate
techniques for modeling plant behavior from real-time operating data?

• Handle large volumes of data
• Deal with highly correlated variables by reducing the space to a set
of independent factors
• Robust to missing data and noisy data
• Ability to drill down and understand contribution of plant
variables
• Standard set of graphical tools to represent results

The flow of the MSPC analysis path is quite simple.

1. Build a correlation model of your process


Although a process may contain hundreds of variables that are
being monitored by plant information systems, there are never
hundreds of independent occurrences. Most of the data moves in a
correlated fashion, which is the basis of our model. MSPC will
form new variables, called latent variables, which are simply linear
combinations of the existing plant variables, but which are selected
to model the greatest directions of variability in the data. These
latent variables are generated to not only extract the maximum
degree of variability from the data, but to do so such that the new
latent variables are independent of (orthogonal to) each other.
Often, a very large portion of the variability in plant variables can
be summarized and compressed into a relatively small number of
latent variables.

2. Monitor the reduced set of latent variables


Reducing the order of the problem is the key to MSPC. If the vari-
ability in hundreds of plant variables can be captured and modeled
with several new latent variables, our task is much more manage-
able.

3. Identify deviation from normal plant behavior


As with univariate SPC, MSPC relies heavily on visualization and
graphical tools to monitor the reduced set of latent variables.
Although the new latent variables do not have any physical mean-
ing, tracking these variables in a control chart fashion can clearly
identify deviation from normal behavior.

4. Identify which and how the original plant variables contribute to the
deviation
One of the principal strengths of the MSPC approach is the ability
to drill down and identify how the original plant variables contrib-
ute to the abnormal behavior in plant performance. Knowing how
the original plant variables contribute allows action to be taken to
bring the plant back within control.

In order to better understand the process described above we can graphi-
cally interpret MSPC applied to a simple process monitoring 3 process
variables: temperature, pressure and flow. Time-series plots of the vari-
ables are shown in Figure 4-9 with no apparent abnormal behavior. The
data can also be plotted in a 3-D space, as shown in this figure.

Figure 4-9. Plots of Process Variables

The first step in MSPC is to draw a line through the data in the direction of
maximum variability. This line is called the first principal component and
projecting the data points onto this component defines our first latent vari-
able. A second line can now be drawn through the data, orthogonal to the
first (linearly independent), and in the direction defining the second great-
est variability as shown in Figure 4-10. This is known as the second princi-
pal component, and again the data points can be projected onto this
principal component to generate latent variable 2.

These 2 new principal components now form a new plane in our data
space, which is commonly referred to as a scores plot. The scores plot for
our temperature, pressure and flow data is shown in Figure 4-11; it cap-
tures over 92% of the variability in the dataset. The scores plot has reduced
the order of our data from 3 dimensions to 2 dimensions. Monitoring how
the data points move on this plot is one of the graphical tools used in iden-
tifying abnormal behavior.


Figure 4-10. First and Second Principal Components

Figure 4-11. Scores Plot Shows Contribution of Principal Components and a Strong Outlier

In Figure 4-11, for example, the temperature, pressure, and flow data show
a significant deviation from the model norm, behavior that is not easily
detected from the time-series trends. Although we have limited our analysis
to reducing 3 variables down to 2 principal components, we can often reduce
hundreds of variables to only a few principal components and still capture a
significant amount of the variability in the dataset.

In any statistical online monitoring application, graphical output and
visualization of the data are important tools for identifying abnormal
behavior. In MSPC the most common plots include the scores plot, the
model residual plot, and contribution plots.

Scores Plot
The scores plot allows the observations to be viewed in the new co-ordi-
nate system of principal components. The new principal components will


form a sub-space, and projecting the observations down onto this sub-
space will allow us to visualize how our observations relate to our PCA
model of the process. A common technique for real-time monitoring is to
connect the observations in the scores plot, forming a worm of a preset
length. Every new observation adds a value to the head of the worm,
while an older observation is dropped from the tail. Following the direc-
tion of the worm can indicate when the process is trending towards abnor-
mal behavior.
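
A minimal sketch of such a worm, reusing the fit_pca/scores helpers assumed in the earlier sketch; the preset length is illustrative.

from collections import deque
import numpy as np

WORM_LENGTH = 20                        # preset length of the worm (illustrative)
worm = deque(maxlen=WORM_LENGTH)        # the oldest observation drops off the tail

def update_worm(x_new, mean, std, loadings):
    """Project one new observation onto the model plane and add it to the head."""
    t = ((np.asarray(x_new) - mean) / std) @ loadings
    worm.append(t)                      # deque enforces the preset length
    return np.array(worm)               # connect these points on the scores plot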

Hotelling T²
The Hotelling T² is a statistical parameter indicating the variability of the
multivariable process. It can be used to classify an observation in the data
as a strong outlier. A strong outlier in the data indicates abnormal
behavior. The Hotelling T² statistic is most often represented as an ellipse
on the scores plot with 95% or 99% confidence intervals. Thus, any observation
outside of these limits indicates a strong outlier that does not conform to the
normal correlation model of the process data. Figure 4-11 illustrates the
result of the Hotelling T² classification for a process with two principal
components.

Prediction Error or Model Residual Plot

The prediction error is an indication of a moderate outlier and is computed
as the model residual for each observation. In MSPC, the prediction error
is usually plotted against time with a critical tolerance level.
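
The two statistics can be sketched as follows, again assuming the PCA helpers above; score_var is the variance of each latent variable over the training data, and the limits shown are illustrative placeholders rather than values from the F and chi-squared approximations used in practice.

import numpy as np

def t2_and_spe(x, mean, std, loadings, score_var):
    """Hotelling T2 (strong outlier) and squared prediction error (moderate outlier)."""
    z = (np.asarray(x) - mean) / std
    t = z @ loadings                      # scores of this observation
    t2 = np.sum(t ** 2 / score_var)       # distance within the model plane
    residual = z - t @ loadings.T         # the part of x the model cannot explain
    spe = np.sum(residual ** 2)           # distance off the model plane
    return t2, spe

T2_LIMIT, SPE_LIMIT = 9.2, 4.0            # hypothetical 99% confidence limits

An observation with T² above its limit is a strong outlier on the scores plot; an observation with SPE above its tolerance is flagged on the model residual plot.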

Contribution Plot
One of the many advantages of the PCA approach is that the information
in the model is open for inspection. If an abnormal situation is detected, the
system can present how the original variables contribute to the detected
fault. Contributions are most often presented as the difference from the
model center or the difference between two observations.
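
A minimal sketch of the contribution calculation, with the same assumed helpers: the per-variable terms of a score sum to the score itself, and the per-variable residual terms sum to the SPE, so large entries point at the original variables driving the deviation.

import numpy as np

def score_contributions(x, mean, std, loadings, component=0):
    """Per-variable contribution to one score (difference from model center)."""
    z = (np.asarray(x) - mean) / std
    return z * loadings[:, component]      # these terms sum to the score

def spe_contributions(x, mean, std, loadings):
    """Per-variable contribution to the squared prediction error."""
    z = (np.asarray(x) - mean) / std
    residual = z - (z @ loadings) @ loadings.T
    return residual ** 2                   # these terms sum to the SPE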

Although this treatment of the MSPC topic has been limited to the
monitoring and analysis of continuous systems, the techniques can also be
applied to batch modeling and monitoring. In the analysis of batch data,
many batch variables can be modeled using the techniques of PCA to gen-
erate a set of new latent variables. Analysis of known good batches can
then generate a series of latent variables that define the golden batch tra-
jectory and target path for the batch process. This is known as intra-batch
modeling and helps provide early detection of abnormal batch operation,
within a specific phase of the batch. Moreover, MSPC has also been effec-
tively applied to the monitoring of overall batch performance, with the
ability to condense the overall batch information such that it can be com-
pared to several other batches.

Rules of Thumb
Rule 4.1. — Make sure the scan time of the I/O, module, and performance moni-
toring system is faster than 1/5 of the dead time for fault diagnostics. This will
prevent misalignment of events and aliasing of loop oscillations because it
will ensure at least 10 samples per oscillation for dead time-dominant
loops and 20 samples per oscillation for self-regulating processes with a
large process time constant.

Rule 4.2. — To analyze items 1a through 1e in the General Procedure, the scan
time should be as fast as the input card or field device to avoid improper identifica-
tion of periods of oscillation (aliasing) and the sequence of events. The speed and
richness of diagnostics from Fieldbus devices can be an important advan-
tage.

Rule 4.3. — To track down the cause of oscillating control loops, locate the loops
with significant power at the same frequency on a process flow diagram (PFD)
and trace the direction of manipulated flows between the loops. The loop from
which the manipulated flow originates is the source of the oscillations. It is
generally, but not necessarily, the furthest loop upstream.

Rule 4.4. — Watch out for periodic disturbances. Aggressive tuning of level
loops, valve stick-slip, too small a reset time on loops dominated by a time
lag (reactors, fermentors, evaporators, crystallizers, and columns), and too
high a controller gain for loops dominated by a time delay (webs, sheets,
and pipelines) are the most common causes of a periodic disturbance. If
the loop manipulates an inlet or outlet flow to the volume, the oscillating
loops will be upstream or downstream, respectively.

Level loops on surge and feed tanks tend to be tuned too tightly. Level
measurement noise is a frequent problem, particularly if a high gain or
any rate action is mistakenly used. Valve stick-slip often appears as a
square wave limit cycle. If the reset time is less than the oscillation period
for a loop dominated by a large time constant, the reset time is probably
too small. These loops need to overdrive the manipulated variable and
require more gain than reset action (see Chapter 2). Conversely, the pulp
and paper industry tends to have loops that are dominated by large time
delay, and consequently the use of a gain or reset time that is set too high
is more of an issue.

Rule 4.5. — If the oscillation period is less than 4 seconds, it probably originates
from rotating equipment, pressure regulators, actuator instability, burner insta-
bility, resonance, or measurement noise. Incipient surge and buzzing due to
dips in the characteristic curves of compressors and high-pressure pumps
are a principal cause of fast oscillations in gas and boiler feed water sys-
tems, respectively. Oversized field pressure regulators and inadequately

sized actuators and springs can cause large rapid fluctuations in flow.
Burner instability can cause rapid oscillations in fuel flow and furnace
pressure.

Rule 4.6. — Items 1a through 1i in the General Procedure can usually be handled
by performance monitoring systems that provide relative statistical measures of
variability, capability, utilization, sustained cycling, oscillation frequency and
power (Power Spectrums), saturated controller outputs, and limited process vari-
ables.

Rule 4.7. — The diagnosis and prediction of process equipment, operation,
sequence, allocation, and raw material problems (items 1j through 1s in the Gen-
eral Procedure) generally require cross correlation analysis and multivariate prin-
cipal component analysis (PCA).

Rule 4.8. — Online property estimators should be created to enhance the repeat-
ability and reliability of online and lab analysis systems by the addition of Partial
Least Squares (PLS) analysis, dynamic linear estimators, or neural networks.
Time delays and time constants must be used to align the inputs with out-
puts. Some of the need for data alignment can be reduced by the use of
long scan times but this slows down the calculation for estimation and
fault detection to a point where it may be too late or unable to resolve
what occurred first. This is a more important issue for continuous opera-
tions than batch operations since batch outputs can be tied to a batch, step,
and phase identification number and time (see Chapter 8).

Rule 4.9. — Control loops that are intentionally not in service due to batch oper-
ations, product or grade produced, or trains of equipment that are shut down to be
maintained or cleaned, must be flagged as such and other diagnostics automati-
cally suppressed. Otherwise, true problems are camouflaged by false alerts
and alarms. Also, the "Normal Mode" of loops used in such batch opera-
tions should be automatically updated to reflect the needs of the batch
sequence.

Guided Tour
This tour illustrates the potential ease of use and convenience of an inte-
grated interface that is possible for a performance monitor embedded in
an industrial control system. The following areas are addressed:

• Summary of measurement, actuator, and control performance
• Examination of detail state
• Information provided to an operator

This performance monitoring capability is accessed from any operator or
engineer station within the control system. The primary interface is
illustrated in Figure 4-12.

Figure 4-12. System Performance Monitoring Interface

From the explorer view at the left of the interface, the user may select the
entire plant or an individual process area to examine. Based on this selec-
tion, a summary is provided that shows the number of modules that are
being utilized for control, monitoring, and calculation within the selected
area. Also, a summary is provided of the modules that have an I/O or
control block with abnormal conditions. The individual modules that have an
abnormal condition are listed in the right portion of the view in the Sum-
mary tab. The types of problems that have been detected are shown using
a frown face.

When a module is selected in the summary list, the individual I/O and
control blocks in that module are listed in the bottom right portion of the
interface. The percent time that an abnormal condition existed is shown
for each block. If this time exceeds the defined limit, then this condition is
flagged as an abnormal condition in the interface; that is, a frown face is
shown for the module. Through the Filter selections, it is possible to view
information for the previous or current hour, shift or day. Also, the user
may select the type of blocks and whether all modules or just those with
abnormal conditions are displayed.

The following types of abnormal conditions are automatically detected:

• The actual block mode is not the normal (designed) mode of operation
• Control action is limited by a downstream condition

• Input status indicates the value is limited or has a Bad or Uncertain status
• The calculated variability index and standard deviation exceed
their limits

In many plants, it is common for a piece of equipment to be shut down at
the end of a batch or to be taken out of service for maintenance. During
these periods of down time, the measurements on that equipment may
indicate a Bad status. To prevent this information from being included in
the performance monitoring view, the user may select an area and then
indicate that the associated equipment is off-line. Such areas are clearly
indicated in the view of the equipment and are removed from monitoring.
An example where the RSU unit has been removed from monitoring is
shown in Figure 4-13.

Figure 4-13. Disabling a Selected Area of Monitoring

The user may find out more information on an abnormal condition by
using the detail tabs in the interface. For example, by selecting the Mode
tab, further information is provided on the actual and design mode of the
blocks within the module, as shown in Figure 4-14.

Figure 4-14. Detail Information for an Abnormal Condition



By selecting the Print icon, the user may choose to print a module sum-
mary report or a detail report. Also, to allow the operator to view the per-
formance and utilization information from his displays, a standard
function block and dynamo are used to include this information in the
operator interface. Through this dynamo an operator may also enable and
disable the monitoring in the associated area. This dynamo appears in an
operator display as shown in Figure 4-15.

Figure 4-15. Operator Access to Performance Information

Theory
The ability to quickly inspect control and measurement loops has a pri-
mary importance in industrial applications. Both poorly tuned loops and
malfunctioning field devices jeopardize product quality and production.
In the last decade this problem has received significant attention from both
academia and practitioners. Much of this work has focused on an assess-
ment of control loop performance using a minimum variance controller as a
reference; see Harris [4.1] and Desborough and Harris [4.2] and [4.3]. Bea-
verstock et al. used heuristic definitions of unit production performance
tailored to specific applications, rather than loop performance [4.4].
Numerous other researchers have advanced performance monitoring. In
particular, Rhinehart [4.5] explored simple ways of computing standard
deviation from measurement-to-measurement deviations and filtering.
Harris et al. [4.6] and Huang et al. [4.7] provided guidelines for perfor-
mance assessment of multivariable controllers. Qin reviewed recent works
on performance monitoring in [4.8]. Shunta gave an excellent practical
summary of performance monitoring in the monograph [4.9].

Automatic detection of abnormal operating conditions is possible with
Fieldbus architecture and Function Blocks (FB), features available in
modern control systems. In older control systems, a monitoring application
may often be added as an OPC client. Based on these features, it is possible
to quantify loop utilization, "bad" or "limited" measurements, limitations
in control, and process variability. In addition, for control loops the poten-
tial improvement in loop performance is calculated. This functionality is
possible due to the Function Block features, defined by the Fieldbus Foun-
dation. Every function block has specific parameters for current block
input or output parameter status, block mode and total and capability
standard deviation. These block parameters are used by process monitor-
ing applications and for creating summary reports for hour, shift and day.

Major manufacturers of process instrumentation have introduced a new
generation of field devices based on the standards
established by the Fieldbus Foundation. To allow interoperability of
devices from different manufacturers, a common implementation is sup-
ported using a standard set of function blocks for measurement, control
and calculations. A key concept in allowing control and calculations to be
distributed between devices or between a field device and a controller is a
common implementation of block mode supported by each function
block. Also, each input and output of a function block is designed to pro-
vide a status attribute along with a value, as illustrated in Figure 4-16.

Figure 4-16. Block Parameter Status and Mode are Utilized in Performance
Monitoring

The status associated with each function-block output gives a direct
indication of the quality of the measurement or control signal. Quality is
defined by describing the measurement as suitable for control (Good),
questionable for use in control (Uncertain) or not suitable for use in con-
trol (Bad). In addition, the status provides an indication of whether a mea-
surement or control signal is high or low limited. For example, if a
measurement is operating above its calibration range, the quality of the
measurement may be shown as "Uncertain." Downstream blocks may

provide a status of "Good Limited" to the upstream control block to
indicate that a valve is at its upper or lower end of range.

The modes of Fieldbus blocks consist of four attributes, which may be
very valuable in examining the operation of a control block. When a
control block is initially configured, the "Normal" attribute of the mode
parameter is defined based on the design mode of the block. During nor-
mal operation, the operator sets the "target" mode attribute to enable or
disable automatic control and to indicate whether the operator or another
block or application has control of the block set point and block output.
The block reflects the mode of operation it can achieve in its "actual"
mode attribute based on the target mode and the status of inputs to the
control block. By comparing the actual mode to the normal mode attribute
it is possible to determine if a function block is operating in its designed
mode of operation.

By continuously monitoring the status and mode supported by Fieldbus
function blocks, it is possible to automatically determine the following
abnormal conditions in control and I/O function blocks:

• Bad I/O — the status of the block process variable (PV parameter) is
"bad," "uncertain" or "limited." A sensor failure, inaccurate
calibration, or measurement diagnostics have detected a condition
that requires attention by maintenance.
• Limited (control action) — a downstream condition exists that
limits the control action taken by the block. Such limitations may
prevent the loop from achieving or maintaining set point.
• Mode not Normal — The actual mode of the block does not match
the normal mode configured for the block. An operator may change
the target mode from normal because of equipment malfunction.

The percent time that these conditions exist over an hour, a shift, and a day
is computed for every block and compared to a configured global limit for
each condition. When one of these limits is exceeded, the associated
module is displayed in the main summary display.

It is possible that the status of all inputs to a control block is normal and
the mode of the block is correct and yet the control provided is poor. No
indication is included in the standard block defined by Fieldbus that
directly indicates this problem. However, statistical techniques exist that
allow the quality of control to be determined in a reliable manner. Leading
companies in the process industry use such techniques to evaluate the per-
formance of control loops [4.9]. Based on a knowledge of total and capabil-
ity standard deviation of the control measure, illustrated in Figure 4-17, it
is possible to compute a variability index for control measurements that

compares current control performance to the best achievable for the pro-
cess dynamics.

Figure 4-17. Capability and Total Standard Deviation for Process Variable

However, to accurately determine total and capability standard deviation,
it is necessary that the measurement of fast processes be sampled at a rate
that cannot be achieved by the communication networks of most DCS sys-
tems used in plants today. Thus, in the past, the application of this technol-
ogy has been limited to the analysis of slower control loops or to special
dedicated tools that support the sample rates required for faster processes.
It is now possible for the manufacturer of the Fieldbus device and modern
control systems to build these calculations into the control and I/O func-
tion blocks to support control analysis. By taking this approach, the con-
trol of even the fastest process may be accurately evaluated.

Using Statistics for Control Performance Evaluation


The valuable role that statistics can play in evaluating control performance
is well known. Standard deviation of the process measurement (Equation
4-1) is commonly used and is an easy-to-understand measure of process
variability:

$S = \sqrt{\dfrac{1}{n-1}\sum_{i=1}^{n}\left(X_i - \bar{X}\right)^2}$    (4-1)

where n is the number of samples (typically n = 120) and $\bar{X}$ is the mean value of the samples:



$\bar{X} = \dfrac{1}{n}\sum_{i=1}^{n} X_i$    (4-2)

When loop performance is of concern, standard deviation alone may not
provide sufficient information to evaluate loop tuning. To gain an
objective judgment, a reference value for the process control performance
is required. The best performing feedback control theoretically is
minimum variance control, with standard deviation Sfbc. This value may
be calculated [4.9] directly from knowledge of the total, Stot, and
capability, Scap, standard deviations of the process measurement, as
shown in Equations 4-3 and 4-4.

(4-3)

Scap is calculated from the process samples $X_i$ as:

$S_{cap} = \dfrac{\overline{MR}}{1.128}, \qquad \overline{MR} = \dfrac{1}{n-1}\sum_{i=2}^{n}\left|X_i - X_{i-1}\right|$    (4-4)

Based on the values of the total standard deviation and the standard
deviation for minimum variance control, it is possible to define a
variability index for the control loop that reflects how close control
performance comes to minimum variance:

$VI = \left(1 - \dfrac{S_{fbc} + s}{S_{tot} + s}\right) \cdot 100\%$    (4-5)

where s is the sensitivity factor.


Finally, for control blocks, overall control performance is as follows:

Performance Index (PI) = (100-VI) (4-6)


The overall control utilization is an indication of what percent of time
loops were in normal mode:

Utilization = Average Percent Time in Normal Mode


To support the VI and PI calculations, the I/O and control function blocks
in the scalable system and Fieldbus devices calculate Scap and Stot and
make them visible as block parameters. Intermediate calculations are done

each execution of the function blocks. The parameter values are then
updated every n executions of the function block. In a typical
implementation, the update is done after 120 executions.

To simplify calculations and memory requirements, the total standard
deviation, Stot, can be calculated from the mean absolute error, MAE, as
follows:

$S_{tot} \approx 1.25 \cdot MAE$    (4-7)

where:

$MAE = \dfrac{1}{N}\sum_{i=1}^{N}\left|X_i - \bar{X}\right|$    (4-8)

The measurement value is used in I/O blocks to calculate the mean value.
In control blocks, either the working set point or the measurement value is
used depending on block mode.

The relation between standard deviation and a mean absolute error can be
verified by computing a mean absolute value for the normalized Gaussian
distribution, p(x):

$p(x) = \dfrac{1}{\sqrt{2\pi}}\, e^{-x^{2}/2}$    (4-9)

$E\left|x\right| = \int_{-\infty}^{\infty} \left|x\right|\, p(x)\, dx = \sqrt{\dfrac{2}{\pi}} \approx 0.8$    (4-10)

Since for the normalized Gaussian distribution the standard deviation is σ = 1:

$S_{tot} = \sqrt{\dfrac{\pi}{2}}\, MAE \approx 1.25 \cdot MAE$    (4-11)

The capability standard deviation, Scap, is calculated as follows:

$S_{cap} = \dfrac{\overline{MR}}{1.128}$    (4-12)

where:

$\overline{MR} = \dfrac{1}{N-1}\sum_{i=2}^{N}\left|X_i - X_{i-1}\right|$    (4-13)

Only the summing component associated with the MAE and MR is done
each execution. The division of the sum by N or N-1 is done as part of the

Stot and Scap calculations only once every N executions (default N = 120).
The capability standard deviation calculation requires a sufficiently fast
sampling rate. The requirements for the sampling rate are similar to those
for the scan rate of a control loop; a practical estimate is to sample five or
more times per process time constant [4.9].
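
The calculation chain can be sketched numerically as follows, using Equations 4-5 through 4-13 as reconstructed above. The window size and sensitivity factor are illustrative, and taking Scap as the minimum-variance reference Sfbc is an assumption made here for the sketch, not the book's Equation 4-3.

import math

def control_statistics(samples, s=0.1):
    """Stot via MAE (Eqs. 4-7/4-8), Scap via moving range (Eqs. 4-12/4-13),
    then VI (Eq. 4-5) and PI (Eq. 4-6); s is the sensitivity factor."""
    n = len(samples)
    mean = sum(samples) / n
    mae = sum(abs(x - mean) for x in samples) / n
    stot = math.sqrt(math.pi / 2.0) * mae          # approximately 1.25 * MAE
    mr = sum(abs(samples[i] - samples[i - 1]) for i in range(1, n)) / (n - 1)
    scap = mr / 1.128                              # d2 factor for subgroups of two
    sfbc = scap            # ASSUMPTION: minimum-variance reference taken as Scap
    vi = (1.0 - (sfbc + s) / (stot + s)) * 100.0   # variability index, percent
    return stot, scap, vi, 100.0 - vi              # PI = 100 - VI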

Extending the Concept to the Multi-variable Environment


In many products designed for control and measurement system perfor-
mance evaluation, the scope of detection is limited to traditional I/O and
traditional control capability. This capability has been extended to address
I/O and control blocks in Fieldbus devices and to provide interfaces and
analysis tools for high-speed trend collection, device diagnostics and tools
for commissioning control. Since MPC control and Neural Network capa-
bility are being embedded in modern control systems, it is a natural exten-
sion of this work to include performance evaluation support that includes
advanced control.

For the purpose of performance evaluation, advanced control can be
defined as "monitoring or control applications that use multiple process
measurements to produce one or more outputs that are used directly or
indirectly to improve the operation of a plant." As illustrated in Figure
4-18, this definition fits the advanced control applications currently used
in industry: model predictive control, fuzzy logic control, adaptive tuning,
soft sensors based on neural networks, plant optimizers (such as RTO+),
and multi-stream blending.

Figure 4-18. Examples of Advanced Control Applications

In spite of the diverse technologies associated with each of these
applications, their requirements for correct operation are basically the
same as those for base-level measurement and control:

• Valid measurements (good and not limited).
• Control action is not limited by downstream conditions.
• The operator and operating conditions allowed the algorithm to
operate as designed; i.e., the application is in the correct mode of
operation.
• Control or calculation objectives were met.

When compared to the measurement and control-block requirements
addressed in single loop control, the primary differences are that (1)
multiple inputs (rather than one) are used for the control or calculation
and to indicate downstream limitations, and (2) the measure of control
performance is based on multiple inputs and targets.

Thus, to extend system evaluation to advanced control applications, the
techniques used in single loop control to determine abnormal inputs,
limited control, incorrect mode, and a control index that exceeds its limit
must be generalized to include multiple inputs and targets.

Addressing Advanced Control


The detection of abnormal conditions or poor performance of an advanced
control application should take into account the multiple inputs and tar-
gets used in the control or calculation. Also, in many cases multiple inputs
must be examined to determine if control has been limited by downstream
conditions. Thus, the four basic indications used by system performance
evaluation to determine abnormal or under-performing control must be
generalized to address the following:

1. Inputs used in the application are abnormal ("bad," "uncertain," or "limited").
2. Inputs associated with downstream conditions indicate control
action was limited.
3. The algorithm was not able to operate as designed; i.e., its mode of
operation was not normal.
4. The process output did not meet the target-operating objective.

In this section, the manner in which each of these calculations may be
generalized to include multiple inputs and targets is detailed.

Abnormal Inputs
For a single input control block such as the PID or FLC control block, the
status of the primary input (PV) of the block may be used to determine if
the input is abnormal. For example, if the PV status is uncertain or "bad"

or is "limited," then the block will be flagged as having an abnormal
input. In advanced control applications in which multiple inputs may be
required for correct operation, any of these inputs that have a status of
uncertain, bad or limited may prevent the control from achieving its goal.
Thus, for advanced control applications, the indication of abnormal
input(s) will be based on all inputs that directly affect the control or calcu-
lation. This may be calculated as follows:

Abnormal Input Condition (Advanced Control) = ∨ Abnormal Condition on Input(I) of the block

where:
∨ = a logical sum (OR) over Conditions I = 1, ..., N

Control Limited
Where advanced control is providing multiple process inputs, the control
interface to the process will be done through I/O or control blocks. Both
provide an output that will be used as a back-calculation input to the
advanced control application. If the control action taken is limited down-
stream, then this is reflected in the status of the associated back-calculation
input. Since the control objective will not, in general, be met if any of the
control outputs becomes limited, then the indication that control is limited
must consider the back-calculation input status associated with all control
outputs. This may be calculated as follows:

Control Limited = ∨ Control Action(I) is downstream limited

where:
∨ = a logical sum (OR) over Conditions I = 1, ..., N

Incorrect Mode
For single input/output blocks, the status of the primary input, cascade
input, back-calculation input, and target mode must be used in determin-
ing achievable mode of operation. This mode of operation is reflected in
the mode parameter as the actual mode attribute. When a block is config-
ured, the customer may indicate the normal mode that the block is
designed to operate in. By comparing the actual mode attribute to the nor-
mal mode attribute for the block, it is possible to determine whether the
block is operating in its designed mode. Advanced control applications
may be engineered to use the standard status and mode definition. To cal-
culate mode, the status of all inputs used in the control and the status of all
back-calculation inputs will be utilized. Each output provided by the
advanced control application will be designed to support handshaking,
bumpless transfer, and windup prevention based on the back-calculation

inputs. Thus, mode will be treated exactly the same as other control blocks
in the control system. Incorrect Mode may be determined as follows:

Incorrect Mode = ∨ Incorrect Mode(I)

where:
∨ = a logical sum (OR) over Conditions I = 1, ..., N, where input or output I is required to continue control.

Control Index
For single loop PID and FLC control, the variability index is calculated
based on the total and capability standard deviations calculated in the
control block. In an advanced control application involving multiple
inputs and targets, the measure of control performance must consider
each controlled input compared to its target value. For advanced control
applications, such as MPC or multi-variable fuzzy logic, this concept may
be extended to calculate an average index or minimum value of the index:

$CI_A = \dfrac{1}{L}\sum_{i=1}^{L} CI(i)$

$CI_M = \min_{i} CI(i)$

The Control Index for an advanced control application is an average or a
weighted average of the control indices for the individual controlled
variables. A maximum value of the control index may be used as well in
some applications.

We compute the CI for a particular controlled variable in the same way as
for single loop control.
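
A minimal sketch of these generalized checks follows; the status strings and the dictionary records standing in for function-block inputs and back-calculation inputs are hypothetical.

ABNORMAL = {"Bad", "Uncertain", "Limited"}

def abnormal_input(inputs):
    """Logical OR of the abnormal condition over all inputs used by the application."""
    return any(i["status"] in ABNORMAL for i in inputs)

def control_limited(back_calculations):
    """Logical OR of the downstream-limited condition over all control outputs."""
    return any(b["downstream_limited"] for b in back_calculations)

def control_index(ci_values, weights=None):
    """Average (or weighted average) control index, plus the minimum value."""
    if weights is None:
        ci_avg = sum(ci_values) / len(ci_values)
    else:
        ci_avg = sum(w * c for w, c in zip(weights, ci_values)) / sum(weights)
    return ci_avg, min(ci_values)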

Diagnostic Tools
Insight into the source of variation may be obtained using tools that sup-
port the calculation of power spectrum and cross correlation:

• Power spectrum shows how measurement variability is distributed over different frequencies.
• Cross correlation reveals dependencies between two process
variables.

The power spectrum may be calculated from the Fourier series coefficients
as illustrated in Figure 4-19.

The calculation of cross correlation is illustrated in Figure 4-20.
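
Both calculations can be sketched with numpy as follows; the sample interval and the assumption of equal-length, evenly sampled records are illustrative.

import numpy as np

def power_spectrum(x, dt):
    """Distribute measurement variability over frequency via the Fourier coefficients."""
    x = np.asarray(x) - np.mean(x)
    power = np.abs(np.fft.rfft(x)) ** 2 / len(x)   # a peak marks a sustained oscillation
    freqs = np.fft.rfftfreq(len(x), d=dt)
    return freqs, power

def cross_correlation(x, y):
    """Normalized cross correlation; the lag of the peak shows the time shift."""
    x = (np.asarray(x) - np.mean(x)) / np.std(x)
    y = (np.asarray(y) - np.mean(y)) / np.std(y)
    c = np.correlate(x, y, mode="full") / len(x)   # assumes equal-length records
    lags = np.arange(-len(x) + 1, len(x))
    return lags, c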



Figure 4-19. Illustration of Power Spectrum Calculation

Figure 4-20. Illustration of Cross Correlation Calculation

System Communication Model


Selected function-block parameters in the system controller are reported
on a change basis to the performance monitoring application. This appli-
cation may be structured as a server in the control system's main worksta-
tion. The information resulting from the calculation in the server may be
accessed through the client applications provided on other system work-
stations, as illustrated in Figure 4-21.

The status and actual mode attributes used by the performance monitor-
ing system to calculate loop utilization, limited condition, and bad mea-
surement normally do not change in value. Thus, communication
requirements may be minimized by reporting parameters only on a
change in these attribute values [4.10], [4.11]. If this approach is taken in
the system design, then the communication load for reporting these
parameters will normally be close to zero. Performance statistics may be

calculated over a specified period of time, e.g., 120 executions of the con-
trol or measurement block, and then reported to the performance monitor-
ing application. Thus, these parameters are reported every 60 seconds for
a block with an execution rate of 0.5 seconds.
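
A minimal sketch of report-by-exception, with a hypothetical publish callback standing in for the communication layer: values are sent only when an attribute changes, so the steady-state load is close to zero.

class ExceptionReporter:
    def __init__(self, publish):
        self._last = {}                  # last reported value per (block, attribute)
        self._publish = publish

    def update(self, block, attribute, value):
        key = (block, attribute)
        if self._last.get(key) != value:     # report only on a change in value
            self._last[key] = value
            self._publish(block, attribute, value)

reporter = ExceptionReporter(publish=lambda b, a, v: print(b, a, v))
reporter.update("FIC101/PID1", "MODE.ACTUAL", "Auto")   # reported once
reporter.update("FIC101/PID1", "MODE.ACTUAL", "Auto")   # unchanged, suppressed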

Figure 4-21. Process Monitoring Communication Diagram

When the server is first placed online, the current state of the required
attributes is reported once and subsequent updates are sent by exception
reporting.

The support of high-speed trend collection for diagnostics places a special
requirement on the control system design. Inaccuracies may be introduced
by jitter and aliasing as a result of measurement sampling:

• Jitter — the time between samples varies; i.e., sampling is not done on a precisely periodic basis.
• Aliasing — the sampling rate is less than twice the highest-frequency component of the process measurement.

Jitter and delays may be introduced into data collected by a diagnostic
application that polls for information in the controller through the control
system communication network. The information captured in the system
historian for analysis is assumed to be reported on a periodic basis. If the
information is delayed in communication to the PC, then jitter occurs, as
illustrated in Figure 4-22.

The control system communication speed and response to requests have
often limited the rate at which information in the controller may be
accessed by a diagnostic application, causing aliasing. To avoid aliasing,
the sample rate must be at least twice as fast as the highest frequency in
the sampled signal. If this is not possible, then aliasing occurs, as illus-
trated in Figure 4-23.
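
A small numeric illustration of the rule, with illustrative frequencies: a 4 Hz component sampled at only 5 Hz (below the required 8 Hz) shows up as a slow 1 Hz cycle.

import math

f_signal, f_sample = 4.0, 5.0      # Hz; sampling below twice the signal frequency
samples = [math.sin(2 * math.pi * f_signal * k / f_sample) for k in range(10)]
# The sequence repeats every 5 samples (one second), so a diagnostic
# application would report a nonexistent 1 Hz oscillation: |4 - 5| = 1 Hz.
print([round(v, 3) for v in samples])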

Figure 4-22. Impact of Jitter on the Signal Used in Diagnostics

Figure 4-23. Aliasing Occurs by Sampling at a Slow Rate

Historically, the only way to collect high-speed data for diagnostics was
with dedicated tools attached at the terminal strip of the control system, as
illustrated in Figure 4-24.

Figure 4-24. Traditional Diagnostic Tools



However, many Fieldbus devices introduced in the last few years support
the collection of high-speed data for diagnostics. Also, some modern con-
trollers allow measurement and control parameters to be collected in the
controller for diagnostic support. This trend information collected in the
Fieldbus device and controllers may be accessed without aliasing or jitter
and thus used within the control system for diagnostic support, as illus-
trated in Figure 4-25.

Figure 4-25. Capture of Trends in Field Devices and Controllers

References
1. Harris, T., "Assessment of Control Loop Performance," Can. J. Chem. Eng., 1989, 67(10):856-861.
2. Desborough, L., and Harris, T. J., "Performance Assessment Measures for Univariate Feedback Control," Can. J. Chem. Eng., 1992, 70:1186.
3. Desborough, L., and Harris, T. J., "Performance Assessment Measures for Univariate Feedforward/Feedback Control," Can. J. Chem. Eng., 1993, 71:605.
4. Beaverstock, C. Malcolm, and Martin, Peter G., "Performance Control Apparatus and Method in a Processing Plant," US Patent Number 5,134,574, July 28, 1992.
5. Rhinehart, R. Russell, "A CUSUM Type On-line Filter," Process Control and Quality, 2 (1992): 169-179.
6. Harris, T., Boudreau, F., and MacGregor, J. F., "Performance Assessment of Multivariable Feedback Controllers," Automatica, 1996, 32(11):1505-1518.
7. Huang, B., Shah, S. L., and Kwok, K. Y., "Good, Bad or Optimal? Performance Assessment of MIMO Processes," Automatica, 1997, 33(6):1175-1183.
8. Qin, S. J., "Control Performance Monitoring - A Review and Assessment," NSF/NIST Workshop, New Orleans, March 6-8, 1998.
9. Shunta, Joseph, "Achieving World Class Manufacturing Through Process Control," 1995, Prentice Hall PTR (ISBN 0-13-309030-2).
10. Blevins, T., Wojsznis, W., Nixon, M., Riley, K., and Thiele, D., "Process Monitoring Benefits From Fieldbus Architecture," ISA Conference, 1999.
11. Blevins, T., and Ochoa, D., "Statistical Analysis Rides the Bus," InTech, March 2000, pp. 48-50.
12. Dudzic, Michael, Miletic, Ivan, Quinn, Shannon, and Vaculik, Vit, "The Application of Advanced Statistical Technologies (Multivariate Statistics) in On-Line Monitoring and Control," Process Automation Technology, Dofasco, Hamilton, Ontario, Canada.
5
Abnormal Situation Management

Practice
Overview
In modern control systems, expert system technology is playing an ever-
increasing role in assisting the operator in the detection and management
of abnormal situations in a plant. With the introduction of distributed con-
trol systems in the late 1970s, the basic control systems of many process
plants went through major changes in organization and operation. In
many cases, the introduction of distributed control allowed control func-
tions to be concentrated into a few control rooms. The traditional control
panels for operator interface to the process were replaced with keyboards
and monitors. There were few limits on the amount of information that
could be accessed and displayed at these operator stations. These systems
allowed an operator to make changes and see the process alarms associ-
ated with his area of responsibility. In some cases, the system was
designed to allow all information about the plant to be accessed from any
terminal within the system [5.1]. As a result of this technology change, and
the increasing pressure on companies to increase productivity, the scope of
control that an operator was responsible for changed dramatically.

In one pulp and paper mill the introduction of a distributed control system
allowed three control rooms to be consolidated into one and for one oper-
ator to do the job formerly done by three [5.2]. In some process areas an
operator is responsible for as many as thousands of measurements and
hundreds of motors and control loops in addition to various subsystems in
a process area. Thus, it has become increasingly difficult for an operator to
be aware of all conditions in the plant. During normal operations, there is
insufficient time for an operator to examine all measurements in his area

to determine if an abnormal condition exists. He depends to a great extent
on process alarms to alert him to abnormal conditions. Through the early
detection of abnormal conditions using expert systems, it is possible to
improve plant operation or to prevent a failure that could cause a process
shutdown. Also, during shutdown and startup conditions, expert systems
can be used to draw the operator to abnormal situations that might affect
plant operation.

Some of the areas of abnormal situation management that an expert
system can effectively address are the following:

• Alarm screening—presentation of the root source of an alarm
condition so that the operator is not distracted from the critical
events that are associated with an equipment failure.
• Detection of process change—Degradation of equipment
performance because of improper equipment setup and wear or
buildup in pipes and heat-transfer surfaces may alter the behavior
of equipment. The automatic detection of significant process
changes helps prevent equipment damage and associated
production loss.
• Detection of abnormal conditions—The automatic detection of
patterns in plant operation that indicate an abnormal condition
may bring this information to an operator's attention sooner than
process alarms.

Expert systems may also be used to automatically initiate corrective action
on the detection of equipment failure. For example, expert systems are used to
prevent a condition known as alarm flooding. Process value alarms have
traditionally been used in a control system to alert the operator when a
critical measurement has deviated from its normal operating range. Also,
deviation alarming is commonly used in control applications to alert the
operator when a controlled parameter has deviated from set point. In
addition, alarms often accompany the detection of an interlock condition
or malfunctions such as high vibration or interlock active in motor control.
Under normal plant operations, the operator may easily address the
occurrence of individual process alarms. However, the failure of a major
piece of equipment that is critical to an operation, such as a motor, may
result in hundreds of process alarms being activated at once.

With the flood of alarm information, it is difficult for the operator to
quickly determine the source of the alarms if, for example, a motor
tripped. Also, in this flood of alarms, it is difficult to determine which
alarms indicate conditions that need to be immediately addressed to
ensure safety or to prevent damage to process equipment. If an expert sys-
tem is monitoring individual events that indicate equipment failure, it is

possible for the expert system to automatically suppress other process
alarms that were triggered by the failure. This allows the operator to focus
on the fault and more effectively address the source of the problem.

A combination of individual events or process conditions may indicate a
problem. If an operator misses this relationship, this condition may go
undetected. For example, the detection of high discharge pressure on a
variable-speed pump in combination with low flow indication may indi-
cate a restriction in the downstream line, e.g., a blocking valve is closed. If
an expert system is used to detect this combination or pattern, it can auto-
matically alert the operator that such a condition exists. Through this early
detection, it is possible to avoid equipment damage or lost production.

Such proactive detection of abnormal conditions is especially helpful
during the startup or shutdown of a complex process. Under these condi-
tions, the operator has to take a number of manual actions. His job is
complicated by the fact that it can be months or years between process
shutdowns and startups, and thus he must often quickly deal with
operating conditions that he seldom sees during normal operation of the plant.

Opportunity Assessment
Expert systems have been successfully applied in a variety of applications.
Within the process industry, there is significant benefit in using this tech-
nology for abnormal situation management. In assessing the potential
benefits of this technology, the following questions should be asked:

1. Are there areas of the plant operation where unscheduled
shutdowns commonly occur? Could these shutdowns have been
prevented if the operator had taken action on the available
information?
2. Is the process characterized by high maintenance of
measurements, control valves, or equipment? Would continuous
monitoring for abnormal conditions help reduce loss of production
or avoid unsafe processing conditions?
3. Has the operator's area of responsibility been expanded to the
point where he does not have time to proactively look at process
conditions? Are there cases where operations could have been
improved if the operator had taken advantage of process
conditions to increase throughput or improve operating efficiency?
4. During the startup of a process area, does it take longer to get to
normal operating conditions than would be expected? Would
detection of abnormal conditions during startup help the operator
reduce the time to complete startup?

5. When a piece of equipment fails, does the operator have difficulty
determining the source of the problem? Would automatic
screening of alarms help the operator react faster to these
situations?
6. Are batch cycles put on hold or new batch cycles delayed because
operations has difficulty determining the phase end points or the
cause and validity of failure expressions in the batch sequence?

The cost of expert-system software has decreased dramatically since the
initial introduction of commercial expert systems in the early 1980s. In
most cases expert-system software can execute on a PC that uses a
common Microsoft operating system. However, interfacing these software
packages to the control system to allow the use of real-time data in
abnormal condition monitoring is often a time-consuming and costly
adventure. Also, there are wide differences in the ease with which the
knowledge associated with an application is entered into the expert
system. Often, more effort goes into the design and implementation of the
user interface than into the expert system itself. Many of these
implementation issues can be addressed by embedding the expert system
within the control system.

Examples

Alarm Screening
Expert systems are used in areas of the refining industry for abnormal sit-
uation management. One such use is in the prevention of alarm flooding.
For example, the regeneration unit of the hydrocracking process is vital to
plant operation and production. The interactive nature of the large num-
ber of measurements and control loops associated with this unit means
that a failure of one piece of equipment may result in the operator being
presented with many alarms: the original failure plus the alarms associ-
ated with measurements that the equipment affects. Under these condi-
tions, it is of great help to the operator if the alarm associated with the
failure is clearly presented and other alarms resulting from this event are
only logged, not presented to the operator. An expert system is used to
look for specific equipment failures and to automatically suppress other
alarms triggered by the failure. Under normal operating conditions, all
alarms would be active; the expert system only suppresses alarms when
specific operating conditions are detected.

As part of the development of an expert system, an expert on the
regeneration process needs to define what types of failures on that process will
benefit from alarm suppression. For example, one condition that might be
identified is the tripping of the blower motor. When the blower motor

trips, a number of other upstream and downstream alarms are also gener-
ated that would make it difficult for the operator to quickly identify the
problem areas, as illustrated in Figure 5-1.

Figure 5-1. Alarms Following a Regenerator Blower Trip

The regenerator measurements and control loops that support alarming
are defined as facts that are available in the expert system. The fact associ-
ated with the blower motor includes a slot that reflects the status of the
blower motor. This fact is dynamically updated through the interface to
the control system. Only two rules are required to automatically change all
measurement and control-loop alarm priorities in the control system so
they are suppressed at the operator interface when a blower trip is
detected. Using the ability to define variables in these expert rules, it is
possible to implement rules for alarm suppression independent of the
number of measurements or control loops associated with the regenerator
unit. Based on this design, the alarms on associated measurements and
loops are automatically suppressed when a blower trip is detected, as
illustrated in Figure 5-2.
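
In ordinary Python rather than an expert-system rule language, the two-rule pattern can be sketched as follows; the tag names and priority values are hypothetical.

regenerator_loops = ["FI-101", "PI-102", "TI-103", "FIC-104"]   # illustrative tags
alarm_priority = {tag: "CRITICAL" for tag in regenerator_loops}

def on_blower_status_change(tripped):
    """Rule 1 suppresses the associated alarms on a trip; rule 2 restores them."""
    new_priority = "LOG_ONLY" if tripped else "CRITICAL"
    for tag in regenerator_loops:       # one rule covers any number of loops
        alarm_priority[tag] = new_priority

on_blower_status_change(tripped=True)
print(alarm_priority)                   # regenerator alarms logged, not annunciated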

Fault Detection
An oil field may contain hundreds of wells. The early detection of an
abnormal condition such as blocked flow in the wellhead may avoid dam-
age to the associated pump. Because such conditions are often indicated
by a combination of measurement values, the traditional value or devia-
tion alarming cannot be used to alert the operator of them. However, by

Figure 5-2. Alarm Screening Allows the Source of Failure to be Identified

using an expert system to monitor the wells, the conditions that indicate
abnormal operation are detected and brought to the operator's attention.

To implement the expert system, facts would be defined for the measure-
ments that are included in a production control system. To detect blocked
flow, one rule would be written that examines the conditions that indicate
blocked flow; e.g., oil flow and the pump pressure. By using the variable
definitions in both the left and right portions of the expert rule, it is possi-
ble to monitor all the wells with one rule. As wells are added to or
removed from the system, the only change required would be to update
the facts; the rule would not have to be modified. When a blocked-flow
condition is detected, the rule is designed to write to parameters in the
control system that cause the detected condition to be alarmed and dis-
played at the operator interface. An example of how this would be dis-
played to the operator is shown in Figure 5-3.
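
The single pattern-matched rule can be sketched as one check applied to every well fact; the measurements and limits are hypothetical.

wells = [
    {"name": "well-01", "oil_flow": 42.0, "pump_pressure": 310.0},
    {"name": "well-02", "oil_flow": 2.5, "pump_pressure": 780.0},
]

def blocked_flow(well, flow_low=5.0, pressure_high=700.0):
    """Low flow combined with high discharge pressure indicates a blockage."""
    return well["oil_flow"] < flow_low and well["pump_pressure"] > pressure_high

for well in wells:                      # adding a well adds a fact, not a rule
    if blocked_flow(well):
        print("ALERT: possible blocked flow at", well["name"])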

Application

General Procedure
1. Identify the areas to be addressed by the expert system:
a. Select applications that have significant impact on plant
operations.

b. Quantify potential benefits based on past operations.



Figure 5-3. Operator Alerted to Condition Detected by Expert System

2. Develop a cost estimate for an expert system solution


a. Cost of equipment and software
b. Manpower and expertise required to execute the project
3. Implement, Test, and Maintain the Solution
4. Focus on the objectives established for the project
a. Test initial design concepts before doing full implementation
b. Test full prototype with detailed noisy dynamic simulation
c. Run full prototype online in computer or configuration room
d. Eliminate false diagnostics before installing system in the
control room
e. Establish procedures to monitor and maintain the system

Application Details
The following details were developed for the application of expert sys-
tems for abnormal situation management. To achieve the best results, the
user should adhere to the guidelines specified in this section.

Selection of the Problem


In selecting a problem to be addressed, the following should be consid-
ered:

• Define objectives to be achieved by the project. If the expert


system application is to be successful, the scope of the project
should be limited to the requirements of the problem area selected.
• Quantify the benefits to be expected from implementation of an
expert system. The savings should be compared to the cost to
initially develop and maintain the expert system. The tools selected
for the project will significantly influence these costs.
• Identify the resources needed to address the application. To
develop an expert system for abnormal situation monitoring, it is
necessary to have access to a person who is very familiar with the
process; i.e., an expert on the process.

Define Process I/O Requirements


A more detailed project plan based on the project objectives allows the
scope of the project to be further refined:

• Define I/O for the process areas and units of equipment that are
addressed by the project.
• Analyze the best grouping of this information into facts to support
the pattern-matching requirements.
• Test the fact design by addressing portions of the application in the
first iteration of the implementation. Modify the fact slot definitions
as required based on testing with this initial implementation.

Capture Expert Knowledge


• Interview experts on the process for the target application and
express their input as rules to be implemented in the system. Avoid
rules that are too fine-grained; they will make maintenance of the
system difficult. Seek narrow, well-defined problems. Expert
systems involve a lot of implicit knowledge; people often take for
granted that assumptions, logic, and limitations are understood
[5.8].
• Document the user interface requirements as part of the rule
definition. Write use cases to test the concepts and screen
presentations before attempting to implement them in the expert
system.

Test the Expert System

• Develop a test plan to verify the operation of the expert system.


This document will also be useful in regression testing the system
after making changes.
• Test the behavior of the expert system using the simulation
capability of the expert system. Follow the test plan. Connect the
expert system to a detailed noisy simulation and exercise the rules
for a variety of operating conditions and interactions.

Maintain the Knowledge Database


• Document the expert system's database, updating the
documentation when facts and/or rules change.
• Document the expert system objectives and design and the test
results.
• Establish a maintenance support program that closely monitors
the use of the expert system for the number of valid, false, and
missed diagnostics and conclusions.
• Define procedures to document and correct errors found in the
implementation.

Rules of Thumb
Rule 5.1. — The granularity of rules has an impact. Too little definition leads to
rule firings that are hard to understand, but too fine a granularity makes an
expert system difficult to modify.

Rule 5.2. — If the problem can be solved by conventional programming, it is not
a good choice for an expert system. Expert systems are best suited to
applications for which there is no defined algorithmic solution. Conventional pro-
grams generally expect input to follow a certain pattern. Expert systems
deal best with unexpected input that does not follow a predefined pattern,
by reacting to such inputs. Expert systems are appropriate when the
knowledge is largely heuristic or uncertain.

Rule 5.3. — The domain must be well bounded for an expert system to work.
Effort should be spent in the initial portion of the project to clearly define
the objectives that must be met. The project should be tightly managed to
achieve these objectives.

Rule 5.4. — Most real-world heuristics are too complicated to be expressed as
rules with just a single pattern. Rules with more than one pattern may be
needed to express these conditions.

Rule 5.5. — When new functionality is added, don't test just the new features.
As an expert system is built up in increments of new functionality, it is
important to always test the previous knowledge features of the system
along with the new ones.

Rule 5.6. — Make the rules as modular and generic as possible to improve the
expandability and maintainability of the expertise. Make sure the fundamental
concept is captured rather than a specific representation of a more general
rule [5.9].

Rule 5.7. — False diagnostics and alerts will quickly destroy operator confidence.
Debug the system and bring it online out of view of the operator, in a com-
puter or configuration room [5.9].

Rule 5.8. — Base the validation of level and other integrating responses on a rate
of change rather than an absolute value to eliminate model drift. The diagnostics
for level measurements and the detection of plugged nozzles should be
based on a calculated and measured rate of change of level.

Rule 5.9. — Rate of change, sustained deviation, and dead signal calculations
must use filters, velocity limits, and calculation intervals that maximize the sig-
nal-to-noise ratio. The signal should be filtered and sent through a dead
time block to create a calculation time interval long enough to see a valid
change. Velocity limits should screen out changes that are faster than
physically possible in the process. Note that a dead time block instead of a
large scan time should be used to prevent low-frequency noise and long
failure-detection delays.
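
A hedged sketch of this rule follows: the signal is filtered, differenced across a dead-time interval rather than a single scan, and screened with a velocity limit. All constants are illustrative.

from collections import deque

class RateDetector:
    def __init__(self, filter_factor=0.2, interval_samples=30, max_rate=5.0):
        self.alpha = filter_factor                    # first-order filter factor
        self.delay = deque(maxlen=interval_samples)   # acts as the dead time block
        self.max_rate = max_rate                      # velocity limit, units per interval
        self.filtered = None

    def update(self, raw):
        """Return the rate of change over the interval, or None while the delay fills."""
        if self.filtered is None:
            self.filtered = raw
        else:
            self.filtered = self.alpha * raw + (1 - self.alpha) * self.filtered
        self.delay.append(self.filtered)
        if len(self.delay) < self.delay.maxlen:
            return None
        rate = self.filtered - self.delay[0]          # change over the whole interval
        return None if abs(rate) > self.max_rate else rate   # screen impossible jumps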

Rule 5.10. — Scan time and rule execution time must be fast enough to resolve
the proper sequence to avoid picking up on an effect rather than a cause. Often
the sequence of events is important to determining the root cause. For fast
processes or slow measurements, it can be difficult to decipher what hap-
pened first.

Rule 5.11. — The interface must be integrated into the displays for the operators.
Separate computers with special screens decrease the utility and
acceptance of the system.

Rule 5.12. — The alerts must not be repetitive or obnoxious. Loud or abusive
alerts will lead to loud and abusive replies from the operators. The setting
and clearing of alerts should require multiple executions with the same
conclusions before the operator is notified. The engineer should be
advised of flickers so that the system can be adjusted. Use a manual reset
for alerts that cannot be reliably cleared [5.9].

Rule 5.13. — Use a sustained deviation from an updated baseline for the detec-
tion of batch phase endpoints and a rate of change for the detection of batch trajec-
tory abnormalities to eliminate drift and changes in starting points. A large filter
is used to create the baseline. The last good batch trajectory is used as the
reference trajectory. Process knowledge must be used to make sure conclu-
sions are reasonable.

Rule 5.14. — Upgrade the field instruments before installing an expert system.
Otherwise it will be giving you information you already know. If you use
valves without positioners or orifice meters, the expert system will tell you
over and over again that these are stupid.

Rule 5.15. — Make sure you have access to the original process expertise to tune
and maintain the system. The idea that these systems allow you to capture
for posterity expertise that is disappearing is a fallacy. These systems
need to be adjusted and updated for unforeseen or changing conditions
and mistakes. If the expert leaves, the expert system is not far behind.

Guided Tour
This tour illustrates the potential ease of use and convenience of an expert-
system interface for abnormal situation monitoring embedded in an
industrial control system [5.3, 5.4]. The following areas are addressed:

• The engineering interface
• Accessing control system I/O
• Fact configuration
• Rule configuration
• Online debug capability
• Simulation support for check-out

When the expert system capability is embedded in the control system,
standard fact templates are provided to access real-time information
within the control system. This embedded capability is illustrated in
Figure 5-4.

Embedding the expert system within the control system makes it possible
to use predefined fact templates to access measurements, calculations,
events, and historical information in the control system. As part of the I/O
processing done outside of the expert engine, dynamic analog values
included as slots in the template are automatically compared to limit val-
ues to obtain a discrete value. Thus, when the discrete value of a slot
changes state, the old fact is automatically retracted and the fact re-
inserted with the new slot value.

Figure 5-4. Embedded Expert System Capability

Through a special function added to the expert engine, a rule can write to
any function-block parameter within the control system. A special block is
provided in the control system to allow alarms and messages to be acti-
vated, as shown in Figure 5-5.

Figure 5-5. Expert Rules May Access Control System Blocks

The fact templates and rule templates provided in the expert system are
made visible within the system in a hierarchical tree similar to Microsoft
Windows® Explorer file presentation. When a fact template is selected, a
dialog is automatically presented to the user that can be used to define a
fact. An example of the template provided to access parameters of a func-
tion block within the control system is shown in Figure 5-6.

Figure 5-6. Fact Templates for Accessing Control System Information

Rules can be written to meet specific application requirements. With the
expert engine embedded in the control system, a rules editor is provided
that allows the features of the expert engine to be fully utilized. The fact
templates can reference control system data in the left side of the rule. In
addition, a special function is supported in the action, or right-hand, side
of the rule that allows block parameters within the control system to be
written. This capability is used to trigger alarms and to write messages to
be shown in the operator interface. Also, this capability is used to directly
interact with the control system by changing set point, mode of control
blocks, etc. An example of the dialog used to define rules is shown in Fig-
ure 5-7.

Figure 5-7. Dialog for Configuring Rules



Once rules and facts have been assigned to execute on a PC node within
the control system, the results of rule evaluation can be examined. The expert
system includes an option to examine the current state of facts and rules
that have fired and are on the agenda, and to access statistics concerning
the evaluation of rules. Based on this selection, the information about the
rule evaluation is displayed in the right side of the expert system explorer
view, as shown in Figure 5-8.

Figure 5-8. Examining Rule Evaluation

To support the testing of an expert system, the expert engine has several
online debugging features. Through the toolbar selection shown in Figure
5-9, the user may exert the following control over the expert engine's exe-
cution:

• Reset and Refresh — Begin execution at a known point.
• Stop, Run, Run Once, Run to — Limit control execution.
• Set Breakpoint, Remove Breakpoint — Stop execution at a specific
point.

When an expert system is used for abnormal situation management, most
of the fact slots are reflections of process measurements, the status of
discrete inputs, etc. To check out rule and fact definitions, the user must
be able to change the values used for fact slots without affecting the online
configuration. This simulation support is built into the view provided to
examine fact slots, as illustrated in Figure 5-10.

Figure 5-9. Online Debugging Support

Figure 5-10. Simulation Capability Used During System Check-out

When simulation is enabled on a fact, the user may change the value of
its fact slots. The icon associated with the fact changes to indicate that the
real value from the system is not being used.

Theory
Introduction to Expert Systems
The architecture of today's expert system is based on work going back to
the early 1970s [5.5]. This aspect of artificial intelligence (AI) focuses on
techniques to capture and reuse the knowledge of experts to solve difficult
problems. Thus, the term "knowledge-based system" is synonymous with
the term "expert system." The process of building an expert system is
known as knowledge engineering. Since the introduction of commercial

expert system products, this technology has been successfully applied by
the process industry in a variety of applications.

The knowledge of an expert can be represented as rules and facts. If facts
exist that match the pattern defined by a rule, then the rule is satisfied and
executes its configured action. Unlike procedural programming languages,
such as C++, in which there is a tight interweaving of data and knowl-
edge, expert systems allow for two levels of abstraction: data and knowl-
edge. An inference engine applies the knowledge to the data and relies on
pattern matching to guide execution. The inference engine determines
which rules are satisfied by facts and executes these in the order of their
assigned priority. There is no specified control flow as with procedural
programming. The general structure of an expert system is shown in Fig-
ure 5-11.

Figure 5-11. Components of an Expert System

An expert system differs from conventional programming in that the solu-
tion relies on inferences rather than an algorithm. Often the human knowl-
edge captured in rules is heuristic in nature; that is, input data can be
incomplete, inconsistent or even incorrect.

The user interface will typically allow various aspects of the expert system
execution to be monitored. The user interface may also be used to enter
facts.

Rules
The reasoning capability of the human mind is complex and difficult to
fully explain. However, as early as the 1970s, it was shown that
much of human problem solving can be expressed by IF...THEN-type
rules [5.6]. This model of human problem solving and inference is the

basis of modern rule-based expert systems. The general structure of a rule
is shown in Figure 5-12. The rule consists of a conditional part and a list of
actions.

Figure 5-12. Rule Structure

The conditional part, or antecedent, of a rule may consist of one or more
conditional elements. The simplest type of element is a pattern consisting
of one or more constraints intended to match the fields of a template fact.
For example, the format of the rule in the CLIPS expert system software
[5.4] is shown in Figure 5-13.

Figure 5-13. CLIPS Rule Format
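
As an illustration of this format (not the original figure), the following
hypothetical CLIPS rule assumes the Header fact template described in
the Facts section below; the rule name, pressure limit, and alert fact are
our own:

(defrule steam-header-high-pressure
   ; pattern (left-hand side): match a Header fact whose
   ; pressure slot exceeds a limit
   (Header (name Steam) (pressure ?p&:(> ?p 1100)))
   =>
   ; action (right-hand side): assert a new fact that other
   ; rules can match
   (assert (alert steam-header high-pressure)))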

The expert system attempts to match the patterns of rules to the facts in
the fact list. If all the patterns of a rule match facts, the rule is activated and
put on the agenda. A rule will fire only once for a specific set of facts. This
property, known as refraction, prevents an expert system from being caught
in a loop where the same rule continues to fire.

One of the most powerful features of expert systems is the capability to
define variables that are later referenced in the left- or right-hand portion
of a rule. Using this feature, it is possible to implicitly reference all
instances of a fact in a single rule. Thus, as more fact instances are added,
the same rule can be utilized. This unique feature of an expert system is
used to define effective monitoring systems that are easily modified and
maintained.
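
As a sketch of this feature (the rule and message text are our own
illustration), binding the name slot to a variable lets a single rule monitor
every Header fact, so newly added headers are covered without writing
new rules:

(defrule any-header-high-pressure
   ; ?n and ?p are bound to the slots of whichever Header
   ; fact matches, so one rule covers all instances
   (Header (name ?n) (pressure ?p&:(> ?p 1100)))
   =>
   (printout t "High pressure on header " ?n crlf))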

In a process control system, it is important that information within files in
the control system be accessible by rules. To support this feature, most
expert systems include a read function. The manner in which a read func-
tion is defined is specific to the expert system.

Inference Engine
The inference engine is the part of an expert system that matches facts to
rules to determine which rule to execute. Expert systems may use several
different inference methods.

The two most common methods used by commercial expert systems for
the inference engine are forward and backward chaining.

• Forward Chaining: Reasoning from facts to the conclusion
resulting from these facts. If a rule has multiple patterns, then all of
its patterns must be satisfied for it to be placed on the agenda for
execution.
• Backward Chaining: Reasoning in reverse from a hypothesis—a
potential conclusion to be proven—to the facts that support the
hypothesis.

While some expert systems support both inference methods, others
support only one. It is possible to accomplish backward chaining in a
forward-chaining expert system by the manner in which the rules are
structured. For diagnostic problems, backward chaining may provide the
most efficient solution and often fits the way an expert expresses his
knowledge. For monitoring applications that deal with abnormal situation
management, forward chaining will in most cases provide the most
efficient solution and better fits the problem definition.

A rule whose pattern is satisfied (matched) is placed on the agenda and
activated. Multiple rules on an agenda at the same time are activated or
fired in their order of assigned priority. If rules are not prioritized, the
order of solution depends on the order in which the rules were defined.
After a rule has fired on a fact, the inference engine will not fire on that fact
again. Activations are left on the agenda until it is determined that the left-
hand side is no longer satisfied. As part of actions taken by a rule, it is pos-
sible to remove or add facts. The format of actions is often specific to the
expert system.

The inference engine normally executes on a periodic basis during which
the impact of changes in facts is evaluated. Even in a small expert sys-
tem, there are thousands of facts and hundreds of rules that must be con-
stantly evaluated. Thus, the efficiency with which facts and rules are

evaluated is of great importance in a real-time system. To achieve the best
efficiency, there is a need to know which rules apply without having to try
each one sequentially. The Rete algorithm [5.7], developed in the late
1970s, is used in many expert systems to provide this efficiency in
execution. Using this approach, information about rules is stored as a net-
work. When the inference engine runs, it only looks for fact changes and
their impact on matches in each cycle. Using this technique, it is possible
to achieve dramatic improvements in the speed at which facts are
matched.

Facts
In order for an expert system to solve a problem, it must have data or
information for pattern matching. Groupings of this information are
known as facts. In general, a fact will consist of a relation name and named
slots with their associated values. An example of a fact, as defined in the
CLIPS expert system software, is shown in Figure 5-14.

Figure 5-14. Fact Example

In this example, the symbol "Header" is the fact's relation name and the
fact contains three slots: name, pressure, and temperature. The value of the
name slot is "Steam," the value of the pressure slot is "1200" and the value
of the temperature slot is "900." The order in which slots are specified in
the fact is irrelevant. To facilitate implementation, groups of facts that
share the same relation name and contain common information are
described using a Fact template. Also, multi-slot capability is provided
that allows multiple values to be defined for one slot.
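
Written out in CLIPS form, the fact described above would look roughly
as follows (a reconstruction for illustration; the multislot line is our own
addition):

(deftemplate Header
   (slot name)
   (slot pressure)
   (slot temperature)
   (multislot users))   ; a multislot can hold several values

(assert (Header (name Steam) (pressure 1200) (temperature 900)))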

Groups of facts are stored in the expert system in the fact list. New facts
may be added to the fact list using an Assert command. Similarly, the
Retract command is used to remove facts.

Other Structures
In some cases, other technologies are combined with an expert system to
achieve a specific goal. For example, the relationship between process
inputs and a process output may in some cases be highly nonlinear. For
this case, it is most effective to use a neural network to determine this rela-
tionship and then to use the expert system. The neural network output is
used as a fact input to the expert system (rather than using the process
inputs directly as facts). Such an implementation is illustrated in Figure
5-15.

Figure 5-15. Combining Neural Networks with Expert Systems

As a means of dealing with uncertainty, some systems utilize fuzzy logic.
Some expert systems include fuzzy expert-system capability. To make
such systems practical, they must also include a threshold in the pattern
matching; otherwise the rules will fire for any nonzero antecedent [5.9].

References
5.1. Deaton, B. and Blevins, T., "An Approach to Mill Wide Control," ISA
Conference, Houston, October 1986.
5.2. Flynn, J., "G-P Completes First Phase of Upgrade at 1,300-tpd Crossett Pulp &
Paper Mill," Paper Trade Journal, August 15, 1982, pp. 17-22.
5.3. Tzovla, V. and Zhang, Y., "Abnormal Condition Management Using Expert
Systems," ISA Conference, Houston, 2001.
5.4. Giarratano, J. and Riley, G., Expert Systems: Principles and Programming,
PWS Publishing Company, 1998.
5.5. Buchanan, B. G. and Mitchell, T., "Model-directed Learning of Production
Rules," Pattern Directed Inference Systems, Academic Press, 1978, pp. 297-
312.
5.6. Newell, Allen and Simon, Herbert A., Human Problem Solving, Prentice-
Hall, 1972.
5.7. Forgy, Charles, "Rete: A Fast Algorithm for the Many Pattern/Many Object
Pattern Match Problem," Artificial Intelligence, 19, 1982, pp. 17-37.
5.8. Fink, Gavin A., "Rules of Thumb for Implementing Expert Systems in
Engineering," InTech, April 1987, pp. 33-37.
5.9. Mertz, Glenn E., "Application of a Real-Time Expert System to a Monsanto
Process Unit," Chemical Manufacturer's Association Process Control
Symposium, 1990.

6
Automated Tuning

Practice
Overview
Proper controller tuning is the largest, quickest, and least expensive
improvement one can make in the basic control system to decrease process
variability. The detrimental effects of disturbances, interactions, and con-
trol valve dead band are minimized by an appropriate selection of tuning
settings [6.30]. However, the expertise, patience, and time required to com-
pute these settings is significant and most of the methods don't take into
account real-world problems like process and measurement noise, peri-
odic upsets, load shifts, and valve stick-slip. In addition, many processes
are nonlinear: the tuning settings change with the operating point, and
with time. Also, the tuning methods and coefficients change with the ratio
of time delay to time lag and the performance objective [6.3]. For these and
other reasons, the accuracy of tuning settings for industrial processes is
usually no better than 20%, and a reproducibility of better than 10% should
not be expected for settings computed by manual or automated methods.

It is important for the user to realize that there is always a tradeoff
between performance and robustness. Tuning settings that provide the
smallest errors for load upsets and the fastest set point response are the
settings most likely to cause instability for an increase in the process gain
or time delay or a decrease in the process time constant. The quarter
amplitude response from Ziegler-Nichols is a benchmark for minimum
peak error for a load upset, but is too oscillatory and close to instability for
most industrial processes. A smooth over-damped response is more
robust and less disruptive to other loops.
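
For reference, the classic Ziegler-Nichols ultimate-oscillation settings
(standard results from the tuning literature, restated here for
convenience), with \( K_u \) the ultimate gain and \( P_u \) the ultimate
period, are:

\[
\text{PI:} \quad K_c = 0.45\,K_u, \qquad T_i = P_u/1.2
\]
\[
\text{PID:} \quad K_c = 0.6\,K_u, \qquad T_i = P_u/2, \qquad T_d = P_u/8
\]

The frequent advice in this chapter to use Ziegler-Nichols settings with
the controller gain cut in half therefore corresponds to roughly 0.2 to 0.3
times the ultimate gain.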


In order to compute and verify tuning settings, the process must be
moved. This perturbation can be a small change in either the controller set
point or output. The auto tuner illustrated in this book makes step changes
in the controller output yet maintains a form of feedback control that is
important for processes that cannot be safely left in manual.

Auto tuners can conduct an automated test of the process, compute the
tuning settings that are best for a particular process and performance
objective, provide estimates of performance and robustness, and show
predicted response plots for load upsets and set point changes. Since the
time to start the test is selected by the user, it is called an "on demand"
auto tuner. Auto tuners also eliminate human error and impatience and
can implement a more exact and complex methodology.

As mentioned, the best tuning settings are a moving target. Often over-
looked are some fundamental process relationships that can be used to pre-
dict changes in tuning settings before they affect control. The task of
predicting the actual value of a tuning setting is usually too difficult, but
often a simple first principle equation predicts how the setting will change
with a process condition. The computation is for the change from a refer-
ence point and is done in conjunction with an auto tuner. If the change in
tuning settings with an operating point or time is known, the controller
settings are scheduled. For example, if a plot of the controlled variable ver-
sus the manipulated variable or of the installed characteristic of a control
valve is available, the controller gain is made inversely proportional to the
slope of these curves that represent the process and valve gains.

There are many possible opportunities for the scheduling of tuning set-
tings. If there is a known change in the process gain caused by changes in
heat-transfer surface area, reaction rates, and catalyst activity, the control-
ler gain is scheduled as a function of batch or run time. Since the biggest
contribution to a time delay for a continuous process is often a transporta-
tion delay that is inversely proportional to throughput, a performance
improvement is realized by scheduling integral time and derivative time
settings that are inversely proportional to feed or speed. In addition, surge
tank level control is made smoother by scheduling the gain to be propor-
tional to the error to achieve error-squared control. The power of modern
control systems combined with an auto tuner makes this custom schedul-
ing of tuning settings relatively easy and a potential improvement ripe for
the picking.
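
One representative form of error-squared scheduling (a common
formulation, not the only one) makes the effective controller gain
proportional to the magnitude of the error:

\[
K_c(e) = K_{c0}\,\frac{|e|}{\text{span}}
\]

so that the controller acts gently near set point and increasingly firmly as
the level approaches an alarm limit.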

Adaptive controllers can tune themselves. The tuning settings are cor-
rected whenever there is an error sufficient to provide enough knowledge
to find better controller settings. Many of the adaptive algorithms pre-
sume an oscillation and are confused by noise and quiet periods of opera-
tion. Most adaptive controllers are presently not foolproof enough to be
left unattended. Also, the user must recognize that the correction is done
after the fact compared to the preemptive scheduling of settings.

Opportunity Assessment
The greatest improvement in plant performance from the application of
this technology occurs where oscillations are evident. Most oscillation is
eliminated, or at least attenuated, by an adjustment of the controller tun-
ing settings. The questions to ask are:

1. Are the oscillations less if a controller is put in manual?
2. Is the oscillation period without aliasing slower than 4 seconds?
3. Is there more than one loop with the same oscillation period?
4. Do the oscillations get slower as you increase the integral time?
5. Is there a high standard deviation coupled with a low performance
index?

If the answer is yes to any of the above questions, benefits from an
improvement in controller tuning are significant and should be pursued. If
the variability is less with the controller in manual, then the controller is
doing more harm than good due to poor tuning.

If the oscillation period is slower than 4 seconds, the source is usually a
loop rather than flashing, burping, hammer, or other such sources of pro-
cess and measurement noise. It is important to note that at least four sam-
ples are needed within an oscillation period to detect the proper period. A
scan time greater than 1 second for a 4-second or shorter period will show
a longer than actual period and a smaller than actual amplitude due to
aliasing.

If trend recordings or a power spectrum analyzer show there is a common
frequency in several loops, then there is probably an interaction between
loops that is reduced by appropriate tuning.

If the oscillation period gets larger if you increase the integral time, the
oscillation is probably caused by excessive reset action or valve stick-slip
from excessive friction. While the best solution is to repair or replace con-
trol valves with stick-slip, tuning can decrease the severity of the oscilla-
tion and provide a temporary fix.

Finally, if the standard deviation is high and the loop performance is low
compared to the control capability, then either the robustness or the per-
formance of the controller needs to be improved. If the standard deviation
is associated with a large error from process upsets, then the performance
is too low. If the standard deviation is high due to an oscillation, then the
robustness is insufficient.

Even if the loops are not oscillating, there can be an excessive control error
and lost or downgraded production. This is most evident in a response to
set point changes and load upsets that is sluggish due to overly conserva-
tive or missing tuning settings. The following questions should be asked
in this situation.

6. Are there temperature or composition loops on vessels with no
derivative action?
7. Are there supervisory or batch controls that could benefit from a
faster response?
8. Are level loops upsetting other loops (are other loops oscillating at
the same frequency as an upstream level loop)?
9. Is the level fluctuating in reactors, evaporators, crystallizers, and
reflux drums?

If the answer is yes to any of these questions, the tuning of these loops
should be checked.

The ability of loops with large mixed volumes to minimize the overshoot
from load upsets and set point changes has been severely compromised if
the derivative or rate time setting is zero.

Supervisory and sequence controls need more gain than reset action to
provide a more immediate response. However, this is not how loops are
usually tuned. To minimize the time required for batch operations or
startup, the controller should actually drive its output beyond its resting
value.

Level loops on vessels that feed other unit operations are a major cause of
variability. The level controllers on feed tanks and surge volumes must be
tuned to allow the level to float between the high and low alarm limits by
decreasing the controller gain or scheduling the gain proportional to the
control error.

On the other hand, level loops on continuous reactors, evaporators, crys-
tallizers, and columns often need to keep the level at its set point to main-
tain a residence time that is important for product quality, especially when
the equipment is pushed beyond its nameplate capacity and needs high
controller gains. The level in these situations is often at its highest permit-
ted limit. A rise in level causes excessive carryover and loss of product in
the vent. A drop in level reduces the residence time, decreasing the time
available for reaction or crystal formation, which can lead to poor quality.
The level in the sump of a column with a gravity flow thermosyphon
reboiler must be tightly controlled to keep both the reboiler
circulation flow and heat transfer area constant. The level controller on a
distillate receiver that manipulates reflux flow back to a column must pro-
vide tight level control to maintain the correct material balance. Tight con-
trol is achieved by the use of the higher controller gains and more
aggressive correction for upsets offered by the Ziegler-Nichols method.

Examples

Liquid Flow Control


The liquid flow control loops FC1-1 and FC1-2 for reactor feeds in Figure
6-1a have a fast nonlinear response. The process gain is unity, but the valve
gain is highly nonlinear and the process time delay and lag are both less
than a second. If the scan time and the PV filter time of the flow loop are
one second or less, the controller tuning depends mostly on the dynamics
and resolution of the valve response and the nonlinearity of the installed
valve characteristic. Even if the control valve operates within its recom-
mended throttle range, the valve gain and thus the controller gain can
vary by a factor of 4 just based on the inherent characteristic of the control
valve. If the pressure drop available to the control valve is not constant, an
additional nonlinearity is introduced. Also, the use of a valve positioner,
although absolutely necessary to deal with practical issues of actuator
bench settings and stiffness, creates a cascade control loop where the sec-
ondary valve position loop is not necessarily 5 times faster than the pri-
mary loop. For these and other reasons, such as measurement noise and
dead time dominance, the gain and reset time settings are usually a factor
of 4 smaller than what would be predicted by Ziegler-Nichols rules. The
Lambda tuning method with a large Lambda factor provides the desired
decrease in gain action and increase in reset action and is the preferred
choice for liquid flow loops.
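
For a first-order-plus-dead-time process with gain \( K_p \), time constant
\( \tau \), and dead time \( \theta \), the Lambda PI rule is commonly stated
(a standard tuning-literature form, restated here for convenience) as:

\[
K_c = \frac{\tau}{K_p\,(\lambda + \theta)}, \qquad T_i = \tau
\]

where the closed loop time constant \( \lambda \) is the Lambda factor times
the process time constant; a large Lambda factor gives the smaller
controller gain described above.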

The step size must be large enough to get out of the noise band of the mea-
surement and the dead band of the control valve. Step sizes of 2% to 5%
are normal. The hysteresis setting of a relay auto tuner is set large enough
to compensate for measurement noise.

An important feature shown in Figure 6-1a is the ratio of one flow-loop set
point to the other flow-loop set point. This is a common requirement for
reactor feeds to enforce a stoichiometry that is critical for a reaction. It is
important that both feeds change in unison. Ratioing to the set point
rather than the process variable of the other loop helps maintain proper
timing if both control loops have the same closed loop response. Other-
wise, the follower is always behind the leader. Lambda tuning helps
insure that the differences in the response are negligible by enforcing a
closed loop time constant. This is particularly important if the feeds are
frequently changed by a sequence, recipe, or a control loop for startup,
batch, changeover, or production rate control.

Liquid Pressure Control


The pump pressure controller PC1-1, shown in Figure 6-1a on the feed
tank pump discharge, manipulates the recirculation flow back to the stor-
age tank to keep the liquid pressure constant despite changes in the feed
requirements of the users. This control loop has similar dynamics to the
liquid flow controllers, but the process gain depends upon the
pump curve. If the pump is oversized and the operating point is on the flat
part of the pump curve, control is poor due to an extremely low process
gain and hence poor process sensitivity. Again, valve resolution and valve
gain greatly affect performance and Lambda tuning is the best choice. The
step size is usually larger than that used for flow control due to a smaller
process gain.

Static Mixer Concentration Control


The pH controller AC1-1 at the outlet of the static mixer in the feed to the
reactor in Figure 6-1b manipulates a small reagent valve. The dynamics in
this loop consist of a 0.2-minute transportation delay from the control
valve to the sensor plus a 0.1-minute time constant associated with the
glass electrode. This is a relatively fast dead time-dominant process for a
clean new electrode. However, a coating of just 5 millimeters on the sur-
face of the glass electrode can cause the sensor time constant to increase
from 0.1 minute to 2 minutes. When this happens the controller gain must
be increased but the reset action must be greatly decreased since the time
delay-to-time constant ratio has decreased by an order of magnitude. In
addition, the oscillations at the mixer outlet from more gain action do not
appear in the reactor discharge because of the attenuation by the highly
mixed reactor volume. A responsive Ziegler-Nichols tuning is best since
the loop can switch to being dominated by a large time constant and the
high frequency oscillations at the static mixer do not increase the variabil-
ity of the final product.

There is also a valve controller XC1-1 on the static mixer whose set point is
the optimum position of the small control valve for throttling and whose
PV is the pH controller output or implied position of the small control
valve. The output of XC1-1 manipulates the large reagent valve. This valve
optimization control system is better than split ranging the small and big
valve because the pH controller can manipulate smaller and more precise
increments of reagent by always throttling the small reagent control valve
for even high reagent demands. It also eliminates the stiction and disconti-
nuity associated with a split range point. However, the response of XC1-1
must be slow and gradual, so that it doesn't interact with AC1-1. Integral
only action is normally used. This can be approached by the use of
Lambda tuning with a very large closed loop time constant (small control-
ler gain). Also, AC1-1 must be in automatic with aggressive settings when
XC1-1 is tuned. There are many other examples where valve controllers
are used for optimization where a limit in valve capacity is important.
Some of the more common applications involve a valve controller whose
PV is a coolant, compressor, or pressure control valve position, whose set
point is the maximum position of this valve with sufficient sensitivity
(slope) on the installed characteristic, and whose output manipulates a
feed flow controller set point.

Continuous Reactor Pressure Control


The reactor pressure controller PC1-2 in Figure 6-1b manipulates a vent
valve. While a continuous reactor is shown, the discussion that follows
would be similar for batch reactors. Like a furnace pressure controller, the
process time constant is very large compared to the process time delay.
However, the process gain is dramatically less for reactors since the pres-
sure set point is in psig instead of inches of water column and the resi-
dence time is in minutes rather than seconds. The process gain is two
orders of magnitude smaller for reactors compared to furnaces because the
gas volume has a much greater change in density associated with the con-
trol region due to the difference in engineering units of the controller scale
and a much lower vent flow rate.

Tight reactor pressure control is important to prevent fatigue of rupture
discs, and, in many cases, to keep the reaction rate constant. PI Ziegler-
Nichols-type settings are used with a controller gain cut in half to provide
more robustness and less sensitivity to noise. The controller gain esti-
mated from manual and automated tuning methods may be extremely
large and may need to be reduced to avoid amplification of noise and fal-
tering of the approach to set point at the start of the batch. Also, velocity
limiting of set point changes is advisable if the banging around of the vent
valve from high gain action upsets the operator or other loops. If the set
point is not moved while the controller is automatic, the process variable
may falter repeatedly in its initial response to a demand for pressurization.
If the set point tracks the PV when in manual, output tracking, direct digi-
tal control (DDC), or remote output (ROUT) mode—which by definition
means there is a set point change when in automatic or supervisory or cas-
cade modes—there is no problem in the controller recognizing the need to
go to a new set point.

A large step size of 10% or more, and the use of the integrator specification
that is available in some auto tuners for the pretest, are suggested for reac-
tors with small vent valves since the time to reach steady state is long due
to the large process time constant. The specification of an integrating
response insures the pretest doesn't wait for the process to line out (reach
a steady state).
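
For such near-integrating responses, the pretest characterizes the loop by
a ramp rate rather than a steady-state gain (a standard integrating-process
approximation, stated here for clarity):

\[
\frac{d(\mathrm{PV})}{dt} \approx K_i\,\Delta\mathrm{CO}
\]

where \( K_i \) is the integrating process gain (percent of scale per second
per percent change in controller output) observed after the loop dead
time.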

Continuous Reactor Temperature Control


In Figure 6-1b, the continuous reactor temperature controller TC1-2 output
is the remote set point of the exchanger outlet temperature controller TC1-
1 that in turn manipulates the chilled water return valve. The exchanger is
in the recirculation line and has a response that is five times faster than the
reactor because the exchanger volume is so much smaller than the reactor
volume. However, if you were to do an open loop step test on TC1-1, you
would find that after the initial response associated with the exchanger,
the temperature continues to ramp. A change in the exchanger outlet tem-
perature causes a slow gradual change in the reactor temperature after
some initial time delay set by the degree of mixing in the reactor. The
change in reactor temperature then causes the inlet, and consequently the
outlet, temperature of the exchanger to change. This recycle effect is signif-
icant and can appear as a ramp since T98 (time to 98% of the response) is an
order of magnitude or more larger than the time interval of interest for the
exchanger. The ramp rate up is generally faster than the ramp rate down
but depends upon the feed temperatures and the rate of reaction. The TC1-
1 controller must be tuned based on the fast dynamics of the heat
exchanger, yet an open loop test for Lambda tuning would yield errone-
ous results. The solution is to use an auto tuner with an integrator specifi-
cation for the open loop pretest. This specific case is indicative of the more
general case of concentration and temperature control of recycle streams,
where the slow integrating response after the initial response is beyond
the time frame of the PID controller.

Continuous Recycle Tank and Reactor Level Control


In Figure 6-1a, the continuous recycle tank level controller LC1-1 manipu-
lates the makeup flow of reactant, and the reactor level controller LC1-2
manipulates the reactor discharge flow. Tight control is important to main-
tain the concentration of the recycle tank and the residence time of the
reactor. The integrator option must be chosen for the pretest and the PI
Ziegler-Nichols tuning rules are used but with the controller gain cut in
half. Since the controller gain is so large, the same considerations for veloc-
ity limiting and noise amplification mentioned for reactor pressure control
should be addressed. Also, as with pressurization, filling the vessel for
startup is a problem unless the set point tracking option is used. The step
size should be 10% or larger to make the pretest faster. The controller scan
time should be 5 seconds or faster and a process variable filter should be
added to improve the signal-to-noise ratio.

Figure 6-1a. Continuous Feed and Recycle Tanks

Figure 6-1b. Continuous Reactor with an Exchanger in the Recirculation Line



Batch Reactor Temperature Control


The reactor temperature control shown in Figure 6-2 must keep the tem-
perature close to set point by the manipulation of the set point of the tem-
pered-water temperature controller. For endothermic conditions at the
start of the batch, the tempered water is kept hot by manipulation of steam
flow. For exothermic conditions near the end of the batch, the tempered
water is kept cold by manipulation of the cooling water flow. The tem-
pered-water temperature is a secondary controller with a split ranged con-
troller output. The water temperature controller must be tuned first for
each of the manipulated variables. This loop has a nonlinear valve and
process gain. The equal percentage characteristic of the control valve helps
compensate for the process gain. Since the set point response must be fast
to insure that this secondary loop is responsive to the demands of the pri-
mary reactor temperature loop, PID Ziegler-Nichols tuning is suggested
but with the controller gain cut in half. A small amount of derivative
action is used to compensate for the thermal lag of the thermowell and
exchanger metal mass.

The tuning of the secondary controller TC2-1 is different depending upon
which valve is being throttled because of the change in process gain, time
delay, and time constant. The tuning and output limits should be sched-
uled to prevent the controller output from crossing back and forth across
the split range point. Overshoot of the set point is acceptable and this loop
can be quite oscillatory as long as it is stable. Gain action should be
emphasized over reset action to make sure the response to set point
changes is as fast as possible. Thus, PID Ziegler-Nichols tuning is pre-
ferred. The use of high gain and rate action assumes that a transmitter
with a calibration span narrowed to the operating range is used. If the sen-
sor comes directly into the control system through a thermocouple or RTD
card, the wide range of these cards will cause A/D chatter that will
severely restrict the use of gain and rate action. A temporary fix until a
transmitter is installed is to add a velocity limit to the process variable,
restricting its rate of change to one that is physically realizable, to partially
screen out the A/D noise. The controller scan time should be 5 seconds or
more to improve the signal-to-noise ratio.

The tuning of the primary controller TC2-2 should be done at different
important times in the batch sequence. Changes in heat-transfer areas,
feeds, and heats of reactions can cause settings so different that they need
to be scheduled. The tuning method should be PID or PD Ziegler-Nichols.
The controller output must be driven beyond its final resting point for a
set point change. This overdrive is important for minimizing batch cycle
time and is achieved by the use of Ziegler-Nichols rather than Lambda
tuning. However, set point limits and output limits must be scheduled to
prevent the wrong valve from opening.

There is no steady state in a batch operation. An open loop pretest may
take too long and would be dangerous if done during a highly exothermic
reaction when a runaway condition can develop. The reactor temperature
may appear to ramp in the permissible operating region because of a large
process gain and lack of self-regulation. For these reasons, the tuning must
be closely monitored and the integrator option chosen for any automated
open loop pretest. For polymerization and other highly exothermic reac-
tions, a pretest is not safe and a closed loop manual tuning method is
advisable where the controller stays in automatic. In this method, as out-
lined in [6.3], reset action is removed (integral time increased by at
least a factor of 10) and the controller gain is increased until damped oscil-
lations first start. Sustained oscillations are not sought. Furthermore, inte-
gral action must be severely restricted or eliminated altogether and
proportional plus derivative control used.

Large step sizes of 5 to 10% are used for the endothermic portion of the
batch to make the pretest faster. During the exothermic reaction, smaller
step sizes are generally used for a safer test. Any change in settings must
be extensively tested for set point changes and closely monitored for sev-
eral batches before being left overnight or over a weekend. The controller scan
time should be 5 seconds or more to improve the signal-to-noise ratio.

Surge Tank Level Control


The surge tank level controller LC2-1 shown in Figure 6-2 manipulates the
discharge flow of the surge tank. The level control must minimize the
upsets in the feed to the continuous downstream column despite a cyclic
feed from the batch reactor. In this application, an averaging level method
of tuning should be used. Since the controller gain is far below its maxi-
mum, the integral time must be checked to make sure it is large enough to
prevent sustained oscillations, or proportional-only control should be
used. Also, error-squared control and an adapted velocity limited feedfor-
ward control, per Appendix B, should be considered to provide maximum
smoothing of the column feed. If there is a cascade control of level to flow,
the secondary flow loop must be tuned first.

The integrator option must be chosen and the step size should be at least
10% to make the pretest faster. The controller scan time should be at least 5
seconds to improve the signal-to-noise ratio.

Figure 6-2. A Batch Reactor and Surge Tank

Continuous Column Temperature Control

In Figure 6-3, the distillation column temperature controller TC3-2 manip-
ulates the distillate flow from the overhead receiver. Column temperature
loops are very slow and large columns can take a day to reach steady state.
Since a pretest that waits for the response to line out will take too long, the
temperature will probably look like a ramp and the integrator option
should be chosen. PID Ziegler-Nichols tuning is preferred, but with the
controller gain cut in half to provide more robustness. Large step sizes of 5
to 10% are used to help make the pretest faster. The scan time should be at
least 10 seconds to improve the signal-to-noise ratio unless the RTD trans-
mitter span is less than 50°C.

Continuous Distillate Receiver Level Control


In Figure 6-3, the distillate receiver level controller LC3-1 manipulates the
reflux flow back to the column. The large diameter of the horizontal
receiver translates to a small process gain. Unlike the boiler drum, there is
no inverse response and no excessive measurement noise so that a high
controller gain is used. Tight level control enforces the material balance
since the temperature controller only affects the column when the reflux
changes as a result of the level controller's response to a change in distil-
late flow. Also, the changes in internal reflux caused by the changes in
temperature of the column walls from rainstorms are partially compen-
sated for by the changes in reflux flow made by the level controller. For
these reasons, tight level control is important for column operation and
high gain action is recommended. PI Ziegler-Nichols tuning should be
used. However, the controller gain that is permitted is far above the prac-
tical limit. The user should select the maximum gain that doesn't wear out
the reflux valve. It is very important to velocity-limit set point changes
and to apply proper output limits to keep the reflux flow within its set
point limits. It is also helpful to use reflux-to-feed ratio control for startup
and as a feedforward to compensate for feed upsets. How long the level
controller can stay in manual during startup depends upon the size of the
receiver. If there is a cascade of level-to-flow control, the secondary flow
loop must be tuned first.

The integrator option must be chosen and the step size should be at least
10% to make the pretest faster. The controller scan time should be at least 5
seconds and a process variable filter should be added to improve the sig-
nal-to-noise ratio.

Figure 6-3. A Continuous Distillation Column

Furnace Pressure Control


The furnace pressure controller PC4-1 shown in Figure 6-4 is distin-
guished by a high process gain because the measurement span (usually
inches of water column) represents a small fraction of the total possible
process range. In addition, the scale range might span a small fraction of
an inch (-0.1 to 0.1). The process time delay is small compared to the loop
scan time. If a variable speed drive (VSD) is used as depicted, there is no
observable time-delay contribution from the manipulated variable and the
controller scan time is essentially the total loop dead time plus any lags
from pressure transmitter dampening or controller PV filtering. These sig-
nals tend to be very noisy. Installation practices to muffle the sensing lines
from the impact of a varying velocity head are helpful but often the PV fil-
ter time must be judiciously increased. Some furnace pressure loops can-
not stay in manual long enough for a pretest. The process gain is so high
that the pressure will ramp off toward a trip setting before the pretest is
complete. The open loop response looks like an integrator in the allowable
operating region. For these "pseudo" integrators, the integrator option
must be chosen. Since the peak error is important to prevent furnace trips
and the time delay-to-time constant ratio is usually extremely small, PI
Ziegler-Nichols tuning is the preferred choice, but with the controller gain
cut in half to provide more robustness and less amplification of noise.

The step size should be small to minimize the possibility of disrupting
plant operation due to the very high process gain. The step size could be
as small as 0.5% if a variable speed drive is used. If a fan damper is throt-
tled for pressure control, step changes of 5% or larger may, unfortunately,
be necessary to get the damper to move due to high backlash from link-
ages, high friction from high-temperature packing, and a low positioner
gain setting necessitated by low damper inertia. The hysteresis setting of a
relay auto tuner is often quite large to compensate for measurement noise.

Boiler Drum Level Control


The drum level controller LC4-1 in Figure 6-4 manipulates the boiler feed
water. The large diameter of some horizontal drums translates to an excep-
tionally low process gain. The controller gain could be quite large if there
weren't an inverse response from shrink and swell and a noisy level signal
from the boiling water. If the level controller directly manipulates the feed
water valve for startup when the flow is below the low rangeability limit
of the flow meter of the secondary flow loop, the tuning test must be done
for both modes of operation. In this application, an averaging level
method of tuning should be used. Since the controller gain is far below
its maximum, the integral time must be checked to make sure it will not
cause sustained oscillations.

The feedforward signal, where the feed flow is set by steam flow, should
be turned off during the tuning tests. The controller scan time should be 1
to 2 seconds and a process-variable filter should be added to improve the
signal-to-noise ratio. The integrator option should be selected for the pre-
test and the step size should be 5% to 10%. The hysteresis setting of a relay
auto tuner may need to be relatively large to compensate for noise.

Figure 6-4. A Gas-Fired Furnace and Boiler

Application
General Procedure
1. Get the controller to its normal operating point by a combination of
changes in its set point and controller output. The controller
should also be tuned at expected loads (flows). The tuning of
control loops during water batching is a waste of time. Nearly all
control loops have some operating-point or load nonlinearity.
a. For operation on the knees and flat portions of an installed
valve characteristic or on a plot of a process variable such as
column temperature or neutralizer pH versus the manipulated
flow-to-feed ratio, add signal characterization per Chapter 2 to
reduce the severity of the gain nonlinearity.
b. For split range valves, adjust the split range point or add signal
characterization per Chapter 2 to minimize the changes in gain.
2. For cascade control systems, tune the secondary (slave) controller
first. After the secondary is tuned it should be put in the remote set
point (RSP) or CAS mode and then the primary (master) controller
tuned.

a. If the secondary loop is not 5 times faster than the primary
loop, the primary controller tuning must be slowed down. This
is done by a decrease in the controller gain or a larger Lambda
factor and slower closed-loop time constant for the primary
loop.
b. If the primary controller is switched to directly throttle a
control valve for startup (e.g. steam drum level control), the
controller tuning will have to be found and switched for the
two modes.
3. For dead time compensators, such as Smith Predictors, tune the
PID controller with the compensator turned off. Then enter the
correctly identified time delay, time constant, and process gain into
the compensator. If necessary, compute and update the time delay
continuously. The controller gain setting identified with the
compensator off is used but a much faster integral time should be
possible if the identified settings are accurate. In fact, for dead
time-dominant processes, the minimum integral time approaches
¼ of the error in the identified dead time used in the compensator.
In practice, the integral time is set to be about 20% of the sum of the
identified dead time and time constant.
4. Turn off any feedforward signals unless they are absolutely
necessary to keep the process variable within bounds. The action
of these calculations can confuse the auto tuner.
5. Make sure the exception reporting for data is set small enough and
the update time is set fast enough for you and the auto tuner to see
the changes in the manipulated and controlled variables.
a. If the auto tuner is in the controller, this is not normally an
issue because the exception reporting at the I/O card is small
enough and the controller scan rate is fast enough.
b. If the auto tuner is in a separate computer, the exception
reporting of the console and data highway and the
compression and update time of the data historian must be
minimized.
6. Find and eliminate the cause of any oscillations per Chapter 2.
Nearly all auto tuners require that the loop not be oscillating when
the test starts.
a. If the oscillations are self-inflicted, whether due to incorrect
tuning or valve stick-slip, de-tune the controller or put it in
manual long enough for it to settle down.
b. If the oscillations are coming from another loop, de-tune the
culprit loop or put it in manual.

c. If the oscillations are coming from a steam trap or pressure
regulator, replace the defective or incorrectly applied devices.
d. If the oscillations are measurement noise, add a PV filter to
keep the output fluctuations within the resolution of the final
element. A PV filter equal to or less than a properly set
controller scan time should not add excessive lag to the control
loop.
e. If the oscillations come from a feedforward signal, add a
feedforward filter to keep the output fluctuations within the
resolution of the final element or turn off the feedforward. The
feedforward is often a noisy flow signal that needs to be
filtered.
7. Identify the PV noise band (amplitude of fast oscillations or
chatter). All measurements have some noise, even if it's just A/D
chatter.
8. Decide if the control loop behaves like an integrator or a runaway.
If it does, choose the integrator option on the auto tuner. Loops
with very large time constants or process gains will look like an
integrator in the control region of interest. If the integrator option
is not selected, a test to find the steady-state response may take too
long, time out, or activate an interlock.
9. Pick a step size at least 5 times larger than the resolution of the
final element that will cause a PV excursion at least 5 times larger
than the noise band. Most importantly, make sure the step size is
not so large that it will drive the process into an undesirable
operating region, hit an output limit, exceed a set point limit,
activate an alarm, trip an interlock, cross the signal selection point
of an override controller, or cross back and forth across a split
range point. See the Rules of Thumb for more details on the step
size selection.
10. Pick a tuning method that is consistent with the process and
performance objective. Some auto tuners will pick the appropriate
method after you specify the type of loop (temperature, level, flow,
pressure). If you need to minimize peak errors for temperature,
composition, and pressure control of vessels, the Ziegler-Nichols
settings with about half the gain action are generally the best
option. For pipelines, plug flow, and web (sheet) processes,
Lambda tuning is the right approach. See the Rules of Thumb for
more details on method selection.
11. During the test initiated by the auto tuner make sure the
magnitude and rate of change of the process variable are not too
large or too small.

12. Check the set point and load response for the new settings first
with any simulation options offered by the auto tuner and then by
making actual set point and controller output changes. Observe
the performance of the controller for several days under different
operating conditions, noise, and interactions before leaving it on
overnight or over the weekend.

Application Detail
The following details, developed for the relay-based auto tuner, are also
generally true for other auto tuners. To achieve the best results, the user
should adhere to the guidelines specified in this section.
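
For background, the relay-based test induces a controlled limit cycle in
the loop, from which the ultimate gain and period are estimated by the
standard describing-function result (with d the relay step amplitude and
a the measured amplitude of the process variable oscillation):

\[
K_u \approx \frac{4d}{\pi a}, \qquad P_u = \text{period of the limit cycle}
\]

The tuning rules discussed in this chapter are then applied to the
identified \( K_u \) and \( P_u \).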

Selection of the Proper Test Conditions


• The control valve resolution and stick-slip must be small enough
and the measurement repeatability must be good enough to keep
the process within the desired control band. Surveys in the pulp
and paper industry have proven that valve problems cause at least
30% of loop tuning problems. This means that almost one out of
every three loops needs valve improvements to enable proper loop
control. All valves have hysteresis and a resolution limit. Even more
detrimental to tuning and performance is the excessive stick and
slip of some valves. A step size approaching the resolution limit or
degree of stick will have no response or an excessively slow
response. Slip will cause a rapid and erratic response. These
conditions can cause inconsistent and erroneous results. If the
calculated values seem out of the ordinary, it might indicate valve
problems.
• Before tuning a loop, the loop must be stable.
• The step size must be large enough for the loop to exceed the noise
band in both directions and to develop the oscillations driven by
the relay method. The change in the controller output or step size
must cause an appropriate change in the PV. If the change in the PV
is not large enough, the tuning process might not start at all because
there are no self-oscillations. The oscillation might be distorted for
the following conditions:
• Load disturbances
• Interactions
• Excessive noise
• Periodic upsets
• Stick-slip

• The process input must be able to make the step change required
for tuning.
• The step change should not cause a relief valve to open or any
interlocks to be triggered, either of which could affect the tuning
results.
• When tuning, the controller set point, process variable, and
controller output should not be near their limits.

Selection of the Proper Tuning Method (Rule)


A good auto tuner design provides a wide choice of tuning rules and fea-
tures that are used to provide the best tuning for a loop. Some general
guidelines are as follows:

• After changing tuning values, observe the loop for a period of time
to see its reaction to noise, disturbances, and small SP step changes.
• If the loop performance is not satisfactory, change the selection of
desired loop response (Normal, Slow, or Fast) or the use of the
Expert option for advanced tuning rules.
• If the loop performance is still not satisfactory, try an alternative
tuning method (without re-testing the loop).
• The PI design is the most common choice for flow, liquid level, and
gas pressure control.
• PID control, along with feedforward and cascade strategies, is often
used for temperature control, pH control, and composition control.
(Composition loops with sample and hold signals might require
some limits on derivative action.)
• If a loop was tuned by trial and error and it works well, it is still
worthwhile to consider using an auto tuner to verify loop settings.
If the values calculated by the auto tuner are drastically different
from the trial-and-error values, the new values should be examined
closely before the controller parameters are changed.
• The PID designs are limited for some applications. For example, if
the time delay of a loop exceeds the time constant (control loop is
dead time dominant), the PID performance deteriorates as the
degree of dead time dominance increases (ratio of time delay to
time constant increases). For such applications, a dead time
compensator or Model Predictive Control provides better results. If
the dead time changes more than 10% for the compensator or more
than the scan time for the model predictive controller, the dead time
used in the model must be updated.

Rules of Thumb

Rules of Thumb on Step Size


Rule 6.1. — The step size should be at least 5 times larger than the control valve
resolution limit and should cause a change in the process variable that is at least 5
times larger than the measurement resolution limit and the noise band.

Rule 6.1a. — The control valve resolution limit without excessive stick-slip is
about ½ the valve dead band and varies from 0.1% to 5% for valves with properly
calibrated and tuned positioners. It is as high as 25% for valves without posi-
tioners or with excessive packing or seal friction. As you approach the
dead band, the valve dead time increases dramatically, especially for
valves without digital positioners. The best resolution is obtained from
sliding stem valves with diaphragm operators and properly tuned digital
positioners, per Chapter 2.

Rule 6.1b. — The measurement resolution is the largest of the analog-to-digital
(A/D) card resolution and the data reporting exception and compression settings.
The A/D resolution is about 0.05% of the card span. For thermocouple
input cards, the card span is often much larger than the controller span.
Narrow span transmitters and digital signals have the best measurement
resolution. If the auto tuner is not resident in the controller, then the data
highway exception and data historian compression settings should be the
same as the input card exception setting, which in turn should be com-
mensurate with the input card resolution. The exception and compression
settings are often set too large (0.25 to 1.0%) in an effort to minimize traffic
and to save on data storage requirements.

Rule 6.1c. — The noise band is the largest for flow, pressure, composition, and
level measurements and can vary from 0.05% to 5% of scale span. For tempera-
ture measurements, the noise is principally from A/D dither (0.05% of
input card span). The signal-to-noise ratio is improved by selecting a scan
time large enough that the true change in the process variable within the
scan time is larger than the measurement resolution. The noise band is
reduced by a proper adjustment of the process variable filter. Some tuners
also provide a noise band or hysteresis setting to discern true changes in
the process variable.
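
Rules 6.1 through 6.1c reduce to simple arithmetic. As a rough illustration
(the helper below is hypothetical, not part of any auto tuner), a minimum
controller output step can be computed from the valve resolution, the
measurement resolution, the noise band, and the process gain:

    def min_step_size(valve_resolution, meas_resolution, noise_band, process_gain):
        # All quantities in percent of the respective spans; process_gain in
        # %PV per %output. The factor of 5 comes from Rule 6.1.
        step_for_valve = 5.0 * valve_resolution
        # Translate the PV-side requirement (5x the larger of measurement
        # resolution and noise band) back to a controller output step.
        step_for_pv = 5.0 * max(meas_resolution, noise_band) / abs(process_gain)
        return max(step_for_valve, step_for_pv)

    # Example: 0.5% valve resolution, 0.05% A/D resolution, 0.2% noise band,
    # and a process gain of 1.5 give a minimum step of 2.5%.
    print(min_step_size(0.5, 0.05, 0.2, 1.5))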

Rule 6.2. — The step size should be the largest possible that does not cause a pro-
cess or operational problem or cause a controller to hit its set point or output limit.
An increase in the step size will reduce the test time and minimize the vul-
nerability to noise, load upsets, and resolution limits. This is particularly
important for slow loops. However, the step must not be so large as to acti-
vate an alarm or exceed a limit or cause a failure.

Rule 6.2a. — Ask the operator and the process engineer what is the largest possi-
ble safe step size in each direction. You need to avoid getting within the accu-
racy limit of constraint, interlock, and relief device settings. Rupture discs
that see excursions within 10% of their setting may experience premature
failure.

Rule 6.2b. — Make sure the step size does not cause a secondary (slave) control-
ler to hit its set point or its output limit or the primary (master) controller output
to hit its output limit for more than the dead time. Some tuners will stop the
tuning procedure as soon as any of a chain of controllers in a cascade con-
trol system hit an output limit. This is particularly a problem with reactor
temperature-to-coolant temperature cascade control systems.

Rule 6.2c. — For split ranged valves, pick a step size and a starting point and
choose a set point to stay away from the split range point to insure each valve is
individually throttled. Avoid crossing back and forth across the split range
point because this is the point of greatest valve nonlinearity and stick-slip.
The lowest gain and derivative time setting and the largest integral time
setting should be used from the separate tests unless the tuning settings
are going to be scheduled based on the valve throttled.

Rule 6.2d. — For override controllers, pick a step size that does not cause the
output signal selector to choose another controller output. Each override con-
troller must be tuned individually with freedom to move the manipulated
variable. There must be no crossing of the signal selection point during the
test. It is necessary to put the other override controllers in manual with an
output far away from the signal selection point so that they are out of the
picture.

Rule 6.2e. — The step size is proportional to the controller gain. For loops with
high controller gains, such as level and temperature, the step size should
be large enough (10% or more) to reduce the time of the test and the effect
of noise. For high process gains that are often encountered in pH and fur-
nace, kiln, or dryer pressure loops, the step size needs to be quite small
(less than 1%) to avoid an excessive upset to the process. Even though
flow loops have a low controller gain, step sizes of 2% or more are needed
to sufficiently get out of the noise band. If a positioner is not used or the
control valve has excessive stiction or backlash, the step size might need to
be an order of magnitude larger.

Rule 6.2f. — For severe operating point nonlinearities, the step must not drive
the process to either an excessively flat or steep slope on a plot of the controlled
variable versus the manipulated variable. A movement from a flat to a steep
portion of the curve will look like a runaway. A transition to the extremely
flat tails of the plot will cause an exceptionally slow response and test.

Rule 6.2g. — For high process gain integrators and runaways, the step size must
not cause an excessive rate of change of the controlled variable. The step size for
furnace pressure and exothermic reactor temperature controllers must be
carefully chosen to prevent the activation of interlocks or relief devices.

Rules of Thumb on Tuning Methods


These rules build on the explanation of the different tuning methods and a
table of typical tuning settings and tuning methods for different types of
processes provided in Chapter 2.

Rule 6.3. — For fast interacting loops and dead time-dominant loops, use the
Lambda tuning method settings. Flow, liquid pressure, liquid pipeline, plug
flow reactor, and solid product control loops tend to interact and have a
dead time larger than the largest time constant (lag). Sheet, web, conveyor,
and fiber spin line are examples of processes with solid products that are
dead time-dominant. Variability that enters almost anywhere in the pro-
cess is trapped in a solid product and cannot be blended out as it can with
liquid products. The Lambda tuning method provides a smooth response
that minimizes variability and the disruption to other loops. It also results
in reset action that increases (reset time that decreases) with the degree of
dead time dominance. For a pure dead time-dominant system the reset
action is as much as 8 times greater (reset time 8 times smaller) for Lambda
tuning compared to Ziegler-Nichols tuning.

Rule 6.3a. — If rapid changes in the controller output disturb other loops, a
higher gain margin (larger Lambda and lower controller gain) should be used than
what is obtained from default Lambda Tuning settings. If changes in the con-
trolled rather than the manipulated variable are important, then a smaller
Lambda (larger controller gain) should be used to provide tighter control.

Rule 6.4. — For lag-dominant loops, use the Ziegler-Nichols tuning method set-
tings but reduce the gain for more robustness. Crystallizer, reactor, evaporator,
and neutralizer temperature, concentration, and pressure control loops
have back-mixed volumes with a large lag. Columns have a series of
mixed interacting volumes that create a large lag.

Rule 6.4a. — For self-regulating loops where the process variable will line out to
steady state in manual, the gain margin should be increased (controller gain
decreased) from the classic Ziegler-Nichols tuning for quarter amplitude to pro-
vide more robustness. While the quarter amplitude response minimizes the
peak error for a load upset, an increase of 25% in the dead time or process
gain can cause instability. The controller gain should be reduced at least by
50% to insure sufficient robustness.

Rule 6.4b. — For fast integrating and runaway loops where the process variable
can rapidly ramp or accelerate in manual, the minimum gain margin (maximum
controller gain) for sustained robustness should be used. The tuning must be
aggressive. The user must pay particular attention to the window of
allowable controller gains for integrators and runaways since too low a
controller gain is as much a problem as too high a controller gain. Vessel
level, boiler drum blow down, and batch vessel temperature and concen-
tration control loops have an integrating response, but the integrating pro-
cess gain (ramp rate) of these loops is usually very slow, except where
there are volumes with extremely small diameters. Of greater general con-
cern are exothermic reactors, fermentors in the growth phase, and com-
pressors in surge, because they can develop an accelerating response for
certain combinations of conditions.

Rule 6.4c. — Pseudo integrators and runaways should be treated as integrators
in terms of tuning settings and options for auto tuners. Gas pressure loops on
large vessels behave as an integrator because in the control range of inter-
est (below relief settings) the pressure never lines out and is a ramp due to
a large process time constant or high process gain. This response is preva-
lent when the controlled variable has units in inches of water column. Col-
umn temperature and pH loops with transitions from low to high process
gains exhibit an acceleration of the PV that looks like a runaway. Aggres-
sive tuning is used to keep the PV from venturing into the steep slopes of
tray temperature or pH titration curves.

Rule 6.4d. — Concentration and temperature control loops with a recycle effect
should be treated as integrators. If the recycle effect is delayed and slow, then
it should be ignored for tuning. Closed loop tuning methods and the relay-
oscillation method of auto tuning will identify tuning settings to match the
fast dynamics. Open loop tuning methods and pretests may try to identify
erroneous tuning settings for the slow ramp from the recycle effect that
should be ignored.

Rule 6.5. — For a controller with an objective of optimization rather than distur-
bance rejection or servo response, use Lambda tuning with a large closed loop time
constant to insure a gradual and smooth approach to the optimum. The most
common example of this type of controller is a valve controller that will
increase a feed rate to ride the maximum capacity constraint of a coolant,
compressor, or pressure control valve used for disturbance rejection. The
constraint is the largest position with sufficient sensitivity on the installed
valve characteristic and enough capacity for fast disturbances.

Rule 6.6. — For control loops where there are time constants greater than 0.2
minutes in the process or measurement besides the largest time constant, consider
the use of a derivative time equal to contribution of these additional lags. The

temperature control loop with its many thermal lags in the process and
thermowell is a common example of a loop with multiple appreciable time
constants.

Rule 6.7. — For control loops where there is a true or pseudo runaway response,
use derivative action and minimize reset action (maximize the integral time).
Rate action senses the change in slope and acceleration of the process vari-
able (PV) that is important for processes whose open loop response shows
an acceleration within the control band (see Chapter 2 for more details).
For polymerization reactors that are highly exothermic, proportional plus
derivative control is used and reset action is omitted since it has no sense
of direction [6.3]. For pH and column temperature control systems where
the PV can move onto a steep portion of the process curve, rate helps to
catch and prevent large excursions. For these processes, the controller may
have to always stay in auto, which eliminates the ability to do a pretest. If
the runaway is slow enough, the integrator option may enable a pretest
but it must be closely monitored for an accelerating PV.

Guided Tour
This tour illustrates the potential ease of use and convenience of an inte-
grated interface that is possible for an auto tuner embedded in an indus-
trial control system. The following areas are addressed:

• Identification of process dynamics
• Simulation of loop response
• Tuning rule selection

The process of tuning control loops is more effective using an automated
tuning tool. This is especially true if the process dynamics are difficult or if
you do not have experience tuning control loops. During startup
situations where hundreds of loops must be quickly tuned, effective
tuning tools may significantly reduce the time needed to commission the
control system. Also, even during normal operation, the ability to easily
establish the best tuning for critical loops can often reduce process
variability. Such an improvement may lead to improved operations.

The interface provided for tuning PID control blocks is accessed from any
station within the control system. By selecting a control loop and choosing
Tune, this interface is activated for the selected block. The controlled
parameter, manipulated parameter and set point of the PID are automati-
cally trended and the current tuning displayed. Tuning is initiated from
any mode of operation by selecting Test. In response, the actual mode of
the PID block changes to LO to indicate that tuning is active. While testing
is active, the process is under two-state control. The manipulated parame-
ter is changed from its initial value by the specified step size for a total of
three cycles, as shown in Figure 6-5.

Figure 6-5. Loop Tuning Interface

Based on the process response during testing, the process dynamics are
automatically determined. Process gain, lag and dead time values are dis-
played in the Test Process portion of the interface. Based on the process
gain and dynamics, a recommended setting is provided for the PID tun-
ing. If the user chooses Simulate, the process response using the recom-
mended tuning can be examined before transferring it to the PID block.
When this selection is made, a simulation of the closed loop response is
shown for set point and load disturbances as illustrated in Figure 6-6.

The robustness of the control is also shown in terms of phase and gain
margin. By selecting a different point on the robustness map, the tuning
and response are automatically displayed for the new robustness selec-
tion. Thus, the tuning is easily selected based on the desired response and
the degree of robustness that is needed in the process. The recommended
tuning is transferred to the PID block by selecting Update.

If the user wants to apply his knowledge of tuning, then by choosing the
Expert option he may choose the tuning rule that is used to determine the
recommended setting based on the identified process dynamics, as shown
in Figure 6-7.

Figure 6-6. Simulation of Loop Responses

Figure 6-7. Expert Tuning Options

When the Expert option is chosen, other selections are provided for special
conditions such as extremely high or low process gain. These expert fea-
tures are not required to tune most processes. Normally, tuning is a simple
two-step process—select Test, select Update—and is quickly and automat-
ically accomplished without additional user input.

Theory

Introduction to Auto Tuners


Auto tuners can provide a consistent automated methodology to find the
best tuning settings for a process-control system. The automatic procedure
consists of three steps:

1. Process testing
2. Evaluation of process characteristics from test data
3. Calculation of controller parameters

The procedure emulates a typical manual tuning. Let's take a look at an
example of a manual procedure using the Ziegler-Nichols (ZN) method,
the first industrial tuning technique introduced in 1942. With this
technique a user has to gradually increase the proportional controller gain to
get limited-amplitude sustained oscillations, measure the period and the
amplitude of the oscillations and finally, using ZN calculations, find
controller parameters. While control literature still cites this method
verbatim, a significant practical modification is actually used in industry
that keeps the loop far from its stability limit. In practice, sustained
oscillations are not sought but just the start of a damped oscillation. Even a
single half wave suffices [6.3]. The period is longer than the ultimate
period for these attenuated oscillations, but it is in the direction of a more
conservative estimate of the integral time.

Another popular tuning technique, Lambda tuning, requires putting the
controller in manual and making a step change on the controller output,
waiting for the full response, and calculating controller tuning parameters
using Lambda tuning rules.

While evaluating various auto tuner techniques we should consider the
following:

• Type of processes where auto tuning is to be applied (slow/fast,
self-regulating/integrating/inverse response)
• Resistance to noise and disturbances
• Process disturbance magnitude required for process testing
• Process operation safety during tuning
• Tuning time
• Achieved controller performance

Both of the manual tuning techniques cited above can be applied to
automatic tuning in principle; however, the requirements for process
disturbance magnitude and safety are difficult to satisfy in practice.
Increasing the controller gain to get just the start of an oscillation for the
modified Ziegler-Nichols closed loop method requires persistent
attention; and the process may move during the time needed to reach
steady state during an open loop test for Lambda tuning. In addition, a
load upset will confuse the person making the test and measuring the
results.

K. Astrom and T. Hagglund proposed using relay action to develop a pro-
cess oscillation. This simple idea has many significant advantages and has
been implemented in a number of industrial products. Applying relay
action for tuning keeps the process under on-off control, which is inher-
ently stable. The technique is robust and practically does not require any
settings. The amplitude of the oscillations is adjusted by relay step size
and relay hysteresis.

Further advancement of the technique has been achieved by improving
Ziegler-Nichols tuning formulas, calculating a simplified first order-plus-
dead time process model from tuning data and finally by identifying an
exact process model.

Basics of Relay-Oscillation Tuning

Process Testing
A two-state relay replaces the PID/PI controller for the period of process
testing as shown in Figure 6-8. During testing the relay keeps a loop under
two-state control. The two-state controlled loop experiences oscillations
with limited amplitude. The frequency of loop oscillations is the critical
frequency, thus the period of these oscillations is the ultimate period. The
amplitude of the oscillations and the relay step size are used for the calcu-
lation of the ultimate gain. The ultimate gain is defined as the proportional
controller gain that brings the system to its stability limit. After these
parameters are defined they are used to calculate controller parameters.

Figure 6-8. Relay-Oscillation Method Block Diagram

A properly implemented relay-oscillation auto tuner adheres to the fol-
lowing rules:

1. Starts tuning when a loop is stable.
2. Collects data and computes various parameters incrementally, as
tuning proceeds.
3. Supervises loop safety; interrupts tuning when loop safety is
violated.
4. Uses averaging rather than instantaneous values in computing
oscillation amplitude.
5. Applies noise protection techniques by using relay hysteresis,
filters and algorithmic noise protection.
6. Makes partial results available if tuning is interrupted.
7. Facilitates selection of various techniques for controller parameters
computations.

Figure 6-9 illustrates a typical time plot of the relay output and the process
variable (PV) during tuning. Note that after initialization the relay is
triggered at the point when the PV passes through the set point. The relay
amplitude (d) is typically 3 to 10 % of the controller output range. The PV
change (a) is largest during initialization (the first half-period sine wave).
Typically, the PV change ranges from 1 to 3 percent of the PV range.

Figure 6-9. Relay-Oscillation Response

Oscillations must continue for at least one cycle after initialization. If more
cycles are used for tuning, the average amplitude and period of the oscilla-
tions is used to determine the ultimate gain and ultimate period. Usually
two cycles are adequate for defining the amplitude and period of the oscil-
lations as the average values.
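
To make the mechanics concrete, the following self-contained sketch
simulates relay feedback on an assumed first order-plus-dead time process,
averages the oscillation period and peak amplitude after initialization, and
applies the classical describing-function estimate Ku = 4d/(πa) developed in
the next subsection. The code and its parameter values are illustrative only,
not the embedded auto tuner:

    import math
    from collections import deque

    def relay_test(kp=1.0, tau=10.0, dead_time=2.0, d=5.0, hyst=0.1,
                   dt=0.05, t_end=200.0):
        delay = deque([0.0] * int(round(dead_time / dt)))  # dead-time buffer
        pv, sp, out = 0.0, 0.0, d      # the relay starts with a positive step
        switch_times, peaks = [], []
        pv_max, t = 0.0, 0.0
        while t < t_end:
            delay.append(out)
            pv += dt * (kp * delay.popleft() - pv) / tau   # first-order lag
            pv_max = max(pv_max, pv)
            if out > 0 and pv > sp + hyst:                 # switch relay down
                out = -d
                switch_times.append(t)
                peaks.append(pv_max)
                pv_max = -float("inf")
            elif out < 0 and pv < sp - hyst:               # switch relay up
                out = d
            t += dt
        skip = 2   # discard initialization transients
        periods = [t2 - t1 for t1, t2 in
                   zip(switch_times[skip:-1], switch_times[skip + 1:])]
        Tu = sum(periods) / len(periods)                   # ultimate period
        amp = sum(peaks[skip:]) / len(peaks[skip:])        # PV amplitude
        Ku = 4.0 * d / (math.pi * amp)                     # ultimate gain estimate
        return Ku, Tu

    print(relay_test())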

Evaluation of Process Characteristics


The frequency domain is used for loop analysis during tuning. Relay
action is expressed by a describing function, N(a, ε, d). The describing
function approximates a nonlinear element with a linear transfer function
for the first harmonic frequency of the Fourier transform and is expressed
at the critical frequency by the equation:

N(a, ε, d) = (4d/(πa))·(√(1 − (ε/a)²) − i·(ε/a))

where:
N(a, ε, d) = relay-describing function at the critical frequency
ε = relay hysteresis
d = relay amplitude
a = amplitude of the oscillation of the process variable (PV)
i = imaginary unit

Figure 6-9 shows the relay parameters: relay output amplitude d and
hysteresis ε.

Conditions for the oscillatory loop are:

G(iωc)·N(a, ε, d) = −1 (6-1)

where:
G(iωc) = process frequency response at the critical frequency ωc

The Nyquist plot in Figure 6-10 shows the process transfer function G(ω)
and the negative inverse of the relay describing function with ε = 0 and
with ε > 0. The intersection of these two functions defines the loop
oscillation parameters.

The amplitude of N(a, ε, d) is considered an approximation for the ultimate
gain. Note that with ε = 0, N(a, ε, d) is exactly the ultimate gain, as shown
in the formula

Ku = 4d/(πa) (6-2)

However, with hysteresis ε > 0, the gain and frequency determined by the
oscillation are shifted from the original ultimate gain and ultimate period.

Figure 6-10. Relay-Oscillation Tuning on Nyquist Plot.

As illustrated in Figure 6-10, the hysteresis used for noise protection intro-
duces an error in defining the critical point on the Nyquist curve. To pre-
vent the introduction of excessive errors, complementary noise protection
techniques should be applied. A PV filter time constant equal to a properly
selected loop scan period does not degrade loop performance and is bene-
ficial for process test results. An algorithmic noise protection disables
relay switching at the start of each half period for about half of the initially
defined dead time (see Figure 6-9). The algorithmic noise protection has
proven to be an effective method of noise protection without adversely
affecting identification of the ultimate gain and ultimate period.

A significant advancement in relay-oscillation tuning has been achieved
by an estimate of the process dead time from a linear regression to find the
tangent line to the process step response curve (see the start of tuning in
Figure 6-9). The tangent line is extrapolated to intercept the set point or
process output value line at the tuning start. The time between the initial
relay step and this intercept is the apparent dead time. A complementary
technique estimates the dead time as the delay of the PV reaction after the
relay output changes direction (see time interval t1–t2 in Figure 6-9). The
apparent dead time is finally calculated as the average of the two results.

Calculation of PID Controller Parameters


Having identified the ultimate gain Ku and the ultimate period Tu, PID
controller settings are determined by Ziegler-Nichols rules as in Table
6-1.

Table 6-1. Ziegler-Nichols Calculations of PID Controller Parameters


Controller    K         Ti        Td
P             0.5 Ku    —         —
PI            0.4 Ku    0.8 Tu    —
PID           0.6 Ku    0.5 Tu    0.1 Tu
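
Table 6-1 translates directly into code. The helper below (a hypothetical
convenience function, with values taken straight from the table) maps the
identified ultimate gain and period to controller settings:

    def ziegler_nichols(Ku, Tu, controller="PID"):
        # Return (gain, integral time, derivative time) per Table 6-1.
        if controller == "P":
            return 0.5 * Ku, None, None
        if controller == "PI":
            return 0.4 * Ku, 0.8 * Tu, None
        if controller == "PID":
            return 0.6 * Ku, 0.5 * Tu, 0.1 * Tu
        raise ValueError("controller must be 'P', 'PI', or 'PID'")

    # Example: Ku = 13.7 and Tu = 68 sec (first row of Table 6-2)
    print(ziegler_nichols(13.7, 68.0))  # -> (8.22, 34.0, 6.8)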

K. Astrom proposed more flexible rules with a phase and gain margin
specification, as follows:

(6-3)

where:
φ = desired phase margin
α = design selection of the ratio Td:Ti
Gm = the desired inverse of gain margin (default value 0.5)
K, Ti, Td = controller gain, integral time and derivative time

Equation 6-3 gives better tuning results than the original Ziegler-Nichols
rules for processes with small dead time-to-time constant ratios. If during
the process test dead time τd is defined, better results are achieved over a
wide range of process dynamics by using non-linear tuning rule
estimators developed by the authors of this chapter. Non-linear estimators
are intended to correct the following major deficiencies of Ziegler-Nichols
tuning:

• Too short integral time for processes with small dead time-to-time
constant ratios
• Excessive integral time for processes with large dead time-to-time
constant ratios
• Excessive controller gain for processes with small dead time-to-
time constant ratios

Non-linear tuning estimators have a general form as follows:

(6-4)

Sigmoid functions are used for creating f1(Tu, τd) and f2(Tu, τd). A sigmoid
function is used for modeling a neural network (see Chapter 8).

The following formulas were developed and tested to satisfy the above
requirements:

(6-5)

(6-6)

a1, a2, b1, b2, c1, c2 are heuristic coefficients


0.25 ≤ a1 ≤ 0.4; 0.25 ≤ a2 ≤ 0.4; b1 ≈ 0.6, b2 ≈ 1.0, c1 ≈ 7.0, c2 ≈ 6.0

Figure 6-11. Non-linear Functions for Computing Controller Integral Time and
Gain

Equation 6-5 gives the value of the coefficient used for the Ti computation,
which smoothly varies from a minimal value of a1 to a maximum value of
a1 + b1 (upper plot in Figure 6-11). Equation 6-6 provides the coefficient for
the K computation in the range [a2 + (b2 − a1)/c2, a2 + (b2 − a1 − b1)/c2]
(lower plot in Figure 6-11). Let's take a closer look at the plots to better see
the tuning dependencies. When the ratio Tu/τd is close to 2, the process is
dead time-dominant. An increased Tu/τd ratio signifies a decreased τd/τ
ratio. The upper plot shows that for a dead time-dominant process
Ti ≈ 0.35 Tu, while for a process with a small dead time Ti ≈ Tu. The
controller gain for a dead time-dominant process is K ≈ 0.4 Ku, while for a
process with a small dead time it is K ≈ 0.27 Ku (lower plot).

Implementation of non-linear tuning estimators results in significantly
smoother, more robust loop responses, which are close to the IMC or
Lambda tuning (rather than to the ZN quarter amplitude decay). Typical
tuning plots and loop step responses for a second-order process with
Kp = 1, τ1 = 10 sec, τ2 = 3 sec, τd = 2 sec are shown in Figure 6-12.

Figure 6-12. Tuning Plots and Loop Step Responses for the Process with
τd/τ ≈ 0.2, loop scan = 0.1 sec

The auto tuner gave the following controller settings: K = 1.25, Ti = 12.36
sec, and Td = 1.97 sec, comparable to the IMC calculations with τfilter = 13
sec: K ≈ 0.87, Ti = 13 sec, and Td = 2.3 sec.

Developing The Process Model


Having defined the process dead time τd, ultimate gain Ku, and ultimate
period Tu, it is possible to calculate the process time constant τ and process
static gain Kp needed for model-based tuning.

One of the advantages of relay-oscillation tuning is that it is applicable to
both self-regulating and integrating processes. While determining the
apparent dead time, the gain of an integrating process model is identified
as the maximum slope of the process output during initialization. The cal­
culation of a first-order plus dead time process model is detailed in refer­
ence 6-13. The following equations are derived from the first-order process
model transformed into the frequency domain:

(6-7)

where:
τ = process time constant
Tu = ultimate period
τd = process apparent dead time
Kp = process static gain
Ku = ultimate gain

The process time constant in Equation 6-7 is expressed by a tangent
function, which gives a good approximation when the dead time is
relatively large in relation to the time constant (the argument of the
tangent is less than π/6). For processes with insignificant dead time, a
large error results in the time constant computation, even for a small error
in the identified dead time. In such cases the argument of the tangent
approaches π/2, and a small error in the argument changes the tangent
value significantly.

The accuracy of model identification for a self-regulating process is
improved by identifying the process static gain through a closed loop step
test. Following the relay-oscillation test, a small set point change is made
with the controller in automatic mode. The static gain is calculated as the
ratio of percent change in set point and percent change in controller out­
put between steady states.

Kp = SP/u (6-8)

where:
SP = set point change (%)
u = controller output change from one steady state to a new
steady state (%)
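
A small numeric sketch of Equation 6-8, with made-up test data:

    def static_gain(sp_change_pct, out_change_pct):
        # Equation 6-8: percent SP change over percent output change.
        return sp_change_pct / out_change_pct

    # Hypothetical closed loop steps: a +2% SP change moved the output +1.6%
    # between steady states; the repeat in the opposite direction moved it -1.5%.
    print(static_gain(2.0, 1.6))    # 1.25
    print(static_gain(-2.0, -1.5))  # 1.33
    # A noticeable difference between the two directions points to valve
    # hysteresis, as discussed next.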

To detect valve hysteresis, the test should be repeated with a set point
change in the opposite direction. Knowing the process static gain, ultimate
gain, and ultimate period, the process dead time and time constant are
calculated from the following equations:

(6-9)

Table 6-2 demonstrates the accuracy of using these equations. The ultimate
gain and ultimate period were determined for several first-order plus
dead time process models using relay oscillation. The time constant and
dead time were calculated using Equations 6-9 and the actual static gain.
The difference between actual and calculated time constant and dead time
averaged less than five percent.

Table 6-2. Comparison of Actual and Calculated Process Model from Static Gain,
Ultimate Gain and Ultimate Period
Kp actual   τ actual   τd actual   Ku     Tu     τ calc   τd calc
1.0         100.0      12.0        13.7   68.0   106.2    12.5
1.0         25.0       3.0         13.7   12.0   26.0     3.1
1.0         50.0       12.0        7.5    66.0   52.1     11.0
1.0         12.5       3.0         7.4    11.2   13.0     3.0
1.0         66.7       12.0        9.5    66.0   69.2     12.3
1.0         16.7       3.0         9.8    11.2   17.3     3.0
1.0         200.0      12.0        26.0   50.0   207.0    12.8
1.0         50.0       3.0         27.1   12.0   51.7     3.1
1.0         15.0       2.0         12.3   8.0    15.7     2.1
2.0         15.0       2.0         6.2    8.0    15.6     2.1
0.5         15.0       2.0         26.7   8.0    15.7     2.1

Better model accuracy is achieved by applying optimization techniques
or least-squares identifiers to the process test data. Using those techniques
with relay-oscillation process testing requires faster data collection than
the loop scan rates typically used in industry. The techniques are discussed
in Chapter 9.

Model Based Tuning


In recent years significant progress has been made with model based tun­
ing, particularly with Lambda and Internal Model Control (IMC) tuning
[6.11], [6.12]. Both approaches result in a first-order closed loop response
to set point changes. A tuning parameter relating to the speed of response
is used to vary the tradeoff between performance and robustness. Both
methods adjust the PID controller reset (or reset and rate) to cancel the
process pole(s) and adjust the controller gain to achieve the desired closed

loop response. IMC and Lambda tuning have become popular because
oscillation and overshoot are avoided and control performance is specified
in an intuitive way through the closed loop time constant. When there is
mild to moderate interaction between loops, choosing the appropriate
closed loop time constants with Lambda tuning can minimize the impact
of the interactions.

IMC tuning rules for self-regulating processes use the three model param­
eters for a first order-plus-dead time model and a user-defined filter time
constant τfilter to determine PID controller settings. The filter time constant
should be set to the desired closed loop time constant. A larger value of fil­
ter time constant gives more damped tuning. Lambda tuning rules use a
parameter λ , which is also used to set the desired closed loop time con­
stant for a self-regulating process. The closed loop time constant is usually
chosen to be longer than the open loop time constant, perhaps as much as
two or three times longer. This is to ensure robustness in the event of inac­
curacy in model identification and changing process conditions. Process
analysis tools have been used to assist in choosing the closed loop time
constant to achieve minimum variance control [6.12]. A proper closed loop
time constant will attenuate slow disturbances without amplifying noise
in the process measurement. The following are formulas for IMC and
Lambda calculations:

Self-regulating Process
IMC Tuning Rules

PI Control:
(6-10)

where τfilter is the filter time constant.

PID Control (ISA standard form):

(6-11)

Lambda Tuning Rules

PI Control:

(6-12)

where λ defines the desired closed loop time constant.
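
For concreteness, the sketch below implements one commonly published
form of the Lambda PI rule for a first order-plus-dead time model (reset
equal to the process time constant, gain equal to τ/(Kp·(λ + τd))); this is a
representative textbook statement, and the exact form of Equation 6-12
may differ:

    def lambda_pi(Kp, tau, dead_time, lam):
        # Assumed common form of the Lambda PI rule for a first order-plus-
        # dead time process; Equation 6-12 may differ in detail.
        Ti = tau                                # reset cancels the process pole
        K = tau / (Kp * (lam + dead_time))      # gain sets closed loop speed
        return K, Ti

    # Closed loop time constant chosen three times the open loop lag:
    print(lambda_pi(Kp=1.0, tau=10.0, dead_time=2.0, lam=30.0))  # (0.3125, 10.0)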



PID Control:

There is no direct form of Lambda Tuning for the PID controller using a
first order-plus-dead time process model.

Integrating process
Model based control of integrating processes makes it possible to achieve
averaging control. Rather than attempting to maintain tight control by
aggressively moving the manipulated variable, averaging control uses the
tank, in the case of level control, to absorb disturbances. By allowing level
to swing within limits, variability in the outlet flow is reduced which min­
imizes disturbances to downstream processes. The IMC rule given below
is intended for integrating processes with a significant dead time. The IMC
and Lambda results are equivalent when there is no dead time. For inte­
grating processes the tuning parameters, τf and λ, are the desired time
for the disturbance effect to be arrested, i.e., the time the process output
begins to recover following a disturbance. Increasing the value of τf or λ
relaxes control.

IMC Tuning Rule

PI control:

(6-13)

Ki = Δy/(Δu·Δt) = integrating process gain

Δy = % change in the process output
Δu = % change of the step change in the process input
Δt = time interval over which Δy is calculated

Lambda Tuning Rule

PI control:

(6-14)

Model based tuning using the IMC or Lambda rules has some advantages
over other tuning rules. Controllers are less sensitive to noise, valve life is
prolonged, and process variability is minimized. However, the pole-zero
cancellation approach results in poor load disturbance performance when
the process time constant is long. A technique suggested to overcome this

is to apply the integrating process model in place of the self-regulating
model. Recall that the relay-oscillation auto-tuning technique is capable of
identifying both models for a self-regulating process. Using the IMC rule
for an integrating process may improve the performance of a self-regulat-
ing process with a long time constant.

Model based tuning is extended to the Smith Predictor dead time-com-
pensating PID controller. The PI controller settings are determined using
IMC or Lambda rules, applying the gain and time constant from the pro-
cess model, with little or no dead time.

The relay-oscillation tuning method may also be used to auto-tune the
fuzzy logic PID controller. The process parameters needed to tune the scal-
ing factors are the ultimate period, ultimate gain, apparent dead time, and
apparent time constant. Details are discussed in Chapter 7.

The discussed formulas provide tuning for the ISA standard form control-
ler. If a series PID controller is in use the parameters should be appropri-
ately converted:

K^s = F·K^n,  Ti^s = F·Ti^n,  Td^s = Td^n/F,  F = 0.5 + √(0.25 − Td^n/Ti^n) (6-15)

Note: Superscript s is used with series controller parameters, and super-
script n is used with standard ISA controller parameters.

If Td = 0 (PI controller), parameters for both controllers are identical.

For conversion from series form to the parallel (ISA standard) form, the formulas are:

K^n = K^s·(Ti^s + Td^s)/Ti^s,  Ti^n = Ti^s + Td^s,  Td^n = Ti^s·Td^s/(Ti^s + Td^s) (6-16)
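
The conversions given in Equations 6-15 and 6-16 mechanize as follows;
the function names are ours, and the standard-to-series direction is only
defined when Ti ≥ 4·Td:

    import math

    def standard_to_series(K, Ti, Td):
        # ISA standard (noninteracting) -> series (interacting); only defined
        # when Ti >= 4*Td, otherwise the factorization has no real solution.
        if Ti < 4.0 * Td:
            raise ValueError("no real series equivalent: requires Ti >= 4*Td")
        F = 0.5 + math.sqrt(0.25 - Td / Ti)
        return K * F, Ti * F, Td / F

    def series_to_standard(K, Ti, Td):
        # Series (interacting) -> ISA standard (noninteracting).
        return K * (Ti + Td) / Ti, Ti + Td, Ti * Td / (Ti + Td)

    # Round trip check: the two conversions are inverses of each other.
    print(series_to_standard(*standard_to_series(8.22, 34.0, 6.8)))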

Robustness Based Tuning


Before accepting tuning calculations, it is important to know how much the
process gain and dead time can change before a loop becomes unstable.
The answer is obtained from the robustness plot as shown in Figure 6-13.
The plot shows the stable area for a loop and a point with the loop's nomi-
nal dead time and gain ratios. Around the point is an area of assumed
changes of process dead time and gain. The bounds are set for an increase

in dead time and gain, which move control toward the stability limit, and
for a decrease in dead time and gain where control is too sluggish. A factor-
of-two increase or decrease in the gain and dead time is usually assumed. For a
properly tuned loop (loop 2) the area of assumed process-parameter
change should be within the loop's stable area; otherwise (loop 1), the con-
troller tuning parameters should be adjusted to expand the stable area.
The robustness plot uses a logarithmic scale [6.14].

Figure 6-13. Robustness Plot - Loop Stable Area and Controller Operation
Area

John Gudaz [6.18] developed a practical way of graphical interactive tun-
ing using a robustness map. The robustness map uses loop gain margin
and phase margin coordinates, instead of gain-dead time coordinates as in
the robustness plot.

It is useful at this point to define gain margin and phase margin and illus-
trate the definitions on the Bode plot in Figure 6-14.

Gain margin is the inverse of the loop gain at the phase crossover frequency,
i.e., the frequency where the loop phase shift is 180 degrees. It states simply
how many times the loop gain can be increased to reach the stability limit
at the crossover frequency. Since the stability limit gain at the crossover
frequency is equal to 1, we can write:

K·GM = 1, so GM = 1/K

Figure 6-14. Gain Margin and Phase Margin on Bode Plot

Phase margin is the phase shift that is added to the loop phase shift at the
gain crossover frequency to make a total phase shift of 180 degrees. The
gain crossover frequency is the frequency when the loop gain is equal to 1.
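
Both margins fall out of a numeric sweep of the open loop frequency
response. The sketch below computes them for a hypothetical PI-plus-first
order-plus-dead time loop, with a simple grid search standing in for a
proper crossover solver:

    import cmath, math

    def margins(Kp, tau, dead_time, Kc, Ti, w_max=5.0, n=50000):
        # Gain and phase margin of loop = PI controller * FOPDT process,
        # found by a brute-force sweep of the open loop frequency response.
        gm = pm = None
        prev_phase = 0.0
        for k in range(1, n):
            s = 1j * (w_max * k / n)
            L = Kc * (1 + 1 / (Ti * s)) * Kp * cmath.exp(-dead_time * s) / (tau * s + 1)
            mag = abs(L)
            phase = math.degrees(cmath.phase(L))   # in (-180, 180]
            # Phase crossover (-180 deg, seen as a wrap): gain margin = 1/|L|.
            if gm is None and phase > 90 and prev_phase < -90:
                gm = 1.0 / mag
            # Gain crossover (|L| falls through 1): phase margin = 180 + phase.
            if pm is None and mag <= 1.0:
                pm = 180.0 + phase
            prev_phase = phase
        return gm, pm

    print(margins(Kp=1.0, tau=10.0, dead_time=2.0, Kc=2.0, Ti=10.0))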

A sample robustness map is shown in Figure 6-15a. A robustness map pre-
sents a range of reasonable tuning parameters from which the user can
select. Tuning objectives bound the tuning parameter range. The bounds for
Figure 6-15a are defined by:

• Tuning for near zero overshoot on a step change in set point (IMC
tuning)
• Near minimum integral absolute error (IAE) for a step disturbance

The robustness coordinates are in terms of gain margin and phase margin.
Within the delineated tuning area there are a variety of responses
available, the intention being to simply click at a point to get a set of
tuning parameters and view the responses. A set point performance range
at a gain margin of three and six is shown in Figure 6-15a (the vertical
number sequence on the robustness map correlates phase margin to
response curve). Significant variation in the set point overshoot in Figure
6-15b for different tuning points on the robustness map in Figure 6-15a
illustrates that not only are there performance and robustness tradeoffs to
be made, but there are performance tradeoffs between the speed of the
approach to set point and overshoot, as well.

Figure 6-15a. Robustness Map at Gain Margin = 3 and 6

Figure 6-15b. Responses at Gain Margin = 3 and 6

There are no particular rules for picking the defining response objectives.
Using well-known tuning rules (such as Lambda and IMC) to establish a
set point response boundary has proven to be a sound approach and pre-
set point response boundary has proven to be a sound approach and pre-
sents rules for which there is background describing rationale and perfor-
mance. Depending upon the presentation preference, more than two
defining objectives could be presented and those that appear on the map
could be user selectable. Robustness map shape, whatever the specific
defining tuning objectives, depends upon process type and user-specified
tuning preference and controller structure (PD, I, PID, etc.). Some combi-
nations (Proportional only, Integral only, for example) result in a robust-
ness map that is simply a line.

While the robustness map is meant to define a range of tuning parameters,
it is not necessarily meant to create hard constraints at the boundaries of
that range. As a practical matter a robustness map is intended to provide a
tuning range from which good tuning is extracted.

For some process types, performance measurements other than IAE or
overshoot are more relevant to the functioning of the loop. An example is
an integrating process, where surge percentage and arrest time are essen-
tial performance indicators. In such cases the addition of these perfor-
mance measurements will substantially enhance the utility.

Combining performance and robustness requires performance plots as
well as the values of characteristic measurements. Response plots are less
abstract than numerical measurements and provide a form of definition
for the measurements. The simulation needs to show set point response
and disturbance response. Disturbance responses should be in terms of
defined types of process disturbances and could include noise. A robust-
ness map provides the means for point-and-click tuning, but without a
plotted response, it is difficult to make a judgment as to whether or not
the tuning is acceptable. Fast simulation and plotting allow the user to
view a large number of responses in seconds. This makes the overall pre-
sentation more comfortable and interactive and enhances its use as a tun-
ing tool.

Adaptive Control
A perfectly tuned PID controller may degrade over time and perform
poorly or become oscillatory. There are two main reasons for these
changes.

1. The controlled process is non-linear and the process operation has
entered a region with significantly different process parameters
than during the tuning.
2. The process parameters have changed since the auto-tuning was
performed.
The application of a gain scheduler, which adjusts the PID controller gain
as a function of the operating point, is often sufficient for solving the first
problem. In more demanding applications the PID controller is auto-tuned
in several ranges and the PID gain and reset and rate set appropriately to
match the current working range. In between ranges, parameters are
approximated by weighted averages, taking into account parameters in
adjacent ranges and distance from the range boundaries [6.20].

To achieve the desired performance in the second case, the tuning should
be repeated periodically or upon changes in the controller performance. If

the procedure is triggered and controlled automatically, the operation is
called adaptive tuning. Adaptive tuning can also be applied in the first case.

Adaptive tuning or the more general area of adaptive control has been a
significant research focus for years. Many sophisticated techniques have
been developed for adjusting controller parameters to process changes.
Practical progress is, however, less impressive, because the safety, reliabil-
ity and robustness requirements of practical applications often exceed
those achieved in research.

We can classify simple and reliable adaptive tuners for PID controllers as
shown in Figure 6-16.

Figure 6-16. Adaptive Tuning Techniques for PID Controllers

There has been significant effort over the years to implement adaptive tun-
ers. This section presents the more promising candidates for implementa-
tion as industrial products.

Model-Free Adaptive Tuning


Model-free tuning evaluates controller performance and adjusts the PID
controller parameters to achieve the desired controller performance. A
process model is not used in the calculations. Model-free tuning attracts
developers of PID self-tuners because it promises fewer calculations and
smaller process excitation than would be required for model-based tuning.

These were major considerations leading to development of the first PID
self-tuning products, which were model-free adaptive tuners. A classic
example of a model-free tuning is the PID adaptive controller Exact [6.4]
that adjusts the controller parameters based on process variable (PV)
behavior.

This controller adapts from the loop response to a set point change or a
load disturbance, when the actual damping and overshoot of the error sig­
nal are recognized, and the period of oscillation is measured. If there are
no oscillations in the error signal, proportional control gain is increased,
and integral and derivative times are decreased. If an oscillation occurs,
damping and overshoot of the oscillation are measured and controller
parameters are adjusted accordingly [6.4].

The adaptation principle is simple; however, it suffers from a few
shortcomings:

1. Adaptation works only when the control response is oscillatory.
2. When the control response is not oscillatory during quiet periods
of operation, the changes in the controller parameters are too
aggressive.
This way of tuning has the following practical implications:

1. Adaptation requires more than one set point change to tune an
over-damped controller.
2. The controller is tuned for an oscillatory response, with a small
safety margin.
There are other known implementations of the PID adaptive controllers,
like UDC 6000, that exploit an oscillatory control error response and there­
fore have similar features [6.4].

Another method of model-free adaptive control, where the adaptation is
based on controller parameter relationships, avoids some of the pitfalls
noted above. This method examines the relationships between the control­
ler's proportional, integral and derivative terms and adjusts the controller
parameters to achieve the desired relationships [6.21], [6.22]. This
extended adaptive system, which exploits PID parameter relationships
and the PV behavior, is shown in Figure 6-17.

The adaptive tuner is composed of five functional components: Supervisor,
Excitation Generator, Gain Adaptation, Reset/Rate Adaptation and
Safety Net.

The supervisor coordinates the operation of all tuner components. It tests
the condition for starting the gain adaptation cycle:

|e(k)| > Emin or |Δe(k)| > ΔEmin (6-17)



Figure 6-17. Model-free Adaptive PID Control Loop

For reset adaptation, an additional condition should be satisfied:

e(k)·Δe(k) < 0 (6-18)

Both controller input (PV) and output (out) should be within their operat-
ing range and there should be no oscillations. If the supervisor detects
oscillations, it activates the safety-net component, which cuts the controller
gain every time the PV crosses the SP line, that is:

If e(k)·e(k−1) < 0, then Kc(k+1) = η·Kc(k), η ≈ 0.95 (6-19)

Adaptation works from set point changes, made naturally or injected by
the excitation generator, or from load disturbances. When the conditions for
gain adaptation are satisfied, the gain is adjusted according to the follow-
ing formula:

ΔKc(k) = χ·Kc(k)·(W(k) + Wref), χ = 0.02 to 0.05 (6-20)

χ defines the speed of gain adaptation

W(k) = sign[e(k)]·sign[Δ²e(k)] (6-21)


Wref is a value from the interval −1 to +1. In practice it was found that
Wref ≈ −0.3 delivers an adaptive gain close to the model-based tuning. W(k)
is an oscillation index, which is negative for an oscillatory PV and positive
for a normally damped and over-damped PV. As is seen from Equation
6-20, W(k) + Wref > 0 causes a gain increase.
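
A compact sketch of one scan of the gain-adaptation logic in Equations
6-19 through 6-21; the constants follow the text, while the function itself
and its names are ours:

    def adapt_gain(Kc, e_hist, chi=0.03, w_ref=-0.3, eta=0.95):
        # One scan of gain adaptation per Equations 6-19 to 6-21; the
        # supervisor checks of Equation 6-17 are assumed to have passed.
        e2, e1, e0 = e_hist                 # e(k-2), e(k-1), e(k)
        if e0 * e1 < 0:                     # safety net (6-19): PV crossed SP
            return eta * Kc
        sign = lambda x: (x > 0) - (x < 0)
        W = sign(e0) * sign(e0 - 2 * e1 + e2)   # oscillation index (6-21)
        return Kc + chi * Kc * (W + w_ref)      # gain update (6-20)

    # A smoothly decaying, over-damped error raises the gain:
    print(adapt_gain(1.0, [0.9, 0.75, 0.65]))   # 1 + 0.03*(1 - 0.3) = 1.021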

Reset adaptation uses the property of a PID controller that a firm relation-
ship between the increments of the proportional and integral terms can be
specified:

ΔP = Σ ΔPk ; ΔI = Σ ΔIk ; β = ΔP/(ΔI·α) (6-22)

The summation is performed over a predefined number of scans or over
the whole period when the adaptation conditions (6-17) and (6-18) are sat-
isfied.

In Equation 6-22, α is set for a desired controller performance, and the
controller integral time is adapted to achieve this performance by making
β = 1, according to the formula:

ΔTi(k) = γ·Ti(k)·(1/β − 1) (6-23)

γ ≈ 0.05 defines the speed of reset adaptation.

The tracking of the changes in the process lag and the appropriate adjust­
ment of the controller's reset time are demonstrated in Figure 6-18a. The
adaptive performance of this type of controller to set point changes is
shown in Figure 6-18b.

Figure 6-18a. PID Reset Adaptation



In this figure, Process Gain = 1.5, PID Gain = 1.0, Dead Time = 2 sec, Lag
Time = 15 + 10·sin(k/14400), SP changes every 240 sec, trend sample time =
20 sec, α = 1.6.

Figure 6-18b. A Typical SP Response for an Adaptive PID Controller (α = 1.6)

The controller-switching adaptive technique evaluates the performance of
various controllers with a predefined set of parameters and selects the best
performing controller. The original idea assumes sequential evaluation of
performing controller. The original idea assumes sequential evaluation of
controllers. Despite the simplicity of the concept, it was not effective in
implementation, because of the significant time required for the evalua-
tion of controllers in the sequence.

M. G. Safonov and T. C. Tsao introduced the concept of a fictitious set point
for the PID controller that assumes a simultaneous evaluation of the actual
controller in the loop and all other controllers in the assumed controller
set [6.27]. The set of candidate controllers is fitted to the process input and
output measured data over the period of adaptation. To make the fit of the
same data set possible for various controllers, a set point value different
from the true set point is calculated. The fictitious set point for an ideal
PID controller (6-24) is calculated by inverting the PID controller as in Fig-
ure 6-19 [6.28].

out = (kp + ki/s)·(SP − PV) − [s·kD/(s·Td + 1)]·PV (6-24)

Figure 6-19. Fictitious Set Point Diagram

After the fictitious set point is generated, the performance index is calcu-
lated as a squared control error relative to the fictitious set point. The per-
formance index also accounts for the controller output move and the
difference between the real and fictitious set points.
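
For a PI controller (kD = 0), Equation 6-24 can be inverted scan by scan in
discrete time: solving the velocity-form PI law for the error gives the
fictitious error, and the fictitious set point is the PV plus that error. The
discretization below is our own simplified sketch under that assumption:

    def fictitious_setpoint(out, pv, kp, ki, Ts):
        # Inverts the velocity-form PI law
        #   out(k) - out(k-1) = kp*(e(k) - e(k-1)) + ki*Ts*e(k)
        # for e(k); then SP_fict(k) = PV(k) + e(k). Zero initial error assumed.
        sp_fict, e_prev = [], 0.0
        for k in range(1, len(out)):
            e = ((out[k] - out[k - 1]) + kp * e_prev) / (kp + ki * Ts)
            sp_fict.append(pv[k] + e)
            e_prev = e
        return sp_fict

    # Hypothetical recorded data (controller output and PV over five scans):
    out = [50.0, 50.5, 51.2, 51.6, 51.8]
    pv = [40.0, 40.2, 40.6, 40.9, 41.1]
    print(fictitious_setpoint(out, pv, kp=1.0, ki=0.1, Ts=1.0))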

The controller with the minimum performance index is selected to be the


active controller for the next scan. The performance index is integrated
over the adaptation period, defined as the time when reasonable changes
of the process variable are detected.

Model-Based Adaptive Tuning


Model-based tuning adjusts the process model to the changes in the pro-
cess itself and then updates the controller parameters based on the model.
The process model is built and adapted in the time domain or the fre-
quency domain. The most common classical approach for both the time
and frequency domains is a recursive adaptation method of the type
shown in Figure 6-20 that adjusts the model parameters every scan.

Figure 6-20. PID Loop with Recursive Adaptation.



An adaptive PID loop contains supervisor and safety-net blocks, two typi­
cal blocks for any adaptive design. As soon as adaptation conditions,
defined primarily by control error, exist, the supervisor activates a recur­
sive model update mechanism. The recursive mechanism updates the cor­
rective term first and then recursively updates the process model
parameters:

θk = θk−1 + Pk·χk−1·ek (6-25)

where:
θk = process model parameters vector
Pk = corrective term (matrix)
χk−1 = regression vector containing process output and input variables
ek = error of the model output relative to the true process output
The controller parameters are then recalculated based on the updated
model. The controller design is performed in both the time and frequency
domains.
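
Equation 6-25 is the generic recursive update. One common realization of
the corrective term is recursive least squares (our choice here for
illustration); a per-scan identifier then looks like this:

    def rls_update(theta, P, x, y, lam=0.99):
        # One recursive least-squares step of the form of Equation 6-25:
        # theta_k = theta_{k-1} + (corrective term) * e_k, with a forgetting
        # factor lam for tracking slowly drifting parameters.
        n = len(theta)
        e = y - sum(theta[i] * x[i] for i in range(n))      # model output error
        Px = [sum(P[i][j] * x[j] for j in range(n)) for i in range(n)]
        denom = lam + sum(x[i] * Px[i] for i in range(n))
        K = [Px[i] / denom for i in range(n)]               # corrective term
        theta = [theta[i] + K[i] * e for i in range(n)]
        P = [[(P[i][j] - K[i] * Px[j]) / lam for j in range(n)] for i in range(n)]
        return theta, P

    # Identify y(k) = a*y(k-1) + b*u(k-1) from simulated data (a=0.9, b=0.5):
    theta, P = [0.0, 0.0], [[100.0, 0.0], [0.0, 100.0]]
    y_prev, u = 0.0, 1.0
    for k in range(200):
        y = 0.9 * y_prev + 0.5 * u
        theta, P = rls_update(theta, P, [y_prev, u], y)
        y_prev, u = y, (-u if k % 10 == 9 else u)           # toggle for excitation
    print(theta)   # approaches [0.9, 0.5]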

Typical problems of recursive identifiers include selection of initial param-
eters, insufficient excitation, filtering, parameter wind-up, and slow
parameter tracking speed.

Much innovative work has focused recently on adaptation by model
switching in the time domain and adaptation by using the discrete Fourier
transform in the frequency domain. These two adaptive techniques offer
promise for industrial applications.

Model Switching Adaptation in the Time Domain


Logic-based switching strategies have been proposed by many researchers
as the way to implement adaptive control [6.23]. The approach uses an
identifier-based parameterized controller consisting of two parameter-
dependent subsystems, one an identifier whose primary function is to
generate an output estimation error, and the other an internal controller
[6.23]. The control signal fed back to the process is based on a current esti­
mate of the process model. In general, such estimates are selected from a
suitably defined admissible model set.

The overall strategy is based on the concept of "cyclic switching." Cyclic
switching is employed with or without process excitation. K. S. Narendra
and J. Balakrishnan [6.24] considered an architecture with N identification
models operating in parallel. Corresponding to each model is a parame-
terized controller. At every instant, one of the models is selected by a
switching rule, and the corresponding control input is used to control the

process. Models are fixed or adaptive. The rationale for using fixed models
is to insure that there is at least one model with parameters sufficiently
close to those of the unknown process. The approach yields the desired
speed of adaptation, but requires the use of a significant number of mod-
els. If instead of switching models, the adaptation is performed through
model-parameters interpolation, the number of models used is drastically
reduced. Figure 6-21 shows a PID adaptive controller with model switch-
ing and interpolation of parameters.

Figure 6-21. Adaptive PID Controller with Model Switching and Interpolation

The heart of the design is a set of models. A model is composed of a
finite number of parameters. It is reasonable to assume that all models
have the same structure with m parameters. If we assign n values for every
parameter, we obtain N = n^m models.

This adaptive controller operates in the following way. The supervisor
detects changes in the process output y, in the process input u
coming from the PID controller, and in the feedforward input d. If the
changes on any input exceed a minimum level, it starts the evaluation of
models by computing for every model a squared error:

Ei(t) = (y(t) − Yi(t))² (6-26)


where:
y(t) = process output at time t
Yi(t) = output of the i-th model at time t
Ei(t) = squared error for model i at time t
E(t) = [E1(t),...,Ei(t),...,EN(t)] = squared error vector for models 1,...,N at time t
234 Advanced Control Unleashed

The norm (6-26) is assigned to every parameter value of model i, if that
parameter value is used in the evaluated model. If a specific parameter
value is not a part of the evaluated model, the parameter value is assigned
zero. Next, model i+1 is evaluated, and again the norm (6-26) is
computed for this model, assigned to every parameter value of
model i+1, and added to the previously assigned norm for every parameter
value. The process continues until all models are evaluated. As a result of
this evaluation, every parameter value has been assigned a sum of squared
errors from all models in which this specific parameter value has been
used. In one scan t, every parameter value pkl has an assigned norm, where
k = 1,2,...,m is the parameter index and l = 1,2,...,n is the index of the
values assigned for every parameter.

Epkl(t) = Σi=1..N γkl Ei(t)    (6-27)

where:
Epkl(t) = norm for parameter value pkl at scan t
N = number of models
γkl = 1 if parameter value pkl is used in model i, and γkl = 0 otherwise

The process is repeated in the next scan, and the sum of the squared errors for every parameter value is added to the sum accumulated for that parameter value in the previous scan. The process (adaptation cycle) continues for a declared number of scans (1 to M), or until there is enough excitation on the inputs, whichever condition is satisfied first. As a result of this procedure, every parameter value pkl has an assigned accumulated value of the squared errors over the period of evaluation:

sumEpkl = Epkl(1) + Epkl(2) + ... + Epkl(M)    (6-28)

At the end of the adaptation cycle the inverse of the sum is calculated for every parameter value pkl:

Fkl = 1 / sumEpkl    (6-29)
Then, for every parameter pk, an adaptive parameter value pk(a) is calculated as a weighted average of all values of this parameter:

pk(a) = fk1 pk1 + ... + fkl pkl + ... + fkn pkn    (6-30)

where:
fkl = Fkl / sumFk    (6-31)
sumFk = Fk1 + ... + Fkl + ... + Fkn    (6-32)
The calculated parameters define a new model set, with center parameter values pk(a), k = 1, ..., m, and a range of parameter changes as assumed in the design. The range of changes is defined as ±Δ% and should be represented by two or more extra parameter values. In other words, if we define pk(a) for the new model, then we have to define at least two extra values, pk(a) + Δ%pk(a)/100 and pk(a) - Δ%pk(a)/100, for the new model evaluation. Every parameter has defined lower and upper bounds for adaptation, and if pk(a) exceeds a bound it is set to the bound value. As soon as the model has been updated, controller redesign takes place based on the updated model parameters pk(a), k = 1, ..., m. Adaptation can work on the whole model, or on only the feedback or feedforward part of the model, that is, exactly the part which relates the output to inputs where the required minimum excitation level exists. Applying a first-order-plus-dead-time process model for both the feedback and feedforward loops gives the adaptive model structure for the PID loop shown in Figure 6-22.

Figure 6-22. Model Structure Used in an Adaptive PID Controller

Assuming three values for every parameter, for every input (u and d) the number of model-switching combinations is 3 x 3 x 3 = 27. If both inputs in Figure 6-22 are used for adaptation, the number of switching combinations increases to 27 x 27 = 729.

If, however, parameter evaluation is applied instead of model evaluation, only nine parameter values are adapted for the feedback loop and an additional nine for the feedforward loop; that is, the number of adapted parameter values grows proportionally, not exponentially as with pure model switching. This is the major advantage of the parameter-evaluation approach, and it is even more evident in multivariable adaptive configurations. A sketch of the procedure in code follows.
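As an illustration only, here is a minimal Python sketch of one adaptation cycle with parameter interpolation (Equations 6-26 through 6-32). The function name adapt_parameters and the user-supplied simulate routine are hypothetical; a real implementation would also handle excitation checks, parameter bounds, and re-centering of the model set as described above.

    import numpy as np
    from itertools import product

    def adapt_parameters(param_values, simulate, u, y):
        # param_values: list of m arrays, each holding the n candidate values
        #               assigned to one model parameter
        # simulate:     user-supplied function returning a model output
        #               trajectory for a parameter tuple and input record u
        # u, y:         process input and output records over the cycle
        m, n = len(param_values), len(param_values[0])
        E = np.zeros((m, n))            # accumulated norms, Eqs. 6-27 and 6-28
        for idx in product(range(n), repeat=m):     # all N = n**m models
            theta = [param_values[k][idx[k]] for k in range(m)]
            Yi = simulate(theta, u)                 # model output trajectory
            Ei = np.sum((np.asarray(y) - Yi) ** 2)  # Eq. 6-26, summed over scans
            for k in range(m):
                E[k, idx[k]] += Ei      # norm assigned to each value used
        F = 1.0 / (E + 1e-12)           # inverse of the sums, Eq. 6-29
        f = F / F.sum(axis=1, keepdims=True)        # weights, Eqs. 6-31 and 6-32
        # adaptive parameter values as weighted averages, Eq. 6-30
        return np.array([np.dot(f[k], param_values[k]) for k in range(m)])

Note that the sketch still simulates all n**m model combinations to score them; the saving of the interpolation approach is in the number of adapted values (m x n), not in the number of model evaluations.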

Discrete Fourier Transform Adaptation Technique


The Discrete Fourier Transform (DFT) is a basic operation used in many signal processing applications. The practical value of the DFT has been recognized in numerous areas of technology. Recently there has been increased interest in the practical implementation of the DFT for control system identification, as presented in [6.25].

There are several self-tuning systems for industrial controllers that apply frequency domain tuning, though they don't use the DFT. Hagglund and Astrom designed a controller that performs tuning from set point changes and from natural disturbances [6.26]. The specific tuning frequency is selected by using band-pass filters on the process input and output. The frequency of the filters is set from an on-demand tuner, which defines the ultimate period using the relay-oscillation technique. The tuner defines the process gain at the tuning frequency using a simplified recursive least squares estimator. The tuner can work from set point changes and from natural disturbances, and may include external excitation injected into the controller output or at the set point. A simplification of this design is achieved by using a discrete Fourier transform identifier (estimator) instead of a recursive least squares estimator [6.31]. The DFT tuner design is shown in Figure 6-23.

Figure 6-23. A DFT-based Tuner with Multi-frequency Estimator and Interpolator

DFT computations are applied to the input and output, and the subsequent transfer-function estimate is defined for every filtered frequency. The verification unit tests that the process gain and phase decrease monotonically as frequency increases. The interpolator defines the frequency and process gain at the point where the phase shift is about -π. From these parameters the ultimate period and ultimate gain are calculated directly and used for PID controller tuning and for setting up the tuning frequency for the next cycle. The required excitation level depends on the process gain around the critical frequency and on valve performance. Loops with significant valve hysteresis and stickiness require an excitation level 2 to 3 times higher than the same loops with good valve performance. Though tuning from set point changes is possible, in most cases tuning from injected excitation gives consistent results and is considered the basic mode of tuner operation.
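A minimal sketch of the identification step is given below, assuming the input/output records u and y were collected at a sample period dt and that the verification condition (monotonically decreasing gain and phase) holds. The function name and the excitation threshold are illustrative only.

    import numpy as np

    def ultimate_from_dft(u, y, dt):
        # DFT of the deviation signals
        U = np.fft.rfft(u - np.mean(u))
        Y = np.fft.rfft(y - np.mean(y))
        w = 2 * np.pi * np.fft.rfftfreq(len(u), d=dt)    # rad/time
        # keep only bins where the input carries significant excitation
        mask = np.abs(U) > 0.05 * np.abs(U).max()
        G = Y[mask] / U[mask]               # frequency response estimate
        gain = np.abs(G)
        phase = np.unwrap(np.angle(G))
        # interpolate to the point where the phase shift is about -pi
        wu = np.interp(-np.pi, phase[::-1], w[mask][::-1])
        gu = np.interp(wu, w[mask], gain)
        return 1.0 / gu, 2 * np.pi / wu     # ultimate gain Ku, ultimate period Tu

From Ku and Tu, PID settings follow from any of the frequency-domain tuning rules discussed earlier in this chapter, and wu sets the band-pass filter frequency for the next tuning cycle.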

References
1. Ziegler, J. G. and Nichols, N. B., "Optimum Settings for Automatic
Controllers," Transactions of the ASME, Vol. 64, Nov. 1942, pp. 759-768.
2. Bialkowski, W.L., "Plant Analysis, Design, and Tuning for Uniform
Manufacturing," Process/Industrial Instruments and Control Handbook,
McMillan, G.K., editor, McGraw-Hill, 1999, pp 10.50-10.55.
3. McMillan, Gregory K., Pocket Guide to Good Tuning, ISA, 2000.
4. Astrom, K. and Hagglund T., "PID Controllers: Theory, Design, and Tuning,"
ISA, 1995.
5. Ziegler, J.G. and Nichols, N. B., "Optimum Settings for Automatic
Controllers," Transactions of the ASME, Vol. 115, June 1993, pp. 220-222.
6. Bialkowski, W.L. and Haggman, Brian, "Quarter Amplitude Damping
Method Is No Longer the Industry Standard," American Papermaker, March
1992.
7. Astrom, K. J. and Hagglund, T., "Automatic Tuning of PID controllers," ISA,
1988.
8. Astrom, K. J., and Hagglund, T.,"A Frequency Domain Method for Automatic
Tuning of Simple Feedback Loops," IEEE 23rd Conference on Decision and
Control, Las Vegas, Dec. 1986.
9. Astrom, K. J., and Hagglund, T., "Automatic Tuning of Simple Regulators
with Specifications on Phase and Amplitude Margins," Automatica, 20,1986,
pp. 665-651.
10. McMillan, Gregory K., Wojsznis, Willy K., and Meyer, Ken, "Easy Tuner for
DCS," ISA Conference, Chicago 1993.
11. Chien, I-Lung and Fruehauf, P. S. "Consider IMC Tuning to Improve
Controller Performance," Chemical Engineering Progress, October 1990.
12. Bialkowski, W. L., "Control Engineering Course Material", Entech Seminar
1991, Toronto, Canada.
13. Wojsznis, W. K. and Blevins, T. L., "System and Method for Automatically
Tuning a Process Controller," US patent No. 08/070 090.
14. Shinskey, F. G., "Putting Controllers to the Test," Chemical Engineering,
December 1990.
15. Ott, Mike and Wojsznis, Willy, "Auto-Tuning: From Ziegler-Nichols to Model
Based Rules," Instrumentation and Control, ISA/95 Conference, New
Orleans, 1995.
16. Yu, Cheng-Ching, Autotuning of PID Controllers, Springer, 1998.

17. Wojsznis, W.K., Blevins, T. L., and Thiele, D., "Neural Network Assisted
Control Loop Tuner," IEEE Conference on Control Applications, Hawaii, July
1999.
18. Gudaz, John and Zhang, Yan, "Robustness Based Loop Tuning," ISA
Conference, Houston, September 2001.
19. Kiong, Tan Kok, Quing-Guo, Wang, Chieh, Hang Chang and Hagglund, Tore,
"Advances in PID Control," Springer, 1999.
20. Wojsznis, Willy K., Borders, Guy T. Jr., and McMillan, Gregory K., "Flexible
Gain Scheduler," ISA Transactions 33 (1994) 35-41, Elsevier.
21. Maršik, J. and Strejc, V., "Application of Identification-free Algorithms for
Adaptive Control," Automatica, Vol. 25, No. 2, 1989, pp. 273-277.
22. Wojsznis, Willy K., Gudaz, John, and Blevins, Terrence L., "Adaptive Model
Free PID Controller," Fifth SIAM Conference on Control Applications, San
Diego, Ca, July 2001.
23. Morse, A. S., Pait, F. M., and Weller, S. R., "Logic-Based Switching Strategies for
Self-Adjusting Control," 33rd IEEE Conference on Decision and Control,
Workshop Number 5, Lake Buena Vista, Florida, USA, December 1994.
24. Narendra, Kumpati S. and Balakrishnan, Jeyendran, "Adaptive Control Using
Multiple Models," IEEE Transactions on Automatic Control, Vol. 42, No. 2,
February 1997, pp. 177-187.
25. Lamaire, Richard O., Valavani, Lena, Athans, Michael, and Stein, Gunter, "A
Frequency-domain Estimator for Use in Adaptive Control Systems,"
Automatica, Vol. 27, No. 1, 1991, pp. 23-38.
26. Hagglund, Tore and Astrom, Karl Johan, "Industrial Adaptive Controllers
Based on Frequency Response Techniques," Automatica, Vol. 27, No. 4, 1991,
pp. 599-609.
27. Safonov, M. G. and Tsao, T. C., "The Unfalsified Control Concept and
Learning," IEEE Transactions on Automatic Control, Vol. 42, No. 6, June 1997, pp.
843-847.
28. Jun, M., and Safonov, M.G., "Automatic PID Tuning: An Application of
Unfalsified Control," Proceedings of the 1999 IEEE International Symposium on
Computer Aided Control System Design, Hawaii, USA, August 1999.
29. "Emerson Process Management auto tuning," www.easydeltav.com.
30. McMillan, Gregory K., Tuning and Control Loop Performance — A Practitioner's
Guide, 3rd Edition, ISA, 1994.
31. Wojsznis, Willy K., "Discrete Fourier Transform Based Self-Tuning," Advances
in Instrumentation and Control, ISA/96 Conference, Chicago, October, 1996.
7
Fuzzy Logic Control

Practice
Overview
In considering control applications for specific processes, Fuzzy Logic Control (FLC) gains an advantage when developing a process model is difficult or impossible. Another advantage of FLC is its easy and natural use of the existing experience of human operators in process control and optimization. Such terms as "smooth operation," "good performance," and "good quality" are commonly heard from people involved in building and tuning FLC.

Fuzzy logic was developed in the 1960's and was first applied in the early 1970's to the control of a cement kiln [7.3]. Recent applications of FLC have spread over various areas of automatic control, mostly as a complement to existing classical techniques. In some areas, FLC has become the dominant control technology, primarily in embedded controllers. Some examples of extremely successful fuzzy logic applications [7.4] are:

• Electrode positioning and power control in a high-power electrical arc furnace
• Waste incineration plant
• Carbon testing equipment
• Ore crushing mill
• Tension control on paper ribbon, film, steel in a rolling mill or cable
winders
• Induction motor controllers

Opportunity Assessment
The greatest advantage of fuzzy logic control is evident when it is applied
to processes with insignificant dead time in order to accelerate the speed
of control while retaining a high-quality level of control. In the applica-
tions where set point changes and severe disturbances are common, and
the PID controller does not meet expectations, fuzzy logic control may be
considered as an alternative.

Examples

Temperature Control
A fuzzy logic controller was applied to a temperature control loop on a chilled water refrigeration machine [7.14]. The temperature controller was frequently exposed to large unmeasured load disturbances and large set point changes, particularly on startup. On startup, the controller was required to rapidly bring the water temperature down to set point without significant undershoot. Significant undershoot could lead to chiller freezing or compressor shutdown due to discharge pressure interlocks. The conventional PID controller was replaced with a fuzzy logic-based controller and tuned per the vendor recommendations. The PID controller response was relatively rapid, but undershoot was significant. The fuzzy logic-based controller was able to provide the rapid response to set point changes with minimal overshoot. Use of the fuzzy logic-based controller greatly improved the chiller operation.

Moisture Control
Another example of the application of fuzzy logic is the automation of a drying oven in a continuous dyeing range for solid-color tufted carpeting. The installed fuzzy logic controller has eliminated carpet overheating due to overshoot during temperature set point changes. Tighter control and other oven improvements have helped boost average line speed by 15%. Reprocessing and scrap costs have been reduced, providing an investment payback of only four months. And finally, the consistency of color and density across the carpet width has been improved, eliminating customer complaints of poor matching at abutted sections. In addition, the operation could be shifted to a higher average temperature without overheating the product. This is possible because the automation is more precise and more high-quality information is available to the operator, producing better decision making.

Operator information was another factor in increasing line speed. Overall, gas consumption per yard of product produced has decreased. The higher line speed from the oven modifications, together with certain other changes made to other parts of the line, has cut manufacturing cost by about 20% per square yard for an average product. The greater oven capacity also permitted the mill to perform the in-house dyeing of some heavy solid-color products, a task previously not possible because the line was sized for light carpet and was too slow for the heavier grades.

All 14 of the oven's half-zones were initially controlled by PID loops. Excessive temperature overshoots often occurred in Zone 1 when a large increase in heat was required for transitioning from a light to a heavy carpet. A specific processing profile exists for every carpet style, and in some cases a temperature increase of as much as 100°F is required. Zone 1 has the largest burners, performs more than its share of drying, and is the most difficult to modulate. When overshoots occurred, the oven had to be manually slowed for 10 minutes or more.

Auto-tuned PID smoothed the loops running the last six zones but failed to bring Zone 1 totally under control. Auto-tuned fuzzy logic loops for the Zone 1 half-zones delivered fast and smooth step responses. In effect, scrap was substantially reduced at transition points and line throughput was boosted.

This example illustrates selected control problems successfully solved by fuzzy logic control. In some cases, simple fuzzy logic control may provide improvement even over a well-tuned PID control loop.

Application

General Procedure
Despite the many potential advantages of fuzzy logic, developing and
tuning a fuzzy logic controller for an industrial application is always a
challenge. It may be desirable to use fuzzy logic technology to build a
generic FLC that is designed to handle classes of control problems, as
opposed to building an FLC for a specific application.

One such generic class of control problem would be to mirror the PID controller for applications where the FLC can provide better control performance. By comparing the Integrated Absolute Error (IAE) of the FLC controller with the IAE of the standard PID controller, it was found that the fuzzy controller could, in some cases, deliver up to a 20% improvement in performance [7.14].

On the implementation side, the application of this generic FLC controller is very similar to the implementation of the PID controller. Therefore, we refer readers to the Application Detail subsection of the Application section in Chapter 6, where the automatic tuning of a PID controller is discussed.

Rules of Thumb
Rule 7.1 — Fuzzy logic control excels at reducing the overshoot for processes
with a large lag time. For these types of processes, the non-linear feature of
fuzzy logic allows faster response without overshoot for both set point
and load-disturbance changes. Many of the successful applications have
been on temperature loops.

Rule 7.2 — To maximize maintainability, the implementation of fuzzy logic for feedback control should be embedded in a user's control system so it can use an auto tuner, standard function blocks, and a familiar interface. Manufacturers of heater controls now offer stand-alone fuzzy-logic temperature controllers, but their life-cycle cost is higher because of special setup, tuning, parts, and repair requirements.

Rule 7.3 — Fuzzy logic can be used to compensate for a decreasing process gain. For pH control systems with set points on the steepest portion of the titration curve, the higher control action for larger errors helps compensate for the decrease in process gain as you move away from the set point.

Guided Tour
This tour illustrates the potential ease of use and convenience of an integrated interface that is possible with fuzzy logic control embedded in an industrial control system. The following areas are addressed:

• Fuzzy logic configuration
• Automated tuning
• Example of closed loop response

A version of fuzzy logic control is included in many modern control systems. Even some stand-alone digital controllers for temperature control are based on fuzzy logic control. Often the implementation of fuzzy logic control in these systems is limited in scope to make it easier to engineer and commission the control. The rules and membership functions are predefined in some implementations to meet a specific application requirement. Processes dominated by lag, where the control must respond quickly to set point changes and load disturbances with minimal overshoot, are the best candidates for fuzzy logic.

To allow consistent implementation of control applications, fuzzy logic control is implemented as a function block. The fuzzy rules and membership functions are predefined except for the scaling of the membership functions for error, change in error, and change in output. Thus the fuzzy block is used as a replacement for the PID block, as illustrated in Figure 7-1.

Figure 7-1. Fuzzy Logic Control Block for Control Applications

In this case, a tuning capability is provided to automatically set the scaling of the fuzzy logic control block. The tuner interface is the same as that provided for PID except that the tuning parameter is for the membership scaling, as shown in Figure 7-2.

Figure 7-2. Tuning Interface for Fuzzy Logic Control

Similar to the auto tuning for PID (see Chapter 6), the process dynamics are automatically established when Test is selected. The process is under two-state control while testing is done. The manipulated parameter is changed by the specified step size from its initial starting value. Based on the process dynamics, the recommended membership scaling is calculated automatically.

If the user wants to see the performance that this tuning will provide before using it in the process, he or she can select Simulate. The closed loop response to set point and load disturbances is shown. By changing the values of the recommended scaling factors, the new response is displayed, as illustrated in Figure 7-3.

Figure 7-3. Fuzzy Logic Response for Recommended Tuning

By selecting Update, the recommended tuning is transferred to the fuzzy logic block.

Since fuzzy logic control is non-linear, it is not possible to show the robustness of the control in terms of phase and gain margin.

Theory

Introduction to Fuzzy Logic Control


Fuzzy logic control (FLC) can perform the same functions as classical linear control. Fuzzy logic can control simple feedback loops, interactive loops, or multivariable processes. In many cases fuzzy logic control can provide better performance, smaller process variability, or better robustness. The difference comes from calculations based on fuzzy logic. By applying fuzzy logic calculations, it is possible to design a non-linear controller without detailed knowledge of the operating point nonlinearity, as would be required for a classical control design.

Fuzzy logic has its roots in classical two-valued "True" and "False" logic. Adding "fuzziness" makes it possible to better address real-life situations. To illustrate this connection and analogy, let's review a few basics of classical logic. Classical logic operates on crisp sets. A simple example, not actually used in industrial process control but illustrating the concept, is a set in which temperature values above 33°C are "hot," and thus values below or equal to 33°C are "cold"; or a person 6 feet tall belongs to the set "tall," but someone below that value by even a fraction of an inch is "short." Is it reasonable to call a person of 5 feet 11.99 inches "short"? Applying common sense, apparently it is not. This type of doubt stimulated the search for more flexible expressions of such situations and led to the development of fuzzy sets and multi-valued logic.

In fuzzy sets, bounds are defined in a more flexible way. The set "hot," for example, is defined as not only values above 33°C, but also any value below 33°C that is "hot" to some degree, until it is fully "hot" at 33°C and above. The "degree of hotness" at 27°C would of course be greater than at 26°C, for example. To define the degree of "hotness," fuzzy logic applies a membership function. The membership function defines the degree of membership in a fuzzy set. The above discussion is summarized in Table 7-1 and Figure 7-4.

Table 7-1. Classical Logic vs. Fuzzy Logic

                   Sets         Membership                   Logic Values                 Operations
Classical logic    Crisp sets   Unambiguous                  "True," "False" or 1, 0      Two-valued logic
Fuzzy logic        Fuzzy sets   Various degrees of           Any value in the range       Multi-valued logic
                                membership, defined by       [0, 1], defined by the
                                the membership function      membership function

In summary, we can say that in fuzzy logic the truth of any statement is a
matter of degree. We can observe as well that classical logic is a special
case of fuzzy logic with a step membership function.

After defining fuzzy logic values, the next step would be to define fuzzy
logic operations. Fuzzy logic operates on many values and operations in
fuzzy logic are based on multi-valued logic principles. Using again the
comparison with classical logic, we introduce the basics of multi-valued logic operations in Table 7-2. Multi-valued logic uses the same operators as classical logic, with the difference that the operations are performed on real values in the range [0, 1], instead of on 0 or 1 only.

Figure 7-4. Logic Values in Classical Logic vs Fuzzy Logic

Fuzzy logic operation can be illustrated with a simple example. Assume John is 6 feet 1 inch tall and today's temperature is 30°C. We assign 30°C a membership function value of 0.8. Applying classical logic to the question "Is it true that John is tall and today is hot?" we would answer "False." The fuzzy logic answer would be "true with degree .8." Table 7-2 summarizes the operations.

Table 7-2. Two-Valued Logic Operations vs Multi-Valued Logic Operations

Operation            "A" AND "B"                "A" OR "B"                 NOT "A"
                     min(A, B)                  max(A, B)                  Complement to 1
Two-valued logic     min(0,0)=0  min(0,1)=0     max(0,0)=0  max(0,1)=1     NOT(0)=1-0=1
                     min(1,0)=0  min(1,1)=1     max(1,0)=1  max(1,1)=1     NOT(1)=1-1=0
Multi-valued logic   Minimum of two values      Maximum of two values      NOT(A)=1-A, A ≤ 1
                     A, B in the range 0-1      A, B in the range 0-1
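Expressed in code, the multi-valued operations of Table 7-2 are one-liners. The sketch below (Python, names illustrative) evaluates the John example, with M(tall) = 1 since John is over 6 feet and M(hot) = 0.8.

    def fuzzy_and(a, b):
        return min(a, b)      # multi-valued AND: minimum of the degrees

    def fuzzy_or(a, b):
        return max(a, b)      # multi-valued OR: maximum of the degrees

    def fuzzy_not(a):
        return 1.0 - a        # multi-valued NOT: complement to 1

    # "Is it true that John is tall AND today is hot?"
    print(fuzzy_and(1.0, 0.8))   # -> 0.8, i.e., true with degree .8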

In the next sections we learn more about fuzzy logic control by developing several fuzzy logic controllers. As a historical note, it is worthwhile to honor Lotfi Zadeh, who developed fuzzy logic in the early 1960's, and Jan Lukasiewicz, who pioneered multi-valued logic in the 1930's.

Building a Fuzzy Logic Controller


A typical fuzzy logic controller performs three basic operations (see Figure 7-5):

1. Translating input signals to fuzzy logic values, or fuzzification.
2. Applying multi-valued logic inference rules to develop the control action.
3. Re-translating the fuzzy logic values to continuous signals, or defuzzification.

Figure 7-5. A Typical Fuzzy Logic Controller Diagram

Fuzzification
Fuzzy sets and membership functions are used to translate controller input
values into fuzzy values.

At first we decide how we want to logically quantify control inputs, by defining fuzzy sets. The typical fuzzy sets for input signals are Negative Large, Negative Medium, Negative Small, Zero, Positive Small, Positive Medium, and Positive Large. For simpler designs the fuzzy sets are Positive and Negative, or Low and High.

The next step is to define membership functions for every fuzzy set, to establish boundaries for the set. Membership functions are defined analytically. Examples of simple membership functions for fuzzy sets Low and High are given by Formulas (7-1) and (7-2) and presented graphically in Figure 7-6. Level is used as the control input. Prior to applying fuzzification, the control input is normalized to the range 0-1.

M(High) = 1.0 * Level,    0 ≤ Level ≤ 1    (7-1)

M(Low) = 1.0 - 1.0 * Level,    0 ≤ Level ≤ 1    (7-2)


Formula 7-1 defines the membership function for fuzzy set High, and Formula 7-2 for fuzzy set Low. A level equal to 70%, or .7, gives a degree of membership of .7 for the High membership function and .3 for the Low membership function. This can be expressed as

M(Low) = .3; M(High) = .7

Figure 7-6. Membership Functions for Fuzzy Sets Low and High

Rather than requiring full membership in one state or the other, fuzzy
logic allows for a tank's level, for example, to be somewhat high (degree of
membership=.7) and somewhat low (degree of membership = .3) at the
same time. In other words, a statement is typically true only to a certain
extent or degree.

Membership functions are of various types: linear, as in the above example; parabolic; exponential; or normal distribution. Linear membership functions are used most often due to their simplicity and satisfactory flexibility in most applications.
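A minimal sketch of the fuzzification step for Formulas (7-1) and (7-2) follows; the function names are illustrative.

    def m_high(level):
        # Eq. (7-1): membership in fuzzy set High, level normalized to [0, 1]
        return max(0.0, min(1.0, level))

    def m_low(level):
        # Eq. (7-2): membership in fuzzy set Low
        return 1.0 - m_high(level)

    level = 0.7   # a tank level of 70%, normalized
    print(m_low(level), m_high(level))   # -> 0.3 0.7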

Fuzzy Logic Inference Rules


In developing fuzzy inference rules, an understanding of process behavior is of primary importance. Using process knowledge, fuzzifying the inputs and outputs and applying IF-THEN reasoning leads to the control inference rules. Finally, applying fuzzy logic operations develops the controller response.

Consider an example for developing a fuzzy logic controller to control the level in a tank through the outlet control valve, measuring inlet flow and tank level, as shown in Figure 7-7.

In this example, the membership functions describing the tank's level, the
tank's inlet flow and the outlet valve might be defined as shown in Figure
7-8.

Figure 7-7. Tank Level Controlled by a Fuzzy Logic Controller

Figure 7-8. Tank Level Membership Functions

The inference rules for controlling the tank level are defined as in Table
7-3.

Table 7-3. Tank Level Control Rules

Rule 1: IF level is Low AND inlet flow is Normal THEN make outlet valve Closed.
Rule 2: IF level is High AND inlet flow is Normal THEN make outlet valve Open.
Rule 3: IF level is High AND inlet flow is Low THEN make outlet valve Normal.

After applying the degrees of membership for both level and inlet flow to the fuzzy rules for the tank outlet valve, as shown in Figure 7-9, we get fuzzified output values, that is, values for the membership functions Closed, Normal, and Open. Normal flow in this example is a flow that is not too low.

Figure 7-9. Fuzzy Logic Controller Operation

As you may notice from Figure 7-9, when applying the fuzzy logic AND function to the input membership values, the output membership function value is the minimum of the two input membership values. The Valve Closed membership function value, for example, is equal to the Level Low membership function value, which is smaller than the Flow Normal value. This follows from inference Rule 1:

M(ValveClosed) = min(M(LevelLow),M(FlowNormal)) = M(LevelLow)

Defuzzification
Defuzzification translates fuzzy logic values into real control output values. The most common method of defuzzification is to calculate a weighted average of all the activated output membership functions. Referring to Figure 7-9, the weighted average of the membership functions Closed, Normal, and Open determines the outlet valve position.
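The three steps can be pulled together in a short sketch of the tank example. The linear membership shapes and the singleton valve positions used below are assumptions made for illustration; the actual shapes would come from Figure 7-8 and the application.

    def tank_flc(level, inflow):
        # inputs normalized to [0, 1]
        # fuzzification with simple linear membership functions (assumed)
        level_low, level_high = 1.0 - level, level
        flow_low, flow_normal = 1.0 - inflow, inflow
        # rule inference per Table 7-3 (fuzzy AND = min)
        valve_closed = min(level_low, flow_normal)    # Rule 1
        valve_open   = min(level_high, flow_normal)   # Rule 2
        valve_normal = min(level_high, flow_low)      # Rule 3
        # defuzzification: weighted average of singleton valve positions
        pos = {"closed": 0.0, "normal": 0.5, "open": 1.0}
        num = (valve_closed * pos["closed"] + valve_normal * pos["normal"]
               + valve_open * pos["open"])
        den = valve_closed + valve_normal + valve_open
        return num / den if den > 0 else pos["normal"]

    print(tank_flc(level=0.3, inflow=0.8))   # low level: valve mostly closed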

Applying the three-step development procedure for a fuzzy logic controller, it is possible to build a significantly more complex controller, including a multivariable interactive controller. The highest expected control performance, however, will require adjusting membership functions and scaling input/output. There are fuzzy logic tools available which simplify the effort, such as the Matlab® fuzzy logic toolbox.

For someone with no fuzzy logic control experience, a pre-built fuzzy logic controller might be the most effective way to start. In the next two sections we will discuss the design and use of a fuzzy logic controller that is functionally equivalent to a PID controller and is available as a ready-to-use product.

Fuzzy Logic PID Controller


Fuzzy logic PID control (FPID) provides an advanced alternative to standard PID control. FPID provides the control capability of PID with the added benefit of superior response for both set point changes and external load disturbances. By using fuzzy logic, the FPID minimizes overshoot and provides good load-disturbance rejection. The scaling factors of the FPID are automatically established using an auto tuner [7.7].

A FPID operates by using predefined fuzzy rules, membership functions, and adjustable parameters known as scaling factors. The FPID translates the loop's absolute values into fuzzy values by calculating first the scaled error (e) and scaled change in error (Δe), and then the degree of membership in each of the predefined membership functions. It then applies the fuzzy rules and, finally, retranslates the values into a control move.

In a scalable control system, the FPID is available as a function block. This function block supports mode control, signal scaling and limiting, feedforward control, override tracking, alarm limit detection, and signal status propagation.

The nonlinearity built into the FLC function block reduces overshoot and settling time, achieving tighter control of the process loop. Specifically, the FLC function block treats small control errors differently from large control errors and penalizes large overshoots more severely. It also severely penalizes large changes in the error, helping to reduce oscillations.

Membership Functions
The FLC function block uses three membership functions: the input signals are error and change in error, and the output signal is the change in control action. The relations among these three variables represent a nonlinear controller. The nonlinearity results from a translation of process variables to a fuzzy set (fuzzification), rule inference, and retranslation of the fuzzy set to a continuous signal (defuzzification).

The two membership functions for error, change in error, and change in output are Negative (N) and Positive (P). The membership function scalings (Se and SΔe) and the error and change in error values, respectively, determine the degree of membership. First, the error in percent is calculated relative to the set point and then scaled to the range [-1, 1]:

Es = (PV - SP) / Se    (7-3)

Similar calculations are done for change in error:

ΔEs = [(PV(t) - SP(t)) - (PV(t-1) - SP(t-1))] / SΔe,    -1 ≤ ΔEs ≤ 1    (7-4)

Figure 7-10. Error Membership Functions

Degrees of membership for positive error Me(P) and negative error Me(N) are calculated by using the following expressions:

Me(P) = 0.5 + 0.5Es
Me(N) = 0.5 - 0.5Es    (7-5)

Membership functions for change in error are calculated in a similar way:

MΔe(P) = 0.5 + 0.5ΔEs
MΔe(N) = 0.5 - 0.5ΔEs    (7-6)

Figure 7-11. Change in Error Membership Functions



The change in output membership functions are called singletons. They represent fuzzy sets with a single point and a membership function of one. The membership scaling (SΔu) determines the magnitude of the output change for an absolute value of scaled error and change in error equal to 1 with the same sign.

Figure 7-12. Change in Output Singleton Membership Function

Fuzzy Logic Inference Rules


There are four fuzzy logic IF-THEN rules that the FLC function block uses for a reverse-acting fuzzy logic PID controller. The rules are listed in Table 7-4. The rules are also written in matrix form in Table 7-5, where the matrix cells represent the fuzzy output membership functions.

Table 7-4. Fuzzy Logic Rules for Fuzzy PID Controller

Rule 1: IF error is N AND change in error is N THEN make change in output POSITIVE
Rule 2: IF error is N AND change in error is P THEN make change in output ZERO
Rule 3: IF error is P AND change in error is N THEN make change in output ZERO
Rule 4: IF error is P AND change in error is P THEN make change in output NEGATIVE

Table 7-5. Fuzzy Logic Rules for Fuzzy PID Controller in Matrix Form

Error \ Change in Error    Negative    Positive
Negative                   Positive    Zero
Positive                   Zero        Negative



The membership functions for the controller output are calculated by using Tables 7-4 and 7-5 and applying the multi-valued logic operation rules:

MΔu(P) = min(Me(N), MΔe(N))
MΔu(N) = min(Me(P), MΔe(P))
MΔu(Z) = max{min(Me(P), MΔe(N)), min(Me(N), MΔe(P))}    (7-7)

where MΔu(P), MΔu(N), and MΔu(Z) are the Positive, Negative, and Zero membership functions for the change in output.

Defuzzification
Defuzzification uses the center-of-gravity method to determine the fuzzy scaled change in the output Δus:

Δus = [MΔu(P) - MΔu(N)] / [MΔu(P) + MΔu(Z) + MΔu(N)]    (7-8)

The change in output is calculated as:

Δu = Δus SΔu    (7-9)
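One scan of the fuzzy PI calculation, Equations (7-3) through (7-9), can be sketched as follows; the function name is illustrative, and derivative action, mode handling, and output limiting are omitted.

    def fpid_step(pv, sp, pv_prev, sp_prev, Se, Sde, Sdu):
        clip = lambda x: max(-1.0, min(1.0, x))
        es = clip((pv - sp) / Se)                              # Eq. (7-3)
        des = clip(((pv - sp) - (pv_prev - sp_prev)) / Sde)    # Eq. (7-4)
        Me_P,  Me_N  = 0.5 + 0.5 * es,  0.5 - 0.5 * es         # Eq. (7-5)
        Mde_P, Mde_N = 0.5 + 0.5 * des, 0.5 - 0.5 * des        # Eq. (7-6)
        # rule inference, Table 7-4 and Eq. (7-7)
        Mdu_P = min(Me_N, Mde_N)
        Mdu_N = min(Me_P, Mde_P)
        Mdu_Z = max(min(Me_P, Mde_N), min(Me_N, Mde_P))
        # center-of-gravity defuzzification, Eqs. (7-8) and (7-9)
        dus = (Mdu_P - Mdu_N) / (Mdu_P + Mdu_Z + Mdu_N)
        return dus * Sdu       # change in controller output

The denominator in Equation (7-8) cannot reach zero here, since the clipped memberships always leave at least one rule partially active.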

Fuzzy Logic Control Nonlinear PI Relationship


The two membership functions associated with each input variable and the one membership function for the output variable make the FLC function block nonlinear in its response.

For regions where the absolute error is greater than the error scaling factor, or the absolute change in error is greater than the change in error scaling factor, the values for error and change in error are clipped at the error scaling factor and change in error scaling factor, respectively. Figure 7-13 shows an example of the FLC gain curve that illustrates the change in controller gain for the FPID controller when e = Δe.

The straight line through the origin shows the linear relationship of a conventional PI controller. As the error and change in error increase, the change in output of a conventional PI controller increases linearly. Note that the gain of the FLC function block is similar to the gain of the PI controller when the error and change in error are small. The gain of the FLC function block increases gradually as the error and change in error increase.

Figure 7-13. FLC Function Block's Nonlinear Relationship

The nonlinearity built into the FLC function block reduces overshoot and settling time, achieving tighter control of the process loop. To help anticipate a rapid change in the process, derivative action is provided in the feedback path of the loop, as shown in Figure 7-14.

Figure 7-14. Fuzzy Logic Control with Derivative Action

The FLC function block treats small control errors differently from large
control errors and penalizes large overshoots and large changes in the
error, helping to reduce the oscillation.

Figure 7-15 shows how an FPID reacts to overshoot and oscillation. The FPID applies stronger control action prior to points B, D, and F, where overshoot is significant and increasing, as compared to the control action at points B, D, and F, where the change in error is small, or after points B, D, and F, where the change in error is negative. Similarly, at points A, C, and E, the control action is defined jointly by the minimum value of error and change in error.

Figure 7-15. Process Variable Oscillation Example

This type of nonlinearity allows the FPID to provide better control performance than standard or nonlinear PID control. Following S. Joe Qin [7.7], we compare the FPID with an error-squared controller having the gain Kesq defined in Shinskey [7.13]:

Kesq = Kp (L + (1 - L)|e|)    (7-10)


where L is an adjusting parameter representing a weighting factor
between the error-squared term and a linear term. The error-squared PID
controller also tends to apply stronger action to larger control errors. How-
ever, a few differences between the FPID controller and error-squared PID
controller are immediately noticed:

1. The error-squared PID controller applies stronger gain when the error is larger, while the FPID controller applies stronger gain when both the error and the change in the error are larger.
2. The error-squared PID controller gain is extremely large for a very large controller error. However, the FPID controller gain has an upper limit.
3. Some versions of the error-squared PID controller use a very small or zero value for L, which means the controller gain around the origin is very small or zero. However, the FPID gain is designed as in Equations (7-3) through (7-9) around the origin to give good performance.
A good way to represent FLC is a control surface. Figure 7-16 shows a control surface for a fuzzy PI controller as a 3-D plot of the change in controller output depending on error and change of error.

Figure 7-16. Control Surface of Fuzzy Logic Controller

This representation clearly shows the analogy between FLC and a non-linear controller. It is possible, in principle, to design any non-linear controller using a FLC control surface. The change in controller output would simply be defined by reading values off the 3-D plot when the error and change in error are known.

FPID and PID Relations


An auto tuner is used to establish the scaling factors (Se, SΔe, and SΔu). The auto tuning of FPID is possible because there are relationships between the PID controller tuning parameters (Gain, Reset) and the FPID scaling factors. For a small control error and a set point change less than a nominal value (ΔSP), the FPID scaling factors are related to the proportional gain (Kp) and reset (Ti) which would be used in a PI controller applied to control the same process.

SΔe = βΔSP    (7-11)

SΔu = 2SΔeKp    (7-12)

Se = TiSΔe/Δt    (7-13)

where:
SΔe = change in error scaling factor
Se = error scaling factor
SΔu = change in controller output scaling factor
ΔSP = SP change relative to a nominal change (1%)
β = a function of the process dead time (τd) and time constant (τ), limited to the range .2 to .5 as defined by Equation 7-14:

β = .2 + τd/τ,    .2 ≤ β ≤ .5    (7-14)
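A small sketch of the scaling-factor calculation, Equations (7-11) through (7-14), is shown below. Here dt is taken to be the controller scan period, which is an assumption; Δt is not defined explicitly in this excerpt.

    def fpid_scaling(Kp, Ti, dt, dead_time, time_constant, dSP=1.0):
        beta = min(0.5, max(0.2, 0.2 + dead_time / time_constant))  # Eq. (7-14)
        Sde = beta * dSP           # Eq. (7-11)
        Sdu = 2.0 * Sde * Kp       # Eq. (7-12)
        Se = Ti * Sde / dt         # Eq. (7-13)
        return Se, Sde, Sdu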

Automation of Fuzzy Logic Controller Commissioning


The greatest advantage of fuzzy PID, as compared to more complex fuzzy logic controllers, is that this controller can be tuned automatically in quite a similar way to a typical PID controller. The automatic tuning applies the relay oscillation technique (see Chapter 6) and exploits the relationships between PID and fuzzy PID given in the preceding section.

The fuzzy PID tuning technique was developed by Qin and enhanced by Qin, Ott, and Wojsznis [7.8].

A specific distinction of fuzzy PID tuning is that it takes into consideration the error and change of error level on the controller input, and adjusts the scaling factors appropriately to avoid saturation of the membership function outputs. The formula SΔe = βΔSP calculates the scaling factor for change in error, dependent on a set point change. This value of the factor is used until PV comes to the SP, when the scaling factor for change in error is recalculated for ΔSP = 1 and the scaling factors for error and output are adjusted according to Formulas (7-12) and (7-13).

In a slightly modified approach, the ultimate gain and ultimate period obtained during relay oscillation tuning are used directly for calculating the scaling factors, without intermediate calculation of the PID controller parameters. The following formulas are applied:

Se = SΔeTu/2    (7-15)

SΔu = 1.2SΔeKu    (7-16)

where Tu and Ku are the ultimate period and ultimate gain of the loop.
The formulas provide good tuning over a dead time-to-time constant ratio range of .1 to .3. Tuning improvement over a wider range of dead time-to-time constant ratios is obtained by applying the non-linear estimators already discussed in Chapter 6, Automated Tuning.

References
1. Pedrycz, Witold, "Fuzzy Control and Fuzzy Systems," Second edition, John
Wiley and Sons, 1993.
2. Chen, Guanrong, and Trung Tat Pham, "Introduction to Fuzzy Sets, Fuzzy
Logic, and Fuzzy Control Systems," CRC Press, 2001.
3. Zadeh, L. A., "Fuzzy Sets," Information and Control, Vol. 8, 1965,
pp. 338-353.
4. Sugeno, M., "An Introductory Survey of Fuzzy Control," Information Sciences,
vol. 38, 1985 pp. 59-83.
5. Mamdani, E. H., "Application of Fuzzy Algorithms for Control of Simple
Dynamic Plant," Proceedings of the Institution of Electrical Engineers
(London), Vol. 121, No. 12, Dec. 1974, pp. 1585-1588.
6. Shaw, I. S., Fuzzy Control of Industrial Systems, Kluwer Academic Publishers,
Boston, 1998.
7. Qin, S. Joe, "Auto-Tuned Fuzzy Logic Control," Proceedings of American
Control Conference, Baltimore, 1994, Vol. 3, pp. 2465-2469.
8. Qin, S. J., Ott, M., and Wojsznis, W., "Method of Adapting and Applying
Control Parameters in Non-Linear Process Controllers," US Patent No.
5,748,467, 1998.
9. Ying, Hao, Siler, William, and Buckley, James, "Fuzzy Control Theory: A
Nonlinear Case," Automatica, vol.26, no. 3, 1990.
10. Ling, Cheng and Edgar, Thomas F., "The Tuning of Fuzzy Heuristic
Controllers," Proceedings of American Control Conference, Chicago, 1992, pp.
2284-2290.
11. Astrom, K.J., and Hagglund, T., "PID Controllers: Theory, Design, and
Tuning," ISA 1995.
12. Wojsznis, Willy, "An Alternative Fuzzy Logic Controller Design: A Simulation
Study," Proceedings of the 1994 International Control Engineering Conference,
Chicago, March 1994, pp. 159-167.
13. Shinskey, F. G., Process Control Systems: Application, Design, and
Adjustment, 3rd edition, McGraw-Hill, New York, 1988.
14. Colclazier, Jay D., "Fuzzy Logic - An Effective Alternative to PID Control,"
Advances in Instrumentation and Control, ISA Conference, 1994.
15. DeltaV™ Home Page: http://www.easydeltav.com.
8
Properties Estimation

Practice
Overview
When critical measurements are slow to reflect process changes, or only lab analysis is available, parameter estimation can often be used to improve the performance of control and monitoring applications.

The old saying, "You can't control what you can't measure," has haunted many Advanced Process Control projects. Being able to either accurately measure or predict an important process parameter is an integral part of closing the loop, but one that is often afforded little attention. There are many approaches available for building properties estimators for closed-loop control. Clearly understanding the final control objective of the property estimator and matching this objective with the best estimation technique is often the key to a successful control application.

The basic concept of inferred properties estimation involves using the measurable inputs of a process and their relationship to some property, such that this property is inferred, or estimated, from the values of the measurable inputs. The final use of this inferred property may take one of many forms, including:

• Estimation of an unmeasured physical property
• Validation of an online analyzer
• Generation of a continuous signal with feedback from a sampled analyzer
• Continuous estimation of a laboratory quality measurement

• A steady-state predictor to remove delay and lag from a process measurement.

Along with the many possible uses of the estimator, there are just as many
techniques available to build the relationships and implement the
estimator online. The proper selection of the estimation technique is an
important step. It is often possible to evaluate the quality of generated
estimators using different estimation techniques. The estimation
techniques most common in today's advanced process control
applications are:

Dynamic Linear Estimator

• A simple linear regression technique for inferred properties with a small number of inputs and a feedback correction from an online measurement or lab analysis.
• Capable of easily modeling complex dynamic and transient behavior in the data, often important for closed-loop control.

Neural Networks

• An estimation technique applied in cases where the inferred property has a significant nonlinear relationship with the independent inputs.
• Capable of easily modeling dynamic and transient behavior even where first-principles modeling techniques have failed.

The selection of the final form and technique is largely a function of the
final use of the estimator. It is important to remember that many of the
techniques available today have slowly evolved into industrial practice
from the world of academia or statistics, and have only recently been
adapted to the real-time world for process control. This is a learning
process and many of the available techniques are still evolving to properly
handle the peculiarities associated with the online control world. This
section will explore the various uses of the properties estimator and
discuss the attributes of the various techniques. The objective is to
understand how these techniques work and where these estimators fit into
the APC project. This knowledge will then allow us to determine which
estimation technique best fits the task at hand.

The inclusion of a feedback element in an online properties estimator is what links theory to reality. Although our ultimate objective is to develop the perfect inferential model relating inferential inputs to some estimated quality, the reality is that our models are seldom perfect. The primary reasons for the breakdown of estimator accuracy are:

• Oversimplification of the model form selected
• The process drifting outside the data region used for modeling
• Unmeasured disturbances.

The introduction of a feedback element into an online properties estimator is a simple and effective way of handling these modeling shortcomings. The basic concept is to always anchor the estimator with a known measured feedback element such as an analyzer or lab measurement. The online estimator is compared to this feedback element, and a portion of the error is factored into the estimator as an update term. This technique ensures that the online estimator does not drift from the known feedback element. A small amount of feedback can convert a drifting estimator, unsuitable for control or estimation, into a quality predictor capable of being controlled in some higher-level application.
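As a minimal sketch (names illustrative), the feedback correction can be implemented as a bias term updated with a fraction of the estimator error whenever a new analyzer or lab value arrives:

    def update_estimate(model_output, measured, bias, gain=0.3):
        # fold a portion of the error into the bias so the estimate
        # stays anchored to the measured feedback element
        bias = bias + gain * (measured - (model_output + bias))
        return model_output + bias, bias

The filter gain (0.3 here) sets how much of the error is applied at each update; a small value gives the drift protection described above without passing analyzer noise straight into the estimate.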

And what if no measurement is available for feedback? This is seldom the case, since the feedback element is essential in developing the original estimator models. The challenge in introducing the feedback element is ensuring that the dynamics of the system are properly handled. Analyzers and lab measurements have delays and process lags, and are often not sampled at the same process point where we are estimating the property. A 5:00 A.M. walk through a chemicals plant or a refinery will often show the 6:00 A.M. sample bottles, sitting beside the operator's shack, already full, tagged as the 6:00 A.M. samples and ready to be submitted to the lab for analysis. This situation can present a problem if these lab samples are to be used as feedback in an estimator and the plant is not at steady state. Handling these real-life elements and their impact on incorporating feedback into an estimator is a challenging task.

HINT: Consistency logic blocks on the analyzer signal can often detect analyzer malfunctions and report these malfunctions for maintenance. During this period, the feedback correction is set to zero, and the process control system can continue operating on the inferential model. Predictive maintenance and increased controller robustness are two benefits.

Opportunity Assessment

Dynamic Linear Estimator


The general dynamic linear estimator is the simplest approach, but one that is often overlooked in favor of more complex and sophisticated techniques. The linear estimator can add value in many model-based control projects. The models are based upon simple time-series relationships that are often developed as part of a model predictive control project. To better understand the simplicity and uses of the dynamic linear estimator, we must break the implementation down into the estimator functions.

Although processes have many measurements, only the ones that correlate well with the property in question are used in the properties estimator. This correlation is expressed in terms of a model directly relating the inferential inputs to the estimated property. Moreover, if the estimator is based upon inputs with known causal effects, this will only increase the robustness of the estimator, since correlation relationships with no physical basis can break down for numerous reasons.

The linear estimator is based upon simple time series-based models. For a common distillation process, a simple linear dynamic model is sufficient to relate the tower temperature and pressure to the overhead composition. In contrast, estimating the properties of a high-purity distillation tower, operating across a wide range of purities, may require a more complex nonlinear model, such as a neural network, to achieve the level of accuracy required for closed-loop control.

The use of the simple linear estimator can provide many benefits within a
larger APC application. The benefits range from improved performance,
to better operator understanding, to predictive maintenance of analyzers.
The implementation is simple and the modeling techniques are the same
as the ones being used by the MPC system.

The general form of the dynamic linear estimator is shown in Figure 8-8.
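As an illustration only (the form in Figure 8-8 is not reproduced here), a dynamic linear estimator can be as simple as an ARX-style time-series model whose coefficients are regressed from plant data:

    def linear_estimate(y_hat_prev, temp, press, a, b1, b2, bias=0.0):
        # y_hat(t) = a*y_hat(t-1) + b1*T(t) + b2*P(t) + bias
        # a, b1, b2 come from regression against analyzer/lab data;
        # bias carries the feedback correction discussed earlier
        return a * y_hat_prev + b1 * temp + b2 * press + bias

Delayed inputs, T(t-d) and P(t-d), can be substituted where the identified models include dead time.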

Neural Network
Neural network technology exploded into the world of process control in the early 1990's. The technology showed great promise and for a while was seen as a technology that could provide models for most of our systems without the need to understand the fundamental behavior or relationships of our process. Unfortunately, the free lunch did not have much meat. People soon realized that blindly applying black-box modeling technology may provide a satisfactory fit for historical data but often leads to poor performance for use online in a closed-loop control application, or on time-variant processes.

Over time, people focused more on the true capabilities and power of neural networks: the ability to model nonlinear relationships in data without having to define the form of the nonlinearity. This, coupled with some fundamental knowledge and understanding of the process relationships and the careful selection of the variables and data required to generate the model, has proven to be a powerful estimation technique. Today, neural networks are being successfully applied in a number of online performance improvement projects, with the best-established applications being in the area of inferential estimators or soft sensors. The motivation to select neural-network technology over the simpler linear estimation techniques or Partial Least Squares (PLS) is the ability to accurately represent a nonlinear relationship in the data. The downside is that the neural-network estimator is more of a black-box approach, making it difficult to extract process understanding from the model; and it is only valid in the operating region over which the data was modeled.

So what is a neural network? Neural-network technology is simply a mathematical modeling technique used to describe the process behavior from a set of input/output process data. The plant data is collected and used to develop, or train, the neural network to be able to predict an output. The general structure of the neural network consists of a number of nodes, containing transfer functions, which connect the input variables to the output variables. These nodes are connected with a series of weights, which are regressed with historical data to minimize the output prediction error. The power of this technique lies in the fact that the general layout of the nodes can approximate any nonlinear relationship, without having to explicitly define, or understand, the form of the nonlinearity in the data. Thus, the neural-network path is a very powerful and charming approach to modeling in a world where we often have an abundance of data but a lack of process understanding or a lack of time to develop the understanding. But be careful; this advantage can also lead to poor results if the technology is not properly applied.
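In its simplest form, the prediction side of such a network is only a few lines. The sketch below shows a one-hidden-layer network; the tanh transfer function and the weight shapes are typical choices for illustration, not a specific product's design.

    import numpy as np

    def neural_net_predict(x, W1, b1, W2, b2):
        # x: vector of inferential inputs
        # W1, b1, W2, b2: weights and biases regressed ("trained")
        # offline against historical plant data
        hidden = np.tanh(W1 @ x + b1)   # hidden nodes with transfer functions
        return W2 @ hidden + b2         # linear output node

Nearly all of the modeling effort lies in choosing the inputs and training data; the online evaluation itself is trivial.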

A diagram of the general neural-network configuration is shown in Figure 8-22.

Example — Dynamic Linear Estimator

Using the function-block structure that is provided in most control systems, it is possible to easily create an effective dynamic estimator. The following are examples of typical applications.

Distillation Process
Consider the typical industrial distillation process shown in Figure 8-1. This simple distillation tower exhibits the components of the process estimator. The tower separates two components, having different boiling points, into an overhead distillate product and a bottoms product. The composition of the feed stream is measured and denoted by the instrument (AT 1-1). The overhead pressure (PT 1-2) is measured along with the temperature (TT 1-2) at the top of the tower, this temperature being a good indication of overhead product quality at a fixed pressure and feed composition. An overhead analyzer (AT 1-2), located downstream of the tower overhead accumulator, provides a measurement of the distillate stream composition at an update frequency of 15 minutes. A lab sample is also taken near the analyzer sample point twice a day. An APC application on such a process would certainly involve closed-loop control of the compositions or properties, and will therefore require some form of a properties estimator.

Figure 8-1. Typical Distillation Process

The tower is well instrumented, with various temperatures, pressures, and flow rates being measured by the instrumentation system and transmitted to the plant control system or process computer. These inputs in themselves do not represent the key values for control, but under certain conditions their values can be used to directly infer the composition of the tower product flows. The criteria for selecting the inferential inputs, and the characteristics that make them good candidates for an estimator, are:

• Fundamental relationship to the property being estimated
• Continuous, well-behaved, measurable signal
• Dynamically fast
• Highly correlated with the property or composition being estimated.

In the example process shown in Figure 8-1, the overhead temperature measurement and the overhead pressure meet the selection criteria above and are strong inferential inputs in the estimation of the overhead composition, or heavy key component, in the distillate flow.

Analyzer Validator
Online analyzers are very important elements in understanding what the
process is doing. These are all common things you can hear in any control
room:

"I don't believe the analyzer"... "The controller would run fine if only the
analyzer worked"... "The lab says high and the analyzer says low; take
your pick."

Operator confidence in these analyzers is often shaky at best. Implementing a linear estimator that uses measured process inputs to predict the
analyzer result can provide a tracking value to validate the analyzer value.
Also, if the analyzer breaks down, a good validator can often act as a
backup until the analyzer is back in service. The configuration of the ana-
lyzer validator, Y(t), and sample output for the distillation example in Fig-
ure 8-1 is shown in Figure 8-2a.

Figure 8-2a. Example Structure for an Analyzer Validator

The most common analyzer malfunctions, illustrated in Figure 8-2b, are easily detected with the linear estimator and include the following:

• Spiking behavior
• Frozen response
• Drifting

HINT: Integrating the analyzer feedback error term over time is a good indicator
of analyzer drift.
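
A minimal Python sketch of these checks follows; the thresholds and window length are illustrative assumptions, not recommended values:

import numpy as np

def validate_analyzer(analyzer, estimate, spike_limit=3.0, frozen_eps=1e-6,
                      drift_limit=5.0):
    # analyzer, estimate: equal-length histories of the analyzer reading
    # and the synchronized estimator output Y(t)
    analyzer = np.asarray(analyzer, dtype=float)
    error = analyzer - np.asarray(estimate, dtype=float)
    return {
        # Spiking: a jump in the error far larger than the estimator predicts
        "spiking": bool(np.any(np.abs(np.diff(error)) > spike_limit)),
        # Frozen response: the analyzer stops moving over the recent window
        "frozen": bool(np.all(np.abs(np.diff(analyzer[-10:])) < frozen_eps)),
        # Drifting: the integrated feedback error keeps growing (see HINT above)
        "drifting": bool(abs(np.sum(error)) > drift_limit),
    }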

Online Prediction
Many MPC applications are designed to control product qualities, with
these qualities being measured by online analyzers. Online analyzers
present many challenges within a closed-loop control application includ-
ing the following:
• Analyzer sample delay and lag
• Analyzer update times much longer than the controller frequency
• Calibration and maintenance requirements
• Analyzer drift
• Analyzer malfunction (spiking and freezing)

Figure 8-2b. Sample Output of Analyzer Validator

The development of a linear properties estimator designed to simply shadow the analyzer can help overcome some of the challenges presented
by online analyzers. For closed-loop control, the linear estimator will
replace the online analyzer signal as the actual control signal. It can
generate a continuous signal for control, detect analyzer drift and trap for
analyzer malfunctions, subsequently eliminating the analyzer feedback
component. Depending upon the quality of the models, the online
predictor can often run for several hours without feedback, allowing the
control application to remain on and continue to generate benefits.

There are two basic configurations of the online predictor. The first is to
use the measured process inputs to generate a continuous signal that will
track the analyzer. The second configuration is designed to take advantage
of the fact that the estimator inputs often contain information on how the
analyzer will respond long before the analyzer changes. Thus we can build
the predictor to remove the analyzer delay and lag. For example, the overhead temperature and pressure can predict the composition in the tower overheads now, with the analyzer simply providing feedback correction once that material is finally sampled by the downstream analyzer.

The response of the Online Predictor with removal of the analyzer delay
and lag is shown in Figure 8-2c.

Figure 8-2c. Example Response of an Online Predictor

One of our objectives is to achieve maximum utilization of our applications, and the addition of an analyzer validator will greatly enhance the
reliability of the closed-loop application.

HINT: The feedback of these online analyzers into our properties estimators is
important. However, we can make use of the online estimator technology to some-
times reduce the number of online analyzers required to properly control the pro-
cess. For example, consider a distillation train with several towers in series.
Analyzers are located on the overheads and bottoms of each tower to provide com-
positional values for control. With an online predictor, the light key in the bottoms
of the first column is estimated with tower bottoms temperature and pressure, with
feedback from the analyzer in the overheads of the second column. The linear esti-
mator can remove the delay time associated with only measuring the light key
impurity in the first tower bottoms with an analyzer located in the overheads of
the second tower. Reducing the number of analyzers will not only lower capital
expenditure but will also reduce maintenance costs.

Examples - Neural Networks

Soft Sensor Tracking an Online Analyzer


These are most often the simplest of the applications, since there is usually
a fairly high frequency of output data (analyzer data) to fit the model. Soft
sensors are most often applied within closed-loop control applications or as high-frequency open-loop estimators. Analyzer feedback is introduced as a bias term to correct for any estimator drift.

The normal challenge with this application of the neural network is in rig-
orously handling the dynamics. In many cases the settling time between
the process inputs and the analyzer reading is significantly longer than the
cycle time of the analyzer. If the goal is to provide a continuous signal of
an analyzer that may update only every 15-30 minutes, then the dynamics
between inputs and outputs must be handled properly.

Another role for this type of estimator is to handle the case when an ana-
lyzer must be taken offline for maintenance or calibration. Having an
online soft sensor can provide a level of redundancy for periods when an
analyzer is offline for maintenance. If the neural soft sensor is biased with
feedback from an online analyzer, tracking the integral of the bias term is a
good indication of total drift in the soft sensor, which can often point to an
analyzer that requires calibration.

Soft Sensor Tracking a Laboratory Measurement


Building continuous soft sensors from laboratory data has the same goal
as above: to provide continuous estimates of a product quality such that
more frequent control action is taken. The challenge with lab samples is to
accurately know when the lab sample was actually taken. Such informa-
tion is needed to properly match the lab analysis up with the time scale of
the input variables. Using historical data can add significant variance to
proper time alignment of the data, acting as noise in the training phase,
and thus requiring greater amounts of data for the same quality fit. Many
neural network offline design packages recognize this challenge and allow
a Gaussian confidence band to be assigned to the lab sample to reflect the
lack of certainty that the sample was taken at the exact time specified.

The dynamics must also be considered when fitting neural networks to laboratory data. It is recommended that process tests be used and that high-frequency
lab sampling be implemented in order to properly estimate the dynamics
relating the inputs to the lab sample.

Soft Sensor with No Feedback Element


This is one area where the neural soft sensor has made a significant differ-
ence. In this application, the model is built with process input data and
most often with a portable analytical device to measure some quality
parameter. Once the model is fit, the quality of the soft sensor is shown to
be as accurate as the analytical measuring device, allowing the soft sensor
to be certified for use without any correction or update from an online
measuring device. The most common form of this soft-sensor application
is in emissions monitoring with soft sensors for SOx and NOx emissions.
In these applications, routine calibration checks with portable measuring
devices must be performed to insure the soft-sensor application is not
drifting. The most common causes for this drifting are process design
changes or changes in the operating range.

In the early stages of deployment, neural networks were mistakenly sold on the basis that they could replace the need for analyzers. The SOx and
NOx soft sensors succeeded because they were small and well-bounded
applications, but even here the errors were probably greater than the users
realized. Without feedback the user is unaware of transient errors and drift
caused by a myriad of unmeasured and unknown dynamics, changes in
burner wear and fouling, damper backlash and stiction, and waste fuel
composition and phases. A key point is that even the best models will drift
and exhibit transient errors because process parameters are not constant
and many are not known. Online and lab analysis feedback correction is
essential. Therefore, the value of the soft sensor is not in replacing an ana-
lyzer but in enhancing its reliability and capability by eliminating delays
and noise.

Continuous Digester

Within the pulp and paper industry, a continuous digester is used to con-
vert wood chips to pulp by chemically removing the lignin that holds the
cellulose fibers together. A typical continuous digester is shown in Figure
8-3. The purpose of using a neural network for this thermo-chemical pro-
cess is in the calculation of the degree of delignification as indicated by
Kappa number. Normally, Kappa number is determined by lab analysis of
a grab sample of the pulp downstream of the digester. Online measure-
ments of Kappa are expensive and often unreliable and inaccurate. If
Kappa is determined by lab analysis of a grab sample, this will introduce
about an hour delay in the analysis. It should also be noted that the time
delay of the process is about 4 hours. In such a process, the ability to look into the future is a handy tool that allows corrections to be made immediately.

Figure 8-3 shows the continuous digester process in diagrammatic detail. A digester may have more than 40 measured inputs that potentially could
affect the Kappa analysis. Applying cross-correlation and sensitivity anal-
ysis, the number of inputs is reduced to the five used for the neural network: cold blow flow, outlet device load, main blow flow, throughput, and lower
extraction flow. The Kappa number estimated by the neural network
agrees well with the lab analysis, as shown in Figure 8-4.

Figure 8-3. Continuous Digester Process

Figure 8-4. Kappa Lab Analysis and Virtual Sensor Estimate of Kappa

Fermentation Process
Within the food and pharmaceutical industries, numerous products are
manufactured using a batch fermentation process. One of the challenges in
operating such a process is to know when a batch is complete and the
batch processing terminated. In many cases, it is possible to determine the
state of the fermentation by doing lab analysis of the batch after it is com-
plete. However, by using a history of lab analysis and information avail-
able during the batch, such as off-gas measurement, batch temperature,
and charge rate, it is possible to create a soft sensor to predict when a batch
is complete. A typical fermentation process is shown in Figure 8-5.

Figure 8-5. Fermentation Process for Which End of Batch Was Predicted
Using a Neural Network

Utilizing a soft sensor, the end of batch is accurately predicted. As shown in Figure 8-6, the prediction of time left in the batch corresponds to the
desired time to complete the batch.

Figure 8-6. Comparison of End of Batch Estimate to Actual Time at End of Batch

Application

General Procedure
Often within the process industry you can find examples where a critical
control measurement is not available in real time. This may be because an
analyzer for online measurement is not available or because the infrequent
values and delays associated with a sampling system limit the control
response. In other cases, the analyzer is not reliable enough to use in
closed-loop control. As a result, it may take longer for correction to be
made by an operator to compensate for changes in the feedstock or in the
process. Such delay will increase the amount of process variation and thus
may directly affect product quality and production rate.

In these instances, it is often possible to utilize the analyzer or off-line analysis in conjunction with upstream measurements to create a virtual
sensor that may provide a faster indication of process changes. Neural net-
works and dynamic estimators are two primary technologies commonly
used to create virtual sensors. A virtual sensor, more popularly known as a
'soft sensor,' is based on the use of software techniques to determine the
value of a process variable in contrast to a physical sensor that directly
measures the process variable's value. These sensors open a whole new
world of possibilities and options to address issues such as delay intro-
duced by sampled analyzers or the time required to perform an off-line
analysis. Ultimately, for closed-loop control, a fast-acting continuous sig-
nal is essential to performance. Process inputs supported by a feedback
signal can combine to deliver this signal in a soft sensor.

Soft sensors find widespread use in the process industry:

1. In situations where the mounting and use of a physical sensor is not practically feasible.
2. As inexpensive online predictors of lab test results to provide a
faster indication of change and thus help the operator take action
to reduce process variation.
3. As cross-checking sensors for their less reliable online physical
counterparts.
4. In applications where time is at a premium, demanding immediate
changes in process conditions.
5. As a 'what-if' analyzer to predict values into the future.

Some typical examples of soft sensor usage in the process control industry
are:

• To predict boiler emissions such as the amount of NOx, CO, and particulates.
• To estimate quality parameters such as the asphalt pen index,
refinery product cut points, LPG composition, batch end time,
viscosity, polymer melt index, headbox pH.
• In determining values of process variables.

In modeling and design applications, soft sensors can also provide a nonlinear model of a process and therefore be used to identify key input
variables that influence the process output.

When the analysis from an online analyzer is slow to indicate process changes, a measurement made further upstream of the analyzer may often indicate the process change more quickly. If the relationship
between the analyzer measurement and this upstream measurement is
reasonably linear, it is possible to construct a dynamic estimator. This
dynamic estimator uses the analyzer output in combination with the
upstream measurement to more quickly indicate the process change.
When the relationship between the analyzer and the measurement is
highly nonlinear, a neural network may provide a more accurate indica-
tion of the virtual measurement. Also, when a process is characterized as
having multiple upstream measurements that indicate independent
changes in the process, it is often easier to use a neural network-based vir-
tual sensor to compensate for the different measurements in the soft sen-
sor prediction.

The following procedure for the development of an estimator is a list of checkpoints designed to insure sustainable performance. Estimators can
look good during training and offline testing but be a disappointment
online. While each of these steps is important to both estimators, some are
more important for one type than for the other.

1. Develop an understanding of the process and the objectives.
2. Make sure historian data compression is turned off (or set small)
and the data interval is fast enough.
3. Use Principal Component Analysis (PCA) to make sure inputs are
not correlated, to better understand the key process variables, and
to compute the number of independent spaces that define the
variability of the process.
4. Confirm that there is a reasonable cause-and-effect relationship
from each input to the output.
5. Make sure historical data for each input has enough movement
around operating points and covers the entire operating range.
6. Conduct uncorrelated process tests or design of experiments to supplement historical data to provide the necessary richness of the data
set.
7. Add nonlinear transforms to inputs based on first principles when
available.
8. Add time delays and either filters or multiple instances of inputs
with individual delays to align the input data to reduce noise and
inverse response.
9. Test the estimator with a data set that is different from the one used
to develop the estimator.
10. Add an output time delay and time lag to synchronize the
estimator output with the lab analysis or online analyzer reading.
11. Remove the output time delay and time lag to use the estimator for
closed-loop control.
12. Check that the use of closed-loop control does not change the
correlation structure (relationship between the inputs and
estimator output).
13. Build in and test the detection and handling of failed, frozen, out-
of-service, and noisy input and feedback measurements.
14. Put the estimator online and monitor it for a variety of operating
conditions before using it for closed-loop control.
15. Monitor and statistically analyze the feedback correction for auto-
and cross correlation with the inputs to check for drift, missing
inputs, and noise.
16. Add gain scheduling and variable transportation delays as needed.

Dynamic Linear Estimator


The process of building the dynamic linear estimator is the simplest of all
estimation techniques. Since the number of input variables is small and
their relationships to the inferred property most often well understood,
the modeling effort consists of modeling the dynamic relationship
between the input and the feedback element. This is done with historical data or, if the historical data does not have sufficient excitation to properly characterize the dynamic behavior, with plant test data to improve the modeling effort. The modeling techniques used will most
often be the time series or the regression modeling techniques used for
developing dynamic models for model-based controllers.

Once the dynamic models relating the inputs to the output have been developed, the feedback element is introduced. A feedback update factor is selected to compensate for any modeling error and to eliminate drift from the true value of the property being estimated.

Process knowledge from operations and simulation should be used to select inferential input variables that not only correlate well with the esti-
mated property but also have known causal effects. This will help insure
robustness in the estimator.

If high-frequency noise in the inferential inputs is not present in the analyzer, filter the inputs. Conversely, if the noise in the inferential input
appears in the feedback element, do not filter it from the input or it will
simply be seen through analyzer feedback, thereby adding delay and pos-
sibly oscillatory behavior to the system.

Simple historical data, as opposed to test data, often contains enough information to generate good estimator relationships. This holds true
especially if the inferential inputs have a causal relationship to the prop-
erty being estimated.

The amount of feedback taken is a function of the model quality and the
degree of noise in the feedback element. If the feedback element is noisy,
reduce the feedback update. If the inferential model is poor, increase the
feedback update.

If linear estimators are being developed for closed-loop control or an MPC application, build them from historical data and get them online before the
MPC process testing stage begins. Watching the behavior of the linear esti-
mator during the process tests can provide good insight into performance.
The test data can then be used to verify and update the inferential models.

Monitor the feedback error to determine the estimator quality. The vari-
ability of the error can identify degradation in the model; integrating the
error can isolate model or analyzer drift; and the auto-correlation of the
error can identify opportunities for improvement. For example, strongly
auto-correlated model error over several time lags is a good indication
that the feedback correction factor should be increased.
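
These diagnostics are straightforward to compute online; the following minimal Python sketch (assuming a stored history of the feedback error) illustrates all three:

import numpy as np

def error_diagnostics(error):
    # error: history of (feedback element - feedback model output)
    e = np.asarray(error, dtype=float)
    variability = e.std()  # rising variability suggests model degradation
    drift = e.sum()        # a growing integrated error isolates drift
    e0 = e - e.mean()
    lag1 = (e0[:-1] @ e0[1:]) / (e0 @ e0)  # lag-1 auto-correlation
    # Strong positive auto-correlation suggests increasing the feedback factor
    return variability, drift, lag1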

The open structure of the linear estimator makes it very simple to apply
online gain scheduling. The individual gain term, for each input, can eas-
ily be updated. This will often allow the linear estimator to cover a nonlin-
ear operating range without the need to go to a more complex estimation
technique such as neural networks. This enhancement does, however,
require knowledge of the form of the nonlinearity.
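
A sketch of such a gain schedule follows; the breakpoint table is purely hypothetical:

import numpy as np

# Hypothetical schedule: the estimator gain for a temperature input as a
# function of the column feed rate, from process knowledge or simulation.
feed_breakpoints = np.array([50.0, 75.0, 100.0])  # feed rate
temperature_gains = np.array([0.80, 0.55, 0.40])  # gain K at each breakpoint

def scheduled_gain(feed_rate):
    # Interpolate the gain term for the current operating point
    return float(np.interp(feed_rate, feed_breakpoints, temperature_gains))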

There is a growing trend towards the development of more sophisticated simulation tools and the deployment of these tools to an online environ-
ment. Traditional steady-state design simulators are now being applied
online for performance monitoring and steady-state real-time optimiza-
tion. Dynamic simulation programs are also available and capable of run-
ning online, mostly for use in operator training programs. The growth of
these simulators running in an online environment presents opportunities
for either generating the dynamic relationships required for the estimators
or providing gain update information to handle process nonlinearities.

Many times the linear dynamic estimator is developed and used within a
model-based predictive control application. In order to develop the
dynamic input/output relationship model for the constrained MPC con-
troller, the response of the estimator needs to be modeled against a step
change in the manipulated variable. If the dynamic linear estimator is
built and online before any process step testing is performed, then the task
is straightforward and we need only collect the estimator value and model
it directly with the manipulated variable.

But what if the estimator is built subsequent to the process test? If we remember that the dynamic linear estimator responds with the dynamics
of the inferential inputs and with the final steady-state gain of the feed-
back element, the solution is straightforward. The MPC model can easily
be developed by obtaining the gain from a model that relates the manipu-
lated variable to the feedback element. The MPC model dynamics are then
defined by modeling the manipulated variable to the inferential inputs.
The gain and the dynamic terms are then combined to form the MPC
model relating the manipulated variable to the dynamic linear estimator.
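
As a sketch of this combination under a simplifying first-order-plus-deadtime assumption (the numbers below are illustrative, not taken from any application):

import numpy as np

def mpc_step_response(gain, dead_time, time_constant, t):
    # gain: steady-state gain from the MV-to-feedback-element model
    # dead_time, time_constant: dynamics from the MV-to-inferential-input model
    t = np.asarray(t, dtype=float)
    return np.where(t < dead_time, 0.0,
                    gain * (1.0 - np.exp(-(t - dead_time) / time_constant)))

t = np.arange(0.0, 60.0, 1.0)  # minutes
response = mpc_step_response(gain=1.5, dead_time=5.0, time_constant=12.0, t=t)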

Neural Network
Classical approaches to the problem of designing and training a neural
network for actual process conditions involved a sequence of complex steps, often a grueling experience for the typical user. More
often than not it required a highly trained professional to create models.
These models also had the drawback of not being able to constantly adapt
to drifts in process inputs and other process conditions.

Modern tools embedded in the control systems have largely overcome these problems. Dynamic estimators and neural networks are represented
as a function block in a structured system, and with state-of-the-art tech-
niques and intuitive GUI applications, the commissioning process
becomes fast and easy.

When a virtual sensor is implemented, the online measurement or off-line lab analysis is always required to correct for unmeasured disturbances and changes in the process. Otherwise, the prediction provided by a virtual sensor will over time drift away from the true measurement value. In
online mode, the actual process variables, for example those obtained as a
result of lab analysis, are used as inputs to the neural network for auto-
matic adaptation of its prediction in response to changes in process, as is
illustrated in Figure 8-7.

Figure 8-7. Soft Sensor Uses Upstream Measurements to Predict Process Out-
put Values

Application Detail
Dynamic Linear Estimator
The general structure of the properties estimator Y(t) is shown in Figure 8-8, with the variables and components relating back to our original process
in Figure 8-1. The flow and connectivity is a simple structure and easily
implemented in most process computers or most control systems. Most
modern-day control systems will contain the standard blocks that can eas-
ily be configured to implement a linear estimator. Alternatively, many
online linear estimator packages are available on the market today that
may be layered on a control system.

Consider the distillation process shown in Figure 8-1. The flow begins
with the inferential inputs, denoted as the variables AT 1-1, FT 1-2 and TT
1-2. The inputs must first be conditioned. The conditioning step either
applies a filter to remove measurement noise, applies a nonlinear transfor-
mation, or places a hold on the input to insure that inputs measured at dif-
ferent points in the process are all synchronized in time to some selected
physical point in the process at which we are estimating the property. In
our process shown in Figure 8-1, variable TT 1-2 may be conditioned with
a filter to remove measurement noise, variable FT 1-2 may require a loga-
rithmic transformation to linearize its relationship to the estimated prop-
erty, and the variable AT 1-1 may require a hold parameter to insure that a
change in AT 1-1 does not immediately reflect a change in the estimator at
the top of the tower. Conditioning the inputs for noise or dynamic posi-
tioning will insure that the data is properly time aligned and the modeling
effort can extract the true input/output relationship.
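
A minimal Python sketch of this conditioning step follows; the filter factor and hold length are hypothetical tuning values, and the log transform stands in for whatever linearizing transformation an input requires:

import math
from collections import deque

class InputConditioner:
    # Conditions one inferential input: optional log transform, first-order
    # filter, and a hold (delay) to time-synchronize the input.
    def __init__(self, filter_factor=0.2, delay_samples=0, log_transform=False):
        self.alpha = filter_factor
        self.log_transform = log_transform
        self.buffer = deque(maxlen=delay_samples + 1)
        self.filtered = None

    def update(self, raw_value):
        x = math.log(raw_value) if self.log_transform else raw_value
        # First-order exponential filter to remove measurement noise
        if self.filtered is None:
            self.filtered = x
        else:
            self.filtered += self.alpha * (x - self.filtered)
        # Hold the value so inputs measured upstream line up in time
        self.buffer.append(self.filtered)
        return self.buffer[0]

# e.g., TT 1-2 with a filter, FT 1-2 with a log transform, and AT 1-1
# with a hold of several samples:
at_1_1 = InputConditioner(filter_factor=1.0, delay_samples=12)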

Figure 8-8. Properties Estimator Equations

Figure 8-9. Example of a Process Dynamic Estimator

Following the conditioning step, the measurements are input to the esti-
mator model block in order to predict the property. The inputs must also
be input to the feedback model block in order to generate a property that
is compared directly to the feedback element for feedback correction pur-
poses. In most cases, the steady-state behavior of the estimator model and
feedback model are identical. However, the dynamic components are
often different since the feedback element has both a delay and a lag that
must be captured in the feedback model, but is often not desired in the
estimator model. Removing the delay and lag from a predicted quality
parameter has many advantages, ranging from the ability to simplify the
control algorithm to being able to present to operators the value of key
properties at the point at which they are being controlled. Better under-
standing by the operator is often the difference between an APC applica-
tion that runs and one that is turned off.

To better demonstrate this concept, consider again the process shown in Figure 8-1. A simple online estimator of the component measured by the
online analyzer AT 1-2 can be implemented by using the temperature as
the inferential input. The estimation point will be the top of the tower with
feedback correction from a downstream analyzer. Figure 8-10 shows the
dynamic responses of these variables.

Figure 8-10. Example of Process Dynamic Responses

A simple online estimator of component A is implemented by using the overhead temperature, T, as the inferential input. The estimation point is
the top of the tower with feedback correction from a downstream ana-
lyzer. A step change in the temperature (TT 1-2) indicates an immediate
change in the composition at this point in the tower. Thus, the simplest
form of the estimator model will be a steady-state gain relationship (K),
multiplying the temperature signal by this gain relationship to generate
the estimated property, Y(t). This temperature must also be modeled to the
feedback element, in this case the downstream analyzer (AT 1-2), in order
to compute feedback correction via an estimation of the analyzer response.
This feedback model will have the same gain relationship but will also
require dynamic components to reflect the delay and lagged behavior of
the downstream analyzer. Figure 8-11 shows how the output from the esti-
mator model, Y(t), and the output from the feedback model differ. The
estimator model and feedback model have the same steady-state or gain
relationship, but they differ in their dynamic components.

Figure 8-11. Comparison of Estimator and Feedback Model

The final element in the estimator structure is the feedback correction. The
output of the feedback model block generates a signal that is compared to
the feedback element. The difference is the modeling error due to noise,
unmeasured disturbance or model inaccuracy. This model error can then
be used to update both the outputs from the estimator model and the feed-
back model blocks in order to insure that the estimator tracks the feedback
element and does not drift. It is every engineer's dream to create the per-
fect model, not requiring feedback, but every operating manager's objec-
tive to quickly move forward with an estimator that is used to improve
performance. Remember that a little feedback can go a long way!

HINT: One of the big benefits of applying the linear estimator to MPC projects is that the dynamics of the estimation process relating to the MPC controller are handled rigorously, an important aspect of estimators for MPC control.

Neural Network
There are many industrial applications of neural networks, but the appli-
cations that have dominated the process industries are centered on infer-
ential properties estimators or soft sensors. The steps in building neural-
network soft sensors are well defined and the methodology proposed
below should assist in insuring that robust, high-quality estimators are
generated.

1. Understand the process and objectives


There is no alternative to good process understanding. Although
we may be seduced by the simple "black-box" approach of neural
networks, it is not a substitute for process knowledge. Understand-
ing the relationships in the process, and how they are governed,
will enhance our ability to properly select the correct variables to
be introduced into our soft sensor and to better select the data
required for training. Process understanding must be applied at
each stage of the process to validate the results. Ultimately, it is
process understanding that will insure the robustness of the final
application.

The final objective of a soft sensor will govern how it is built through variable selection and data selection. For example, if the
soft sensor is being built for use within a closed-loop control appli-
cation, it is important to select input variables that have a causal
relationship to the predicted output as opposed to a simple correla-
tion relationship. The subsequent application of a closed-loop con-
trol scheme will often alter the correlation structure of the process
variables, causing a breakdown in the performance of the soft sen-
sor. A more detailed discussion on the important aspects of build-
ing neural soft sensors for closed-loop control is given below.

2. Select the variables


Selecting the variables is an important step in the design, and one
that is highly dependent upon the process understanding
described above. One approach is to simply select all the measured process variables surrounding the value to be predicted. This approach will often yield a good fit of the training data, but it is subject to a lack of robustness due to overfitting, producing poor results when the application is put online. In selecting variables, more is not better. It is recommended to start the fit with a smaller set of input variables with a known causal relationship to the output. The model can then be enhanced with the addition of second-pass variables.

3. Generate and select the data sets


One of the most attractive features of neural networks is the ability
to feed the system large amounts of historical data in order to build
the black-box model. This approach, often referred to as data min-
ing, is prone to many failure points. Historical data, in itself, is
often not sufficient to build a quality soft sensor. The shortcomings
of historical data include:

• Highly correlated inputs
• Compressed data
• Biased data
• Lack of "richness" in the data

Highly correlated inputs are a problem if all these inputs are included in
the model and especially if there is no causal relationship. This is often the
case with historical data as variables are moved manually by a plant
operator in an often systematic fashion. Consider a case recently
investigated where a neural network was configured to predict a quality
parameter on a draw stream of a fractionator shown in Figure 8-12.

Figure 8-12. Fractionator

The fundamental relationship behind this process is that manipulating process conditions impacts draw quality. Historical process
data was used to build the soft sensor. The data was selected over a
period where the process was run open loop and the operator
would make moves in the draw rates to control quality. If sound
process understanding is not applied in the variables selection pro-
cess, the draw rate could be selected as an input, since it demonstrates a strong correlation with draw quality. This configuration of
model inputs, although representative of the historical behavior of
the data, does not capture true causality of the process. There is no
way of determining how the gain relationship to the predicted out-
put is shared by the correlated inputs. It will cause major errors in
the predictive capacity of the soft sensor, without any warning.
This was in fact the case with this example, where a control loop
was implemented to control draw quality by manipulating draw
flow.

Handling highly correlated inputs is a common problem with neural-network soft sensors. One possible solution is to combine the
techniques of Principal Component Analysis (PCA) with neural
networks. The essence is to first let PCA generate a set of indepen-
dent latent variables, made up of linear combinations of the origi-
nal inputs, and then to use these new latent variables as inputs to
the neural network. This approach can overcome some of the prob-
lems associated with highly correlated variables but still delivers
the powerful nonlinear modeling capability of neural networks.
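
A sketch of this PCA-plus-neural-network combination, using scikit-learn purely for illustration (the data is synthetic and the layer sizes arbitrary):

import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline

# Synthetic stand-in for historical data: 8 correlated inputs, 1 lab value
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 8))
y = np.tanh(X[:, 0] - 0.5 * X[:, 1]) + 0.05 * rng.normal(size=500)

# PCA produces independent latent variables; the network then models the
# nonlinearity on those scores rather than on the correlated raw inputs.
model = make_pipeline(PCA(n_components=3),
                      MLPRegressor(hidden_layer_sizes=(5,), max_iter=2000,
                                   random_state=0))
model.fit(X, y)
prediction = model.predict(X)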

HINT: It is always useful to run the NN inputs through PCA analysis to more clearly understand the correlation structure of the data and the
number of independent directions defining variability.

Compressed data is a problem in any modeling effort. It is particularly troublesome with neural-network design, since the technique
often tries to exploit the data-mining approach of letting the exist-
ing historical data provide the input information. If operating data
is over-compressed, many of the true relationships will not be
uncovered. To avoid these pitfalls, it is highly recommended that
compression deadbands be carefully selected.

Data "richness" is a term often used to define the information content in the data. Two problems that often occur in historical data
are that the data does not cover the full operating range of interest,
or, if it does, the data is highly concentrated around a specific oper-
ating point. It is more important to have good quality data contain-
ing good information than lots of poorly conditioned data. Always
consider the use of experimental design or process testing to
enhance the quality of the data and to validate neural networks
built on historical data.

4. Build the model


If the task of selecting the right variable set and collecting good
quality data has been achieved, then the model-building exercise is
greatly simplified and will result in much greater success. Most
commercial packages will guide the user through the model-build-
ing process. In most cases the user is not required to specify the
number of nodes, layers or selected transfer functions. The data
must be divided into either 2 or 3 sets. A "training" set and a "vali-
dation" set are always required. The training data is used to fit the
model by manipulating the weights and bias terms in order to min-
imize a function of the prediction error. At every iteration, the pre-
dicted outputs are computed for the validation data set, and the
errors checked against specified tolerance criteria. If the tolerance
is not achieved, the training pass is executed again. This iterative procedure continues until the parameters converge and the desired
level of prediction accuracy is achieved. In some cases a third data
set, the test data set, is defined to verify the structure of the neural
model. If the prediction errors of the test data set do not meet the
tolerance criteria, the structure of the neural network is modified.

5. Put the model online


Most neural network packages targeted at industrial manufactur-
ing can easily be put online. They are most often implemented in a
control workstation and interfaced through OPC connectivity. Fur-
thermore, there are online products that run directly in the DCS
platform, offering a high level of integration and lower infrastruc-
ture support requirements. There are nevertheless some important
issues that must be considered when a neural model is put online.

• Bad or faulty input values


The neural-network model is a black-box model that handles
all inputs together. It does not break down individual inputs
into independent causal relationships to the outputs being
modeled. Thus the neural model is highly dependent on all the
input values being good. It may produce erroneous predictions
if one of the inputs goes bad and is removed from the input
variables set. Therefore, it is important that any online applica-
tions carefully monitor the status of the input set and provide a
diagnostic flag to indicate uncertainty in the prediction.

• Extrapolating to operating regions outside of data


The predictions of the online neural model are considered valid
only in the operating range for which the data was modeled.
As discussed above, it is therefore important to select the train-
ing data to cover the true operating range of the process.
Although the model may perform adequately outside this
range, there is no indication of prediction quality and these
results should be treated as suspect.

• Analyzer or lab feedback


Any online prediction model is subject to drift and model error
due to changes in the process, changing feedstocks, etc. This
model error can easily be handled with some form of online
model update. There are many approaches to compensating for
this drift, but the simplest for online applications is to intro-
duce the neural network into a linear estimator feedback struc-
ture discussed in the Theory section.

HINT: When putting a neural prediction online, consider adding a prediction-error calculation term. Analyzing both the variability and auto-correlation of this
error can provide insight into the structure of the drift and the need for feedback
correction.

There are two special applications of online neural networks in the realm
of an APC application that warrant special attention. They are the applica-
tions of high-frequency neural predictions where the dynamics of the pre-
diction response are important and the development of a neural prediction
for subsequent closed-loop control.

The approach most often adopted in building neural predictors is to extract steady-state averaged data from a plant historian and to generate
steady-state neural predictors. This approach completely ignores the
dynamics of the process, and if the required frequency of the neural net-
work is high enough, the dynamic transient behavior of the process may
need to be incorporated into the neural predictor.

There are various techniques for incorporating the transient behavior of the data into the neural model. One involves simply extending the input
variables from a scalar value to a vector by taking past values of the
inputs. The number of past values will depend upon the settling time of
the process. A second method involves delaying and filtering the input
values before introducing them into the input layer. This method requires
specific knowledge of the dead time and settling time of the inputs relative
to outputs. A third technique is to introduce a first-order filter at the out-
put of each hidden node. The filter factors are equivalent to time constants
or lags and are fit during network training. This architecture benefits from
the fact that no a-priori knowledge of the dynamics is required and that it
can model the majority of nonlinear dynamic systems.
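
A sketch of the first technique, extending each scalar input into a vector of current and past values (the number of past values shown is arbitrary):

import numpy as np

def tapped_delay_matrix(x, n_past):
    # Turn one input series into rows of [x(t), x(t-1), ..., x(t-n_past)]
    x = np.asarray(x, dtype=float)
    rows = [x[t - n_past:t + 1][::-1] for t in range(n_past, len(x))]
    return np.array(rows)

x = np.sin(np.linspace(0.0, 10.0, 50))    # one input measurement history
X_dyn = tapped_delay_matrix(x, n_past=3)  # rows of x(t), x(t-1), x(t-2), x(t-3)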

There are two issues in developing neural networks for closed-loop con-
trol that must be understood. The first is changing behavior in the system
inputs due to a closed-loop controller being applied to a neural network
and the second is modeling correlation versus causality. In the first case,
we are dealing with the fact that neural-network estimators are built by
modeling the correlation of the input variables to the output variables,
with the sole objective to give the best fit. If closing the loop on the neural
network significantly changes the correlation structure of input informa-
tion going into the neural network, then the neural network may break
down, with little more notification than a poorly performing estimator
and even poorer control.

To demonstrate this situation, consider the problem of building a neural network of a composition in the overheads of a distillation tower. The tower
currently runs in open loop and the temperature and reflux flow are
selected as inputs to our neural network. It is important to note that if a
good estimator is generated, there is incentive to control this estimator to a
target value. A good neural model is developed with the temperature and
reflux as inputs. The loop is now closed on the neural-network estimator
by automatically adjusting the reflux. This change in controller configura-
tion will map the variability in the temperature onto the reflux variable,
significantly altering the behavior of the estimator input signals. This
change in the input signals will affect the performance of our neural-net-
work estimator and may render it useless as an estimator for closed-loop
control. Be aware that taking control action on the variables acting as
inputs in a neural network or on the neural estimator itself may signifi-
cantly affect the quality of the estimator generated.

Significant improvements have been made over the last ten years in the
tools used to develop neural networks for soft-sensor applications. These
tools are specifically designed to take into account the dynamic nature of
the processes for which a soft sensor is to be developed. The difference between the traditional approach and that available today is illustrated in Figure 8-13.

Figure 8-13. Steps in Neural Network Development



Rules of Thumb
Rule 8.1. — Neural networks excel as estimators for nonlinear heuristic
relationships, a large number of inputs, batch end points and cycle times,
and a wide variety of process dynamics.

Rule 8.2. — Dynamic linear estimators are best for a linear heuristic rela-
tionship with a small number of inputs, batch trajectories, complex tran-
sient behavior, and properties with a long settling time compared to the
analyzer cycle time or lab analysis time.

Rule 8.3. — High-fidelity dynamic process simulation programs adapted to match the process can be used as a supplemental source of data to
explore the capability of the estimator for new operating regions (see
Chapter 10 on the Virtual Plant).

Rule 8.4. — Alignment of data is more critical for continuous and semi-
continuous (fed-batch) operation property estimation and fault detection
than for the end-of-batch property estimation and long time intervals.

Rule 8.5. — The effect of noise in analyzer readings is reduced by a decrease in the fraction of analyzer feedback used to correct an estimator
and an increase in the time interval between corrections to increase the sig-
nal-to-noise ratio.

Rule 8.6. — Noise in inputs that appears in the analyzer should be left in
the delayed and filtered estimator output synchronized with the analyzer.

Rule 8.7. — Noise in inputs that is not in the analyzer reading should be
filtered out before being used in the estimator.

Rule 8.8. — Estimators should have a maximum allowable period of operation without analyzer feedback or between validations by portable
devices to protect against drift, unmeasured inputs, and changes in pro-
cess equipment (e.g., fouling) and operating conditions (e.g., raw material,
recycle, and waste composition).

Guided Tour
This tour illustrates the potential ease of use and convenience of an inte-
grated interface that is possible for property estimation embedded in an
industrial control system. The following areas are addressed:

• Neural Network Configuration
• Selection of Upstream Conditions
• Screening Data
• Examining Input Sensitivity
• Determining Input Delay
• Training the Neural Network
• Options in Training

Property estimation is implemented in many ways in a control system. A simple linear dynamic estimator may often be created using the standard
function blocks of the control system. However, other tools are required to
address more complex applications involving multiple measurements that
are related to the estimated value in a nonlinear manner. For these types of
applications, some modern control systems include the ability to create
property estimation based on neural-network technology. By embedding
this latter capability in the control system as a function block, it is possible
to reduce the time and complexity of engineering and commissioning the
estimator.

An application involving property estimation is created by adding a neural-network (NN) function block to a module. When the neural network is
used to estimate a measurement that is only available through lab analy-
sis, a lab entry block is linked to the NN block. When the estimator is used
as a backup for a continuous analyzer or to fill in values from a sampled
analyzer, the measurement input is provided to the sample input of the
NN block, as shown in Figure 8-14.

Figure 8-14. Neural-network Block for Estimation



To configure the neural-network block, the upstream measurements that affect the estimated property must be identified. The user can do this by
browsing the control system to find modules to specify the measurements
that are used, as illustrated in Figure 8-15.

Figure 8-15. Identifying Upstream Measurement

When the module containing the neural-network block is downloaded, communication connections to the inputs are automatically established
within the control system. Also, all inputs to the block are automatically
assigned to the historian for collection.

To commission the neural-network estimator, a special application is provided to select and screen data that is required for training. Inputs to the
neural-network block are trended in this interface. From this trend view,
the user may select the time frame over which data is to be used to train
the neural network, as shown in Figure 8-16.

The user must provide an estimate of the time it takes for the process
inputs to affect the measurement. Once this information is provided and
the data selected, the neural network is automatically trained by the user
selecting "Autogenerate." In response, a sensitivity analysis is automati-
cally done for the configured inputs to identify the contribution of each
input to the estimated parameter. Inputs that do not affect the measure-
ment are automatically eliminated from the neural network. The results of
the sensitivity analysis are presented to the user as illustrated in Figure
8-17.

Figure 8-16. Selection of Training Data

Figure 8-17. Inputs Sensitivity

An important part of the sensitivity calculation and the training of the net-
work is to accurately account for the delay between the input changing
and it having an impact on the estimated measurement. This delay is auto-
matically calculated by doing a cross-correlation between each input and
the sample input values. The calculated cross-correlation and input delay
are shown in the detail view for the sensitivity of each input, as shown in
Figure 8-18:

Figure 8-18. Determination of Input Delay

As part of the training process, the number of hidden neurons that should
be used for best results is automatically determined. Part of the data is
used for training the neural network and part for testing. By comparing
the results for the two sets of data, it is possible to automatically determine
when to stop the training process, as shown in Figure 8-19.

Expert features allow the user to influence the inputs that are used in the creation of the neural network. For example, the user may choose
to calculate input sensitivity without training the neural network. When
this is done, the user may change the input delay or manually remove
inputs that were automatically selected, as shown in Figure 8-20.

After generating and examining the input sensitivity, the user may train
the neural network based on a selected input sensitivity. In response, the
expert is allowed to modify some of the parameters used in training, as
shown in Figure 8-21.

The default training parameter values will usually provide the best results. How-
ever, options such as the training-test data split are helpful in addressing
special requirements associated with the test data.

Figure 8-19. Automated Training Process

Figure 8-20. Expert Options for Input Screening

Figure 8-21. Training Options

Theory
Dynamic Linear Estimator
The dynamic linear estimator operation, as shown in equation (8-1), can be defined as follows:

Yt = Yt-1 + Σi [Ki * (Xi,t - Xi,t-1)] + Kfb * (A - Y*) (8-1)


A = analyzer reading or lab analysis result
Kfb = feedback correction gain (0.05 to 0.5)
Ki = process gain for input Xi
Xi,t = process input i at time t
Xi,t-1 = process input i at time t-1 (last update)
Y* = old Y (delayed and filtered to synchronize with the analyzer reading)
Yt = online property estimator output at time t
Yt-1 = online property estimator output at time t-1 (last update)

The sum of the individual changes in each delayed and filtered input multiplied by a process gain is corrected by a bias that is the difference between the delayed and filtered estimator output (Y*) and the analyzer reading, multiplied by a feedback correction factor. This corrected change in estimator output is added to the last estimator output (Yt-1) to create the current dynamic linear estimator output (Yt).

The analyzer feedback correction is only triggered for an analyzer reading outside the noise band. If an input goes bad or is out of service, the input
should be frozen at the last known good value and the estimator status
should be changed from good to questionable or bad depending upon the
severity of the input problem. If the analyzer measurement fails or is out
of service, feedback correction must be stopped and the estimator status
changed to questionable or bad depending upon the degree of reliance of
the estimator upon correction. There should be a maximum allowable
period of operation without analyzer feedback correction.
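
A minimal Python sketch of equation (8-1) with the noise-band trigger described above follows; the feedback gain and noise band shown are illustrative, and the inputs are assumed to be already delayed and filtered:

class DynamicLinearEstimator:
    def __init__(self, gains, k_fb=0.2, noise_band=0.0):
        self.K = list(gains)  # process gain Ki for each input
        self.k_fb = k_fb      # feedback correction gain (0.05 to 0.5)
        self.noise_band = noise_band
        self.x_prev = None
        self.y = 0.0          # estimator output Yt

    def update(self, x, analyzer=None, y_star=None):
        # x: conditioned inputs Xi,t; analyzer: A (None if failed or out of
        # service); y_star: Y*, synchronized with the analyzer reading
        if self.x_prev is None:
            self.x_prev = list(x)
        # Sum of gain-weighted input changes per equation (8-1)
        self.y += sum(k * (xi - xp)
                      for k, xi, xp in zip(self.K, x, self.x_prev))
        # Feedback correction only for readings outside the noise band
        if analyzer is not None and y_star is not None:
            if abs(analyzer - y_star) > self.noise_band:
                self.y += self.k_fb * (analyzer - y_star)
        self.x_prev = list(x)
        return self.y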

Neural Networks
Neural networks (NN) are essentially nonlinear function approximators that utilize process inputs to predict a process output. Much of the initial neural-network research and development done some fifty years ago focused on emulating the network of neurons that make up the human brain. Significant progress came with the development of the feedforward NN with backpropagation training in 1983 [8.12]. Since that time, much work has been done to adapt this technology for use in the process industry [8.1, 8.2, 8.3]. The technical promise of neural-network technology comes from the fact that a multi-layer network with a single hidden layer is a universal approximator that can approximate any continuous function to any desired degree of accuracy [8.4, 8.5].

Soft sensors that utilize neural networks must be adapted to the special
requirements of the process industry. In particular, it is necessary to com-
pensate for the delay in the process output for changes in upstream condi-
tions. Thus, a neural network typically has one output (the predicted
variable) and any number of upstream measurements as inputs with com-
pensation of process delay. Figure 8-22 shows a three-layer feedforward
NN.

Figure 8-22. Three-layer Feedforward Neural Network

In this example, the neural network supports n inputs, a single layer of hidden neurons, bias nodes at the input and hidden layers, and one out-
put. Each input is delayed by a certain amount to allow the values used in
the network to be time coincident. The weight wij connects the ith node in
the previous layer to the jth node in the next layer. Weighted values are
summed at the node before being passed through an activation function.

For a sigmoidal hidden neuron, the summed node input Sj and the output hj are given by the following formula:

hj = (1 - e^-Sj) / (1 + e^-Sj), where Sj = Σi wij xi (8-2)

Typically the output layer has a linear activation function, that is, a summation of the inputs to the output layer.
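
The forward pass defined by equation (8-2) and a linear output layer can be written compactly; the following NumPy sketch uses arbitrary weights in place of trained values:

import numpy as np

def forward_pass(x, W1, b1, w2, b2):
    s = W1 @ x + b1  # Sj = sum over i of wij * xi, plus the bias node
    h = (1.0 - np.exp(-s)) / (1.0 + np.exp(-s))  # hj per equation (8-2)
    return w2 @ h + b2  # linear summation at the output layer

rng = np.random.default_rng(0)
x = rng.normal(size=5)  # five delayed inputs
y = forward_pass(x, rng.normal(size=(3, 5)), rng.normal(size=3),
                 rng.normal(size=3), 0.0)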

The structure of a neural-network soft sensor is relatively simple and is implemented in real-time controllers with little processor loading. The real
challenges in the design and implementation of tools used to create a
working neural network are:

• Collecting historic data on the process inputs and the process output measurement for screening of the potential inputs to the neural network. Also, this data is needed to determine the neural network structure and the values of parameters used in the neural network.

• Identifying the delay between each input and its impact on the
process output predicted by the neural network
• Determining which of the process inputs significantly affects the
process output through a calculation of input sensitivity
• Determining the weighting factors and number of neurons
included in the hidden layers for best results
• Validation of the network.

Data Collection
Data collection is by far one of the most critical steps in the development of a neural network. When collecting data to be used to train the
neural network, it is important that the inputs and outputs vary over their
normal operating range. If the process output is available only as a lab
analysis, then this data must be merged with historical data on the input
measurements to allow further analysis. Some simple rules that should be
observed in the data collection are:

• Only if an input changes during the time that data is collected will it have an impact on the process output and thus be identified in
training. Where possible, a uniform number of sample values
should be collected over the operating range of that input.
• Provisions should be made to automatically exclude values that fall
outside this normal range of operation. Statistical techniques often
prove useful in defining the outlier boundaries. A good rule of
thumb for valid data limits is Mean +/- 3.5 * Standard deviation,
which includes approx. 99.9% of the data in the given region.

Also, it is important that the user be able to flag regions of the historic data
to indicate periods of abnormal operation so this data, for all inputs and
outputs, will not be used in the development of the network.
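
A minimal sketch of the outlier rule of thumb above, with data arranged one row per sample:

import numpy as np

def trim_outliers(data, n_sigma=3.5):
    # Keep rows where every variable lies within mean +/- n_sigma * std
    data = np.asarray(data, dtype=float)
    mean, std = data.mean(axis=0), data.std(axis=0)
    keep = np.all(np.abs(data - mean) <= n_sigma * std, axis=1)
    return data[keep]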

Identification of Input Delay


A change in an upstream measurement may not immediately be reflected
in a process output. Identification of this delay is the first step in the cre-
ation of the neural network. The overall objective is to determine how
much each process input needs to be delayed to allow best alignment with
the output response. This is done automatically by calculating the cross-
correlation between each upstream measurement value selected as neural-
network inputs and the process output value selected as the neural-net-
work output. The cross-correlation, Figure 8-23, is calculated for a process
input and a process output.

Figure 8-23. Cross-correlation Calculation

The time shift, K, between the input and the output that produces the max-
imum cross-correlation coefficient is used as the input delay that should
be introduced into the input processing of the neural network, as shown in
Figure 8-24.

The cross-correlation value indicates the magnitude (and sign) of the effect
of the input on the output. For example, for a simple first-order process, the input delayed by approximately (dead time + time constant/2) is the most relevant, since the maximum correlation value occurs at that delay. Once the most significant delay is known, the input data is shifted by that
delay.
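
A minimal sketch of this delay search, assuming equally sampled NumPy arrays and a simple lag-by-lag correlation; names are illustrative only.

    import numpy as np

    def identify_delay(u, y, max_delay):
        # Normalize so each product term is a correlation coefficient
        u = (u - u.mean()) / u.std()
        y = (y - y.mean()) / y.std()
        # Cross-correlation of u(t - k) with y(t) for each candidate shift k
        r = [np.mean(u[:len(u) - k] * y[k:]) for k in range(max_delay + 1)]
        # The shift K giving the largest magnitude is used as the input delay
        return int(np.argmax(np.abs(r)))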

Figure 8-24. Determining Process Delay Using Cross-correlation

Input Sensitivity
In the initial definition of the neural network, it may not be possible to
know which of the upstream measurements influence the process output
to be predicted by the soft sensor. Only those that have a significant
impact should be included in the network. The sensitivity is denned as the
change in dependent variable (output) y for a unit change in independent
variable (input) x, or, mathematically:

$S = \Delta y / \Delta x \approx \partial y / \partial x$     (8-3)

The initial sensitivity estimate is calculated from a simplified linear model, prior to developing the NN model. A linear model for computing the sensitivities may be obtained using the standard PLS algorithm, for example. The principle of PLS is outlined later in this section; for more detail there are several good references [8.7, 8.8]. The delayed input values and process output are used in the development of this model. Using this model, the input sensitivity is calculated by changing the input by one unit while all other inputs are kept constant, and determining the output change.

In a multivariable system, a higher value of sensitivity indicates that a change in that input has a greater influence on the output. The sensitivities are normalized so that their sum is 1 (one). Information on the sensitivity of the inputs to the output indicates their relative importance. The sensitivity value at the delay identified in the previous step is used to exclude inputs on which the output shows little or no dependence. One technique is to exclude inputs whose individual sensitivities are small compared to the
average sensitivity.
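
The sketch below illustrates the idea using ordinary least squares in place of the PLS algorithm mentioned above (an assumption made purely to keep the example short); for a linear model, the output change for a unit input change is simply the corresponding coefficient.

    import numpy as np

    def input_sensitivities(X, y):
        # Linear model y ~ X on delay-aligned data (least squares here,
        # standing in for the PLS algorithm mentioned in the text)
        A = np.column_stack([X, np.ones(len(X))])       # inputs + intercept
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        # For a linear model, a unit change in input i moves the output
        # by coef[i]; normalize so the sensitivities sum to one
        s = np.abs(coef[:-1])
        return s / s.sum()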

Determining Input Weights


Having determined the inputs and delays to be used in the neural network, it is now possible to determine the weighting factors and number of neurons included in the hidden layers that provide the best results. The weights in the network are initialized with small random non-zero values. Randomness insures a lack of bias, while small values give more freedom for modifying the weights and help avoid saturation. For a given number of neurons in the hidden layer, the squared error between the calculated soft-sensor output and the actual output measurement for one point in time is expressed as:

$E_t = (y_t - \hat{y}_t)^2$     (8-4)

where $y_t$ is the measured output and $\hat{y}_t$ is the soft-sensor output at time t.

The cumulative error for a set of data is calculated as follows:

$E = \sum_t E_t = \sum_t (y_t - \hat{y}_t)^2$     (8-5)

By incrementing the weighting factor $w_{ij}$ associated with input i and node j by a value $\Delta w_{ij}$ and observing the change in the cumulative error E [see Figure 8-22], we can determine the gradient $\partial E / \partial w_{ij}$, or in finite differences, $\Delta E / \Delta w_{ij}$. The gradient defines the change in the cumulative error for a unit change of the weight $w_{ij}$.
The back-propagation algorithm [8.6] is used to calculate new weights that reduce the cumulative error in the direction of the negative gradient (steepest descent) for each pass through the data set (an epoch):

$w_{ij}^{new} = w_{ij} - \eta \, \partial E / \partial w_{ij}$     (8-6)

where $\eta$ defines the step size of the change in the gradient direction, more popularly known as the learning rate. The speed of convergence is
improved by modifying the back propagation algorithm to incorporate the
conjugate gradient technique as described by Brent [8.15]. Rather than use
a fixed step size, the new direction is based on a component of the previous direction. To avoid settling on a local minimum, the training algorithm is designed to automatically turn the direction remembrance on and off
depending on whether the error is improving or not. Each time this is
done, it results in starting with a brand-new direction. One complete pass
through the data set is known as an epoch.
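
A minimal sketch of the finite-difference gradient and the steepest-descent update of equation (8-6); the conjugate-gradient and direction-remembrance refinements described above are deliberately omitted, and all names are illustrative.

    import numpy as np

    def finite_diff_gradient(error_fn, W, dw=1e-5):
        # Delta-E / delta-w_ij: perturb each weight by dw and observe the
        # change in the cumulative error E
        g = np.zeros_like(W)
        base = error_fn(W)
        for idx in np.ndindex(W.shape):
            Wp = W.copy()
            Wp[idx] += dw
            g[idx] = (error_fn(Wp) - base) / dw
        return g

    def descent_step(error_fn, W, lr=0.01):
        # Eq. (8-6): step against the gradient, scaled by the learning rate
        return W - lr * finite_diff_gradient(error_fn, W)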

During the training of the neural network, the cumulative error for the
training set of data will decrease monotonically, approaching a constant
value in an asymptotic manner, as shown in Figure 8-25. However, if the
cumulative error is calculated using a validation data set not used in train-
ing, then at some point the error will begin to increase. Past this point
where the training and validation error begin to diverge, the neural nets
start learning features specific to the training data set rather than the gen-
eral process. The goal of training, though, is to learn to predict the output
given real process inputs, and not just to memorize the training set. This is
known as generalization.

Figure 8-25. Neural Network Training Error

To detect over-training, a certain portion of the data set is kept aside for
validation during the training phase. This is called the test set. At each
epoch, while the weights are modified based on the error on the training
set, the test set is used to detect when overfitting of the training set has
occurred. At each epoch, the error on the test set (test error) for the new set
of weights is calculated and compared with the best test error. In order to
prevent training from stopping at some random choice of weights for
which the test error turns out to be small, the algorithm runs for at least a
fixed number of epochs before establishing any minima. Also, the training
error is added to the test error to define a stringent minimum total error
condition. This ensures that both data sets have acceptable errors when the algorithm converges, and the weights from the best epoch are picked at the end of training.
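
A minimal sketch of this train/test stopping logic, with step_fn, train_error, and test_error standing in for whatever training and scoring routines are used; this is illustrative, not any product's API.

    def train(step_fn, train_error, test_error, W, max_epochs=1000,
              min_epochs=50):
        # Keep the weights from the epoch with the best train + test error
        # (the stringent total-error condition), after a minimum number of
        # epochs so a random early choice of weights is not locked in
        best_W, best_total = W, float("inf")
        for epoch in range(max_epochs):
            W = step_fn(W)                      # one pass through the data
            total = train_error(W) + test_error(W)
            if epoch >= min_epochs and total < best_total:
                best_total, best_W = total, W
        return best_W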

Nodes in the Hidden Layer


The number of nodes in the hidden layer has a large impact on the accu-
racy of the soft sensor. In general, a poor fit is achieved with a smaller
number, while a larger number may lead to overfitting the training set.
This is indicated graphically in Figure 8-26, where overfitting occurs at the 5th hidden node, when the training error decreases even though the test error increases.

Figure 8-26. Impact of Hidden Nodes on Error

To determine the optimum number of hidden nodes, it is possible to train the network starting with one hidden node up to the maximum number of nodes, which is the number of inputs to the neural network. For each increment, the minimum cumulative error is stored. If the difference between errors for different numbers of hidden nodes is within a tolerance level, the NN with the smaller number of hidden nodes is given preference.
The algorithm then automatically picks the weights for the best combina-
tion of train/test error obtained. In the example considered in Figure 8-26,
the NN with four hidden nodes is automatically chosen. In this manner
the generalization of the network is maintained while exploring for the
best possible network configuration.
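
A minimal sketch of this search, where train_and_score is assumed to train a network with n hidden nodes and return its best combined train/test error; names and tolerance are illustrative.

    def pick_hidden_nodes(train_and_score, n_inputs, tol=1e-3):
        # Train a network for each hidden-layer size from 1 up to the
        # number of inputs, storing the minimum error for each
        errors = {n: train_and_score(n) for n in range(1, n_inputs + 1)}
        best = min(errors.values())
        # Prefer the smallest network whose error is within tolerance
        for n in sorted(errors):
            if errors[n] <= best + tol:
                return n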

Correction for Process Changes


The property of a process output stream predicted using a neural network
and measured upstream conditions is automatically corrected for error
introduced by unmeasured disturbances and measurement drift. This cor-
rection factor is calculated based on a continuous measurement or sam-
pled measurement of the stream provided by an analyzer or lab analysis of
a grab sample. An adaptive correction of the neural network is created by
incorporating this automatically generated correction factor as an integral
part of the neural-network algorithm.

Two approaches are used to calculate the correction factor that must be
applied to the NN prediction. Both are based on calculation of the predic-
tion error using the time-coincident difference between the uncorrected
predicted value and the corresponding measurement value. Depending on
the source of the error, a bias or a gain change in the predicted value is
appropriate. To avoid making corrections based on noise or short-term
variations in the process, the calculated correction factor should be limited
and heavily filtered; for example, equal to twice the response-time horizon
for a change in a process input. During those times when a new process
output measurement is not available, the last filtered correction factor is
maintained.
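
A minimal sketch of a bias-type correction with the limiting and heavy filtering described above; the filter factor, limit value, and names are illustrative assumptions.

    def update_correction(corr, y_pred, y_meas, limit=5.0, alpha=0.05):
        # No new analyzer/lab value: hold the last filtered correction
        if y_meas is None:
            return corr
        err = y_meas - y_pred              # time-coincident prediction error
        corr += alpha * (err - corr)       # heavy first-order filter
        return max(-limit, min(limit, corr))   # clamp at the limit value

    # The corrected prediction is then: raw NN prediction + correction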

An indication is provided if the correction factor is at the limit value. Also, a configurable filter on the corrected prediction value allows noise in the input measurements to be filtered. The basic implementation is shown in Figure 8-27.

Figure 8-27. Adaptive Neural-network Structure

Incorporating the adaptive correction as part of the neural network simplifies implementation and dramatically improves neural-network online performance.

State-of-the-Art Implementation
The following features should be provided in a dynamic linear estimator:

• Identification software to develop dynamic models of a downstream process variable with significant time delay, based on step and PRBS tests that exercise process inputs, from simple and reliable measurements such as temperature and pressure.
• Linear combinations of the dynamic models of many inputs.

• Time delay and lags on individual inputs to create a smooth combined response.
• Time delay and lags on estimator output for comparison to
downstream process variable.
• Friendly interface and software to identify the dynamic model of the estimator.
• Adjustable factor on feedback correction from comparison to
downstream process variable.
• Detection of insufficient or invalid change in downstream variable
and option to suspend feedback correction.
• Detection of invalid or out-of-range input variables and option to
use last good value.

The neural-network features that are normally supported in a modern control system are:

• Feedforward neural network-based intelligent sensors that are tightly integrated with a scalable process control system.
• The applied algorithms are based on the back-propagation
technique with significant modifications for use in process
industries.
• Statistical pre-processing techniques remove data unrepresentative
of the general region of operation.
• Tools allow you to automatically determine a suitable input-output configuration for the data set.
• Each input is time-shifted, based on the cross-correlation between the output and that input, to account for dead time in the neural-network model.
• Sensitivity analysis determines the relative importance of the
various inputs.
• Conjugate-gradient back-propagation training with direction
remembrance and optimal learning rate calculation, and specific
checks against overfitting, is used to realize a robust neural
network-based identifier.
• Network model predictions are validated against actual data.
• In online mode, the actual process variables, for example those
obtained as a result of lab analysis, are used as inputs to the neural
network for automatic adaptation of its prediction in response to
changes in process.

• The NN function block is embedded in the process control system to simplify implementation and improve reliability.
• An intuitive and user-friendly GUI minimizes the engineering and implementation effort while maintaining the underlying NN technology.
• A two-stage pre-processing algorithm for identification of process
input delays.
• Automatic determination of the number of hidden nodes.

References
1. Mehta, A., Ganesamoorthi, S., and Wojsznis, W., "Feedforward Neural
Networks for Process Identification and Prediction," in Proceedings of ISA
Expo/2001, September, 2001, Houston, TX.
2. Tzovla, V. and Mehta, A., "Automated Approach to Development and Online
Operation of Intelligent Sensors," in Proceedings of ISA Expo/2001, September,
2001, Houston, TX.
3. Ganesamoorthi, S., Colclazier, J., and Tzovla, V., "Automatic Creation of
Neural Nets for Use in Process Control Applications," in Proceedings of ISA
Expo/2000, 21-24 August 2000, New Orleans, LA.
4. Qin, S.J., "Neural Networks for Intelligent Sensors and Control - Practical
Issues and Some Solutions," in Progress in Neural Networks: Neural Networks for
Control, edited by D. Elliott, New York: Academic Press, 1995.
5. Morrison, S. and Qin, J., "Neural Networks for Process Prediction,"
Proceedings of the 50th Annual ISA conference, New Orleans, 1994, pp. 443-450.
6. Fisher Rosemount Systems, Installing and Using the Intelligent Sensor Toolkit,
User Manual for the ISTK on PROVOX Instrumentation, 1997.
7. Masters, T., Practical Neural Network Recipes in C++, Academic Press, London,
UK, 1993.
8. Hornik, K., Stinchcombe, M., and White, H., "Multilayer Feedforward
Networks Are Universal Approximators," Neural Networks, 2:5, 1989, pp. 359-366.
9. Cybenko, G., "Approximation by Superpositions of a Sigmoidal Function,"
Mathematical Control, Signals, and Systems, vol.2, 1989, pp. 303-314.
10. Press, W. H., Flannery, B., Teukolsky, B., and Vetterling, W., Numerical Recipes
in C, Cambridge University Press, New York, 1988.
11. Wold, S., Hellberg, S., Lundstedt, T., Sjöström, M., and Wold, H., "PLS
Modeling with Latent Variables in Two or More Dimensions," Proceedings,
Frankfurt PLS Meeting, September, 1987.
12. Rumelhart, D. and McClelland, J., Parallel Distributed Processing, MIT Press,
Cambridge, MA, 1986.
13. Geladi, P. and Kowalski, B., "Partial Least Squares Regression: A Tutorial,"
Analytica Chimica Acta, 185, 1986, pp. 1-17.

14. Polak, E., Computational Methods in Optimization, Academic Press, New York,
1971.
15. Brent, R., Algorithms for Minimization without Derivatives, Prentice-Hall,
Englewood Cliffs, NJ, 1973.
16. Blevins, T., Wojsznis, W., Tzovla, V., and Thiele, D., "Integrated Advanced
Control Blocks in Process Control Systems," US Patent Application, Fisher
File No: 59-11211.
17. Blevins, T., Tzovla, V., Wojsznis, W., Ganesamoorthi, S. and Mehta, A.,
"Adaptive Process Prediction," US Patent Application, Fisher File No: 59-
11243.
9
Model Predictive Control Practice
Overview
Model predictive control has been proven by a group of companies to pro-
vide benefits, in the right applications, that are greater than those achieved
from the improvement of basic control systems [9.17]. The greatest bene-
fits are realized in applications with dead-time dominance, interactions,
constraints, and the need for some optimization. The constraints define
the operating limits of processes or equipment. Optimization is often as
simple as the maximization or minimization of a flow. The advantage of
model predictive control lies in its knowledge of the effects of past actions
of manipulated and disturbance variables on the future profile of con-
trolled and constraint variables.

For systems with large dead times, interactions, and multiple constraints,
the ability to provide the necessary patience, anticipation, and forecast of
the future approach to targets and limits is essential for moving an opera-
tion closer to its optimum. "A good control engineer can draw straight
lines; a great one can move the lines." Model predictive control is a tool
that can make a good control engineer a great one.

Many good review papers [9.1], [9.2], [9.3] and reports [9.4], [9.5] have
been published on model predictive control (MPC). In contrast, there are
only a few books dedicated to the subject [9.11], [9.12]. The literature pub-
lished to date focuses on the theory or on an overview of the application
and benefits of MPC. Information on how to design, build, commission,
and tune an MPC controller has been lacking.


This chapter is structured to fill that gap and to benefit the average control engineer who wants to refresh his knowledge of MPC, or who has not yet had a chance to learn about MPC and wants to understand its basics. It also opens the door for potential users of new, small, easy-to-use MPC products designed to be applied by a broad spectrum of process and process control engineers [9.8], [9.10], [9.18].

The main mechanism of any type of control is feedback. Model predictive control (MPC) has a specific type of feedback that makes it distinctive. As
opposed to a traditional control loop where the controller uses as its input
a difference (error) between the set point and the recent values of the mea-
surement, the MPC controller applies as its input the difference between
the future trajectory of the set point (whether a constant or known func-
tion of time) and the predicted trajectory of the controlled variable. The
difference is expressed not by a single value as in a traditional feedback
loop, but as a vector of error values from target for a time period from the
present to some set time in the future, usually defined to cover the settling
time of the process.

Figure 9-1. Feedback of Future Trajectories for Model Predictive Control

Figure 9-1 shows an MPC controller for a process with two inputs and one
output, in a form that allows one to see the analogy with a typical feed-
back control loop. The process has a manipulated variable (MV) and a dis-
turbance variable (DV) on the input and a controlled variable (CV) on the
output. A simple MPC controller used in this configuration has three basic
components:

1. A process model that predicts the process output for 120 or more
scans ahead. (For consistency, we use a prediction horizon of 120
scans throughout this discussion.)
2. A future trajectory of the set point for the same number of scans as
the trajectory of the predicted process output.

3. A control algorithm for computing a control action based on the error vector as the difference between the future trajectories of the set point and predicted process output.

The controller output is a manipulated variable (MV) applied to the inputs
of the process and the process model, and is a part of every MPC control-
ler. A measured load upset to the process input is also applied as a distur-
bance variable (DV) to the model input. The process model computes a
predicted trajectory of the controlled variable (CV) that is the process out-
put. After correction of this trajectory for any mismatch between the pre-
dicted value and an actual measured value of the controlled variable, the
predicted trajectory is subtracted from the future trajectory of the set point
to form an error vector (Figure 9-2).

Figure 9-2. Graphical Illustration of the Operation of an MPC controller

An MPC matrix is developed from the matrix of the dynamic model of the
future trajectory of the process formed from step responses. A specific
gain is used for every element of the error vector. For a simple process
with just one MV and CV, the MPC matrix has 120 gain coefficients. In the
MPC algorithm, the predicted errors are multiplied by the appropriate
gains and all the terms are summed together to compute an increment of
the controller output. The prediction is adjusted every scan by an actual
measurement.
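
For the single MV/CV case described above, the move computation reduces to a dot product, as in this illustrative sketch; the gain values would come from the controller generation step, which is not shown.

    import numpy as np

    def mpc_increment(gains, sp_traj, cv_pred):
        # Error vector over the prediction horizon (here 120 scans): future
        # set-point trajectory minus the corrected CV prediction
        e = np.asarray(sp_traj) - np.asarray(cv_pred)
        # Each predicted error is multiplied by its gain and summed; only
        # this first move is applied, and everything is redone next scan
        return float(np.dot(gains, e))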

The major differences between the operation of an MPC controller and a classical PID feedback controller are:

• A predicted error vector is applied to the MPC controller algorithm instead of the scalar values of recent errors that are used in a classical PID feedback controller.
• The error vector is computed as the corrected model prediction
subtracted from the future set point values instead of the
measurement subtracted from the set point value for the current
and previous scans as in a classical PID feedback controller.
• The increments in the controller output required to bring the
process output trajectory as close as possible to the set point
trajectory are limited and spread over several moves into the future.
Only the first move is implemented, however, and the computation
procedure is corrected and repeated for the next scan.
• Disturbances with the proper dynamics based on identified models
from process step responses are included in the prediction of the
process output. In comparison, the feedforward signal in a basic
control system provides a single current value. Some simple
dynamic compensation is applied to this value but the lead and lag
times and time delays are often adjusted online by trial and error
rather than being computed. Since a simple feedforward signal
added to a PID controller output has no knowledge of where the
process is headed in the future, it may be taking the wrong action.
Some special logic is added to anticipate this problem by looking at
the slope of the controlled variable, but the correction is crude and
incomplete.

The operation of an MPC controller relies on a good process model. If the model is accurate, the MPC controller can demonstrate superior
performance for processes with long dead times, inverse responses, and
higher-order dynamics that are difficult to control with a classical PID
feedback controller. For multivariable processes, the MPC controller
output takes into account process interactions. In addition, the MPC
controller handles constraints on the process output and the manipulated
variables. Finally, MPC controllers are set up to integrate and manage
optimization. All together, these capabilities make MPC the right choice
for incorporation of process knowledge and the solution of complex
control and optimization problems on the unit-operation control level.

Opportunity Assessment
An MPC controller can simultaneously adjust multiple manipulated vari-
ables to prevent projected process excursions that would be too close to
constraints in a future time horizon. The MPC controller can forecast the
trajectory of a constraint since it has a process model for each constraint-
and-manipulated variable pair. This multivariable constraint-prediction
capability is the source of most of the advantage of an MPC controller over
a PID override controller.

While an increase in production rate or process efficiency (yield) is often sought and realized by the use of MPC, side benefits are also achieved that
are not as well recognized or anticipated. Constraint control will not only
push capacity or maximize efficiency but will also back off the production
rate or sacrifice efficiency when absolutely necessary. Constraint control
can correct for the projected effect of costly excursions caused by unusual
upsets or failures that cannot be handled by conservative set points alone.
The questions to ask about constraints (process or equipment limits) are:

1. Are there limits to operating values that are important for product
quality, process efficiency, or for environmental, personnel, and
property protection?
2. Can these limits be measured online, analyzed in the lab, or
calculated?
3. Has there been down time attributable to violations of these limits?
This can show up as an increase in the maintenance cost or number
of failures of equipment, a decrease in the run time between
catalyst replacement or regeneration, a decrease in the run time
between clean-outs or defrosts from a faster rate of fouling or
coating, and trips from interlocks for personnel and property
protection.
4. Has product been downgraded, recycled, returned, or discarded as
the result of excursions beyond these limits?
5. Would operation closer to a limit significantly decrease utility use?
6. Would operation closer to a limit significantly decrease raw
material use?
7. Would operation closer to a limit significantly increase production
rate?
8. Have there been any environmental violations or near misses?
9. Does the operator pick set points to keep operating points away
from limits?
10. Is there a batch operation with a feed rate that depends upon a
process variable, where the batch time could be reduced by an
increase in a feed rate by operating closer to process or equipment
limits?

If the answers to questions (1) and (2) are Yes, there is definitely a potential
application for model predictive control to optimize a process and assist,
but not replace, a safety interlock system. MPC can improve the efficiency
and capacity of a plant besides reducing the number of interruptions of a
batch and trips by an interlock system that is the last line of defense. If the
limit (constraint) is analyzed in the lab or calculated, a dynamic linear
estimator or virtual sensor may be developed to put the constraint online
as part of MPC. Standard operating procedures, equipment design
specifications, and interlock settings should be checked, and the limit settings for the constraint and manipulated variables reviewed with
process technology and operations.

If the answers to any of questions (3) to (8) are Yes, the benefits are signifi-
cant. If the answer to (9) is Yes, the benefits are usually larger than esti-
mated.

If the answer to (10) is Yes, there is an opportunity to significantly reduce batch time by an MPC controller that optimizes a feed rate. The increase in
capacity for these batch processes has proven to be an order of magnitude
greater than for continuous processes.

The next-largest opportunity for MPC exists when there are interactions.
A good operator and engineer can reason through the relationships for
two manipulated and controlled variables. However, even the most atten-
tive person cannot do this for every scan and cannot consistently or accu-
rately forecast future effects.

The automatic decoupling of PID controllers to reduce interaction is rarely fully achieved. The implementation is usually based on some steady-state
gains or rough approximations and consists of the simple addition of a
portion of one controller output to another. It assumes that the time delay
and time lag of each MV and CV pair are similar so that dynamics need
not be included in the decoupler design. In practice, PID controllers are
usually detuned rather than decoupled. This is accomplished by an
increase in the closed-loop time constant of PID controllers ranked lower
in importance. However, the load rejection performance of the detuned
PID controllers is sacrificed.

In addition, the change in the output of a PID algorithm is based on just the current and past value of a single process variable. The only predictive
capability of a conventional PID is derived from rate action, which uses
the slope of the change in the process variable or error to provide some
anticipatory action to minimize overshoot. The addition of a dead-time
compensator can extend this projection to one dead time into the future
but with a considerable sacrifice in robustness associated with an extreme
sensitivity to an overestimate of the actual dead time [9.19].

Model predictive control can continuously and simultaneously handle many variables and predict trajectories of the combined effects by linear
superposition. The time horizon of the prediction extends to the resting
value (steady state) of each controlled and constraint variable for self-reg-
ulating processes. Also, the update of the trajectories from actual measurements accounts for model mismatch and adds robustness. The questions
to ask about interactions are:

11. Are there more than two controlled or constraint variables affected
by more than one manipulated variable?
12. Are these controlled or constraint variables important?
13. Do the controlled variables have the same order of magnitude lag
and delay?
14. Can the PID controllers effectively use rate action?

If the answer to questions (11) through (13) is Yes, the application is a candidate for an MPC controller. The outputs of two PID controllers can be somewhat effectively decoupled based on static gains, or selected as part of an override control system for a single constraint. The first line of entries in Table 9-1 compares the percentage of time that none of the constraints is held by operators and control systems. The second line compares the amount of
time that one or more constraints are held (the first and second lines must
total 100%). As the number of active constraints increases, the
performance advantage of MPC becomes more dramatic. The table shows
that the performance of an override control system falls off rapidly if there
is more than one active constraint and an operator is not able to deal with
three or more simultaneous constraints. This doesn't even take into
account the predictive advantage of MPC.

If the answer to question (14) is No, it is a sure sign that there is excessive
interaction, noise, or time delay, which is the next major reason you might
consider MPC.

Table 9-1. How Effectively Can Operators and Control Systems Hold Coincident Constraints?

Number of Coincident   % Time Operator   % Time Override   % Time MPC
Constraints            Can Hold          Can Hold          Can Hold
None                   70%               10%               2%
One +                  30%               90%               98%
Two +                  20%               45%               90%
Three +                0%                30%               80%

If a control loop has a time delay larger than the time lag, it is called dead-time dominant and is a candidate for MPC. The abrupt delayed response is difficult for PID controllers to handle effectively. Gain and rate action have to be minimized, and mostly reset action is used in the tuning of the

PID controller. Since an MPC controller provides a trajectory of the future set point and a prediction of the process output based on past moves and
values of the disturbances, it has the knowledge and patience to offer sig­
nificant performance over a PID controller for set point changes and mea­
sured disturbances.

With chromatographs, large sample-transportation delays and cycle times plus erratic signals from spurious results make them prime candidates not
only for an MPC controller but also for a dynamic estimator or virtual sen­
sor to provide a faster, smoother, and more reliable measurement for con­
trol. Consider the following questions:

15. Is the time delay more than 1/4 of the total time to steady state, or the time to reach 98% of the change (T98)?
16. Is a chromatograph used for a controlled or constraint variable?

If the answer to question (15) or (16) is Yes, the total time delay is large
enough for the control loop to benefit from MPC. If a chromatograph is
used, it is a sure sign that MPC is needed because slow noise and an
abrupt response preclude the use of derivative action. Since a PI instead of
a PID controller must be used on chromatograph loops, there is a greater
opportunity for better control from MPC because a PI controller has no
anticipatory action, which is particularly important for a slow composition
response.

For loops with a time lag (τ) much larger than the time delay (τd), theoretically the PID controller could be tuned for superior performance by over-
driving its controller output through high gain and rate action. However,
analog-to-digital converter noise from large temperature scale ranges and
the irregular or erratic response of analyzers preclude the use of large PID
controller gain and rate settings [9.18] [9.19]. Also, the time required to
tune these controllers tests the patience of most mortals to the point where
mistakes are made. Consequently, in practice an MPC controller can do as
well as or better than a PID controller for these loops dominated by a large
time lag.

An application where an MPC controller might not offer an advantage is a single, relatively independent PID controller with either a high degree of
rate action, large frequent unmeasured load upsets, or a control valve with
excessive stick and slip or insufficient sensitivity. Control valves should be
fixed or replaced to provide sufficient stroke resolution and sized to insure
a valve gain (slope of the installed characteristic) that is not too small or
too large in the operating range. In some pH loops and column tempera­
ture loops, the process gain is so nonlinear and large, and there are so
many unknown disturbances, that rate action is essential to determine the
current direction of the process variable, and knowledge of the past moves
of the manipulated variable is less important. The best solution might still
end up as an MPC controller with signal characterization for linearization
and some further investigation and mitigation of the sources of the upsets.

PID is also effective when tight level control is important and there is no
inverse response or excessive time delay. A PID controller might provide
the best control of a reactor's residence time and a distillation column
overhead receiver's level. However, an MPC controller would provide the
most effective level control of a surge volume because the predicted
approach to level limits is employed to smooth the manipulated flow. A
proportional-only controller reacts only to current deviations and noise
and does not know whether the current situation is temporary. The MPC
sees the effect of the net imbalance in flows over the time horizon and
schedules several gradual moves.

Often overlooked is the ability of an MPC controller to provide an accurate dynamic model of a disturbance variable for feedforward control. Most
feedforward signals have just a gain parameter that is the suspected
change in MV for a change in DV. In some cases material and energy bal-
ances are used to compute this feedforward gain. Often, not enough is
known about the parameters in the equations to provide an accurate esti-
mate and the feedforward gain ends up being tuned online. Even less is
normally known about the feedforward dynamics. Consequently, no
dynamic compensation is used, or a simple lead-lag function is adjusted to
slow down or speed up the feedforward signal. The MPC controller pro-
vides the proper dynamics by identifying and using a model of the effect
of the disturbance on each controlled and constraint variable. It is also able
to predict the future effect of the upsets.

If the answer is Yes to question (17) and Yes to either question (18) or (19),
the predictive capability of MPC should be used to provide a feedforward
correction of the size and timing to minimize present and future errors
and violations.

17. Can you measure or calculate the upsets?


18. Do these upsets affect more than one controlled or constraint
variable?
19. Are equations or parameters not known completely enough to
calculate the feedforward gain and timing requirements?

A less common but still important situation that can benefit from MPC is
the occurrence of inverse response. If the initial response is the opposite of
the final response, it is called inverse response. It is often associated with
the level control of boiling liquids when a cold stream or steam is manipulated, such as boiler drum level where feed water flow is
manipulated, or column sump or evaporator level when steam flow is
manipulated. Other applications are furnace temperature control where
cross limits cause the air to lead the fuel on an increase in firing demand,
and fermentor cell concentration control by manipulation of nutrient feed
where an increase in nutrients first decreases cell concentration due to
dilution. The last question is:

20. Are there any loops where the initial response is opposite to the
final response?

Examples
In the mid-1990's it was reported that there were more than 2,200 MPC
applications in industry, mostly in refining and petrochemicals. A current
estimate could be triple that number, with the number of new MPC appli-
cations growing rapidly. MPC applications are expanding in such areas as
chemicals, pulp and paper, gas processing, food processing, and metal-
lurgy. The applications range in size from simple controllers with only a
few manipulated variables, controlled variables and maybe 1 or 2 con-
straints, to very large-scale applications encompassing up to 100 manipu-
lated variables and sometimes up to 200 output variables. These larger-
scale controllers are designed to meet the control objectives of an entire
processing unit as well as to provide simple optimization.

As the cost of the commercial MPC tool sets has come down and the
implementation knowledge has broadened, the application of MPC tech-
nology is more easily justified in smaller industrial applications. Although
the large-scale refining applications still represent the lion's share of
today's applications, it is the smaller applications and the untapped indus-
trial sectors of mining, metals, food and pulp and paper that represent the
areas of new growth for MPC technology.

Distillation Tower Control


In this section the functionality and advantages of an MPC application are
shown on a generic process of 2 distillation towers in series. This example
is selected to isolate the specific features of MPC technology in order that
we may clearly understand the advantages and benefits over traditional
methods of control. In most cases, the entire MPC application would be
built as a single project. But for demonstration purposes this application is
developed in sections. Each addition to the MPC application example is
designed to demonstrate an additional feature of the technology. We
should note that although many very large MPC controllers exist in indus-
try today, applications can start small and grow in scope, with new functionality added as required to push our manufacturing processes toward the goal of optimizing performance.

Figure 9-3a. Distillation Tower A

Consider the 2 distillation towers, tower A and tower B, shown in Figures 9-3a and 9-3b. The first tower separates the feed into a heavy stream
(referred to as A bottoms) and a light stream (A overheads). Steam pro-
vides the reboiler energy and the overhead reflux is cooled with an air fan
exchanger. The primary products are tower A overheads and tower B
overheads, with tower B bottoms being a waste byproduct. The product
compositions are important specifications and we must control the impu-
rities in these streams. Thus on-line analyzers exist and are located on A
overheads (AT 1-2), to measure the heavy key component or product
impurity, and on B overheads (AT 2-2), to measure the light key compo-
nent in A bottoms. Locating the analyzer on the overhead stream of tower
B enables both the A bottoms composition and B overheads composition
to be measured with a single analyzer.

This astute specification of analyzer location may reduce capital expenditure but will present challenges for control with the addition of significant
dead time to the measurement of A bottoms composition. Often, on-line
analyzers do not provide a continuous signal at the frequency required for

Figure 9-3b. Distillation Tower B

control, and they must therefore be supported with properties estimators as described in Chapter 8. But for simplicity, in this MPC applica-
tion, we will assume that these analyzers generate a continuous signal
suitable as a direct control signal.

The main manipulated variables for composition control are the steam rate
to the reboiler and the reflux flow. Currently the reflux rate is in closed-
loop control with the overhead temperature. The tower pressure is con-
trolled by manipulation of the overhead vapor rate of the non-condens-
ables. Plant economics dictate that the feed to tower A should be
maximized. The primary operating constraints are tower A flooding, indi-
cated by a maximum tower differential pressure, and tower A overhead
cooling, indicated by a maximum reflux flow or maximum reflux tempera-
ture. Also, there are benefits associated with minimizing tower A pressure
in order to optimize energy efficiency by reducing steam utilization in the
reboiler.

Our challenge as control engineers is to design a control system capable of the following for tower A:

• Product composition control



• Feed maximization
• Energy minimization.

Before we proceed with the design of an MPC system for our distillation
problem, it is important to reflect on exactly what benefits MPC offers to
insure that the MPC technology is being properly utilized. The primary
benefits of MPC technology are:

• Predictive capabilities, which enable the MPC controller to model the process across the future settling time of the process. The
predictive capability also enables disturbance variables, such as
feed rate and composition, to be handled easily with feedforward
control.
• Multivariable capabilities, which allow the interactions to be
modeled and decoupled such that all the inputs can be used for
control in an optimal fashion.
• Constraint handling, which insures that important operating,
product and environmental limitations are observed while trying to
meet the control targets and objectives.
• Optimizing capabilities, which identify opportunities where we
may exploit available degrees of freedom in the controller in order
to push the process to a more profitable operating region and hold
it against this active constraint set.

The most challenging control objective in our problem is to control the amount of light key component exiting in A bottoms, as measured in the B overheads product stream. The principal manipulated variable for this
task is tower A reboiler rate (FC 1-3). There is a long delay between the
time a move is made on the reboiler and the resulting change is seen by the
analyzer (AT 2-2). This dead time is a problem for conventional control,
but is easily handled with the predictive capabilities of MPC. A predictive
model defining how the tower B overhead analyzer changes, subject to a
step change in reboiler rate, can eliminate the challenges posed by this
dead time-dominant system.

It is now possible to build the first phase of our MPC application as a sin-
gle input-single output MPC controller with predictive capabilities, as
shown in Figure 9-4. It is interesting to note that with MPC control, the
reboiler rate will come to its final resting value long before the change in
the product composition is ever seen by the analyzer, which clearly shows
the advantage of the predictive capability. Any error in the predictive
model is captured by the analyzer and corrected by feedback.

Figure 9-4. Distillation Tower MPC, Version 1

The single-loop MPC controller designed above handles tower A bottoms composition control well, but the closed-loop changes of the reboiler will
have an impact on tower A overheads composition control. We must now
look at incorporating tower A overhead composition control into our MPC
and build a truly multivariable application.

We must first examine the configuration of the existing regulatory control scheme, specifically the overhead temperature controller in cascade with
the reflux. Should the existing temperature control loop be broken, with
the reflux becoming the new manipulated variable for the MPC controller?
Should the temperature controller remain intact with the overhead tem-
perature set point becoming the new manipulated variable for the MPC?

In the past, the regulatory loops were most often broken, with the MPC being designed to manipulate the slave elements. However, there are often
ture controller, can provide fast disturbance rejection in between MPC
controller executions, adding to the stability of the application. Careful
thought should always be given to the question of removing existing regu-
latory controls from the MPC. In our illustrative example, we will retain
the overhead temperature control loop (TC 1-2) and select the top temper-
ature set point as the second manipulated variable in our MPC controller.
Version 2 of our MPC controller is shown in Figure 9-5.

Figure 9-5. Distillation Tower MPC, Version 2

The MPC controller now has 2 manipulated variables, reboiler flow and
top temperature, and 2 controlled variables, tower A overhead heavy key
and tower A bottoms light key (as measured in tower B overheads product
stream). This 2 input-2 output system will require 4 dynamic models.
These models, as shown in Figure 9-5, define the dynamic response over
time of each controlled variable subject to a unit change in each manipu-
lated variable. Imbedding these models in the controller insures that
responses in time are decoupled and that the controller can use both
manipulated variables to better achieve its control objectives.

How the controller actually uses the available manipulated variables is a function of a move-suppression weight, or tuning weight, assigned to each
manipulated variable in order to penalize movement. The larger the tun-
ing weight on a given manipulated variable, the less the MPC controller
will want to utilize that variable to achieve its control objectives. The mul-
tivariable capabilities of the MPC controller can now handle typical prob-
lems of interacting or fighting single-loop controllers and deliver better
performance. The application demonstrates the capabilities and benefits of
the predictive and multivariable components of the MPC technology, with
both of these features greatly enhancing the stability and robustness of the
control applications.

The third area of benefit is in the technology's ability to handle constraints. This constraint-handling capability goes hand in hand with optimizing
plant performance. The benefits or economic return of the MPC applica-
tion will usually be dictated by how well our MPC controller can operate
against some active constraint set, as we continuously push our control


targets in the direction of higher plant profitability.

A few simple operating variables are introduced as constraints into our existing MPC application. In controlling the 2 product quality specifica-
tions, we notice that we may violate a key temperature located in the
lower section of tower A (TT 1-5), as well as a maximum allowable reflux
flow (FC 1-2) and reflux temperature (TT 1-4). The dynamic models relat-
ing how these constraint variables respond to a step change in the reboiler
rate and the top temperature set point are shown in version 3 of our con-
trol matrix in Figure 9-6. The amount the MPC controller gives up on the
control targets relative to the constraints is a function of the degree of
importance assigned to the variables via the priority factor, or tuning
weight, assigned to each process output.

Figure 9-6. Distillation Tower MPC, Version 3

The MPC application for our distillation tower process is now a true con-
strained multivariable predictive controller. The application is further
improved by incorporating feedforward action into the MPC controller.
Fluctuations in the reboiler steam header pressure (FT 1-3) will affect the
reboiler capacity and introduce an unmeasured disturbance into the
tower. This disturbance will affect tower A bottoms composition which
can only be handled through feedback control once the disturbance is
detected in the analyzer located on tower B overhead product stream. The
steam header pressure is introduced into the MPC controller as a feedfor-
ward variable, as shown in version 4 of our MPC controller in Figure 9-7.


The dynamic model relating how the output variable changes subject to a
step change in the steam pressure enables the MPC controller to take
immediate action and essentially nullify the effect of the steam pressure
fluctuations.

Figure 9-7. Distillation Tower MPC, Version 4

The last feature of our sample problem is to look at incorporating optimization capability into our controller. Although there are significant bene-
fits in reducing variability in control parameters, the real benefits are often
achieved by taking advantage of this variability reduction to push the tar-
gets to a more profitable operating region. "A good control engineer can
make straight lines; a great one can move the lines." There are many meth-
ods by which we can push our targets, and this defines the optimizing
mode of the MPC controller.

The simplest form of optimization within an MPC controller is called a "ramper," or "pusher." It relies on the fact that the MPC controller has
degrees of freedom available when it has more manipulated variables than
fixed control targets. When degrees of freedom are available the controller
is referred to as a "fat," or "underspecified" system, and additional speci-
fications can be imposed on the MPC. These additional specifications are
the optimization functions designed to push key economic variables in the
direction of higher profitability.

One of the objectives of our MPC controller is to maximize the feed rate.
Past experience with our distillation tower has shown that maximum feed
rate is most often limited by either tower A overhead cooling capacity or
tower A flooding, as measured by the differential pressure (DP 1-1). Thus,
the feed rate and tower delta P are added to our MPC in an effort to maxi-
mize feed rate. The proposed MPC matrix now has 3 manipulated vari-
ables, 2 control targets and 4 constraints. Assuming that none of the
constraints are active, we now have an underspecified system with more
control knobs (3) than objectives (2), as shown in Figure 9-8.

Figure 9-8. Distillation Tower MPC, Version 5

To meet the objective of maximizing feed rate, a ramper is implemented on the feed rate such that this variable is maximized until one of the con-
straints becomes active. The active constraint then becomes the third con-
trol objective and the system is now fully specified with no degrees of
freedom available. This is achieved by inserting a model with a unity gain
between the feed rate and feed-rate target, allowing the feed-rate target to
be ramped up from its current value. A ramper optimizer structure within
an MPC controller is very simple to understand and implement, and is
shown in Figure 9-9.
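
A minimal sketch of the ramper idea, with illustrative names; an actual MPC ramper acts through the unity-gain target model described above rather than as an external function.

    def ramp_feed_target(target, step, cvs, hi_limits):
        # Push the feed-rate target up each execution until any constraint
        # (e.g., tower delta-P or reflux flow) reaches its limit; then hold
        at_limit = any(v >= hi for v, hi in zip(cvs, hi_limits))
        return target if at_limit else target + step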

This simplicity is an important factor in addressing the complexities of the MPC technology with respect to operator understanding. Keeping MPC
strategies small and no more complex than necessary will greatly enhance
operator acceptance of the control schemes and translate directly into higher online service factors and, ultimately, economic benefits.

Figure 9-9. Distillation Tower MPC, Version 6

There are cases where the simple ramper or pusher optimizer strategy is
not sufficient to properly deal with the optimization demands of the MPC
controller. In these cases, a linear program (LP) and/or a quadratic pro-
gram (QP) sits above the MPC controller to provide an optimization layer.
This optimization structure is best suited to MPC control problems with
many degrees of freedom or with many interacting constraints. The LP/
QP formulation allows each variable to be assigned an economic cost such
that the optimal steady-state position of the process is determined and the
MPC controller is used to drive the process to this economic optimum,
while observing defined targets and constraints.

If our MPC controller on the distillation tower is modified to include the tower pressure, energy optimization and feed maximization are both pos-
sible, but will in most cases be opposing driving forces for the controller.
Also, if the overhead product composition is defined as a range and not a
set target, our MPC controller now has 4 manipulated variables, 1 control
variable and 5 constraint variables. This configuration has 3 degrees of
freedom and a strong interacting constraint set with relationships to most
manipulated variables.

The optimization of this system would be difficult to implement with simple rampers or pushers, but a linear-program or quadratic-program (LP/QP) approach is formulated by assigning costs to the feed, steam, and
product streams. It should be noted that in many MPC LP/QP implemen-
tations, the cost factors of the various streams are difficult to obtain since
we are often attempting to cost intermediate production streams. Thus,
these cost factors are often arbitrarily set to get the controller to behave
according to some production plan as defined by plant operations.

The distillation MPC problem described here is considered to be a small application compared with other industrial applications. It was designed
to clearly demonstrate the specific benefits of MPC and how the MPC
technology truly handles these control challenges. These concepts can then
be easily extended to MPC controllers much greater in size and scope that
are often seen in industry today.

There are many good examples of industrial MPC applications. The remaining examples explore a rotary evaporator, a dryer, a lime kiln, a
bleach plant, and a batch reactor. These examples are considered small in
scope but nevertheless demonstrate the benefits of the MPC approach. In
the past such applications were not targeted by MPC because the develop-
ment of a small application could not be justified economically. Introduc-
tion of a new generation of MPC tools, with a focus on ease of use and
integration into the control system, has made MPC available to an average
control or process engineer [9.18].

Evaporator Control
Evaporators are used in the food, chemical and pharmaceutical industries
for efficient concentration of liquid. Most frequently used are falling film
evaporators, which are particularly suitable for heat-sensitive products.
The heating of the evaporator is typically done by live steam or by thermal
vapor recompression. The operation of a single-effect evaporator is
depicted in Figure 9-10.

Liquid to be concentrated is preheated to the boiling temperature while flowing downwards. The evaporator includes product density as a con-
trolled variable and temperature and pressure as constraint variables. The
conventional solution was based on three PID controllers and posed prob-
lems of interactions, tuning and decoupling. A simple MPC controller
replaced the PID strategy. The MPC controller was easier to develop and
operate. The application improved product uniformity and reduced
energy consumption.

Figure 9-10. Single-effect Evaporator: A - Product, B - Vapor, C - Concentrate, D - Heating steam, E - Condensate

Dryer Control
Another simple MPC application is spray dryer control. Spray drying is
the most widely used industrial process involving particle formation and
drying. It is well suited for the continuous production of dry solids in
either powder or granulated form from liquid feed stocks, such as solu-
tions, emulsions and pumpable suspensions. Spray drying is an ideal pro-
cess, where the end product must conform to precise quality standards
regarding particle-size distribution, residual moisture content, bulk den-
sity, and particle shape. It is used in the chemical, food, pharmaceuticals
and ceramics industries. A typical spray dryer consists of feed pumps,
atomizer, air heater, air dispenser, drying chamber, and systems for
exhaust air cleaning, as shown in Figure 9-11.

Final product moisture is determined by the air flow, heater outlet temper-
ature, feed rate and pressure, viscosity of feed, and nozzles in service. The
MPC controller configuration for the spray dryer is shown in Figure 9-12.

The MPC controller manipulates airflow to control product moisture and manipulates air temperature to minimize fuel input. Slurry flow is used as
a feedforward, or measured disturbance, input.

Rotary Kiln Control


An example of a medium-size MPC application is rotary kiln control.
Rotary kilns, as illustrated in Figure 9-13, are used widely in the thermal
processing of solids for cement production, calcination and reduction of

Figure 9-11. A Typical Spray Dryer

Figure 9-12. MPC Configuration for Dryer

metal ores. They are essentially refractory-lined steel tubes, rotated slowly,
mostly at 0.5 to 2 revolutions per minute. The kiln is fired by fuel and air,
usually injected at the lower end. The flame travels up the kiln, counter to
the solids, heating them directly by radiation and indirectly through the
refractory lining.

The feed consists of a source rich in calcium such as limestone. For the
cement process, the feed also contains silica. The processing occurs in
three zones:

1. Drying zone, where water evaporates from the raw materials.
2. Calcining zone, with temperature up to 2200°F. The intense heat causes the reaction that produces lime: CaCO3 → CaO + CO2.
3. Burning zone, where calcination, or, in cement processing, clinker formation, is completed.

Figure 9-13. Rotary Kiln Diagram

The kiln is a highly interactive process that is difficult to control, primarily because of the process dead time and very long process lags.
Thus, most kilns will run open loop, giving up energy efficiency to guar-
antee product quality and stability. The MPC controller is well suited to
address these control challenges. The MPC configuration and matrix for
lime kiln control are shown in Figures 9-14 and 9-15.

Figure 9-14. MPC Configuration for Lime Kiln Control

This MPC application example demonstrates benefits in the following areas:

• Throughput/capacity increase
• Decrease in energy consumption
• Decrease in environmental emissions
• Improved product quality through a uniform residual alkali level
• Increased lime availability.

Figure 9-15. MPC Step-response Matrix for Lime Kiln Control

Variable Dead-Time Bleach Plant Control


In most implementations of MPC, it is assumed that the process is charac-
terized as linear and time invariant. If these assumptions are true, then the
control generated from a process model identified during normal plant
operation will perform well. However, if the process dynamics change
from those captured by the model, such changes may degrade the control
that is based on this model. In some cases the process feed rate may
change the delay time required for a process output to reflect a change in a
process input. Such variations in the process response can often be traced
to the effect of process feed rate on the transport delay through the pro-
cess.

For example, the time required for pulp stock to move through a tower in
the bleaching process is strictly a function of the feed rate, assuming plug
flow through the tower. Thus, the time required for the brightness sensor
after the tower to indicate changes in chemical addition before the tower
will vary with the stock flow rate, as illustrated in Figure 9-16.

For this example, the brightness control using MPC might be generated
using the step response model identified at a stock flow rate of 500 GPM.
The control would be degraded at flow rates other than 500 GPM because
the model used to generate the control matches the process only when the
flow rate is 500 GPM.

Figure 9-16. Impact of Throughput on Process Response in a Bleach Plant

MPC for Processes with Varying Delay


The primary objective of many traditional MPC installations in the process
industry is to maximize throughput of a large unit operation, e.g., a distil-
lation column. Using the ability of MPC to maintain a process at its operat-
ing constraint, it is often possible to achieve a 1- to 3-percent increase in
throughput in such installations. For these operating conditions, the
throughput of the process is expected to vary over a very small range.
Thus, the impact of throughput on the process response is often negligible
during normal operation for a particular product.

An MPC controller generated from a process model identified at normal throughput could provide good control when the feed rate is varied over a
narrow range. However, for startup and upset conditions and product
changeovers, feed rates can change drastically. The instantaneous produc-
tion rate of different unit operations usually doesn't match up and the
surge volumes are often too small. Thus, problems anywhere in the plant
can ripple through as feed-rate changes. There are also many non-tradi-
tional installations, such as brightness control in the pulp and paper
industry, paper machine MD (machine direction) control, H/N ratio in the
ammonia industry, and MD control for webs and sheets, that are charac-
terized by large changes in throughput [9.20]. These types of applications
have large changes in the process response as a function of process
throughput. Thus, to provide best control at all operating conditions, the
MPC control needs to be modified to match the varying delay.

Many MPC products address changes in the process response by providing the ability to switch controllers. To use this technique, a process model is identified at each operating condition. A controller is then generated for each of the process models. As the processing condition changes, the MPC
switches controllers to match the current process condition, as illustrated
in Figure 9-17.

Figure 9-17. Controller Switching to Match Changes in Process Response

Controller switching is a very general approach that allows any nonlinearity that is a function of a measured process input to be addressed. How-
ever, to apply this technique, multiple process models must be identified
to allow control at different operating conditions to be generated. Also, the
introduction of multiple models and control definitions increases the com-
plexity of commissioning and maintaining the MPC.

When the changes in time delay as a function of throughput are the pri-
mary variation in the process response, it is possible to avoid the complex-
ity of control-model switching. By comparing the step responses of such a
process at different throughputs, some insight is gained into how this is
accomplished, as shown in Figure 9-18.

Examining the two step responses, the only difference is that each
response is spread over a different time to steady state; that is, the coeffi-
cients of the step response are the same but the time difference between
the time samples is inversely proportional to the feed rate. Thus, the con-
trol matrix generated for models created at different throughputs would
be the same except for the control execution rate (since each model's time
to steady state is unique). Thus, for this type of process, adjusting the con-
trol execution rate compensates for the variation in dead time with
throughput.

Figure 9-18. Comparison of Step Responses at Different Throughput Values

Execution Period = (block execution period) * (execution modifier)
                 = (Tss/Nc) * (Fn/Fa)
where:
Tss = Time to steady state of the model used to generate the control
Nc = Number of coefficients used in the model
Fn = Throughput of the process when data was collected to generate the model
Fa = Current throughput of the process
In any practical implementation, the calculated execution period must be limited, and restricted to values that are supported by the control system. Also, the current throughput value would need to be heavily filtered to eliminate the effects of process noise.
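As a rough sketch of these two practical points (the function names, limits, and filter factor here are illustrative assumptions, not features of any particular MPC product), the calculation might look like this in Python:

def filtered_throughput(previous, measured, alpha=0.05):
    # heavy first-order filter (small alpha) so that process noise does
    # not chatter the execution-period calculation
    return previous + alpha * (measured - previous)

def execution_period(tss, nc, fn, fa, lo=1.0, hi=600.0):
    # execution period in seconds: (Tss/Nc) * (Fn/Fa), clamped to the
    # range of periods the control system supports (lo/hi are examples)
    period = (tss / nc) * (fn / max(fa, 1e-6))  # guard against zero flow
    return min(max(period, lo), hi)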

This technique assumes that the time constant is inversely proportional to flow to the same degree as the time delay. This is not true for such applications as the temperature control of reactors, since the process time constant is also a function of heat-transfer area, heat-transfer coefficient, reactant volume, and reaction rate. Also, some time constants are relatively fixed, such as the thermal time lags for die bolt heaters that are manipulated to control the cross-directional profile of plastic sheets (webs) [9.20]. For these applications, where the process time delay varies significantly but the time constant remains fixed, technology exists that allows the dead time to be adjusted online, effectively stretching or compressing the prediction horizon in time.

To allow the user to address processes that are characterized by a variable dead time, an execution modifier block is provided with the MPC block. Figure 9-19 shows an example of how this execution modifier block is designed to automatically change the MPC function block execution period as a function of a throughput input.

Figure 9-19. MPC Implementation that Addresses Variable Dead Time

If the MPC technology adopted does not allow model swapping, online
dead-time adjustment, or adjustment of the controller execution rate, and
the model is known to cover a range of dead times, then a more robust
MPC controller can be designed by overestimating the dead time. An
MPC controller in which the modeled dead time is longer than the actual
process dead time will be sluggish in performance, while a controller in
which the modeled dead time is underestimated can produce cycling and
stability problems. Thus, if uncertain of the actual dead time ... "go long!"

Addressing Mixed Dynamics


Some applications are characterized by the process outputs responding in
very different time frames to changes in the process inputs. Since the MPC
must be generated based on the slowest-responding output, the associated
MPC execution period may not be fast enough to address the require-
ments associated with the fast-changing output. In such cases, it is possi-
ble to break the control into two MPC blocks, one addressing the fast-
responding outputs and the other addressing slow-responding outputs.
This can be done if the process consists of two processes in series where the intermediate measurements are accessible, as illustrated in Figure 9-20.

Figure 9-20. Structuring the MPC to Meet Both Fast and Slow Responses Without Compromise

For example, the lime kiln process is characterized by a mixed dynamic response. The hot-end temperature responds to changes in fuel and air flow much more slowly than the cold-end temperature, the kiln O2, and the draft pressure do. Since the O2 and hot-end temperature are highly correlated, it is possible to use the O2 measurement as a controlled parameter of an inner-loop MPC that is manipulated by an outer-loop MPC, as illustrated in Figure 9-21.

Figure 9-21. Structuring the MPC for Kiln Process

In many industrial applications, the very slow dynamics or objectives of the MPC relate more to the optimization of the process than to the control (i.e., moving the targets once straight lines are achieved). In these cases, the slower objectives can be handled by incorporating an LP or QP formulation above the MPC controller to allow the optimal steady-state targets to be set for the controller.


Batch Reactor Control


In many process industries, operating constraints are addressed using tra-
ditional override strategies based on the selection of PID controller out-
puts. Take for example the control of an exothermic batch reaction in
which the addition of catalyst is limited by the cooling capacity of the pro-
cess. Such a process would be instrumented as shown in Figure 9-22.

Figure 9-22. Traditional Approach to Batch Reactor Control

The disadvantage of this traditional approach is that the process is often run at a point below its capacity because the constraints are based on the
current temperature. Thus there is often room for improvement. The
installation of an MPC controller allows operation closer to the limit of the
process and thus reduces batch time. Such reductions allow production to
be increased, as illustrated in Figure 9-23.

By using MPC for the batch reaction control, it is possible to achieve tighter temperature and feed-flow control with minimum overshoot on
initial feed ramp. Using the optimization features of MPC, the catalyst
flow is automatically regulated to maintain the cooler at the maximum
capacity, thus achieving minimum batch time and best product quality.
The MPC configuration for control of the batch process would be struc-
tured as illustrated in Figure 9-24.


Figure 9-23. Potential for Reducing Batch Time Using MPC

Figure 9-24. MPC Configuration for Batch Reactor Control

Application

General Procedure
The general procedure to be followed to develop and install a model pre-
dictive controller is as follows:

1. Develop an understanding of the process.


2. Define control/economic scope and objectives.
3. Develop dynamic estimators or virtual sensors to provide faster and smoother composition measurements.
4. List candidates for manipulated (MV), disturbance (DV),
controlled (CV) and constraint (AV) variables (independent and
dependent variables).
5. Reduce the data compression and increase the update rate of the
data historian as necessary for the candidates.
6. Tune/fix regulatory loops and valves that affect or are driven by
candidates.
7. Install flow loops or secondary loops to avoid direct manipulation
of a valve.
8. Define and document baseline of operating conditions.
9. Define and implement performance indices.
10. Define and document PID baseline of control and economic
performance.
11. Determine bump size and time to steady state for each MV and DV.
12. Design and document bump test plan, if conditions do not allow
automated testing.
13. If possible, leave level loops in automatic and compute their
model.
14. Conduct automated or manual preliminary step (bump) test at
steady state of baseline conditions.
15. Define and configure control strategies to download to the
controller.
16. Do automated testing, exercising each MV. If manual testing is
necessary, design and document a Pseudo Random Binary
Sequence (PRBS) test plan with at least 16 flips, a minimum flip
time that is at least 1/2 the time lag, and a step size larger than 6
times the noise amplitude or resolution limit of the measurement
or valve.
17. Conduct automated or manual PRBS test of each DV at steady state
of baseline conditions.
18. Identify models between independent and dependent variables.
19. Choose simplest/best models.
20. If necessary, refine list of candidates and pick best control interval.
21. Generate the controller from the model.
22. Determine if the process matrix is stiff or ill conditioned.

23. Enter limits, targets, tuning weights, and move suppression.


24. Simulate/test/tune controller offline with unrestricted move size
limits. (Tune controlled variables individually and then bring in
constraints one by one.)
25. Simulate and test the ability of the controller to deal with changes
in the process matrix (model mismatch) and to handle the removal
of controlled or manipulated variables that could be caused by
instrumentation problems or failures.
26. Conduct training sessions to make sure the operator understands
and has confidence in the intent and basic functions of the
controller.
27. Provide displays to enable the operator to turn the controller on
and off and to assess true performance problems.
28. Run controller in manual mode and observe the future prediction.
29. Start controller online with restricted move size and output limits.
30. Relax move size and output limits, tune move suppression for
desired performance and robustness, and set tuning weights
(penalties on error) based on relative importance.
31. Exercise the controller by individually changing each target.
32. Test each controller variable by pushing it to the constraint or
temporarily lowering the constraint limit.
33. Evaluate the performance of the feedforward variables.
34. Verify that simulated responses match actual process responses.
35. Benchmark the performance of MPC control and audit economic
benefits.
36. Make sure ownership is assumed by a local person who is
available to answer the questions of the operator and to make
adjustments for changing process and equipment conditions to
keep the onstream time from deteriorating over time.

Application Detail
A typical application procedure for small and medium-size applications
consists of several steps as outlined in Figure 9-25. The details of the proce-
dure are discussed below.

Figure 9-25. A Typical Procedure for MPC Application

Process Analysis
The Process Analysis phase of the project is the starting point. Its purpose is to insure that the control problem is well defined and fits into the requirements of MPC technology. Also, this stage is designed to insure that the process, underlying controls, IT systems and resources are in a
state of APC readiness. The main tasks addressed in the Process Analysis
phase include:

• Selection of variables and preliminary controller formulation


• Assessment of regulatory control performance
• Definition of anticipated benefits and auditing methods
• Preliminary assessment of process responses
• Development of properties estimators.

The process analysis should deliver a clear formulation of process control objectives and limitations as well as the MPC application's means of
achieving those objectives. In other words, process analysis should result
in a process control configuration in MPC terms. The process inputs and
outputs are grouped into four different categories based on how they are
utilized in the control of a process:

1. Manipulated (MV) — Process input that is adjusted to maintain a process output at set point or that is manipulated to affect the
behavior of the process.
2. Controlled (CV) — Process output that must be maintained at a specific set target or set point.

3. Disturbance (DV) — Process input that affects the behavior of the process and the selected output variables, but is not used to
maintain the process within its specifications. Selection of
disturbance variables is often an iterative process whereby the
requirement for a disturbance variable is only identified while
commissioning an MPC controller. In this case, the disturbance
variable is simply tested and subsequently added to the MPC
controller.
4. Constraint (AV) — Process output that must be maintained within a
defined operating range as opposed to a set target. It should be
noted that these are considered soft constraints since manipulated
variables are adjusted in an attempt to maintain constraint
variables within their limits. But we cannot guarantee that a
process output will not temporarily violate an output limit. Once
an MPC controller is designed, put on-line and optimized, the
process often hits new constraints that were not identified in the
original design. These constraints are simply modeled and easily
added into the MPC formulation.
Process inputs and outputs can also be defined as optimized if the process
has degrees of freedom—a greater number of control knobs than control
objectives. An optimized variable is a process input or output for which
there is an economic or performance incentive to push the variable in a
certain direction until some constraint becomes active and therefore limits
any further pushing of this variable. There are two modes of optimization
found in many APC controllers, an LP term and a QP term. (See the dis-
cussion following Figure 9-9.) The LP term is the generic, multi-input
implementation of a ramper that simply pushes the selected variable in a
direction defined by the LP cost toward the active constraint set. The QP
term is often selected to push a degree of freedom to some pre-defined
steady-state target.
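To make the LP idea concrete, here is a minimal sketch of a hypothetical two-MV, two-CV steady-state push using scipy; the gain matrix, costs, and limits are invented for illustration, and a real optimizer would of course work on the full steady-state model of the controller:

import numpy as np
from scipy.optimize import linprog

G = np.array([[1.2, -0.4],            # steady-state CV gains per unit MV move
              [0.3,  0.8]])
cost = np.array([1.0, -0.5])          # LP cost: direction to push each MV
cv_now = np.array([50.0, 20.0])       # current CV values
cv_lo = np.array([45.0, 15.0])
cv_hi = np.array([55.0, 25.0])

# minimize cost @ du while keeping cv_lo <= cv_now + G @ du <= cv_hi
res = linprog(cost,
              A_ub=np.vstack([G, -G]),
              b_ub=np.concatenate([cv_hi - cv_now, cv_now - cv_lo]),
              bounds=[(-5.0, 5.0)] * 2)
print(res.x)                          # steady-state MV moves at the constraint set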

Looking at the process in question and categorizing the variables into the types above defines the controller formulation and establishes the testing
requirements. A test plan is required in the subsequent stage of the MPC
project and the information necessary to properly design the test plan is
obtained in the Process Analysis phase. Performing simple step changes in
the manipulated variables will provide good preliminary estimates of the
gains and process dynamics that can form the basis of a good test plan.
Off-line process simulators, either dynamic or steady-state, can also help
provide preliminary estimates of process behavior.

A review of the existing regulatory controls and instrumentation is an important part of assessing the APC readiness of a process. The regulatory controls are the foundation of any APC controller. Any shortcomings in design or tuning must be addressed before the testing stage. All regulatory
loops, whether they are included as manipulated variables or not, should
be checked and tuned. The selected tunings will affect the response of the
process and are therefore considered part of the process from the MPC
perspective.

Tuning parameters should be recorded and any future modifications should consider the impact of any tuning change on the modeled process
relationships developed for the MPC controller. A question often raised is
whether a regulatory loop or cascade loop should be opened and handled
with the MPC controller. The regulatory loops provide high-frequency dis-
turbance rejection and provide stability during the APC execution inter-
vals. They should only be broken and incorporated into the MPC structure
if they are better controlled with the added MPC functionality. There are
many examples of MPC projects that have not achieved success because
regulatory loops were opened that should have remained closed.

Process Testing
The cornerstone of any MPC control application is the dynamic input/
output relationship model that describes how a process output changes in
time subject to a change in the process input. In most cases, these models
are obtained by analyzing plant operating data and using statistical mod-
eling software to extract the relationships from the data. Normal plant
operating data suffers from being corrupted by process noise, containing
highly correlated behavior and having insufficient excitation in the input
variables of interest. Thus, historical operating data can rarely be used
directly for the development of the MPC models.

The solution and best path is to perform a structured test on the process,
whereby every input variable is perturbed in a manner that generates data
best suited for the identification and modeling of the input/output rela-
tionship. Advances have been made in the use of off-line dynamic simula-
tors to generate models required for MPC controllers. Although the true
capability of the simulation option is still at the research level, the use of
simulation tools can greatly enhance process-testing efforts. As of today,
process testing is still the primary path to the generation of the MPC
model.

Many techniques are being used and promoted for testing a process in
order to obtain optimal data for analysis. Although it is important to
understand the strengths and weaknesses of the different plant-testing
techniques, we must remember that the goal of the Process Testing phase
is to obtain the best model of the process behavior for the MPC controller.
The most common testing procedures used today are
• Step tests and
• Pseudo-random binary sequence (PRBS) tests.

The step test is simple to implement and understand, and, on a process relatively free of process noise, will often generate data acceptable for the
development of an MPC model. One disadvantage of the step test is that
the process must be at steady state at the start and end of the test. Also, in
order to get a representative move, a large change in the input is often
required to clearly get outside the process-noise bandwidth of the data.

The principle behind the pseudo-random binary sequence (PRBS) test is to make many small random changes in the input variable in order to gener-
ate a continuous perturbation in the output variable. Figure 9-26 shows a
typical PRBS test on an input and the result on 3 outputs with differing
noise characteristics. One benefit of this approach is that the amplitude of
the change in the input variable can be smaller than that required for the
step test. Also, the moves do not require that the process come to steady
state.

If the PRBS moves are implemented in a random manner, the input signal
is considered uncorrelated (white); theoretically, a white input signal will
produce better parameter estimates for the generated model. The fre-
quency of the PRBS moves generates fast and slow flips in order to excite
the full frequency of the process response. For example, the fast flips will
provide good information of the dead time, the long flips are necessary for
determining the steady-state gain, and the medium-length flips provide
good information for determining the time constant.

Once all the dust settles, the PRBS test makes more changes to the process
and thus produces data with more information content and therefore the
potential for better models. In designing a PRBS test, there are a number of
factors that must be set. These include the amplitude of the move in the
input variable, the hold time of the flip sequence, and the total length of
the test. How these parameters are set will affect the quality of the test
data, and although there are no hard-and-fast rules, there are some basic
practices that are followed.

Move amplitude:

• Select the amplitude big enough that you can see the effect of the
move in the raw data but not so big that the process is upset or the
operator needs to make compensating moves in other variables.
• The amplitude can always be adjusted during the test if required.

• If the process is very noisy and the move is small or within the
noise bandwidth, the test period (length) will need to be longer to
achieve the same quality model.

Flip time:

• The flip time should be selected as a function of the time constant of the process. It is normally set between 1/5 of the time constant and 1 time constant.
• The greater the noise in the process, the longer the flip time, to
insure that the gain is properly established.
• The PRBS sequence should contain at least one long hold sequence
where the process comes close to steady state.

Test length:

• The test length is purely a function of the quality of the data. The
greater the noise in the data, the longer the test length in order to
generate a model with an equivalent confidence level.
• The test length is reduced by increasing the amplitude of the
moves and increasing the signal-to-noise ratio of the data.
• For most chemical processes a test sequence that generates 16 to 32
flips is usually sufficient to generate a good model.

Consider a process with an estimated time constant of 15 minutes and a moderate noise level in the data. A PRBS test is configured for this process with a flip time of 10 minutes and a total of 32 flips, generating a total test length of 320 minutes.
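A minimal sketch of such a test design in Python (the helper and its schedule format are illustrative, not the book's tool):

import random

def design_prbs(flip_time, n_flips, amplitude, seed=1):
    # schedule of (start_time, mv_offset) moves; flip_time sets the hold
    # time of each flip, and times share the unit of flip_time
    rng = random.Random(seed)
    schedule = [(k * flip_time, rng.choice((-amplitude, amplitude)))
                for k in range(n_flips)]
    # a production sequence would also force at least one long hold so
    # the process approaches steady state (see the flip-time guidance)
    return schedule, n_flips * flip_time  # moves and total test length

# the example above: 10-minute flips, 32 flips -> a 320-minute test
moves, length = design_prbs(flip_time=10, n_flips=32, amplitude=1.0)
assert length == 320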

The Process Testing segment of the MPC project consumes a large portion
of the project cost. Some effort has been applied to developing automated
test sequences that will automatically calculate and implement the moves.
These automated programs can reduce testing effort but must still be care-
fully monitored to insure that the process remains within desired operat-
ing limits. An automated testing program [9.10] that generates a pseudo-
random testing sequence with only one parameter (process time to steady state) set by the user is shown in Figure 9-26.

Figure 9-26. Testing a Process Using PRBS Changes

Process Model Development

The modeling stage of the MPC project is where we define the model that will be embedded into the MPC controller. The model is the heart of the MPC controller and is a key factor in the success of the MPC application. Although proper testing can generate input/output data rich in process response information, test data is seldom perfect. Applying the correct modeling approach will enhance the quality of the final model.

The first phase of the modeling effort is to analyze the time series trends of
the variables collected over the test period. It is important to review the
values of the data and remove any periods of time that do not reflect nor-
mal operating conditions. Most MPC packages allow this to be done via a
graphical interface to the data.

Once the appropriate data has been analyzed and selected, the next step is
to select the inputs and outputs and to identify the time series relation-
ships that exist between them. Before the advent of automated testing and
modeling, a cross-correlation model was used to examine the relationship
between the inputs and outputs and to manually identify the time delay.
The cross-correlation model is analogous to the finite impulse response
(FIR) model used in many MPC packages, and is obtained by simply com-
puting the cross-correlation of the input series against the output series
across a number of lags equal to the estimated settling time of the process.
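As a sketch of this idea (assuming an approximately white, mean-removed test input; the function name is illustrative and this is not the algorithm of any particular package):

import numpy as np

def fir_by_crosscorrelation(u, y, n_lags):
    # estimate impulse-response coefficients h[k] from input u and output
    # y by cross-correlation; valid when u is approximately white
    u = np.asarray(u, float) - np.mean(u)
    y = np.asarray(y, float) - np.mean(y)
    var_u = np.dot(u, u) / len(u)
    h = np.array([np.dot(u[:len(u) - k], y[k:]) / (len(u) - k)
                  for k in range(n_lags)]) / var_u
    return h, np.cumsum(h)  # impulse response and step response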

If the data is of high quality, the cross-correlation model can be used directly in the MPC controller as the identified model. The quality of the
cross-correlation model can be interpreted through confidence factors on
the step response representation or by generating several cross-correlation
models with different time horizons to insure that the models come to
steady state in some common time period. If the tested input can be iso-
lated and perturbed independently, a single-input/multi-output cross-cor-
relation algorithm may be used. If compensating moves in other inputs are
required during the testing phase, a multi-input/multi-output cross-corre-
lation algorithm may be required in order to extract the proper model.
346 Advanced Control Unleashed

Since there are many advantages to the use of parametric models, cross-
correlation models are mostly employed to manually identify the time
delay and make sure the direction and size of the change in the process output are reasonable. It is a good practice to always examine the process
model that was identified and compare it with the knowledge of the pro-
cess.

The process model is displayed as a process step response. The step response for a process input shows the impact of a unit of process input
change on the process outputs (with all other inputs maintained constant).
Figure 9-31 shows an example of an overview step-response model. A
detailed view of an individual output response for a particular input is in
Figure 9-32.

A number of factors may cause the cross-correlation modeling approach to break down and generate a model without the level of accuracy required
by the MPC controller. These include poor excitation of the input variable,
highly correlated moves in the input, compensating (correlated) moves in
other inputs, and significant low-frequency noise in the output variable. In
this case a more sophisticated modeling approach called parametric mod-
eling can be applied.

The cross-correlation model, or non-parametric model, assumes no form for the model; the step response is simply computed by cross-correlating the input and output series over time. In the parametric model, a form
(transfer function) is imposed and the test data is used to fit the model
parameters that minimize the difference between the modeled output and
the actual output. The initial guess of the model form is usually based
upon the form of the FIR or cross-correlation model and is referred to as
the identification step.

There are many forms of the parametric model, including the ARX model,
the output error model, and the Box-Jenkins model. These parametric
model forms differ mostly in the way they handle process noise and
dynamics. The more complex the process dynamics and process noise, the
more sophisticated the model form required, Box-Jenkins being the most
generic.
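For illustration, a bare-bones least-squares fit of the simplest parametric form, the first-order ARX model y[t] = a*y[t-1] + b*u[t-d], is sketched below (commercial packages add noise models, higher orders, and automatic order selection):

import numpy as np

def fit_arx1(u, y, d):
    # least-squares fit of y[t] = a*y[t-1] + b*u[t-d]; returns (a, b, sse)
    t0 = max(1, d)
    phi = np.column_stack([y[t0 - 1:len(y) - 1], u[t0 - d:len(u) - d]])
    theta, *_ = np.linalg.lstsq(phi, y[t0:], rcond=None)
    sse = float(np.sum((y[t0:] - phi @ theta) ** 2))
    return theta[0], theta[1], sse

# scan candidate dead times and keep the fit with the smallest residual;
# steady-state gain is b/(1 - a) and, for sample time Ts, the time
# constant is -Ts/ln(a):
# d_best = min(range(1, 21), key=lambda d: fit_arx1(u, y, d)[2])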

The accuracy of the process model can be examined by comparing the models generated with two different techniques, FIR and ARX (Figure 9-32). Comparable results for these different methods are expected when the
model is accurate. However, if the models do not match, it is a good indi-
cation that there is significant process noise in the data and that parametric
modeling will be required to extract a meaningful relationship.

Once the form of the model has been selected and the model is fit to the
data, the diagnostic stage of the model fit is next. This is an important step
in the modeling phase. A simple and very intuitive diagnostic is to com-
pare the measured process output values against those calculated by the
model based on the process inputs. This comparison allows one to deter-
mine how closely the model matches the actual plant operation. An exam-
ple of this type of output validation is shown in Figure 9-33.

Statistical tests on both the model residuals and the difference between the
modeled output and the actual output can provide insight into the model
deficiencies. For example, if the model residual terms demonstrate strong
autocorrelation, there is a well defined pattern in the data that we are not
accounting for within our process model but that should be accounted for
with the addition of a noise model. Further, the cross-correlation of the
input series with the residual series will identify whether the selected
form of the parametric model is sufficient to accurately model the data.
For a more detailed review of these statistical model diagnostics, refer to
Box-Jenkins [9.22].
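A simple form of the residual-autocorrelation check can be sketched as follows (the 2/sqrt(N) band is the usual approximate 95% limit for white noise; the helper names are illustrative):

import numpy as np

def residual_autocorr(e, n_lags=20):
    # normalized autocorrelation of residuals e at lags 1..n_lags
    e = np.asarray(e, float) - np.mean(e)
    c0 = np.dot(e, e)
    return np.array([np.dot(e[:-k], e[k:]) / c0
                     for k in range(1, n_lags + 1)])

def looks_white(e, n_lags=20):
    # True if no strong pattern is left unmodeled in the residuals
    band = 2.0 / np.sqrt(len(e))
    return bool(np.all(np.abs(residual_autocorr(e, n_lags)) < band))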

In the end, we want to extract the best possible model from a good set of
test data. In some cases, the model may need to be manually set or
adjusted to accommodate process knowledge and/or the unavailability of
good test data. Therefore, editing the associated response models based on
your knowledge of the process would be a practical solution.

Controller Design
The MPC controller design stage consists of rationalizing the preliminary
controller configuration with the models obtained from the test data in
order to build an MPC controller configuration that will meet the control
objectives of the process. The first step is to lay out the preliminary matrix
defined in the Process Analysis stage. This preliminary matrix can then be
populated with the precise models obtained from the testing stage. Know-
ing the actual models, the preliminary MPC matrix design can now be re-
evaluated to insure the identified relationships generate a controller capa-
ble of achieving the defined control strategy and objectives. For example,
if a preliminary design anticipated a model between MV1 and AV2, but no
model was identified for this relationship, we must assess the impact this
will have on the original control strategy.

Once the final matrix structure is defined and the models entered into the
MPC matrix, it is a good practice to evaluate the stability of the proposed
MPC controller. The purpose here is to verify that the proposed controller
matrix is not singular, a condition that arises when there is linear depen-
dency in the output variables. A singular matrix will make it impossible to
independently control selected CVs to their targets.
348 Advanced Control Unleashed

This singular behavior occurs when the models with respect to the manip-
ulated variables for a given output are a linear combination of the models
for another output. This is easily identified in the 3x3 controller gain
matrix, shown in Figure 9-27, by noting that the rows for output variables
CV1 and CV2 are linearly dependent; that is, the steady-state gains for
CV2 are 4 times the gains of CV1. Thus any attempt to move CV1 will
cause a corresponding move in CV2, making it impossible to move CV1
and CV2 targets independently of each other. Checking the controller for
linear dependencies can avoid many problems often detected only later at
the tuning, simulation or commissioning stages.

Figure 9-27. An Ill-conditioned (Singular) Controller Matrix
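This check is easy to automate. A numpy sketch follows; the gain values are invented but reproduce the CV2 = 4 x CV1 dependency just described:

import numpy as np

G = np.array([[1.0, 0.5, 0.2],    # CV1 gains for MV1..MV3
              [4.0, 2.0, 0.8],    # CV2 gains = 4 x CV1: linearly dependent
              [0.3, 1.0, 0.6]])   # CV3 gains

print(np.linalg.matrix_rank(G))   # 2, i.e., less than the 3 outputs
print(np.linalg.cond(G))          # extremely large for a singular matrix
# a rank below the number of CVs (or an extreme condition number) means
# the CV targets cannot be moved independently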

MPC control technologies differ in their ability to decouple the process-response matrix from the controller matrix, but separating the two matrices provides greater flexibility in defining the controller strategy. The process-prediction matrix will most often contain all input/output models and is used to predict the future projection of the process. The controller matrix contains models for input/output relationships that are used to maintain the outputs at their targets or within constraint limits.

There is an academic argument that every input that affects an output must be included in the controller matrix in order to obtain optimal performance. This approach may prove to be optimal, but it often imposes controller behavior that does not conform to the desired operating strategy of a plant. It is important to consider the desired operating strategies of a plant in the design of an MPC controller.

However, if the model is only in the prediction matrix, the effect of one
MV on another MV is delayed by one scan. When tight timing is impor-
tant for ratio control, such as when a speed that is manipulated for
machine-direction control of plastic sheets (webs) is also ratioed to another
speed used for production-rate maximization, it might be better to include
the model in the control matrix and adjust the MV and CV tuning weights
to provide the desired ratio effect and suppression of manipulation for
control.

In addition to defining the models for the MPC matrices, the controller
execution frequency must also be set. The MPC controller runs at a much
slower frequency than the PID regulatory layer. Ultimately the MPC con-
troller's execution interval is defined by the dynamics of the process: the
faster the process responds, the faster the MPC must execute. A good rule
of thumb is to execute the controller at least as fast as a rate of 5 moves per
time constant. Also, since the integrated error for unmeasured load upsets
is proportional to the square of the dead time, it is beneficial to make sure
the MPC does not increase the total loop dead time by more than 10%.
This is achieved by choosing an MPC execution interval that is less than
1/5 of the loop dead time since the additional time delay is on average
1/2 the control interval. MPC execution intervals of less than 5 seconds are
now possible except for very large matrices.
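Both rules of thumb can be folded into one small calculation (an illustrative sketch, not a vendor formula):

def mpc_execution_interval(time_constant, dead_time):
    # at least 5 moves per time constant (interval <= tau/5) and no more
    # than ~10% added loop dead time (interval <= dead_time/5, since the
    # added delay averages half the interval)
    return min(time_constant / 5.0, dead_time / 5.0)

# e.g., a 10-minute time constant with 3 minutes of dead time gives
# min(2.0, 0.6) = 0.6 minutes, i.e., a 36-second execution interval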

MPC Controller Testing in Simulation


The speed of MPC control response and control robustness to process
changes are influenced by parameters used in controller generation. During
controller generation, these parameters, known as MPC tuning parame-
ters, are set to starting values based on the criteria set by the MPC product.
The tuning parameters will differ with the selected MPC product, but
most will allow the user to adjust the following controller-generation
parameters:

• Penalty on Move — Control sensitivity to changes in a process is determined by the robustness of the controller. The parameter used in controller generation that most affects robustness is the Penalty on Move (PM) parameter. The PM defines how much the MPC controller is penalized for changes in the manipulated output (MV). The Penalty on Move parameter is typically defined independently for every MV. High PM values result in a slow controller with a wide stability margin. With such settings, the control is relatively insensitive to change in either the process or model errors. Low PM values result in a fast controller with a narrow stability margin. The penalty on move is also known as "move suppression" or an MV tuning weight.
• When the model used in generating the control accurately reflects
the process gain and dynamics, changing the PM value does not
affect the controller performance significantly. However, a
difference in the controller performance will occur if the model
does not match the real process. Be careful to set the maximum
move limit large enough, when tuning the PM, so that its effect is
real and not clamped by the move-size limit, which is a hard
constraint.
• Penalty on Error — The Penalty on Error (PE) parameter allows
more importance to be placed on a specific controlled variable.
Typically, the default value for Penalty on Error gives equal weight
to all controlled parameters, which may not be the desired objective
for some applications. If the control strategy clearly indicates that
one of the controlled variables should be of lower priority, the
associated PE is decreased. Generally, however, the PE should not
be used to change overall control performance. In most cases, it is
recommended to change the PE only after adjusting the MPC
controller using the PM and testing the MPC in simulation. The
primary criterion for adjusting the PE is the acceptable variability
on a specific controlled parameter. In some systems an "equal
concern" factor is used to account for differences in scale spans and
engineering units where a smaller equal concern places a greater
importance on the error. The penalty on error is also known as a CV
or AV tuning weight.

In addition to the parameters used during controller generation, the user can influence the control behavior and robustness in online operation. One
way is to apply a set point trajectory to provide funnel control over the
control horizon. It is possible to incorporate range control to penalize
control variables only for large deviations from set point, as shown in
Figure 9-28.

Before the new control is tried on the real process, the user must observe
how well the control responds to set point changes and load disturbances
in simulation. The process response is simulated based on a process
model. The model used in simulation is the one identified for the process.
However, by using a modified model, the user can test the control
response for changes in the process.

The ability to introduce disturbances and test the control for changes in
process gain or dynamics is extremely valuable when you want to test the
robustness of the control. If the process response is too aggressive, it is
easy to improve results by modifying the default values for the penalty on error, the penalty on move, and the set point filter.

Figure 9-28. Set point Trajectory for Funnel Control

If a process responds slowly to input changes, it may take many hours in an operating plant to see the full control response. In a simulated environ-
ment, the process and control simulation can support faster than real-time
execution to allow the response of even the slowest processes to be seen in
a few minutes. Similarly, the ability to simulate the control and process
response slower than real time is valuable when working with a very fast
process. The ability to adjust the speed of execution allows the control
response to be verified quickly for a variety of operating conditions.
Through such testing, you can gain confidence in the control performance.

MPC Commissioning/Operation
Putting the MPC controller on-line is where the rubber meets the road. If
you took the time and effort to properly conduct the process tests, collect
high-quality test data, build significant models and generate a control
matrix that is well conditioned, the commissioning phase of the controller
will most often proceed with few surprises. If shortcuts were taken in the
previous stages, poor on-line controller performance will be obvious, but
the root problem may not be so easily identified.

In the commissioning phase of the MPC application, we must first perform the obvious task of checking all communications between the pro-
cess instrumentation and the MPC controller. This includes master
switches, watchdog timers (if necessary), transmitter failure, controller
shed modes, variable modes, controller wind-up action, etc. Following the
communications checklist, we can activate the MPC in what is often
referred to as "shadow" or "tracking" mode. This mode allows the con-
troller to look at the current state of the process and the controller targets
and to compute moves in the manipulated variables. These moves are dis-
played but not actually sent to the local controllers. In this shadow mode,
the application developer or operator can easily see what moves the MPC
controller would make if the MPC were active. This step gives the opera-
tor confidence and serves as a good learning tool to explain controller
action without actually being in closed-loop mode.

Once you are assured that the controller is making moves in the correct
direction, the MPC controller can be put on line. For small applications, up
to a 3x3 matrix, the application can be fully activated on the first pass. For
larger MPC controllers it is recommended that a limited number of vari-
ables or subsets of the controller be activated at one time in order to more
easily identify any performance issues. Once the controller is fully opera-
tional and operating in a stable manner, the performance elements that
must be checked out during the commissioning phase include the follow-
ing:

• Set point changes


• Constraint control
• Steady-state giveaway
• Manipulated-variable limitations
• Feedforward disturbances
• Optimized, or ramper, functionality

To evaluate set point changes, the targets can be manipulated individually and with multiple target changes to insure the MPC controller can get to
target in the desired time. At this stage, one would look for either too
sluggish a controller or oscillatory behavior, most often caused by
aggressive tuning or a poor process model. Most MPC applications will
provide some form of online tuning parameter to modify the speed of the
response online. For example, a tuning parameter for the set point filter or
reference trajectory enhances controller robustness and flexibility. When
model mismatch comes from the dead time in a dead time-dominant process, a set point filter may not be enough to eliminate chattering of the controlled output.

Testing MPC action against the active constraints is an important part of MPC commissioning. Any MPC application designed to optimize will be
constantly pushing the controller constraints, and the MPC controller's
ability to ride these constraints is key. In order to test MPC constraint con-
trol, one must make target changes to push the process to these constraints
and see how the controller dynamically approaches the constraints. The
problem to look for is "saw-toothing" or "bouncing" off the constraint
limit. This occurs when the controller action is too aggressive and then
rapidly pulls back from the constraint. This is followed by the controller's
desire to meet the target as it once again pushes back towards the con-
straint only to cause another rapid pull-back. This behavior over time will
cause a sawtooth pattern in the constraint variable and can cause excessive
variability in the other MPC controller variables. The solution is to reduce
the aggressiveness of the constraint-tuning action or to modify the con-
straint model by increasing the dead time or increasing the gain. A well
tuned constraint controller will be able to approach the limit in a smooth
dynamic fashion and ride the limit over time.

If constraints become active or manipulated variables are removed from the MPC controller, there is a loss of degrees of freedom such that the MPC control is over-specified and the objectives (targets and constraints) may
not all be satisfied. At this point the MPC controller will prioritize on
selected control objectives based upon the assigned tuning for steady-state
offset. The amount of giveaway on a control target or constraint should be
determined during the commissioning phase. Monitoring the final steady-
state values of the MPC controller targets is a simple way to understand
where the controller will give up, if it cannot meet all the objectives.

Manipulated variables can hit hard limits defined in the controller, can indi-
vidually be taken out of the MPC mode, or can be driven into a wind-up
condition. All these occurrences will cause an over-specification of the
controller and a loss of degrees of freedom in the MPC. This will cause giveaway to arise and can cause controller stability problems. These condi-
tions should be verified during the commissioning stage.

Once the control performance is satisfactory for the MPC, the optimization,
or ramper, functionality can be evaluated. The different optimization modes
can be tested and the optimization tuning speed can be modified as
required. A good rule of thumb is to optimize the process at a rate 1/5 that
of the control action. This usually insures good control action with a slow
push towards the unit optimum.

Selectively testing and verifying the MPC controller objectives and func-
tionality will insure that the MPC application is robust, and will provide
plant operators with the confidence that the application will deliver the
identified benefits.

Rules of Thumb
The process model that is identified for use in MPC is influenced by pro-
cess noise, unmeasured load disturbances, nonlinearities, and inverse
response. The following rules of thumb apply when these conditions are
present during process testing.

Rule 9.1. — Use the simplest model with identified parameters that are consis-
tent with more complex models despite poor statistical indices for the simpler
model. Model parameter changes of less than 10% are negligible due to
nonlinearities. The actual process gain, time lag, and time delay normally
change more than 10% as a result of changes in operating point and oper-
ating conditions. The simplest model is a first-order ARX model.

Rule 9.2. — For fast noise, use either a signal filter to attenuate the peak-to-peak
amplitude of the noise to less than 1/5 of the step size for a bump test, or use a
Box-Jenkins noise model to avoid a much shorter-than-actual identified lag time.
For example, fast noise of just 5% peak to peak for a step size of 10% can
cause the identified lag time to be 80% less than the actual time lag and the
identified process gain to be slightly low.

Rule 9.3. — For a shift in the steady state (nonstationary behavior), either
remove the affected area of the test or calculate the difference between the consecu-
tive data points to avoid the identification of a larger-than-actual time lag. For
example, a positive and negative shift that approaches the step size can
cause a time lag to be identified that is more than 75% larger than the
actual time lag. A single differencing of adjacent data points can account
for a change in the magnitude of the mean, whereas a second differencing
of adjacent data points can account for a change in the slope of the mean.

Rule 9.4. — For slow noise, either remove the affected area of the test or use an
autoregressive Box-Jenkins noise model to avoid the identification of a larger-
than-actual time lag. For example, slow sinusoidal noise of just 5% peak to
peak with a period about 1/2 of the response time can cause a 30% larger
than actual time lag for a step size of 10%.

Rule 9.5. — For inverse response, either use an increase in the time delay or add
a partial delay to the ARX model to avoid a severe overestimate of the time lag.
For example, the identified time lag is more than 100% larger than actual
due to an inverse response that approaches half of the step response. The
most accurate results are obtained with a partial delay (a second ARX B
coefficient). However, if it is not important to have the model prediction
match the process during the inverse response, a first-order ARX model is
sufficient.

Rule 9.6. — For bump or PRBS tests where none of the steps were held long
enough to reach a steady state, the time lag identified is too small and the process
gain identified is slightly low. In PRBS or bump tests, some of the steps must
be held long enough for a self-regulating process to reach steady state.

Rule 9.7. — Use a control interval that is large enough to reduce A/D noise and
minimize modeling error but that does not add excessive dead time. The true rate
of change of the process in the control interval for an upset should be larger than the resolution limit (typically 0.05% of input analog-to-digital
converter card span) to avoid A/D chatter. Small control intervals make
the controller more sensitive to changes in the time delay and A/D noise.
However, the additional dead time introduced into the control system is
on average 1/2 of the control interval. If this is more than 20% of the origi-
nal dead time, the performance for unmeasured load upsets has deterio-
rated.

Rule 9.8. — For an integrating response, calculate the difference for the consecu-
tive output data points to develop a first-order model. The rate of change of the
process variable is now modeled.
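In code this is a one-line transformation (a sketch; numpy shown for concreteness):

import numpy as np

# Rule 9.8: difference the output series so that its rate of change is
# modeled as a self-regulating (first-order) response to the input
def differenced(y):
    return np.diff(y)       # dy[t] = y[t] - y[t-1]

# a second differencing (compare Rule 9.3) also removes a drift in the
# slope of the mean: np.diff(y, n=2)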

Rule 9.9. — For a level response, compute the rate of change of level based on the
geometry of the vessel and location of the sensor taps. These responses are typi-
cally slow and PRBS testing would be time-consuming. The actual
response for a given density is computed for a change in difference
between the flow going into and out of the vessel. If the MPC controller
output does not manipulate a flow controller set point but directly throt-
tles a control valve, the calculation is nonlinear and depends upon the
installed characteristic of the control valve.

Guided Tour
This tour illustrates the potential ease of use and convenience of an inte-
grated interface that is possible for model predictive control embedded in
an industrial control system. The following areas are addressed:

• MPC Configuration
• Process Testing
• Process Model Identification
• Verification of Step-Response Model
• Testing Control Off-line
• Operator Interface
• Expert Options in Model and Control Generation

Model predictive control may often be the best control solution for
processes characterized as dead time-dominant. Also, MPC is often the
preferred solution if a process has measurable disturbances, constraints, or
interaction between control parameters. The application of this
technology, however, has traditionally been limited to a few high-
throughput processes because of the cost and complexity of the tools used
to implement MPC. Many of these issues have been addressed in modern control systems by embedding MPC capability at the controller level.

To provide a consistent environment for control definition, MPC is implemented as a function block. This allows the same tools and blocks that are
used to define other measurement-and-control applications to be used in
defining MPC strategies. To create a strategy that involves MPC control,
an MPC block is added to a module. The MPC block is extensible, that is, it
can be resized to match the number of process inputs and outputs
involved in the control. Measurement blocks are added to the module and
linked to the MPC block by graphically defining connections to the MPC
block. The outputs of the MPC block may go directly to a block that works
with an actuator or the MPC block may connect to another control block.
In either case a second connection is made to allow the MPC block to be
aware of the downstream condition. The status provided in this second
path is used to provide bumpless transfer and take action under limited
conditions. Also, these connections to the MPC block are used to automat-
ically collect scaling information and to change the mode of downstream
blocks.

A typical MPC application is shown in Figure 9-29.

Figure 9-29. Configuration of an MPC Strategy

A few parameters associated with the MPC block, such as set point limits
and constraint limits, are defined as part of module configuration. All the
block inputs and outputs are automatically assigned for historical collec-
tion. Once the block has been configured, the module containing the MPC
block is downloaded to the controller.

Then an application is launched to commission the MPC block. The interface to this application provides a trend window that displays the MPC
block inputs and outputs. When MPC Mode is selected in this interface,
the mode of the blocks downstream of the MPC block will automatically
switch to allow their set points to be adjusted by MPC during testing of
the process. Before initiating testing, the user must put in his best guess at
the time it will take the process to respond to changes in the process input.
Once this time to steady state has been specified, the testing of the process
is initiated by selecting Test. In response, the manipulated parameters are
changed from their starting values in a pseudo-random manner, as illus-
trated in Figure 9-30.

Figure 9-30. Automated Process Testing to Identify the Process Model

The time over which testing was done is shown by a different color in the
trend. After testing completes, selecting Autogenerate will cause the pro-
cess model to be identified and the controller generated. On completion of
control generation, the identified step response is automatically displayed,
as shown in Figure 9-31.

By selecting an individual step response it can be examined in more detail. Also, as a cross-check on the accuracy of the model, the user may display
the identified FIR model with the ARX model used in the control, as
shown in Figure 9-32.

Figure 9-31. Step-Response Model

Figure 9-32. Comparison of FIR (short horizon) and ARX Step-Response Models

As a means of further checking the accuracy of the model, the user may
select Verify Against Original Data. In response, the calculated process
output based on the actual process inputs vs. the actual process output is
plotted, as shown in Figure 9-33.

Figure 9-33. Verification of Identified Model

To test the control that was generated based on the model, the user may
select Simulate. In response, a simulation environment is automatically
created based on the block definition, the identified process model and the
generated control. The control is automatically executed in this off-line
environment with a simulated process based on the identified process
model. The control response is automatically trended along with a trend of
the future response calculated by the MPC block, as shown in Figure 9-34.

Figure 9-34. Testing of Control Using Simulation Environment



In the manual mode of operation, the user may examine the process
response for changes in the manipulated inputs to the processes. The con-
trol response to a set point change is observed in the automatic mode. To
allow the control to be tested in the presence of noise and changing distur-
bance inputs, the user may select to add noise to process inputs and out-
puts. Also, the ability is provided to execute control and process
simulation up to 200 times faster than real time. This feature significantly
reduces the time required to check out the control associated with slower
processes.

Once the control has been generated, the module is downloaded to the
controller to put the control on-line. Predefined dynamos are provided to
allow each control, constraint, manipulated or disturbance input to be
included on the operator display. Also, a dynamo is provided that allows
all the MPC inputs and outputs to be viewed together. An example of the
MPC operator view is shown in Figure 9-35.

Figure 9-35. Operator Interface to MPC

For many applications, the commissioning of the MPC control is accomplished with the only input from the user being the specification of the
time to steady state. However, expert features are also included that allow
abnormal conditions to be addressed. For example, if measured distur-
bances do not change during the automated test their impact cannot be
determined. For this case, the user may select Design Response and either
enter known values for the step response in terms of gain, delay and time
constants, or graphically indicate the shape of the response, as illustrated
in Figure 9-36.

Figure 9-36. Editing Step Response

If testing of the MPC is interrupted by an abnormal condition, the corresponding period of time must be excluded from the data used to generate
the model. The start and end of the time of the data used in model genera-
tion can be adjusted by graphically stretching the green (shaded) area
using the mouse. Other portions of the data in the time frame can be
excluded by indicating that exclusion areas be added, as illustrated in Fig-
ure 9-37.

Figure 9-37. Selecting Data Used in Model Identification

The tuning parameters associated with the generation of the MPC control-
ler are set automatically based on the process dynamics. Thus in most
cases there is no need for the user to adjust the tuning parameters. However, to include a special control response if needed, the user can choose to
generate the control from the model. When this option is selected, the user
can modify the tuning parameters used in controller generation. The dia-
log for this is shown in Figure 9-38.

Figure 9-38. Adjustment of Controller-Generation Parameters

The ability to generate the control from a selected model is also useful
when a model has been modified to include user knowledge.

Theory
The advantages of MPC are most evident when it is used as a multivari-
able controller. Therefore, the acronym MPC is sometimes read as multi-
variable predictive control. In applications to date, MPC has been used
predominantly as a multivariable controller. For the sake of consistency,
however, this book stays with the original use of the acronym MPC to
mean model predictive control.

A generic multivariable process controlled by MPC is presented as a black box in Figure 9-39.

In MPC, the process inputs are manipulated variables (MV) and measured
disturbances (DV). The process outputs are controlled variables (CV) and
auxiliary, or constraint, variables (AV).

Manipulated variables are managed by MPC controller outputs, predominantly by the manipulation of set points of the fast loops that stabilize the inputs to the process.

Figure 9-39. Multivariable MPC-controlled Generic Process Configuration

In calculating the optimal solution, an MPC controller assumes that it can make several moves of the manipulated variables ahead. Only the first move is implemented. The
procedure for an optimal solution is repeated every scan. The control hori-
zon is the number of manipulated-variable moves ahead to be taken into
account in developing the MPC optimal solution.

Manipulated variables have hard constraints, which means that no violations of the limits are allowed at any time. Both absolute and rate, or incre-
mental, constraints are set for manipulated variables.

Measured disturbances are inputs to the process that are not managed by
MPC.

Controlled variables are process outputs kept at specific set points (tar-
gets) or within specified ranges. Process models predict the future values
of controlled variables for a number of scans ahead. The prediction horizon
is the range in scans of process output prediction.

Finally, constraint variables are a type of controlled variable with only range control and no set points. Constraint variables have so-called soft
constraints. Violations of this type of constraint may happen temporarily
for the satisfaction of other criteria or constraints of the system.

Some use the term controlled variables for both controlled and constraint
variables, differentiating between them by using terms like controlled vari-
ables with set points for controlled variables and controlled variables with lim-
its for constraint variables.

A dynamic matrix is a matrix built from step responses to predict the changes in the process outputs that result from moves of the manipulated
variables over the control horizon. Dynamic Matrix Control (DMC) has
historically been the most successful MPC implementation approach,
based on the dynamic matrix [9.15].

The Basics of Process Modeling


The process model is the basis for MPC technology. Most MPC implemen-
tations to date use step-response models proven in DMC applications.
Step-response models provide an explicitly available future prediction of
process outputs. The future prediction is used for computing a predicted
error vector as an input to the MPC controller.

Figure 9-40. Step Response with 120 Coefficients

The step response is represented by a number of coefficients (120, in Figure 9-40). Each coefficient corresponds to the value of the model step
response at a specific time instance.

In MPC terms we see the step response as a prediction of the process out-
put up to 120 scans ahead, for a unit step input that was applied at scan
zero. The value of the prediction at a particular point in the future is repre-
sented by a specific coefficient. Figure 9-40 marks coefficients representing
the process output at 30, 60, 90, and 120 scans ahead after a unit step is applied to the input.

An MPC controller uses an incremental model, whereby real process input and output values are assigned to the model at the initialization of the
model but the controller subsequently accounts only for increments on the
inputs and calculated increments of the model outputs.

MPC application assumes a linear process model. Linear processes and systems possess the two properties of superposition and homogeneity [9.6]. A system that produces an output y1 for an input x1, an output y2 for an input x2, and an output y1+y2 for an input x1+x2 has superposition. Similarly, a system that produces an output y for an input x and produces an output ay for an input ax has homogeneity. The properties are
used for building multivariable models, as in Figure 9-41. Every process output is a sum of input actions.

Figure 9-41. Superposition and Homogeneity are Applied on Every Output of the MPC Process Model

The superposition principle is illustrated graphically in Figure 9-42.

Figure 9-42. Superposition Principle in Linear Models

Mathematically, we can consider the predicted output trajectory of a process as a process state and use a modified state space form for process modeling [9.7]. For single-input, single-output (SISO) process prediction, the equations are in the form:

x_{k+1} = A·x_k + B·Δu_k + F·w_k

where:

x_k = [y_0, y_1, ..., y_{p−1}]^T is the vector of process output predictions at time k, 0, 1, 2, ..., p−1 steps ahead

A is the shift operator, defined as A·x_k = [y_1, y_2, ..., y_{p−1}, y_{p−1}]^T

B = [b_0, b_1, ..., b_{p−1}]^T is the vector of p step-response coefficients

Δu_k = u_k − u_{k−1} is the change in the process input/controller output

w_k = y^p − y^m is the process output measurement minus the model output (the mismatch between the process and the model that results from noise, unmeasured disturbances, and model inaccuracy)

F is a p-dimensional filter vector with unity default values

C is the operator for selecting the current model output, defined as C·x_k = y_0 (i.e., C = [1, 0, ..., 0])
For a process with n outputs and m inputs, vector x_k has dimension n·p, and vector B becomes a matrix with n·p rows and m columns.

The graphical illustration of the equations in Figure 9-43 explains the prediction principles.

Figure 9-43. Graphical Illustration of Linear Process Modeling

At any time instant k, the process output prediction (bottom curve) is updated in three steps (a code sketch follows the list):

1. Prediction made at the time k-1 (the bottom dotted curve) is shifted
one scan to the left.
2. A step response, scaled by the current change on the process input,
is added to the output prediction.
3. The prediction curve is moved to match the current measured process output, for filter coefficient = 1; or, in general, the prediction shift is F·w_k.
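As a minimal illustration of these three steps, the following sketch (with illustrative variable names, and a scalar unity filter assumed) performs one scan of the prediction update:

```python
import numpy as np

def update_prediction(x_prev, b, du, y_meas, f=1.0):
    """One scan of the incremental output prediction update.

    x_prev : previous prediction vector, p steps ahead
    b      : step-response coefficient vector of length p
    du     : change in the process input at this scan
    y_meas : current measured process output
    f      : filter coefficient (unity by default)
    """
    # Step 1: shift the prediction one scan, repeating the last element
    x = np.append(x_prev[1:], x_prev[-1])
    # Step 2: add the step response scaled by the current input change
    x = x + b * du
    # Step 3: shift the prediction by the model-process mismatch F*w_k
    w = y_meas - x[0]          # mismatch at the current scan
    return x + f * w
```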

Identifying the Process Model


As a rule, MPC is used for multivariable and highly interactive processes.
Therefore, there is little chance that the application of a single step change
to a specific process input will produce outputs that would provide a good
process model. Multivariable process outputs are influenced by other
inputs, disturbances and noise. Better results are obtained by applying a
special pulse test sequence instead of a single step and then using mathematics to process the test data and build a process model. An example of the
testing for the identification of a 3x3 process is shown in Figure 9-26.

Using test data, we can build for every output as many equations as the
number of collected samples. The number of collected samples in test data
normally is significantly higher than the number of unknown model coefficients. Such equations are solved using the least squares technique. This
technique finds the coefficients that may not fit perfectly in any equation,
but fit optimally for all equations, in such a way that the total squared error
for all equations is minimized. The form of the equations used for process
modeling is defined by the modeling technique, a number of which are
used. The most common identification (modeling) techniques are Finite
Impulse Response (FIR) and Auto Regressive with external inputs (ARX).

FIR identifies pulse-response coefficients, as in Equation 9-2:

Δy_k = Σ_{i=1…p} h_i·Δu_{k−i}    for SISO process    (9-2)

where:
p = prediction horizon, with a typical default value for MPC models of 120
Δy_k = change in the process output at time k
Δu_{k−i} = change in the process input at time k−i
h_i = pulse-response coefficients of the model

Step-response coefficients are calculated directly from the pulse response as follows:

b_k = Σ_{i=1…k} h_i    (9-3)
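As an illustration of this least-squares fit, the sketch below identifies FIR pulse-response coefficients from recorded input and output increments and converts them to a step response per Equation 9-3; the data arrays and function name are illustrative, not part of any product:

```python
import numpy as np

def identify_fir(du, dy, p):
    """Least-squares FIR identification for a SISO process (Eqs. 9-2, 9-3).

    du, dy : arrays of process input and output increments from the test
    p      : number of pulse-response coefficients (prediction horizon)
    """
    n = len(dy)
    # One regression row per sample: dy[k] = sum_i h_i * du[k-i]
    X = np.array([[du[k - 1 - i] for i in range(p)] for k in range(p, n)])
    y = dy[p:]
    h, *_ = np.linalg.lstsq(X, y, rcond=None)  # pulse response (Eq. 9-2)
    b = np.cumsum(h)                           # step response (Eq. 9-3)
    return h, b
```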

An advantage of FIR is that it does not require any preliminary knowledge about the process. However, identifying a step response over the full prediction
horizon with as many as 120 coefficients results in low confidence levels in
the identified coefficient values.

Therefore, a shorter horizon with 60 points is more suitable for an FIR model. An FIR model with a shorter horizon provides the initial part of
the step response, adequate for defining process dead times using a heu-
ristic approach.

On the other hand, the ARX model has many fewer coefficients, which are
defined with higher confidence, provided the process dead times are
known.

Δy_k = Σ_{i=1…A} a_i·Δy_{k−i} + Σ_{i=1…V} b_i·Δu_{k−d−i}    for SISO process    (9-4)

where:
A, V = autoregressive and moving-average orders of the ARX model; A = 4, V = 4 satisfy most applications
a_i, b_i = autoregressive and moving-average coefficients of the ARX model
d = dead time in scans
Applying FIR first and defining dead times and then applying those dead
times to ARX provides the best identification results.

Finally, having an ARX model and applying a unit step on the input, step responses for any prediction horizon can be calculated directly from Equation 9-4.
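A sketch of this conversion, simulating the incremental ARX equation with a single unit input increment (i.e., a unit step) and accumulating the output increments; the coefficient arrays are placeholders:

```python
import numpy as np

def arx_step_response(a, b, d, p):
    """Step response from an ARX model by simulating Equation 9-4.

    a : autoregressive coefficients
    b : moving-average coefficients
    d : dead time in scans
    p : prediction horizon (number of step-response coefficients)
    """
    dy = np.zeros(p)
    du = np.zeros(p)
    du[0] = 1.0           # a unit step is one unit increment at scan 0
    for k in range(p):
        ar = sum(a[i] * dy[k - 1 - i]
                 for i in range(len(a)) if k - 1 - i >= 0)
        ma = sum(b[i] * du[k - d - 1 - i]
                 for i in range(len(b)) if k - d - 1 - i >= 0)
        dy[k] = ar + ma
    return np.cumsum(dy)  # step-response coefficients
```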

For MIMO processes, superposition is applied from all inputs to every output in both the FIR and ARX models.

Combining FIR and ARX identification techniques, and properly testing the process, delivers a high-quality process model. An example of an iden-
tified 3x3 process model is shown in Figure 9-31. For this example errors
for process steady-state gains were less than 5%, and dead-time error was
one scan.

The identified model should be validated by applying real input data to the model inputs and comparing the model output with the real output. A valida-
tion plot for output 1 of the cited example is shown in Figure 9-33.

The validation procedure may also include statistical techniques for calcu-
lating confidence intervals. A confidence interval forms an area around a
nominally defined step response, as shown in Figure 9-44.

Figure 9-44. Step Response with Marked Confidence Intervals

The true value of the step-response coefficients is found within the confidence intervals with a predefined confidence (or probability). Normally, 95% confidence intervals are used.
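One standard way to sketch such intervals for a least-squares FIR fit uses the estimated parameter covariance; this is a generic statistical construction, not necessarily the exact method used by any particular package:

```python
import numpy as np

def fir_confidence(X, y, h):
    """Approximate 95% confidence half-widths for a least-squares FIR fit.

    X : regression matrix of input increments (n samples x p coefficients)
    y : vector of output increments
    h : fitted pulse-response coefficients
    """
    n, p = X.shape
    resid = y - X @ h
    sigma2 = resid @ resid / (n - p)       # residual variance estimate
    cov = sigma2 * np.linalg.inv(X.T @ X)  # parameter covariance
    half_h = 1.96 * np.sqrt(np.diag(cov))  # intervals on pulse coefficients
    # step-response coefficient b_k is the sum of the first k pulse
    # coefficients, so its variance sums the corresponding covariance block
    var_b = np.array([cov[:k, :k].sum() for k in range(1, p + 1)])
    return half_h, 1.96 * np.sqrt(var_b)
```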

Unconstrained Model Predictive Control


The unconstrained MPC algorithm is the core of MPC operation, so it is useful to present some details on deriving the basic dependencies of the algorithm. The derivation follows the DMC approach and assumes, for simplicity's sake, a SISO process and a control horizon = 1.

Let's assume that, as a result of a set point change or disturbances, a control error occurred (Figure 9-45). Applying the set point trajectory r(k) and
future prediction of the process output y(k), the predicted errors e(k) are
expressed as:

e(k) = r(k) − y(k),    k = 1, 2, ..., p    (9-5)


To compensate for the errors, the change in the MPC controller output
should cause a change in the process output with predicted errors that are
the mirror image of the existing errors relative to the set point trajectory:

e′(k) = −e(k)    for k = 1, 2, ..., p    (9-6)



Figure 9-45. Compensating Prediction Errors e(k) by Adding Control Errors e′(k), Being Mirror Images of e(k)

e′(k) is calculated as e′(k) = b(k)·Δu = −e(k),    k = 1, 2, ..., p    (9-7)

To design a controller we have to solve p equations, finding one controller move Δu. This is called an over-determined problem; that is, the solution cannot satisfy all of the equations perfectly, so we need to find the solution that best satisfies the equations. This is a least squares solution. Using matrix notation, we define the MPC controller design problem as finding the Δu that minimizes the total squared error over all k = 1, 2, ..., p:

min over Δu of ‖B·Δu − E‖²

where:
E is the error vector of dimension p
B is the step-response vector of dimension p

From linear algebra we know the solution of the least squares problem as:

Δu = (B^T·B)^(−1)·B^T·E    (9-8)
Designing a controller with a control horizon greater than 1 requires that
we account for a number of moves ahead over the entire control horizon.

The compensating error that results from a number of control output moves is found in a way similar to Equation 9-7. For the purpose of applying a multi-step control horizon, a dynamic matrix is required.

The dynamic matrix is formed from the step-response coefficients in such a way that the first column of the matrix consists of the coefficients of the original step response, b_k, k = 1, 2, ..., p. The second column consists of coefficients of the same step response shifted to the right, with the first
coefficient equal to zero. The next columns are built in a similar way from
the preceding columns. The number of columns in the dynamic matrix is
equal to the control horizon. Equation 9-9 is an example of the dynamic matrix for a control horizon equal to 2 with a 5-coefficient step response. Multiplying the matrix by the vector of the 2 sequential control-output moves develops a prediction of the process-output change resulting from the two moves.

[Δy_1]   [b_1   0 ]
[Δy_2]   [b_2  b_1]
[Δy_3] = [b_3  b_2] · [Δu_1]
[Δy_4]   [b_4  b_3]   [Δu_2]
[Δy_5]   [b_5  b_4]              (9-9)

Here the left-hand vector is the change of output prediction, and the matrix is the dynamic matrix for control horizon = 2.
The solution of the MPC design problem for two control moves is analogous to 9-8, except that instead of the step response B, the dynamic matrix S is applied.
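A sketch of forming the dynamic matrix of Equation 9-9 and solving the least-squares problem of Equation 9-8 with it (the step-response and error values below are purely illustrative):

```python
import numpy as np

def dynamic_matrix(b, c):
    """Dynamic matrix from step response b for control horizon c (Eq. 9-9)."""
    p = len(b)
    S = np.zeros((p, c))
    for j in range(c):
        S[j:, j] = b[:p - j]   # each column is the step response shifted down
    return S

b = np.array([0.2, 0.5, 0.8, 0.95, 1.0])    # 5-coefficient step response
S = dynamic_matrix(b, 2)                    # control horizon = 2
E = np.array([0.3, 0.6, 0.9, 1.0, 1.0])     # predicted error vector
du, *_ = np.linalg.lstsq(S, E, rcond=None)  # Eq. 9-8 with S in place of B
print(du)                                   # two sequential controller moves
```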

A more general formulation of the MPC controller includes minimization of both the squared sum of the predicted control error and the squared sum of the calculated controller moves.

To get the desired MPC controller performance, two tuning parameters, the penalty on moves and the penalty on error, are used in the formulation.

The MPC controller objective for minimizing the squared error of the controlled variable includes the penalty on error over the prediction horizon, and the objective for minimizing the squared changes in the controller output over the control horizon includes the penalty on moves. Both of these are expressed in the following way:

min over ΔU(k) of { ‖Γy·[X(k) − R(k)]‖² + ‖Γu·ΔU(k)‖² }

where:
X(k) is the controlled output p-step-ahead prediction vector
R(k) is the p-step-ahead set point vector
ΔU(k) is the m-step-ahead vector of incremental controller output moves
Γy = diag{γy_1, ..., γy_p} is the penalty matrix on the output error
Γu = diag{γu_1, ..., γu_m} is the penalty matrix on the control moves

The solution is in the form:

ΔU(k) = (Su^T·Γy^T·Γy·Su + Γu^T·Γu)^(−1)·Su^T·Γy^T·Γy·Ep(k)    (9-10)

where:
Su is the p × m process dynamic matrix
Ep(k) is the error vector over the prediction horizon

For a multivariable n×n process, the dimensions of the Su matrix would be n·p × n·m. The dimension of Γy would be n·p, of Γu n·m, and of Ep(k) n·p.

The performance of the control algorithm is modified by the adjustable parameters p, m, Γu, and Γy. From the implementation point of view, p and m are not convenient to use as tuning parameters; instead, Γu and Γy are applied as scalars. The control error is multiplied by the scalar Γy over the whole prediction horizon, and the controller moves over the control horizon are multiplied by the scalar Γu.
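Continuing the sketch above, the move-suppressed solution of Equation 9-10 with scalar penalties can be written as follows; gamma_y and gamma_u stand for the scalar Γy and Γu:

```python
import numpy as np

def penalized_moves(S, E, gamma_y=1.0, gamma_u=1.0):
    """Controller moves per Eq. 9-10 with scalar penalties on error and moves."""
    p, m = S.shape
    Gy2 = (gamma_y ** 2) * np.eye(p)   # Gamma_y^T Gamma_y as a scalar weight
    Gu2 = (gamma_u ** 2) * np.eye(m)   # Gamma_u^T Gamma_u as a scalar weight
    return np.linalg.solve(S.T @ Gy2 @ S + Gu2, S.T @ Gy2 @ E)
```

Raising gamma_u in this sketch damps the computed moves, matching the tuning behavior described in the following paragraphs.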

By making the shape of the Γy and Γu coefficients dependent on the control and prediction horizons, the controller robustness is improved. Various functions, linear or exponential, are used for shaping the coefficients of Γy.

Γu is the basic controller-tuning parameter at the controller-generation phase. Increasing Γu damps the control response and action; conversely, decreasing Γu makes the control action more aggressive and the control response faster.

Another degree of tuning is achieved by applying decreasing penalties on future moves. This makes the first controller move less aggressive, since the next moves are relaxed, with lesser penalties for error correction. It is easily seen that the second and subsequent moves in this case might violate move constraints. However, this is not a practical concern, since at every scan the controller moves are recalculated and only the first move is implemented. As a result, controller robustness is increased.

It follows from experience that the dead time should be accounted for as a
major factor in setting the penalty on moves. Equation 9-11 defines a penalty on moves that provides stable and responsive MPC operation for a
model error of up to about 50%:

(9-11)

where:
DT_i is the dead time in MPC scans for an MV_i–CV_i pair
G_i is the gain (dimensionless) for the MV_i–CV_i pair
p denotes the prediction horizon
MV and CV denote manipulated variable and controlled variable, respectively

In addition to the shaping of the controller structure at the generation phase, a number of features are added to the unconstrained MPC controller to manage controller behavior and robustness online; see Figure 9-28.

A primary, intuitively understandable tuning parameter is the set point filter or reference trajectory. One trajectory design acts in such a way that, instead of penalizing any departure from the trajectory, only those deviations that are below the trajectory or above the set point value are penalized (Area A and Area C if range = 0; see Figure 9-28). In addition, the control error is considered zero if the controlled variable is within the range (Area C with range > error). These features constitute funnel control, which is shaped online by changing the set point filter time constant and range, further enhancing controller robustness and flexibility.
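The funnel idea can be pictured as an error-shaping rule applied to the predicted error before it is penalized; the sketch below, with illustrative names, returns zero error inside the range or inside the funnel between the reference trajectory and the set point:

```python
def funnel_error(cv, sp, traj, rng):
    """Shaped control error for funnel control (illustrative sketch).

    cv   : predicted controlled-variable value
    sp   : set point
    traj : reference-trajectory value (set point filter output)
    rng  : range within which the error is treated as zero
    """
    if abs(cv - sp) <= rng:
        return 0.0                      # within range: no penalized error
    lo, hi = min(traj, sp), max(traj, sp)
    if lo <= cv <= hi:
        return 0.0                      # inside the funnel: no penalized error
    # outside the funnel: penalize the distance to the nearest funnel edge
    return cv - hi if cv > hi else cv - lo
```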

Presenting the MPC control equation in the form:

Δu(k) = K^MPC·Ep(k)    (9-12)

where:

K^MPC = (Su^T·Γy^T·Γy·Su + Γu^T·Γu)^(−1)·Su^T·Γy^T·Γy    (9-13)

is the MPC controller gain, we see that MPC uses integral action. This, however, does not mean that we can draw a complete analogy between an MPC controller and a conventional integral-only feedback controller. The major difference is that the MPC controller uses predicted errors and integrates over the prediction horizon.

Integrating Constraints Handling, Optimization, and Model Predictive Control

Simple Constraints Handling


A typical MPC configuration includes both controlled variables and con-
strained variables. The unconstrained MPC controller does not manage
constrained variables directly. A supervisory constraints-handling algo-
rithm or optimizer performs the task. First, let's review the simple solution
with the supervisory algorithm.

The constraints-handling algorithm uses two basic assumptions:

1. MPC predicts future outputs for both control and constraint variables.
2. There is an interactive gain between control and constraint
variables.
The algorithm becomes active when a predicted constraint variable violates a constraint limit in the future. To prevent constraint violations, the algorithm adjusts the working set point of a selected controlled variable by the value:

ΔSP_CV = −r·G_CV-AV·ΔAV    (9-14)


where:
ΔAV is the magnitude of the predicted steady-state constraint violation
G_CV-AV is the gain relationship between AV and CV
r is the relaxation factor; r < 1
ΔSP_CV is the requested working set point change for the controlled variable
The association between a constrained variable and a controlled variable is
established during configuration; thus the gain relationship is defined
from the process model as follows:

G_CV-AV = G_MV-CV / G_MV-AV    (9-15)

The G_CV-AV gain tells how much we need to move the CV's set point to compensate for a unit constraint violation.

The working set point is adjusted every scan according to constraint violations. If a constraint violation ends, the working set point returns gradually to the original set point value. Figure 9-46 illustrates the concept.
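A sketch of this per-scan adjustment, applying Equation 9-14 when a violation is predicted and relaxing back toward the original set point otherwise (names are illustrative):

```python
def working_setpoint(sp, sp_work, dav, g_cv_av, r=0.5):
    """One scan of supervisory constraint handling (Eq. 9-14).

    sp       : original controlled-variable set point
    sp_work  : current working set point
    dav      : predicted steady-state constraint violation (0 if none)
    g_cv_av  : gain relationship between AV and CV
    r        : relaxation factor, r < 1
    """
    if dav != 0.0:
        # move the CV set point to relieve the predicted AV violation
        return sp_work - r * g_cv_av * dav
    # no violation: return gradually toward the original set point
    return sp_work + r * (sp - sp_work)
```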

Referring to Figure 9-46, it is seen that both CV1 and AV1 depend on the same manipulated variables, MV1 and MV2. If AV1 violates constraint limits, changing the set point of CV1 in the proper direction will cause MV1 and MV2 moves that result in the return of AV1 to within its desired limits.

Figure 9-46. Relationship Between Controlled Variable CV1 and Constrained Variable AV1

Simple Optimization
The objectives of optimization are usually to maximize a product value
and minimize a raw-material cost. A simple optimization is applied for the
cases where a priori knowledge defines optimization targets as minimizing or maximizing one of the process parameters.

To perform optimization, the number of MVs should be greater than the number of CVs. A simple solution involves defining one or more of the excess MVs as optimizing MVs. For example, in a configuration with
three CVs, up to four AVs, and up to four DVs, three MVs are required for
CV control and at least one more MV for optimization. An MV used for
optimization should be a process input that directly affects production
rate or raw-material cost.

Introducing an optimized CV as a shadow of an optimized MV makes it possible to generate an MPC n×n optimizing controller. By setting the set point of the optimizing CV to the maximum (or minimum) achievable value of the optimizing MV, the MPC controller drives the optimizing MV toward an optimal value, that is, toward the set point of the optimizing CV, until any manipulated variables, or more exactly their predicted values, violate their own limits. If that happens, the optimizing MV moves away from the optimal value until the other MVs are no longer out of their limits. For managing MV limits, a similar approach is applied as with AV constraints.

The set point change for optimizing CV is calculated as:

ΔSP_optCV = −r·G_optMV-MV·ΔMV    (9-16)

where:
ΔMV is the magnitude of the predicted MV limit violation
G_optMV-MV is the gain relationship between the optimized MV and the MV with limits being violated
r is the relaxation factor; r < 1
ΔSP_optCV is the requested working set point change for the optimized CV

The association between a constrained variable and a controlled variable is established during configuration; thus, the gain relationship is defined from the process model as follows:

(9-17)

The G_optMV-MV gain tells how much we need to move the optimal CV's set point to compensate for a unit MV limit violation.

If the constraint limit violation ends, the optimal CV set point returns in a stepwise manner to the original set point value. Figure 9-47 illustrates the concept.

Figure 9-47. MPC configuration for simple optimization.

In Figure 9-47, MV3 is a manipulated variable that sets the production rate or production cost for the MPC-controlled unit. MV3 adds a degree of free-
dom to the system, required for optimization. CV3 is a virtual parameter,
being the shadow of MV3 created for integration of optimization into
MPC.

The CV3 set point is set as a maximum limit of MV3 if it represents pro-
duction rate or as a minimum limit of MV3 if it represents production cost.
When the predicted MVs stay within limits, the CV3 set point is achieved and kept as a normal MPC set point. However, when the predicted value of
one or more MVs is out of limits, the optimized working set point value is
changed according to equation 9-16 to keep MVs within limits, effectively
retaining CV control.

The following sequence of events happens in Figure 9-47:

1. Predicted MV1 value is out of limits.



2. The CV3 set point is moved away from its maximal or minimal value, sacrificing optimal control to retain controllability.
3. Achieving the new CV3 set point requires an MV3 move.
4. The MV3 move causes changes in CV1 and CV2 as they move away from their respective set points.
5. Predicted MV1 moves back within its limits, and CV1 and CV2 are pushed back toward their respective set points.
The techniques for constraints handling and optimization presented here
feature simplicity, which makes it possible to implement these techniques
in the system controller as part of an integrated MPC, instead of the lay-
ered implementation in the workstation required for other approaches.

Dealing with constraints by managing working set points as presented above, instead of controlling constraint variables directly, has the further advantage of avoiding ill-conditioned control configurations. Ill condition-
ing happens when a controller may need excessive output movements in
order to control changes in process outputs. Since the configuration of the
MPC controller using this approach does not change during operation,
controller conditioning is calculated and properly set at the time of con-
troller generation.

Integrated Optimization: Combining Constraints Handling, Optimization, and Model Predictive Control
For processes with an objective function dependent on several manipu-
lated or controlled variables, optimization techniques are an essential
component of model predictive control technology. The proven approach
applies linear programming (LP) with steady-state models.

Linear programming is a mathematical technique for solving a set of linear equations and inequalities in such a way as to maximize or minimize a
certain additional function. This additional function is called an objective
function. Objective functions usually express economic value, such as cost
or profit.

One specific feature of MPC optimization is that it uses incremental values of MV at the present time, or the sum of increments of MV over the control horizon,
and incremental values of CV at the end of the prediction horizon instead
of positional current values as in typical LP applications. With the predic-
tion horizon normally used in MPC this assumption guarantees future
steady states for self-regulating processes.

The LP technique uses steady-state models, so a steady-state condition is required for its application. The basic process steady-state equation in the incremental form is:

ΔCV(t + p) = A·ΔMV(t + c)    (9-18)

where:
A is the process steady-state gain matrix
ΔMV(t + c) is the MV change that achieves the desired state at the end of the control horizon

The ΔMV change should satisfy limits on both MV and CV:

MV_min ≤ MV_current + ΔMV(t + c) ≤ MV_max    (9-19)

CV_min ≤ CV_predicted + ΔCV(t + p) ≤ CV_max    (9-20)


The objective function for maximizing product value and minimizing raw
material cost is defined jointly in the following way:

Q = −UCV^T·ΔCV(t + p) + UMV^T·ΔMV(t + c) → min    (9-21)

where:
UCV = cost vector for a unit change in CV
UMV = cost vector for a unit change in MV

Applying 9-18, we can express the objective function through MV only:

Q = (−UCV^T·A + UMV^T)·ΔMV(t + c) → min

The LP solution is always located at one of the vertices of the region of fea-
sible solutions. An illustration for a two-dimensional problem explains the
point; see Figure 9-48.

Figure 9-48. Optimization Problem of a Two-Dimensional System

The region of feasible solutions for two controlled variables and two
manipulated variables is an area contained within MV1 and MV2 limits,
represented by the vertical and horizontal lines, and CV1 and CV2 limits,
represented by the parallelogram, per Equations 9-22 and 9-23.

CV1_min ≤ a11·MV1 + a12·MV2 ≤ CV1_max    (9-22)

CV2_min ≤ a21·MV1 + a22·MV2 ≤ CV2_max    (9-23)

The optimal solution is located at one of the vertices marked by arrows. To find this solution, an LP algorithm calculates the value of the objective
function (9-21) for an initial vertex and improves the solution every step
until it determines the vertex with maximum (or minimum) value of the
objective function as an optimal solution.
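For the two-by-two example above, the incremental LP of Equations 9-18 through 9-21 can be sketched with scipy's linprog; the gain matrix, costs, and limits are illustrative values only:

```python
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.5, 0.8],      # steady-state gains: dCV = A @ dMV (Eq. 9-18)
              [0.4, 1.2]])
u_cv = np.array([2.0, 1.0])    # value of a unit change in each CV
u_mv = np.array([0.5, 0.3])    # cost of a unit change in each MV

c = -u_cv @ A + u_mv           # objective through MV only (minimized, Eq. 9-21)

# CV limits as inequalities on dMV (Eq. 9-20), with zero predicted CV change
cv_min, cv_max = np.array([-1.0, -1.0]), np.array([3.0, 2.0])
A_ub = np.vstack([A, -A])
b_ub = np.concatenate([cv_max, -cv_min])

bounds = [(-2.0, 2.0), (-2.0, 2.0)]   # MV limits as bounds on dMV (Eq. 9-19)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(res.x)                   # optimal MV increments, found at a vertex
```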

Optimal MV values are applied to the MPC algorithm as target values of MV to be achieved within the control horizon. Therefore, the MPC algorithm working with the optimizer has two main objectives:

1. Minimize CV control error with minimal MV moves.

2. Achieve optimal steady-state MV values set up by the optimizer.

The original MPC algorithm is extended to include the MV targets in the least-squares solution.

Figure 9-49 illustrates the optimizer and MPC integration.



Figure 9-49. MPC and Optimizer Integration

Optimization effectively becomes an integral part of the MPC controller. For every scan, the integrated MPC controller and optimizer perform the
following sequence of operations:

1. Update CV, AV, and DV measurements.
2. Update CV and AV predictions.
3. Develop optimal MV steady-state targets.
4. Develop MV outputs, accounting for CV targets and MV targets.
5. Update the MVs.

There is a typical situation where the optimizer can't find an optimal solution because of too-tight constraint limits, too many set points, or too severe disturbances. In such cases, the system of constraints is loosened by changing set points to range control and abandoning some constraints. This can be done in such a way as to satisfy all constraints to some extent, or to drop the lower-priority constraints completely.

Ill conditioning is another typical problem that an optimizer has to deal with. As in the MPC algorithm, ill conditioning manifests itself as
excessive changes in the calculated MV targets, even for a minor correc-
tion of constraints. Ill conditioning is removed dynamically by changing
the configuration of active constraints (abandoning some constraints) or
by removing the association between constraint or controlled variables
and manipulated variables with excessive moves.

References
1. Garcia, C.E., Prett, D.M. and Morari, M., "Model Predictive Control: Theory
and Practice - A Survey," Automatica, 25(3), 1989, pp. 335-348.
2. Muske, K.R. and Rawlings, J.B., "Model Predictive Control with Linear
Models," AIChE, 39(2), 1993, pp. 262-287.
3. Froisy, J.B., "Model Predictive Control: Past, Present and Future," ISA
Transactions, 33, 1994, pp. 235-243.
4. Qin, S. J. and Badgwell, T. A., "An Overview of Industrial Model Predictive
Control Technology," Fifth International Conference on Chemical Process Control,
AIChE and CACHE, 1997, pp. 232-256.
5. Badgwell, T.A., and S.J. Qin, "Review of Nonlinear Model Predictive Control
Applications," in Non-linear Predictive Control: Theory and Practice, B.
Kouvaritakis and M. Cannon eds., IEE Publishing, 2001, pp. 3-32.
6. McMillan, G.K., Process/Industrial Instruments and Control Handbook, McGraw-
Hill, 1999, ISBN 0-07-012582-1, pp. 10.190-220.
7. Lee, J.H., Morari, M., and Garcia, C.E., "State-space Interpretation of Model
Predictive Control," Automatica, vol. 30, no. 4, 1994, pp. 707-717.
8. Wojsznis, K.W. and Wojsznis, P.W., "Robust Predictive Controller in Object-
Oriented Implementation," Advances in Instrumentation and Control, vol. 48,
part 1, 1993, p. 521.
9. Morari, M., Ricker, N.L., and Zafiriou, E., "Model Predictive Control,"
Workshop No. 4, American Control Conference, Chicago, June, 1992.
10. Wojsznis, K.W., Blevins, L.T., and Nixon, M., "Easy Robust Optimal Predictive
Controller," Advances in Instrumentation and Control, ISA Tech, New Orleans,
2000.
11. Prett, David M. and Garcia, Carlos, Fundamental Process Control, Butterworths,
1988, ISBN 0-409-90082-6.
12. Camacho, E.F., and Bordons, C., Model Predictive Control in the Process Industry,
Springer-Verlag, London, 1995, ISBN 3-540-19924-1.
13. Shinskey, F.G., Process Control Systems: Application, Design, and Adjustment, 3rd edition, McGraw-Hill, New York, 1988.
14. Wojsznis, W., Gudaz, J., Mehta, A., and Blevins, T., "Practical Approach to
Tuning MPC," in Proceedings of ISA 2001 Conference, Houston, September,
2001.
15. Cutler, C. R. and Ramaker, B. L., "Dynamic Matrix Control - A Computer Control Algorithm," Proceedings, JACC, San Francisco, 1980.
16. DeltaV™ Home Page: http://www.easydeltav.com.
17. McMillan, Gregory, "Opportunities in Process Control," InTech, November,
1992.
18. McMillan, Gregory, "A New Era for Model Predictive Control," Control, vol.
XIII, no. 05, 2000, pp. 45-50.
19. McMillan, Gregory, Good Tuning: A Pocket Guide, ISA, 2000, pp. 78-82.

20. Butler, Douglas L., Cameron, Robert A., Brown, Michael W., and McMillan,
Gregory K., "Constrained Multivariable Predictive Control of Plastic Sheets,"
ISA Expo 2001, Houston, Paper 1022.
21. Cutler, C. R. and Ramaker, B. L., "Dynamic Matrix Control - A Computer Control Algorithm," Proceedings, JACC, San Francisco, 1980.
22. Box, George E. P., Jenkins, Gwilym M., and Reinsel, Gregory C., Time Series Analysis, Forecasting and Control, 3rd Edition, Prentice Hall, New Jersey, 1994.
10
Virtual Plant

Practice
Overview
A virtual plant environment is essential for the testing of control systems
and processes and the training of operators and engineers before and after
startup. The capital cost associated with a new plant or a major plant
expansion in the process industry often involves hundreds of millions of
dollars. Any improvement in the design, engineering, construction, or
startup that allows full operation of the plant to be achieved in a shorter
period of time can result in significant savings to the investors in such a
project. High-fidelity process simulations that can employ most of the
equations and physical properties in chemical engineering are commonly
used by many industries in the design of a process to examine and verify
process performance under a variety of operating conditions. The technol-
ogy is now available from companies to convert these steady-state models
into high-fidelity dynamic models with true pressure and flow dynamics
by the simple addition of volumes, pump curves and efficiencies, and
pressure drops. Details on process equipment, control valves, and mea-
surements are added so that the model provides a dynamic simulation of
everything in the field and becomes a warehouse of plant knowledge and
tools for training and testing with widespread access [10.1] [10.2].

High-fidelity dynamic simulations are used to test how the process and
control system reacts during equipment failures, upsets, and startups
[10.3] [10.4]. It provides an important learning tool for process and control
engineers as well as operators [10.5]. The virtual plant can go where you don't
want the real plant to go or don't think the real plant can go. Thus, it can show
how to recover from undesirable situations and reveal unforeseen or doubted capa-
bilities [10.1]. High-fidelity simulations are essential for real-time optimiza-


tion and for dealing with the operator disengagement associated with
extensive automation. A high-fidelity virtual plant elevates skill levels via
refresher courses for existing operators and introductory courses for new
operators. Advanced training courses help the operators to be ready for
failures and to gain confidence in the ability of the advanced control sys-
tem. To get the most out of an advanced control system, a high-fidelity
dynamic simulation is needed. Even if material balances are added to tie-
back simulations to provide some additional dynamics to supplement the
typical adjustable filters, delays, and gains, the model fidelity is still woe-
fully inadequate to show the effect of physical properties, process dynam-
ics, equipment characteristics, and control system interactions [10.1] [10.2].

For the virtual plant to show the true performance of an operating plant,
the exact control system must be used together with a high-fidelity
dynamic simulation of everything in the field. The ability of a dynamic
simulation package to duplicate the behavior of a control system is lim-
ited, in most cases, to simple control loops with a generic operator inter-
face. It is unreasonable to expect a dynamic modeling package to match
the investment made by control system manufacturers in the development
of an extensive function-block control language. Even if the model had
more control capability, the effort of duplication of the control system may
approach the cost of the process model. Even more important, it is funda-
mentally wrong to test anything other than the actual configuration.
Therefore, reconstructing or emulating the control system in a computer is
both expensive and unreliable. In some cases, actual control-system equip-
ment has been connected to process-simulation software by special inter-
faces. However, the initial and upgrade cost to plants is too high to
provide this setup in training and testing centers, particularly since hard-
ware today rapidly becomes obsolete.

These setups are still used today to provide training and testing before the
I/O is available in the field. The virtual plant by definition has no field I/O, so it can be used even for pre-project studies. The virtual plant provides
a lower-cost and more flexible, portable, and accessible alternative to the
traditional hardware setup in a training room. The use of the actual hard-
ware for studies, testing, and training is not only a dinosaur but is also
problematic because access is restricted and delayed. By the time the con-
trol system is purchased, it is often too late to make changes in the process
design based on what is learned from the simulation. Problems in control
schemes and equipment sizing and operator errors can result in damaged
equipment, spills, and an extended time spent in commissioning the plant
or in achieving production or quality targets. Skimping on simulation util-
ity and fidelity is risky business.

The introduction of high-fidelity process simulation and actual control system design in the early steps of a project by the use of a virtual plant
can enable faster and safer plant startups as illustrated in Figure 10-1.

Figure 10-1. Virtual Plant Capability for Checkout and Training Before Startup

A standard feature of modern control systems is the ability to simulate the entire control system, including Fieldbus devices, configured and
tested on a single personal computer (PC). Combining this capability with
a process simulator, it is possible to verify in the design stage whether the
process in conjunction with a proposed control strategy is capable of deliv-
ering the target throughput, efficiency and quality. System configuration,
control strategy testing with the operator interface and training of opera-
tors is done independent of the shipment of control system hardware.
Simple or enhanced tieback simulations are constructed using standard
tools in the control system to support the configuration engineer and to
check out whether the configuration matches the control system definition
and to introduce the operator to how displays are accessed. In order to learn how the process and the control system respond to realistic upsets and
failures and startups, a high-fidelity dynamic process simulation is needed
[10.1] [10.2] [10.3] [10.4].

As a result of joint development between suppliers, it is now possible to combine on a single computer, or distribute between computers, the full
capability of a modern control system with high-fidelity simulation soft-
ware. Interfaces for the user allow the process-simulation and control-sys-
tem software execution to be coordinated and initialized so that initial
operating conditions for the simulation are easily established. In addition,
it is possible to set the process and control simulation execution faster and
slower than real time.

Opportunity Assessment
The concept is simple, but it has far-ranging benefits. The integration of a
scalable virtual DCS with a dynamic plant model via Object Linking and Embedding (OLE) for Process Control (OPC) on a laptop or multi-node PC net-
work places a virtual plant at your fingertips. The result is an accessible
graphical tool in a familiar Windows environment that can develop, store,
and share a knowledge base of both the process and the control system,
including the operator interface. The scalable virtual DCS is the ware-
house of control knowledge and expands the access to testing and improv-
ing the control-system strategy from the configuration specialist to the
process control and the process engineer. Similarly, the dynamic plant
model is the warehouse of process knowledge and extends the access to
testing and improving the plant dynamics from the simulation specialist
to the process and process control engineer.

The following are some of the areas of expected benefits:

1. Development, testing, and sharing of a library of control modules.


2. Development of working examples for learning control strategy,
dynamics, and tuning principles.
3. Development of application guidelines for implementing various
types of control.
4. Investigation of the performance of various control system
alternatives.
5. Creation of portable, low-cost systems for training operators and
engineers on the process and control system.
6. Development of adaptive correction of feedforward and signal
characterization.
7. Development of online property estimators and neural networks
for inferential measurements.
8. Generation of bump and pseudo-random binary step-test data for
building experimental models.
9. Testing of Constrained Multivariable Predictive Controllers
(CMPC) for nonlinear systems.
10. Improvement of operator interfaces, smart alarms, and
diagnostics.
11. Continuous improvement and maintenance of systems by testing performance for various scenarios.
12. Testing the feasibility and implementation of real-time
optimization and data reconciliation.
Leading companies in the process industry have virtual plants with high-
fidelity process simulations. Benefits are starting to be realized in the cate-
gories of (1) through (6) [10.5].

Most of the costs of advanced controls are associated with the time in the
field needed to generate test data and develop experimental models. As
more dynamic process models are built, it is possible to generate some of
the data and, in particular, to be able to identify some of the process gains
and time constants to provide a more efficient implementation of
advanced controls. This virtual plant capability is used to test the effect of
changing raw material, operations, and equipment conditions offline
ahead of time so that the performance and on-stream time of the control
system are more sustainable after the specialists leave the plant site.

With the high-fidelity simulator, a single model may incrementally evolve over the plant's life cycle starting with a steady-state model early in the
design phase and culminating in a rigorous dynamic model used in an
operator training system. With the capability to execute all features of the
control system on a single PC or distribute them to multiple PCs along
with the high-fidelity process simulation, it is now possible to develop real
control strategies earlier in the design phases of the project. With this new
capability, earlier evaluation of tradeoffs between process design and con-
trol strategy design can result in a better overall design. This provides the
user with maximum simulation flexibility over the complete plant life
cycle from design to operator training.

The estimated savings from one company's use of the virtual plant environ-
ment in the areas of design, checkout, operator training and continued
improvements in the process operation are shown in Figure 10-2.

Figure 10-2. Estimated Savings from the Virtual Plant

Examples
Examples of how a virtual plant is used in operator training and for plant
and control system design follow.

Failure Handling
An engineer is responsible for the control design of a new plant.
His/her control system will not be shipped for another month but he/she
has installed a control system as part of a virtual plant on his office PC.
One loop that is critical to plant operations uses a feedforward signal from
a feed stream that is not used for some products. Under the condition of
no flow, the input signal may have a BAD status. It is important in his/her
design of the control system to understand firsthand how the PID behaves
under this condition.

To quickly learn more about how the PID controller operates under these
conditions, the engineer assigns the associated control module to his/her
PC, downloads and then through online engineering tools provided with
the control system, he/she examines the PID and feedforward signal
delivered by an AI block. Utilizing the SIMULATE parameter of the Field-
bus input blocks to the PID controller, he/she establishes a normal operat-
ing condition. By changing the SIMULATE parameter status of the
feedforward input to BAD and then back to GOOD, he/she observes that
the operation of the PID is not disrupted by the status going BAD. Know-
ing that the control system PID addresses this situation, the engineer is
able to finalize the design of this critical control loop.

Control Response
Plant production is limited by feedstock processing. To determine if a pro-
posed process design change will allow greater throughput, a high-fidelity
dynamic simulation of the process is constructed. The engineer responsi-
ble for resolving the production limitation must determine whether the
product specifications can still be met using the original control strategy.
To examine the control-system response to changes in feedstock, the engi-
neer installs the control system on his development PC with a high-
fidelity process-simulation package to create a virtual plant.

By assigning the current control strategy used in the plant to his PC and
configuring the process simulation to access the control I/O blocks, the
engineer is able to fully simulate the dynamic response of his plant with
the current control strategy. By introducing the expected feedstock
changes into the process simulation, he determines that the current control
strategy does a good job but that the tuning of some loops will need to be
changed. Using the control system autotuning capability in this simulation
environment, he retunes the simulated control loops and establishes what
the best tuning is for the new process design.

Operator Training
An operating area of an existing plant is being expanded. The design work
associated with the process changes has been verified using a steady-state
process simulation. As part of this modernization project, a new control
system is being installed in this area. This new control system is designed
to allow all of the control system to be simulated on a single PC or distrib-
uted between multiple PCs. The plant operators are not familiar with the
new control system or the process changes and their impact on the plant
operation.

Each operator is provided with a PC that is a virtual plant with the actual
displays imported from the real control system. The process simulation is
connected to the imported control system configuration using the OPC
interface. The effort to create the training system is minimized by the fact
that the actual control system configuration is directly and easily imported
and exported to facilitate rapid revision of the displays and configuration.
By incorporating the process simulation used in the design process, the
training system accurately reflects the dynamic process response so the
operators are trained on the process changes as well as the controls and
new operator interface.

Application

General Procedure
All of the OPC-compliant software used in the actual control system
should be loaded into the virtual plant. Dynamic simulations and virtual
control systems require a lot of memory and are processor hogs. PCs more
than a couple of years old are undesirable.

The flow is specified for some steady-state models that are used for flow
sheet simulators, whereas for a dynamic model the flow depends upon
flow resistances and the curves of prime movers just as it would in a real
plant. These steady-state models often have negative pressure drops.
Valve positions and sizes, pump curves, and static heads have no effect on flow in a steady-state model, so the user pays no attention to the pressure profile. It is common for vessels and columns to empty because of a
reverse flow to the feed line when switched from the steady state to
dynamic mode.

The other steady-state practice that is an endless source of headaches is specification of a duty (heat input or heat output) instead of inserting an
actual heat-transfer surface and utility stream. Unfortunately this means
the throttle valves and interactions of the utilities needed for control-sys-
tem studies and testing are missing. Drastically wrong pressure profiles
and missing flow resistances, prime movers, jackets, coils, and utility
streams are the major obstacles in converting a steady-state model to a
dynamic model. The degree of detail needed to make the model dynamic
is also the degree needed to make the model sufficiently representative of
actual plant operation to enable reconciliation and real-time optimization.
Steady-state models use techniques convenient for converging to a new
process design. Almost any flow, temperature, or composition flagged as
adjustable can be specified and variables optimized. A dynamic model,
like the real plant, relies upon control loops and final elements.

The major steps and check points for the creation of a high-fidelity
dynamic virtual plant are:

1. Set up one or more PCs, each with at least 512 MB of memory and a 1 GHz or faster processor, running a Windows NT or XP Professional operating system.
2. Connect multiple PCs to a dedicated control network and the local
area network (LAN). The LAN connection must be set up so that
PC-to-PC communication is direct and circumvents the rest of the
LAN, which is reserved for transmission of files and actual plant
data for adaptation of the process model.
3. Install the software for the process simulator, the virtual
representation of the scalable DCS, the data historian, and tools for
advanced data analysis (e.g., multivariate principal component
analysis) and for advanced control (e.g., real-time optimization).
4. Create dynamic-ready high-fidelity steady-state simulations:
a. Add a valve with the correct inherent flow characteristic (e.g.,
linear or equal percentage) to each stream to provide a flow
resistance and a method of stopping and starting the flow to
each stream.
b. Use heat exchangers with actual utility streams instead of
simple heat sources or sinks and duty streams.
c. Add all pumps and compressors that actually exist.

d. Add the correct pressure drops for each valve, heat exchanger,
and column tray and the correct pressure rise for each prime
mover biased to include the static head (a pump or compressor
curve biased to include static head is preferable to a power
specification).
e. Make sure the minimum pressure drop for any flow resistance
on a stream entering or exiting the flow sheet is at least 2 psi
(other resistances should have a pressure drop of at least 0.2
psi).
f. Check the pressure profile throughout the system to make sure
there is no reverse flow (make sure the feed pressure is much
greater than the tray pressure).
g. Activate the automatic sizing routine to set the flow coefficients
for each flow resistance (valve, exchanger, or tray).
h. Add the correct holdup volumes for each piece of equipment.
5. Add the transfer functions to simulate time delay, lag, and noise
for each measurement.
6. Add the stick-slip, dead band, and stroking time for each control
valve.
7. Import the actual configuration of the basic control system and
faceplates. Import the higher-level controls and displays as they
become available.
8. Complete the interface table between the process model and
control system.
9. Verify that all of the basic PID controllers are available and have
the correct control actions and set points and conservative tuning
settings.
10. Put all level and pressure controllers in automatic to prevent volumes and headers from going empty or going to negative pressures.
11. Make sure all of the controller outputs are initialized correctly to
match the control valve positions; this may require turning on the
integrator.
12. With the integrator on and the dynamic model running,
commission the rest of the controllers, line out the dynamic plant
to match the steady-state case, and save out the case.
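
To illustrate steps 5 and 6, a minimal discrete-time sketch in Python follows,
assuming a fixed execution period dt in seconds; the class and parameter names
are our own illustrations and are not taken from any simulation product.

    import random
    from collections import deque

    class MeasurementDynamics:
        """Step 5: dead time (delay line), first-order lag, and Gaussian
        noise applied to one simulated measurement."""
        def __init__(self, dead_time, lag, noise_sd, dt):
            self.delay = deque([0.0] * max(1, int(dead_time / dt)))
            self.alpha = dt / (lag + dt)   # first-order filter coefficient
            self.noise_sd = noise_sd
            self.y = 0.0

        def update(self, u):
            self.delay.append(u)                 # newest raw value in,
            delayed = self.delay.popleft()       # oldest out = dead time
            self.y += self.alpha * (delayed - self.y)   # first-order lag
            return self.y + random.gauss(0.0, self.noise_sd)

    class ControlValve:
        """Step 6: dead band plus a stroking-time (velocity) limit for one
        simulated control valve (signal and position in % of span)."""
        def __init__(self, dead_band_pct, stroking_time, dt):
            self.dead_band = dead_band_pct
            self.max_move = 100.0 * dt / stroking_time  # % of span per step
            self.demand = 0.0
            self.position = 0.0

        def update(self, signal):
            # Backlash-style dead band: small reversals do not move the trim.
            if abs(signal - self.demand) > self.dead_band:
                self.demand = signal
            move = self.demand - self.position
            move = max(-self.max_move, min(self.max_move, move))
            self.position += move
            return self.position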

The value of using an identical configuration for both the actual control
system installed in the plant and for an operator training system cannot be
overstated. Maximum benefit is achieved by allowing either physical
I/O or simulated I/O to be selected. With this feature, only one control
configuration needs to be maintained. Any changes made to the control
configuration for the physical plant are automatically available in the
operator training system.

The virtual plant capability on a PC is utilized to develop the control
system configuration, check out the control and operator interface, and
conduct operator training without the need for additional control-system
hardware, as illustrated in Figure 10-3.

Figure 10-3. Hardware-Based Simulation System for Checkout and Training

The major benefit of this approach is the dramatic reduction in the hard-
ware required for configuration development, control and operator inter-
face checkout, and operator training. The modules developed for the
control system are assigned to the PC and executed as though they were in
the controller. Using the simulation capability of input-output function
blocks, modules may be created to simulate the process for control system
checkout and operator training. Also, since support is provided for access
to block parameters through OPC, it is possible to execute and check out
applications that normally run in a workstation and that access informa-
tion in the simulated controller through OPC.

In the real world, both the plant and the controllers run in real time. In
simulated time, depending on the modeling rigor and the computing
resources, the simulation can potentially run faster than real time. Run-
ning faster than real time is beneficial for slow processes because it can
shorten training or control-system evaluation times.
Model characterization studies for multi-variable predictive control appli-
cations are other situations where faster-than-real-time operation is essen-
tial. If the process models are running faster than real time, it is important
for the controls to run at the same speed as the simulation. Otherwise, the
controllers tuned for real time will not respond properly in simulation
time.

The reverse problem occurs when the process simulation cannot meet the
real-time constraint because of inadequate processing power or an overly
rigorous model. In either situation, the controller should be able to run
slower than real time to match the process-model execution speed. The
computational load of the process modeling can vary over time, depend-
ing on the state of the model. The complete system is most flexible when
the controller time base is dynamically changed as the complete simula-
tion progresses. To address these requirements, the capability is provided
in this modern control system to run function blocks faster or slower than
real time in a simulation environment. In addition, a trainer or application
may coordinate the execution of the control system and process simula-
tion.
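
As a minimal sketch of this coordination, assuming the process model and
the control modules each expose a step callback that advances one simulated
interval (the names here are illustrative, not a real product API):

    import time

    def run_coordinated(model_step, controller_step, dt, speed_factor, steps):
        """Advance the process model and the controls on the same simulated
        clock; speed_factor > 1 runs faster than real time, < 1 slower."""
        for _ in range(steps):
            model_step(dt)        # one simulated interval of the process model
            controller_step(dt)   # controls advance by the SAME simulated dt,
                                  # so tuning in real-time units stays valid
            time.sleep(dt / speed_factor)   # wall-clock pacing

A trainer or application could lower speed_factor on the fly when the model
falls behind, which is the dynamic time-base change described above.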

Online Adaptation
Figure 10-4 shows how a high-fidelity dynamic process model is adapted
online to match the process by a straightforward but innovative use of
model predictive control (MPC). It does not require any special features or
algorithms and is well within the ability of the average control engineer to
implement. It is implemented without any intrusion or risk to actual plant
operations since it reads but does not write to the actual plant control sys-
tem. Users are surprised at how much they can learn about the model and
the plant. The basic procedure is to tackle one unit operation at a time as
follows (a rough sketch of the adaptation idea appears after the list):

1. An MPC is set up with key process variables from the virtual plant
as controlled variables (CV) and key model parameters from the
virtual plant as manipulated variables (MV). The targets are the
actual measurements from the real plant.
2. Analog output (AO) blocks are added to the virtual control system
to receive the key model parameters. The AO blocks are
automatically initialized by the model when the integrator is
started.
3. Analog input (AI) blocks are added to the virtual control system to
receive the actual measurements and set points from the real plant.
These measurements become the MPC targets and the actual set
points become the supervisory (SUP) set points of the virtual loops.
4. The virtual plant is connected via a data historian to the real plant
DCS by a LAN so that it has access to actual measurements and set
points.
5. Transfer function-block time delays are adjusted for the key
process variables in the process model to match the dead time in
the actual measurements in the real plant.
6. Critical control loops in the virtual plant are tuned to match the
response of the actual control loops in the real plant.
7. The best practices discussed in Chapter 9 for the testing,
identification, tuning, and commissioning of an MPC system are
followed, but the MPC execution time and move suppression are
set to provide a very slow and gradual response that optimizes key
model parameters without disrupting the virtual plant's response
to upsets.
8. Once the model parameters settle in on average values, implement
these as the new starting point for the parameters and move on to
another unit operation. If a model parameter can change over time
(e.g., column tray pressure drop due to fouling), the MPC is left on
but is slowed down even further, since most process changes, such
as fouling and catalyst activity loss, are usually very slow.
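
The MPC itself performs the optimization in this procedure; as a toy
stand-in for the idea, a single heavily suppressed integral update that
nudges one model parameter toward matching one plant measurement might
look like the following (names and gain value are assumptions; a real MPC
coordinates many CV/MV pairs with constraints):

    def adapt_parameter(param, model_pv, plant_pv, gain=0.01, lo=None, hi=None):
        """One very slow adaptation step: the tiny gain plays the role of MPC
        move suppression. Assumes the model PV rises when the parameter rises;
        flip the sign of gain otherwise. Clamp to physically plausible limits."""
        param += gain * (plant_pv - model_pv)
        if lo is not None:
            param = max(lo, param)
        if hi is not None:
            param = min(hi, param)
        return param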

Figure 10-4. Non-intrusive Adaptation of a High-fidelity Process Model by an MPC
Appendix A
Additional Opportunity
Assessment Questions

1. Increase benefits in all areas
a. Do you have loops not running in the highest design mode
(e.g., auto, cascade, supervisory) because of oscillations, poor
control, or upsets? Look for improvement by the use of auto
tuners, gain scheduling, fast reproducible measurements,
better loop designs, and negligible valve stick-slip.
b. Do you have upsets (bumps) on critical process parameters
from process interactions? Look for model predictive control to
decouple and anticipate upsets.
c. Do you have upsets from cycling back and forth across a split
range point? Look for ways to eliminate split range or make
transitions smoother with model predictive control.
d. Do you have a tool to define correlations and interactions in
systems? Install a performance monitoring-system and
advanced data-analysis tools.
e. Do you have bad measurements that you can't rely on?
Improve measurements by using advanced technologies.
Calculate batch averages and alert on deviations (e.g., don't
rely on pH for endpoint detection if the curve differs from
previous good batches).
f. Do you have areas you can't control because they are hard to
measure? Calculate property estimators from other
measurements and design new control schemes.
g. On continuous processes, do you have ratios installed on
related flow loops or is everything set individually based on
max feed rates? Ratio control can be installed on wastewater-
to-feed rate, steam-to-column or reactor-feed rate, and additive
(soap, oil, antifoam)-to-filter or column-feed rate.
h. On set point changes, does it take a long time for the variable to
really start to move, and once moving does it overshoot (or
undershoot)? Use a head start by driving the valve fully open or
closed at the beginning and switching to closed-loop control only
when the variable comes close to the desired set point.
i. Are you interested in controlling rate of change of a variable
(e.g., heat-up or cooling rates)? If so, don't try to control the
variable itself but use a RATE calculation for the loop PV and
SP to control the rate of rise. The new loop can be used as an
override controller where a signal selector chooses the safest
controller output.
2. Improve yields
a. In the areas where byproducts are made, can you look for other
sequences, other parameters (temperatures, hold times,
agitation, etc.)? Compare plant systems to the lab (theoretical)
and look for differences. Redesign the plant system to match the
lab situation as closely as possible.
b. Would more accurate endpoint detection improve yield or
quality? Consider rate-of-change indication of one or more
measurements after a designated batch time or feed total as an
endpoint indicator.
c. Do you have sequential batch loads in your reactors? Would
parallel feed additions or ratio control improve reaction,
generate fewer byproducts, improve quality and yield, or
reduce cycle time?
d. Do you run your recovery systems just high enough to recover
products at the strengths you need to reuse them? Higher
purity will require more energy and create more effluent.
e. Do you have manually set purge or vent flows? Look for ways
to control them and ratio them to product flows. Calculate and
control minimum required purge or vent rates.
3. Reduce energy and utilities
a. Are steam, utilities, wastewater streams, and big motors switched
off during temporary shutdowns? Implement systems that will shut
these off automatically on temporary and other shutdowns.
b. Do you run dryer temperatures just high enough with good
controls, or do you always run well above the critical "wet"
temperature? Install better controls (feedforward from feed)
and adjust set points for lower energy usage.
c. Could you reduce energy sources (e.g., refrigeration unit
temperature, compressor pressure, boiler pressure) until user
valves reach maximum controllable position? Consider a
valve-position controller to optimize use.
d. Can you downsize over-designed pumps and motors? Can you
eliminate flow valves with variable-frequency drives for
motors?
4. Reduce effluent
a. Did you think about reusing your waste streams back into your
process? Hot water and condensate can be reused for heating
purposes or as reactor ingredients.
b. Can you reuse second wastewater as first wastewater for
filtering your products? When byproducts are created, can you
identify the cause (e.g., poor agitation, hot spots on heating
coils, bad mixing) and adjust controls to reduce them?
5. Reduce operating cost
a. Can you automate fieldwork to reduce the manpower needed
to operate your plant?
b. Can you reschedule your operations so you can send people to
other departments or use operators to help with maintenance
during shutdown periods?
c. Are critical quality and economic variables calculated,
analyzed, and indicated online? Give the operator information on
yields, energy consumption, production rates, rework amounts,
etc. Make the operator aware of the need to keep operating costs
under control.
6. Reduce maintenance cost
a. Do you have maintenance costs due to poor operation (e.g.,
pumps cavitating, overloads, overpressure)? Automate startup
procedures and best operating practices for equipment.
b. Do you blow rupture discs and activate relief valves due to
manual misoperation or bad control systems? Use better control
and automation to reduce safety risk and possible downtime.
c. Could smoother control (reduced thermal and pressure shocks)
or tighter control (less byproduct and contamination) increase
time between repairing, replacing or cleaning equipment?
7. Reduce lab cost
a. Can you install in-line analyzers and control to reduce the
number of lab analyses? It will also reduce cycle time if you
don't have to wait for lab results.
b. Can you reduce the number of lab analyses through improved
endpoint detection and automation resulting in more
consistency? Analyze lab results using SPC and adjust the
sampling frequency.
8. Improve product quality
a. Can you measure or calculate constraints from existing
measurements? Once measurable, it is controllable.
b. How close are you running to constraints (critical reaction
temperatures, pressures, feed rates, etc.)? Use advanced control
to run closer to or "ride" constraints. Switch off reset action in
the constraint direction to run closer. Use override controls,
feedforward, dead-time compensators, valve constraint
controllers, rate-of-change control, and model predictive
control to improve quality and capacity without violating
constraints.
c. Do you automatically capture product parameter settings
when quality is good? Use last-run data on startups to compare
and find ideal settings for each product.
9. Reduce rework
a. Do you produce rework because the operator sometimes
makes human mistakes? Look for automation in the areas
where most mistakes are being made.
b. Are all your operating instructions clear, up-to-date and easily
accessible? Use unique process parameter descriptions and
units to avoid confusing the operator at any time.
10. Reduce shutdowns and upsets
a. Do you have too many or too few alarms? Is the operator
alerted in advance without being confused by too many alarms?
Make alarms smart (e.g., suppress a "Low Flow" alarm when the
pump is stopped). Use advanced alarm handling and logging.
b. Does the operator have easy access to information when
something goes wrong? Can he do his troubleshooting when
things fail? Provide help displays with dynamic information
and instructions on how to correct the problem.
c. Does the automation level define IF-THEN actions to be performed
by software when something goes wrong? Do not just stop and
give control to the operator.
d. Do you have interlocks shutting you down? Is it easy to track
exactly what happened? Put all the interlocks (soft-wired, plus
copies of the hardwired ones) into an automation system and
provide alarms and dynamic interlock help displays for all
interlock conditions. Color display access numbers to reflect
mode changes and interlock condition status.
e. Do you have shutdowns, upsets, and incidents due to
bypassing of interlocks? Use interlock-bypass procedures and
indications (forms). Display interlock and bypass status on
control system.
f. Do you have buffer tanks between batch and continuous
processes? Smart level control can be used to prevent
shutdowns in the continuous process by looking at batch cycle
times and smoothing the continuous feed during batch problems
(see Appendix B).
g. Do you have a system set up to track every problem and fault
that generates upsets, rework or shutdowns? Learn from
mistakes and set actions to prevent them from happening again. Use
problem reports.
h. Can product changeover (transition) time be reduced?
Automate changeover operations (emptying, cleaning, and
washing). Look for methods to reduce transition times and set
operating parameters for every product.
11. Reduce cycle time
a. Can you reduce batch cycle times by eliminating unnecessary
waits and introducing parallel sequences (e.g., starting heating
before loading, using head starts, combining hold periods with
cooling, etc.)?
b. Can you eliminate concurrent actions that slow down your
process (e.g., blowing cold air in a tank for mixing purposes
while heating it up)?
c. Can you speed up heating times and cooling times by using
energy more efficiently (colder water, better insulation, better
agitation, improved burner performance, etc.)?
d. Can you prepare premixes in separate tanks in parallel with
other sequences, and use bigger pumps, feed lines, and valves?
e. Is the operator informed and warned when a system is waiting
for a variable to reach a condition and the variable stops moving
toward that condition? If not, the system will wait forever and
the operator will find out too late. Configure multiple dynamic
waits.
f. Could you detect endpoints sooner? Consider rate of change or
sustained change detection.
g. Can you replace weighing systems with mass-flow meters?
Mass flow can also be calculated for raw materials from flow,
temperature, and pressure measurements. Coriolis mass flow meters
are more accurate than load cells and can reduce batch load times
and cycle times dramatically.
h. Can you redesign process parameters and interlock settings
based on experience from the past? Adapt settings to process
evolution.
12. Avoid costs for other investments
a. Could automation of existing adjustments eliminate the need
for a blend or adjustment tank?
b. Can automation and the associated improvement in
consistency of operation reduce the risk of misoperation that
causes spills and technical incidents?
Appendix B
Batch-to-Continuous
Transition

In the batch-to-continuous transition, it is desirable to fully utilize the
surge volume to dampen out flow fluctuations to the continuous opera-
tions. This applies to both liquid and solids flow applications. Often the
consequences of flow upsets are severe, particularly if downstream loops
do not all have flow feedforward (e.g., distillate, bottoms, and steam-to-
flow ratios for columns).

The directional velocity limit spreads the flow upset over one half of the
surge working volume (e.g., set point = 50%). The up- and down-scale
velocity limits are set to keep the level near set point by means of a small
sawtooth around the average flow for the batch cycle with minimal feed-
back correction. A head start of the feedforward flow for the first batch is
essential to establish a feedforward signal close to the average flow and to
reduce the sawtooth amplitude.

Without a head start and with a constant velocity limit set to achieve the
average flow after one batch cycle, the tank level will rise and then contin-
ually drop below set point until the full feedback correction is reached.

Proper implementation of the feedforward can reduce the level controller
gain requirement to one tenth of its normal value for full feedback correc-
tion (e.g., from 2 to 0.2 for a 50% set point).

Adaptive gain adjustment or gain scheduling can be used to reduce the
gain as the error decreases. This is done in a discrete fashion by the use of
gain zones. An alternative is error-squared level control for a continuous
and dramatic gain reduction near set point to provide further smoothing.


There are several important points to consider. First, a zero gain may cause
some controllers to think they have no proportional mode and are "integral
only." Second, a zero gain or dead band region provides no surge capacity,
because the integrating response of the level will cause the measurement to
go to one end or the other of the region. Third, any reset action will cause
nearly sustained oscillations for the low gain settings. Fourth, the control
valve must have a minimal amount of dead band and no stick-slip (that is,
a good actuator, packing, linkages, I/P, and positioner) or the limit cycle
will be large enough to upset the continuous flow loop.

Figure B-1 shows the use of a feedforward with a velocity limit adapted to
maximize the smoothing of the discharge flow based on the currently
computed time to violation of a level constraint in the surge vessel. The
inlet flow to
the surge tank is from a batch operation that can be in a "hold" state
within and between batches. Consequently, the batch start time and cycle
time are variable. The discharge from the surge tank goes to continuous
operations and should be as smooth as possible.

Figure B-1. Adapted Velocity Limit for a Batch-to-Continuous Transition

Time to violation of level constraint:

    Δt = (Ki * ΔL) / (0.5 * ΔF)                                      (B-1)

Adapted directional velocity limit:

    ΔF/Δt = (0.5 * ΔF²) / (Ki * ΔL)                                  (B-2)

where:
ΔF = continuous flow subtracted from the batch flow (gpm)
Ki = dimensional factor that is the volume in gallons divided by
the 100% level span (gal/%)
ΔL = surge tank level subtracted from the min or max level
constraint (%)
Δt = time to violation of level constraint (minutes)

Equation B-1 computes the time to violation of the min or max level
constraint as the difference between the present level measurement and
the set point, converted to gallons, divided by the average difference
between the batch and continuous flow. Thus, gallons of operating room
divided by the net flow imbalance in gpm gives minutes to constraint
violation.

In Equation B-2 this same flow difference is divided by the time to viola-
tion to get the velocity limit that spreads out a batch cycle disturbance
over the free room in the surge tank. It includes an element of proportional
feedback control in that the velocity limit approaches an instantaneous
response as the level approaches its constraint. It smooths out small fluctu-
ations around the average batch flow, yet greatly accelerates the response
to large upsets and startup and shut-down conditions. It enforces the
material balance by gradually seeking the average continuous flow that
balances out the average batch flow. The implementation of this strategy
on surge tanks progressively smooths out batch operations, especially
where recycle streams are involved, by increasing the attenuation and
decreasing the propagation of upsets.

The batch flow is filtered by a time constant about equal to the off time to
create an average batch flow for the adaptation of the velocity limit
operand, but the instantaneous batch flow is the input to the velocity limit
function.
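
A minimal sketch of Equations B-1 and B-2 applied as a rate limit on the
feedforward follows, assuming a fixed execution period dt in minutes
(function and variable names are illustrative):

    def adapted_velocity_limit(avg_batch_flow, cont_flow, level,
                               level_hi, level_lo, ki):
        """Eq. B-2: velocity limit (gpm/min) that spreads the current flow
        imbalance over the free room in the surge tank. ki is gal/% of level
        span, flows are in gpm, levels in %. A zero imbalance yields a zero
        limit, which simply holds the feedforward."""
        dF = avg_batch_flow - cont_flow                   # net imbalance (gpm)
        dL = (level_hi - level) if dF > 0 else (level - level_lo)
        if dL <= 0:
            return float("inf")        # at the constraint: respond instantly
        return (0.5 * dF * dF) / (ki * dL)

    def rate_limited_feedforward(ff_prev, inst_batch_flow, vel_limit, dt):
        """Move the feedforward toward the instantaneous batch flow no
        faster than the adapted velocity limit."""
        step = inst_batch_flow - ff_prev
        max_step = vel_limit * dt
        return ff_prev + max(-max_step, min(max_step, step))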

Sometimes it is also desirable to ratio the continuous flow to a wild flow.
This can be done when the time to constraint violation is relatively large.
Thus, when the adapted velocity limit (ADL) is small, the feedforward can
be adjusted to approach a wild flow multiplied by a calculated ratio fac-
tor. The adjustment rate should be inversely proportional to the ADL. In
other words, when the time to constraint violation is large, the speed of
ratio optimization is fast, and vice versa.

The use of an error-squared proportional-only level controller for feedback
correction complements the action of the adapted velocity-limited feedfor-
ward to keep the level within limits while providing optimum smoothing.
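
A minimal sketch of such an error-squared proportional-only correction
(output in % of the continuous flow span; the function name and span
normalization are assumptions):

    def error_squared_p_only(level_pct, sp_pct, gain, out_bias_pct, span=100.0):
        """Proportional-only level controller whose effective gain scales with
        |error|/span, shrinking toward zero near set point for dramatic
        smoothing and growing for large errors."""
        e = level_pct - sp_pct
        return out_bias_pct + gain * e * abs(e) / span
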
Appendix C
Definitions

action - relative direction of change in output for a change in input
("direct" for changes in same direction and "reverse" for changes in
opposite direction)

automatic mode - PID controller mode to enable its set point to be locally
adjusted by the operator (also known as LOCAL and AUTO mode)

A/D - analog to digital converter typically used to convert an analog input
to its digital value with a resolution of 0.05% for a 12 bit converter with
one sign bit

COM - Component Object Model (COM) is Microsoft's framework for
developing and supporting program component objects

controller action - relative direction of change in controller output for a
change in controller input ("direct" for same direction and "reverse"
for opposite direction)

controller tuning - adjustment of the proportional, integral (reset), and
derivative mode settings for a PID controller

control interval - time interval between successive executions for digital
devices (same as scan or update time and the inverse of scan rate or
frequency)

controlled variable - a controller input to be kept at a set point (often a
process output such as concentration, flow, pH, pressure, temperature,
or level)


cascade mode - secondary PID controller mode to enable its set point to be
remotely adjusted by the output of a primary controller (also known as
REMOTE and CAS mode)

compression - smallest change in value of a variable that is stored by a
computer or data historian (must not exceed the repeatability of the
CV or the resolution of the MV)

D/A - digital to analog converter typically used to convert a digital value
to an analog output with a resolution of 0.05% for a 12 bit converter
with one sign bit

DDC mode - PID controller direct digital control mode to enable its out-
put to be adjusted by a sequence, interlock, or a computer (also known
as ROUT mode)

dead band - minimum change in the input for a reversal of direction that
will cause a change in the output (important for a control valve and a
manipulated variable)

dead time - time it takes for the process variable to get out of the noise
band after a change in the manipulated variable (same as time delay)

derivative action - controller output changes that are proportional to the
rate of change of the error or the controlled variable (derivative mode
of a PID controller)

Distributed Component Object Model (DCOM) - a protocol that enables
software components to communicate directly over a network in a reli-
able and secure way

dynamic gain - ratio of the output amplitude to a sinusoidal input ampli-
tude after transients have decayed (also known as amplitude and mag-
nitude ratio)

estimator - online calculation (intelligent sensor or inferential measure-
ment) of a process variable (e.g., composition or quality) corrected by a
field or lab measurement

feedback control - a control algorithm to reduce the error between the set
point and the controlled variable (most often a PID or model predic-
tive controller algorithm is used)

feedforward control - a computation of the manipulated variable from a
measurement of the disturbance (most often corrected by a PID or
model predictive controller)

filter time - time constant of a signal filter that is usually applied to the
PV but can be inserted anywhere in the configuration as a function
block (seconds or minutes)

final element - device used to adjust the manipulated variable (typically
it is a control valve but it can be a variable speed drive, compressor
vane, or an electrical heater)

I/P - current to pneumatic converter mounted on the valve to convert a
controller output to a pneumatic signal for the actuator (often inte-
grated with the positioner)

integral action - controller output changes that are proportional to the
integral of the error (integral mode of a PID controller)

limit cycle - continuous oscillation of nearly equal amplitude that persists
for a variety of controller gains (caused by a nonlinearity such as valve
stick-slip or pH)

manipulated variable - variable adjusted by a controller output (most
often it is a process input such as flow but it can be the set point of a
process variable for cascade control)

manual mode - operator adjusts the controller output directly (PID algo-
rithm is suspended but will provide a bumpless transfer to automatic,
cascade, or supervisory control)

model predictive control - model based control of the future trajectory of
a process variable from the changes of manipulated variables (same as
constrained multivariable predictive control)

move limit - largest change in the manipulated variable per execution of a
model predictive controller (normally set large for simulation test but
reduced during commissioning)

nonlinear system - gain, time constant, or time delay and hence controller
tuning settings are not constant but a function of time, direction, oper-
ating point, or load

OLE - Object Linking and Embedding

OPC - OLE for Process Control

open loop gain - final % change in the controller input divided by the %
change in the controller output with the controller in manual (dimen-
sionless steady state gain)

oscillation period - time between successive peaks in any type (damped
or undamped) of oscillation (the period of damped oscillations is usu-
ally larger than the ultimate period)

positioner - controller mounted on the valve whose set point is the desired
position and whose controlled variable is shaft position (also known as
a digital valve controller)

primary controller - master or outer loop controller whose output is the
remote set point of a secondary controller in a cascade control system

process action - relative direction of change in process variable for a
change in the manipulated variable ("direct" for same direction and
"reverse" for opposite direction)

process gain - final change in the process variable divided by the change
in the manipulated variable (commonly used as the open loop gain)

proportional action - controller output changes that are proportional to
the change in error (proportional mode of a PID controller)

remote set point - preferred value for the controlled variable that is not
locally set by the operator but comes from a control system (usually
reserved for cascade control)

repeatability - short term maximum scatter in the output for the same
input approached from the same direction and at the same conditions
(see reproducibility)

reproducibility - long term maximum scatter in the output for the same
input approached from both directions and at the same conditions
(includes repeatability and drift)

resolution - minimum change in the input in the same direction that will
cause a change in the output (important for a control valve and a
manipulated variable)

saturation - controller output is at either low or high limits that are nor-
mally set to match the valve being shut and wide open, respectively
(controller is effectively disabled)

scan time - time interval between successive scans for digital devices
(same as update time or control interval and the inverse of scan rate or
frequency)

secondary controller - slave or inner loop controller whose set point is
remotely adjusted by the output of a primary controller in a cascade
control system

self-regulating process - a process that will reach an open loop steady state
for a change in the manipulated variable if there are no disturbances
(no integrating or runaway response)

sensitivity - ratio of the steady state change in output to the change in the
input (poor sensitivity is caused by poor resolution and a low steady
state gain or flat curve)

slip - rapid movement of valve trim caused by dynamic changes in fric-
tion (the zipping past the desired position means the controller will
never reach its set point)

smart transmitter - transmitter with a microprocessor to compensate for
changes in ambient and process conditions and to provide remote cali-
bration and diagnostics

steady state gain - final change in the output of loop components (control-
ler, valve, process, measurement) divided by the change in input
(static or zero frequency gain)

stick - lack of valve trim movement for a change in input signal caused by
friction or an undersized actuator (major source of valve resolution
limit)

supervisory mode - PID controller mode to enable its set point to be
adjusted by a sequence, model predictive controller, or a computer
(also known as RCAS)

time delay - time it takes for the process variable to get out of the noise
band for a change in the manipulated variable (same as dead time)

time constant - time it takes for the process variable to reach 63% of its
change after its time delay for a step in the manipulated variable (same
as time lag)

time lag - time it takes for the process variable to reach 63% of its change
after its time delay for a step in the manipulated variable (same as time
constant)

tuning weight - tuning adjustment in a model predictive controller (when
applied to a CV or QV it sets priority and when applied to an MV it
sets move suppression for stability)

ultimate gain - controller gain at equal amplitude oscillations for propor-
tional only control (occurs at 180 deg phase shift and is the inverse of
the static gain and amplitude ratio)

ultimate period - period of equal amplitude oscillations for proportional
only control (occurs at 180 deg phase shift and is 2π divided by the
natural frequency in radians/sec)

update time - time interval between successive updates for digital devices
(same as scan time or control interval and the inverse of scan rate or
frequency)

valve action - relative direction of change in flow for a change in the input
signal ("direct or inc-open" for fail closed and "reverse" or "inc-close"
for fail open)

velocity limit - limit placed on the rate of change of a variable with
respect to time (prevents abrupt changes in the controller output from
noise or operator entered set points)

Appendix D
Top 20 Mistakes
Made Today and Every Day
for the Last 40 Years
Reprinted with Permission from Control Magazine [1]

1. Control valve pressure drops that are not engineered. If you consistently
see 5 or 10 psi pressure drops or line size control valves, the
process or instrument engineers are playing the lottery with your
control valves. They will hit it big more often than not, because
most of the valves will end up being oversized. This is accentuated
by the tendency for the mechanical, process, and instrument
engineers to each put in some excess capacity and then choose the
next-larger size. No valve manufacturer in their right mind will
state the response near the seat, because the stick and slip is
unimaginable. This means you will have the unenviable task of
doing this research for them on your process.
2. The use of on-off valves for control valves. You need to pay attention
here to the name "on-off." If you want your valve to go between
being wide open and closed, you have made the right choice.
However, control loops have the strange idea that they can seek
any position and hold it. The primary culprits of the supposed
transition from "on-off" to throttling are rotary valves. Taking a
ball or butterfly valve, slapping on a rack and pinion or scotch
yoke piston actuator and a spool type positioner, and calling it a
control valve will create lots of repetitive cycles and questions like
"What is wrong?" Also beware of names that mean the opposite.
For example, the "high performance" isolation valve is really the
"lousy performance" throttling valve because the high sealing
friction and shaft windup cause excessive stick-slip. Remember
that "tight shutoff" means "high friction." Stick-slip comes from
high friction and shaft windup. Pure dead band comes from
backlash. Per cent for per cent, slip is worse than stick, and stick is
worse than dead band. Dead band causes excessive dead time and
stick-slip causes a limit cycle. With slip you can rest assured that
any match of the process variable (PV) with the set point (SP) is a
transitory event. It is like the PV was saying to the SP, "Just passing
through." While these valves have generous amounts of all of
these largely undocumented features, stick-slip is simply
breathtaking.
3. The placement of differential head or insertion type (e.g., vortex, Annubar,
thermal mass) flow meters downstream of the control valve. It screws up
the velocity profile and the next performance review of you and
your meter. Yet websites, newsletters, and books have this
arrangement proudly displayed as a logo for process control.
4. Recalibration as the solution to instrument problems. This gives the
appearance of doing something. The real problem is usually
associated with the operating conditions, application, or the
installation, but this requires some thinking. For example, the
orifice could be worn, the steam tracing could have boiled out the
liquid in the sensing lines, connections could be plugged,
condensate could have collected in sensing lines, and the
composition or any of the physical properties (density and
viscosity) could have changed.
5. Tight control of level in a surge tank. The purpose of the volume was
to absorb changes in flow and for the level to roll with the punches.
Keeping the level within a few per cent of set point passes them on
and defeats the purpose of the tank. Most oscillations in a process
can often be traced back to an overzealous person tuning a level
controller and an excessively tight set of low and high level alarm
limits.
6. Hydro test and flush of control valves and instruments. This is a good
idea if you want to see how tight a person with a wrench can make
the valve packing, whether the vendor really meant the pressure
over-range limit, or whether your valve trim, insertion meter, or
electrode can withstand welding rods and pipe wrenches traveling
at 9 fps.
7. Equalization lines without a purge or tracing. It is not a case of
whether you view the line as half full or half empty. You need to
choose full or empty and then install a purge or heat tracing to
guarantee it. Trusting technicians to periodically empty or fill lines
is an exciting exercise in wishful thinking. Vapors will condense
and fill up an empty line, or changes in pressure and temperature
will empty a full line at the least opportune times.
8. The overuse of orifice meters to measure flow. Orifices stink. To use
them for mass flow borders on the absurd. Apparently, engineers
are oblivious to measurement noise, poor rangeability from the
square root relationship, the shift in meter coefficient with wear
and Reynolds number, the change in density with composition,
and the seven leak points. Don't let some short-term repeatability
spec from the vendor fool you; this is a bad choice and a last resort
best done after you have retired at a resort. There are too many
better choices with less noise, more rangeability, and better long-
term accuracy and reliability, such as Coriolis, magnetic flow, and
vortex meters. Insertion versions are available for large pipelines.
9. The installation of temperature sensors in baffles. This is a good idea if
you want a smooth and gradual response in temperature while
your reaction runs away. The insertion of sensors in baffles coated
with glass or full of cigarette butts and sand makes trends so slow
you will think you are in the Caribbean.
10. The overuse of thermocouples for control. If you like to adjust set
points to compensate for drift and increase trend display scales to
flatten out noise and scatter, and don't want to use derivative
action, then you should save the $200 and go with the
thermocouple (TC). After all, what was good enough for my
grandfather is good enough for me. However, don't show the
repeatability and drift specs for thermocouples to the process
engineer, unless you like to share a good scare. Just tell him the
thermocouple (TC) is faster and stronger than the RTD, and don't
mention that it is inside a thermowell whose design dominates the
time response and reliability of the installation. Be sure to save an
extra $800 by omitting the smart transmitter and take the TC
directly into a DCS. Just remember to set the trend compression
high to make the A/D noise disappear and turn off the rate action.
You never liked that third tuning adjustment anyway.
11. The use of anything other than the most simple, reliable, and accurate
measurements. If you pride yourself on your intelligence and
creativity and are into extreme sports while on the job, go for the
most exotic and complex instruments and analyzers you can
dream up. Forget about Coriolis density meters. They are just too
accurate and reliable to be anything but boring after they are
installed. Instead pick an analyzer that likes a well-heated,
ventilated, and air-conditioned environment like yours. Don't
install a phone in the analyzer house, otherwise those pesky
operators will call and ask what the heck is wrong. The dead time
and low-frequency noise from the sample transportation delay and
analyzer cycle time is soothing. Besides, the analyzer house makes
a great getaway and a convenient excuse when your boss asks
where you were all day.
12. The installation of pressure gauges and electrodes before startup. This is a
great idea if you have stock in these companies. Just don't get
busted from all the gauges and electrodes that get busted during
construction and water batching.
13. Steam tracing left on during the summer. This is great for conducting
chemistry experiments, such as how hot the process fluid needs to get
before it gets sticky, solidifies, or polymerizes. Also, it
provides a good check on sensor temperature ratings and safety
programs for technicians to wear gloves.
14. The use of field-mounted switches. This is lots of fun if you like to
guess the trip settings, especially for field pressure and
temperature switches. Just don't plan on figuring it out after a
failure, because the evidence will be blown up with the rest of the
plant. Limit switches on valves may be unavoidable but make sure
they are encapsulated and have no mechanical components. The
higher amp rating of other types doesn't do you much good if they
get hung up or corroded. Just remember that limit switches, which
are designed to tell you if a valve has failed, fail more often than
the valve.
15. The use of homemade instruments, analyzers, and algorithms. The use of
special fabricated or designed stuff instead of off-the-shelf
standard solutions is exciting for the innovator but not for
everyone else who has to support or use it. The testing is
prejudiced and the documentation is nonexistent, but don't let this
stop you; after all, that next promotion or graduate degree may just
be one special instrument or algorithm away.
16. The placement of all your eggs in one basket. Not a good idea unless
you are big or stupid enough to absorb the loss. The placement of
interlocks or control systems that can disable an entire plant in one
computer or controller or on one power supply or circuit is risky
business.
17. The use of too much reset action (too small an integral time) in reactor
and column temperature controllers. Reset has no sense of direction
and will only make a correction after it crosses the line, much like a
90-year-old driver.
18. The use of not enough derivative action in temperature and pH
controllers. Turn off the rate in these applications and experience
firsthand the exhilaration from acceleration. Just make sure the
trend recording shows the new highs reached by exothermic
reactors or steep titration curves.
19. The omission of positioners on fast loops. This was the fad in the days
of analog controllers and pneumatic positioners and Nyquist plots
of idealized boosters and valves. The real-life problems introduced
by bench settings, high packing friction, shaft windup, high
seating friction, and the capabilities and diagnostics of digital
smart positioners make their omission a crime that should be
punished by forcing the offender to guess what the valve position is
for every valve without a positioner. I am sure the person will
agree that this is more fun than engineers should be allowed to
have.
20. The use of steam traps in large or critical applications. Why not make
the performance of an investment of millions of dollars' worth of
process equipment and instrumentation hinge on a $100 steam
trap? One reason is that the use of a small vessel, level sensor, and
controller to hold up condensate might cost a few thousand
dollars. Who cares if the steam trap stays shut and liquid
condensate backs up into the process equipment and reduces the
heat-transfer area? Who will notice if the steam trap stays open,
and steam blows through to the condensate system? Besides, since
you never really know what is going on inside that little dickens,
you can always blame something else. The trip out to the field to
give the trap a rap with a wrench is a good source of exercise for
the operator. Just make sure to make Dramamine a standard issue
because the periodic buildup of condensate and blowthrough of
steam can cause motion sickness. Engineers and operators have
gotten sick to their stomachs looking at the trend recordings of
many columns, evaporators, and reactors that have managed to
save a speck of capital.
1. McMillan, Gregory K., and Weiner, Stanley, "Control Talk," Control,
September 2002.
Index

abnormal
conditions 137 145 149 164
inputs 154
situation management 91 163-166 169 176 180
actuator 31 131 356 416
adaptive
control 225-227 232
tuning 153 226 231
advanced process control 1-5 89 101-102 116 261-262
alarm flooding 164 166
alarm screening 164 166 168
aliasing 20 137 143 158-160 185
analyzer 91 98 185 263 265 267-271 274-275
277 286 317 319
analyzer validator 267-269
ARX 346 354 357-358 367-368
auto tuner 60 62 75 91 100 184 187
190 196 198-202 206 209-210 216 242
251 257
automatic mode 67 217 360 419
back propagation algorithm 300
backlash 7 14 23 28 133 196 203
271 425
backward chaining 180
basic control 3 12 14-15 72 96 98 108
110 163 183 307 310 391
basis weight control 112
batch
control 134 395-396
reactor 77 135 192-194 326 336-337
reactor temperature control 192
Bode plot 222-223
boiler drum level control 196
capability 150 185 243 251 384-385 387 392
cascade control 14-16 25-26 46 51 67-70 72 76
187 193 197 203 421-423
cascade mode 67 131 420
caster monitoring 127
closed loop
response 187 207 218 242 244
tuning 205
column control system 21-22
column temperature control 49 193 206
communication model 157
composition loop 131
confidence interval 369
conjugate gradient 300
constraint handling 319
continuous
analyzer 290
column temperature control 193
distillate receiver level control 194
reactor pressure control 189
reactor temperature control 190
contribution plot 142
control
configuration 340 391
horizon 350 363 369-372 377-379
index 154 156
limited 154-155
module 388
surface 256-257
system performance 120 123
valve 6-7 17 23 26 28 47 126 416
controlled variable 6 50 184 308 363 419
controller commissioning 258
Coriolis flow meter 36 42 70 75
cross correlation 134 136 144 156-157 276
cumulative error 300-302
DCS 3 20 39 120-121 126 150 286
386 390 393 395 397 405 427
DDC 189 420
dead band 6-8 23 133 416 420
defuzzification 247 250-251 254
derivative time 57 184 203 205 214
diagnostic tool 132
direct digital control 189
discrete Fourier transform adaptation 236
distillate receiver level control 194
distillation
column 17 110 193 195 315 331
tower 103 264-265 287 316-318 320-325 408
tower control 316
distributed control system 3 120 163
DMC 363-364 369
dryer control 327
dynamic linear estimator 262-264 276 278-279 294-295 303 311
dynamic matrix 363 370-372 381
Dynamic Matrix Control 363 371
embedded expert system 174
energy balance 70 72 76 78 84 99
epoch 300-301
evaporator control 326
execution period 333-334
execution rate 158 332 334 407
expert system 163-174 176-182
factor analysis 124
failure handling 387
feedforward control 11 15 17 20 25-26 70-74 76
84 100-102 193 251 315 319 421
feedforward neural network 296 304
fictitious set point 230-231
fieldbus 9 75 121-122 143 147-151 153 160
385 388 395-397 403-405
filter 11 145 173 279 366 384
filter time constant 219 373
final element 5-6 12 15 22 31 34 51
67 73 199 282 421
finite impulse response 345 367
FIR 345-346 357-358 367-368
forward chaining 180
fractionator 284
frequency domain 212 216 231-232 236
frequency estimator 236
function block 147-148 251 278 395
funnel control 350-351 373
furnace pressure control 34 195
future trajectory 308-309 421
fuzzification 247 251
fuzzy logic 153 156 182 221 239-251 253-255 257-258
control 153 239-244 246 250 254-255
control block 243
inference rules 248 253
fuzzy set 245 247 251
gain adaptation 227-228
gain margin 204-205 207 214 222-224 244
gain scheduling 65 276-277 409 415
Gaussian distribution 152
gradient 300
hard constraints 225 363
hidden layer 296 302
hidden nodes 302 305
histogram 133
Hotelling T2 142
hysteresis 187 196 200 202 210-213 217 237
IAE 51-52 223 225 241
ill conditioned 338 377
IMC 216 218-221 223-224
IMC tuning rule 220
incorrect mode 154-156
input layer 287
input sensitivity 290 293 297 299
input weights 300
integral absolute error 223
integral time 55 193 214
Integrated Absolute Error 51 241
integrated error 7 14-15 49 52 73 82-83 85
108 111 349
integrating process 205 216 220-221 225
integrating response 47 190 205 355 416
Internal Model Control 218
interpolation 233

inverse response 25 59 72 194 196 209 276 315 353-354
ISA standard form 219 221
jitter 158-160
kappa analysis 271
knowledge database 171
Lambda factor 187 198
Lambda tuning method 67 187 204
Lambda tuning rule 220
learning rate 300 304
least squares solution 370
level control 19 49 57 73 184 187 190
193-194 196 220 249 315-316 413 415
linear programming 377
linear regression 213 262
liquid flow control 187-188
liquid pressure control 73 188
load response 200
LP 116 325-326 335 341 377-379
magnetic flow meter 34 37
manipulated variables 192 310 318 353 362
manual mode 339 360 421
material balance 13 72 77 79 187 194 417
measured disturbances 314 360 362-363
measurement noise 37 57 143 183 185 187 194
196 199 279 426
membership function 245-248 250 253-254 258
minimum integrated error 83
minimum variance control 151 219
mixed dynamics 334
mode 52 55-56 120 122 131 144 149
155 175 279 351 403 419-420 423
model predictive control 11 72 101 106 130 307-308 312
355 362 393
model residual plot 141-142
moisture control 240
MPC
commissioning 351-352
controller design 347 370
multi-node environment 404
multi-valued logic 245-247 254
multivariate analysis 138
negative feedback 46-47 76 80
neural network 181 264-265 271 275 278-279 282 287-288
293 296-297 301 304
training 301
neutralizer 18-19 59 97 108-110 197 204
NN 285 290 296 299 302-303 305
noise protection 211 213
objective function 377-379
OLE 386 405-406
OPC 126 148 286 386 389 392 395
397-399 401 404-406
open loop response 45-47 49 52 68 196 206
operating constraints 126 318 336
operator interface 147 163 167-168 175 355 360 384-386
389 392 395-396 399
optimization 92-93 116 307 323 341 353 375
377 379-380
output layer 297
override control 313
over-training 301
paper machine 111-112 331
parallel installation 43
partial least squares 121 124 137 144 265
peak error 7 46 49-50 52-53 55 57
68-69 75 82-83 85 183 196 204
penalty on error 350-351 371
penalty on move 349-351
performance monitoring interface 145
periodic upsets 23 73 183 200
pH control 33 36 51 59 103 137 201
242
phase margin 214 222-223

PID 51 60-65 192 201 213-214 224 226 228-229 231 251 257-258 309 315 388 420
PID controller tuning 51
pneumatic positioner 18
positioner 29 70 187 416 422 429
positive feedback 29 46-47 59 76
power spectrum 20 133 136 156-157 185
PRBS 303 338 343-345 354-355
prediction error 142 265 285 303
prediction horizon 308 333 363 367-368 371-373 377-378
primary controller 13 46 67-68 76 192 198 420
422-423
principal component analysis 95 121 124 137 144 275 285
process
analysis 117 120 219 339-341 347
gain 11 46 70 77-78 187 189 314
422
performance 51 91 106 115 383
static gain 216-217
test 213-214 218 278 367
time constant 15 49-51 73 78-80 86 143 183
189-190 205 216-217 220 333
variability 121 124 131 148 150 183 206
220 244
variable 7 76 94 140 150 206 226
256 304 426
process gain 46 76
process model 216 218 226 231-232 308 331 344
346 353 357 363-365 367 394 398
process variable 47
projection to latent structures 124
PV 46 149 154 188 206 426
pyramid of technologies 91
QP 325-326 335 341
Raman scattering emission spectrophotometers 40

ratio control 12 14 20 36 68 70 72
74 76 195 349 409-410
reactor 59 98 113 188-190 192-193
recursive adaptation 231
relay
amplitude 211-212
hysteresis 210-212
step size 210
reproducibility 5-7 11 13-15 19 24 31 35
37-38 75 92 111 183 422
reset 55 59 61-62 64-65 70 133 218
225 227 229 257
rise time 12 52-53 66 75
robustness
based tuning 221
map 207 222-225
plot 221-222
rotary kiln control 327
rotary valves 8-9 23 26-29 33 425
ROUT 66 189 420
scalable DCS 390 397 405
scan rate 34 153 198 419 423-424
scores plot 140-142
screening data 290
secondary controller 46 67-68 70 192 422-423
self-regulating process 67 76 80 217 219 221 354
423
series installation 43
set point response 7 25 50 60-64 70 183 192
224-225
set point trajectory 310 350-351 369
settling time 52-53 55 57 66 73 251 255
270 287 289 308 319 345
sigmoid function 215
simulation state 400 402
singleton 253
singular value decomposition 124
slab caster process 128

sliding stem valves 7-8 26-29 31 33 106 133 202
slip 7-8 74 200 423 426
Smith Predictor 221
soft sensor 269-275 279 283-284 288 297 299-300 302
split range 92
standard deviation 14-15 94-96 100-102 108-109 111-114 135 146-153
185-186 298
static mixer concentration control 188
step response 26 213 216 330 332 345-346 354
357 360-361 364 366 368-371
step size 30-31 187-188 190 193 195-196 199-200 202-204
207 210 244 300 338 354
stick-slip 8 10 15 22 26 29 51
90 123 200 409 416 426
stiction 7 9 11 46 49 70 189
203 271
stroking time 7-8 10 29-31 391
supervisory control 105 421
surge tank level control 73 184 193
temperature control 44 77-78 190 192-193 240
temperature loop 17 19 66 70 111 192
temperature measurement 36 39 75 132 266
temperature sensor 36 45
thermocouple 6 18 20 24 39 59 192
202 427
thermowells 44
time delay 11 30 32-33 49-50 65 79 82
84 100 111 420-421 423
time series 141 264 276 345
titration curve 15-16 49 109 242
total standard deviation 102 113 150-152
training error 301
transportation delay 17 32 35 43 47 79 91
184 188 427
tuning interface 207 243
Tuning Rule selection 206
tuning settings 52 60 65-67 183-186 203-205 208 391
421
typical tuning settings 65 204
ultimate gain 80-81 85 210-213 216-218 221 236 258
424
ultimate period 67-68 76 210-213 236 258 424
ultrasonic level measurement 39
valve characteristic 9 16 71 187 197 205
valve dead band 30-31 46-47 84 111 183 202
valve gain 8-11 15-16 49 51 90 187-188 314
variable dead time 333-334
variable speed drive 195-196 421
varying delay 331
virtual plant 3 117 289 383-390 392-396 399-401 406-408
windup 14 27-28 55 155 425 428
Ziegler-Nichols rules 187 213-214
