
HUMAN FACTORS IN DISTRIBUTED PROCESS CONTROL

David A. Strobhar
Beville Engineering, Inc.

280 Regency Ridge Drive, Suite 2000
Dayton, Ohio 45459
(937) 434-1093

ABSTRACT

The advent of distributed process control has significantly changed the manner in which the refinery operator interacts with the process. Human factors engineering is the branch of engineering dedicated to the analysis and specification of human interactions with complex systems. The basic principles of human factors engineering are discussed. Key concerns and trends relative to distributed process control, operator performance, and system safety are set forth.

Key words: human factors engineering, control center consolidation, display design, distributed control.

Picture yourself on a cool winter evening. You decide to make some tea, so you go to your kitchen, put a pan of water on the stove, and leave. Several minutes later you come back to find a pan of cool water sitting on one burner while the burner behind it is red-hot. You might curse yourself for your stupidity, then turn on the correct burner. A simple case of human error. Yet it is a common error that most people have experienced, and with a system we use almost daily. It's a non-threatening error in the kitchen, but if the red-hot burner were a group of tubes in a furnace and the cool water the charge to that furnace, the simple error could become catastrophic.

Why do people make such frequent mistakes with a system on which they are well trained? The problem in this example is that the arrangement of the displays (i.e., the burners) is incompatible with the arrangement of the controls. The displays are arranged in an array, but the controls are arranged linearly. The controls and displays are not properly integrated.

The problem is not exclusive to the household. A coking unit at one refinery was shut down when a field operator turned a valve whose flow meter had been installed upside down. The operator thought they were increasing flow when in fact they were shutting it off.

Both of the previous examples are what human factors engineering refers to as design-induced operator error. The design of the system helped to produce the error, and the error could in fact have been predicted from the poor design. It was design-induced operator errors that led to the development of human factors engineering. The U.S. Air Force found in World War II that its sophisticated machines of war were being limited by the people who operated them. Aircraft could not be consistently flown at design specifications because of operator error. In fact, more pilots died in WWII during training than in combat, often due to poor selection or system design. In most cases, the design of the man-machine system interface (e.g., instrumentation) did not account for human characteristics and limitations.

Due to the operator-limited nature of their systems, the Air Force began to incorporate characteristics of the human operator into the design of the system. This has allowed the aerospace industry to progress from the Wright B Flyer to the F-14 Tomcat without increasing the number of operators (i.e., pilots) required.

The need to consider human factors in system design is increasing in the refining industry. Much of the need can be attributed to the introduction of distributed process control systems, which radically alter the operator-process interface. Proper consideration of human performance characteristics, the human factor, in the design of a system enhances system usability, performance, and effectiveness.

There are certain basic principles or characteristics of human behavior that need to be considered in the design of a system. Figure 1 is a model of human performance. In the model, process information is detected by instruments, which must be selected for viewing by the operator. Once viewed, the data from the instruments enters the operator sub-system, is processed, and a response is made through a set of controls. All of this occurs in some sort of environment, with a unique combination of environmental factors such as light, noise, and people. While all aspects of the man-machine system are important (the lighting, noise, display selection, etc.), it is understanding the activity in the operator sub-system that often plays the most important part in preventing design-induced operator error.

The operator's information-processing system, composed of short-term and long-term memory, plays a major role in how people interact with complex systems. Information from our environment (displays) enters short-term memory, or what is commonly referred to as our conscious mind. Short-term memory is a capacity-limited system: it can hold only about seven chunks of information at one time. If more information is presented, either previous information will be dropped or the new information will be ignored. The goal of the human factors engineer is to prevent overloading the short-term memory system by putting as much data into a single chunk of information as possible. An example of chunking data is the old chemistry mnemonic LEO GER (loss of electrons, oxidation; gain of electrons, reduction). The mnemonic allows a large amount of data to be contained in two manageable chunks of information.

Once information is in short-term memory, mental resources must be available in order to do anything with it. Mental resources, or mental workload capacity, are also fixed, but only in the short term. As a person tries to solve a problem or process information, they use some of the mental workload reserve. The more complex the processing, the more of the reserve is used. Stress also consumes some of the reserve, reducing its availability for information processing. However, it is possible to dedicate some of the reserve to specific processing tasks.
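To make the chunking principle concrete, here is a minimal Python sketch (an illustration added for this discussion, not a tool from the paper) that recodes a flat ten-digit string into three chunks, the same transformation the telephone-number example later in this paper performs with two hyphens:

    def chunk_digits(digits: str, sizes=(3, 3, 4)) -> str:
        """Recode a flat digit string into a few larger chunks.

        Ten separate digits exceed short-term memory's roughly
        seven-item capacity; three chunks fit comfortably.
        """
        chunks, start = [], 0
        for size in sizes:
            chunks.append(digits[start:start + size])
            start += size
        return "-".join(chunks)

    # Ten items become three chunks: 5134341093 -> 513-434-1093
    print(chunk_digits("5134341093"))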

Figure 1. Model of Operator-Process Interaction And Characteristics

Most people have experienced the cocktail party phenomenon: you are engaged in a conversation, oblivious to the conversations around you, until someone says your name. The piece of mental workload capacity that has been reserved for name recognition comes into play. Just as mental workload reserve can be dedicated to name recognition, so too can it be reserved for other information processing tasks.

If complex problem solving is required, more than likely long-term memory will be utilized. Long-term memory is what we usually think of as memory, the compilation of our training, knowledge, and experience. A person's ability to utilize the information in long-term memory is partially a function of how it is structured. If information is stored in long-term memory in an erroneous model, or not in the same form in which it will be accessed, then retrieval of that information will be difficult. Consider the operators at Three Mile Island. They had been trained that the pressurizer level on their pressurized water reactor was a valid representation of coolant inventory. No one had prepared them for a leak out the top of the pressurizer. The data was stored improperly in their long-term memory. So when the leak occurred at the top of the pressurizer and they tried to diagnose, or solve, the problem, they interpreted the rise in pressurizer level as an excess of coolant in the system. Their model of how the system functioned was erroneous and produced an erroneous diagnosis: coolant inventory was actually falling, not rising.

In order to understand why consolidation improves operator performance, and when it won't, a brief discussion of how the operator interacts with the process is in order. Operator performance in process control can be represented by the model in Figure 2a. The process receives some disturbance which alters the process output. The operator must detect that the output is different from his goal for the process, identify the correct course of action, and implement that action. The stability of the system is a function of the operator's speed in performing those three tasks correctly. If information on the disturbance can be provided to the operator in advance of its impact on the system, then performance on the three tasks is significantly enhanced and total system performance is improved.

Control room consolidation should take advantage of this by supplying the operator the right information on what will be impacting his process unit. Simply providing operators more information (i.e., putting them in a central control room) is insufficient; they need information on what will impact them. Consolidation works when the flow of information from interacting units is enhanced as part of the consolidation. This means that units that impact each other should be put into the same control room. Consolidation criteria should not be based upon management structure or, necessarily, physical proximity of units. They should be based on optimization of critical interactions, an idea illustrated by the simple scoring sketch below.

Figure 3a shows the interactions in a major refinery. The original consolidation plans would have put the highlighted units in the same control room, because they were in the same part of the refinery and under the same manager. However, only some of the units interact, and therefore only some of the units would benefit.
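The following minimal Python sketch shows how candidate control-room groupings might be scored by how much unit-to-unit interaction they keep inside a single room. The unit names and interaction weights are hypothetical; a real study would derive them from a link analysis of actual operator communication, as in Figure 3:

    # Hypothetical interaction weights between refinery units (higher =
    # more operator-to-operator communication). Real values would come
    # from a link analysis of the refinery in question.
    INTERACTIONS = {
        ("sweet_crude", "lube_plant"): 9,
        ("crude", "fcc"): 8,
        ("fcc", "alky"): 7,
        ("crude", "coker"): 6,
        ("sweet_crude", "coker"): 1,
        ("lube_plant", "alky"): 1,
    }

    def captured_weight(rooms):
        """Sum the interaction weight kept inside a single control room."""
        return sum(weight
                   for (a, b), weight in INTERACTIONS.items()
                   if any(a in room and b in room for room in rooms))

    # Grouping by management structure vs. grouping by interaction.
    by_management = [{"sweet_crude", "crude", "coker"},
                     {"fcc", "alky", "lube_plant"}]
    by_interaction = [{"sweet_crude", "lube_plant"},
                      {"crude", "fcc", "alky", "coker"}]

    print(captured_weight(by_management))   # 15
    print(captured_weight(by_interaction))  # 30

Under these assumed weights, the interaction-based grouping keeps twice as much critical communication within a single room as the management-based grouping.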

Figure 2A. Impact of the Operator In Process Control

Figure 2B. Impact of Consolidation On Operator Performance

Figure 2. Model of Operator-Process Interaction

Figure 3. Link Analysis Progression For Refinery Analysis

Figure 4. Control Room Design Based Upon Unit Interactions

For example, the sweet crude unit has a strong interaction with the lube plant and very little interaction with the rest of the refinery. The sweet crude unit operators would probably have received little benefit from being consolidated with non-lube units, and would probably have suffered a negative performance impact from being located remotely from their unit. Grouping the units as in Figure 3b better exploits the interactions present in the refinery. Delineating the interactions also allows evaluation of different consolidation options, that is, which units should be grouped together; the influences of refinery units on one another indicate which units should be consolidated together.

Extension of the principle should dictate the layout of the workstations within the control room. Figure 4a shows the interactions between the units at one refinery. Using those interactions as a base, the layout of the control consoles was revised from the original array approach to a more circular concept (Figure 4b). As the transfer of critical information between operators in a consolidated control room is verbal, distances of over 20 feet between consoles render verbal communication essentially useless. Proper arrangement of the consoles to facilitate information transfer is almost as important as consolidating those units that need to exchange information.

The major change that people notice with distributed process control is the operators' use of CRTs to control the process. No longer does the operator roam the board; rather, they sit and select the information they need. The presentation of the information itself can be significantly different from what was previously utilized. Both the presentation of warning information (i.e., alarms) and operating information (i.e., displays) have changed, and not always for the better.

The presentation of alarm information has proven to be a particular trouble spot. The old hardwired alarms utilized the principle of chunking the data. The alarm panel's position in the control room and the alarm's location in the panel gave the alarm structure, so that without even reading the title the operator had information on the alarm's nature. Early distributed control systems presented alarms chronologically, preventing the operator from applying information attributes, or clustering the alarms into something more meaningful. In a major upset, the alarm system became essentially useless. Compounding the problem is the ease with which alarms can be added in a distributed control system. Unfortunately, the new alarms are often simply a single process variable being out-of-tolerance.

Alarm systems, like all display systems, should attempt to provide the operator information, not just data. For example, 5134341093 is data. The addition of two hyphens, 513-434-1093, transforms the data into information, namely that the numbers form a telephone number. The data should be synthesized into something meaningful. For example, a pump trip alarm on a spare pump is essentially worthless, as the spare is normally idle. A failure of the pump to start after the auto-start signal has been given is useful information, combining data on the pump with data on the situation. The sketch below expresses this distinction as simple alarm logic.
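A minimal Python sketch of that alarm logic follows; the signal names and the five-second grace period are invented for illustration and do not come from any particular vendor's system:

    from dataclasses import dataclass

    @dataclass
    class PumpStatus:
        # Invented signal names, for illustration only.
        running: bool                   # motor-running feedback
        auto_start_signal: bool         # auto-start has been commanded
        seconds_since_start_cmd: float  # time since the start command

    def stopped_alarm(pump: PumpStatus) -> bool:
        """Raw data: fires whenever the pump is off, so it is
        almost always active on an idle spare."""
        return not pump.running

    def fail_to_start_alarm(pump: PumpStatus, grace_s: float = 5.0) -> bool:
        """Information: the pump was commanded to start and did not."""
        return (pump.auto_start_signal
                and not pump.running
                and pump.seconds_since_start_cmd > grace_s)

    idle_spare = PumpStatus(running=False, auto_start_signal=False,
                            seconds_since_start_cmd=0.0)
    stuck_spare = PumpStatus(running=False, auto_start_signal=True,
                             seconds_since_start_cmd=10.0)

    print(stopped_alarm(idle_spare), fail_to_start_alarm(idle_spare))    # True False
    print(stopped_alarm(stuck_spare), fail_to_start_alarm(stuck_spare))  # True True

The raw "pump stopped" alarm fires in both cases and so conveys nothing; the synthesized alarm fires only when the pump state and the situation together indicate a real problem.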

Figure 5. Graphic Process Display Example

Figure 6. Revised Process Graphic Display

The presentation of operating information usually has problems in three areas: consistency, coding, and content. Figure 5 is an example of a process graphic display that exhibits problems in all three areas.

First, information is not always presented in a consistent manner. It has been said that people are creatures of habit. Habits are another way of saying that people have developed certain responses to certain stimuli. Repetition of a stimulus to elicit a proper response builds the bond between the stimulus (the display) and the response (its meaning). Given that certain operator responses are good, consistent repetition of information should be employed. In the display example, the valve designator and output % are sometimes located to the side of the valve, sometimes above it, sometimes underneath it, and so on.

Second, little complex coding of information (e.g., by position, shape, etc.) is usually done. Coding of information is a type of chunking, with the goal of having as many ways to code data as possible (consider a stop sign: it is red, it is octagonal, and it says stop). Distributed control systems have tremendous potential for coding information, a potential that is often untapped. In the display example, information is only rudimentarily coded: although color is used extensively, only a simple use of three colors (red, yellow, green) carries any meaning.

Third, the content of the displays is often oriented toward data, not information. The display in Figure 5 shows four heat exchangers, yet since thermocouples exist only at the entrance and exit of the four, they might as well be one exchanger or one hundred. Showing that four exchangers exist is data, but it is not real information.

Beville Engineering revised the display in Figure 5 to address some of the previously mentioned deficiencies (Figure 6). Since the position of the designator is consistent, creating a position code, the units for the data need not be shown. Many of the piping runs have been simplified (including the heat exchangers), with no loss in technical accuracy. Principles of human perception have been incorporated to make the display easier to read.

Human factors engineering has been used to reduce the potential for design-induced operator errors in a number of complex systems. Consideration of how people use information only minimally impacts project time, while significantly improving system operation. In oil refining, the advent of distributed process control has increased the need to account for the human factor. Consideration of operator interactions is essential for the success of consolidation. Incorporation of operator characteristics into alarm and display system design, presenting information and not just data, will facilitate operator performance.
