Power Plant Performance Monitoring

Rodney R. Gay
Carl A. Palmer
Michael R. Erbes

TECH BOOKS INTERNATIONAL
Publishers & Distributors
New Delhi-110019, India

© 2006 Tech Books International, New Delhi-110019, India, First Indian Edition

All rights reserved. No part of this book may be reprinted or reproduced or utilised in any form or by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying and recording, or in any information storage or retrieval system, without permission in writing from the Publishers.

Published by
TECH BOOKS INTERNATIONAL
Publishers & Distributors
4/12, Kalkaji Extn., Opp. Nehru Place, Kalkaji, New Delhi-110 019, India
Phone: (011) 26284791, 26284790, 26414791
Fax: 91-11-26473611, 26231799
E-mail: abitechbooks@vsnl.net; booksales@bol.net.in
Visit us at: www.technobooks.com

Library of Congress Cataloging-in-Publication Data
Gay, Rodney R.
Power plant performance monitoring / by Rodney R. Gay, Carl A. Palmer, Michael R. Erbes. - 1st ed.
Includes bibliographical references and index.
LCCN 2004096640
ISBN 81-88305-83-9
1. Electric power plants--Efficiency. 2. Plant performance--Monitoring. I. Palmer, Carl A. II. Erbes, Michael R. III. Title.
TK1005.G39 2006
621.3121

ISBN: 81-88305-83-9

Dedicated to the most important people in my life:
Henry, my father who taught me to be curious
Joan, my mother responsible for my success in life
Wendy, my wife, companion and friend for 27 years and for life
Christopher, my son who is nearly perfect
Richard, my twin brother and good friend
David, my brother, tennis partner and hiking buddy
Timothy, my brother who gives me medical and political advice
William, my grandson who makes me smile

Authors

Rodney R. Gay received his PhD in mechanical engineering from Stanford University in 1975. He served as founder and president of Enter Software, Inc. from 1988 until the company was sold to General Electric in 1999. He remained as president of GE Enter Software for two years, then left GE to become a writer and engineering consultant.

Carl A. Palmer earned his PhD in mechanical engineering from the University of Wisconsin in 1991, after which he became an employee of Enter Software, Inc. Carl is currently an engineering manager working on sensor development for the power industry.

Michael R. Erbes received his PhD in mechanical engineering from Stanford University in 1987. He co-founded Enter Software, Inc. in 1988, where he served as vice president and director of engineering. Mike is now president of Enginomix, LLC (www.enginomix.net), a consulting and software development company focusing on integrated engineering and economic modeling solutions for power plant design and operations.

Table of Contents

Foreword
1. Overview of Performance Monitoring
   1.1 Concept of Performance Monitoring
       1.1.1 "Where You Are" Vs. "Where You Should Be"
       1.1.2 Performance Calculation Procedure
       1.1.3 Expected Performance: "Where You Should Be"
       1.1.4 Equipment Ratings
       1.1.5 Corrected Performance
       1.1.6 What is My Degradation?
       1.1.7 How Much is Degradation Costing Me?
       1.1.8 Optimization: "Where You Could Be"
       1.1.9 Controllable Loss Displays
   1.2 ASME Test Codes
   1.3 Performance Testing versus Online Monitoring
   1.4 Curve-Based Methods
       1.4.1 Performance Curves
       1.4.2 Expected Performance from Curves
       1.4.3 Additive Performance Factors
       1.4.4 Expected Performance from Curves
       1.4.5 Correction Factors
       1.4.6 Percent Change Correction Factors
   1.5 Model-Based Performance Analysis
2. Heat Balance Analysis
   2.1 Local Heat Balances
   2.2 Combined-Cycle Overall Plant Heat Balance
   2.3 Combined-Cycle Balance Using Commercial Software
   2.4 Rankine-Cycle Overall Plant Heat Balance
   2.5 Rankine-Cycle Balance Using Commercial Software
3. Data Validation
   3.1 Definition of Data Validation
   3.2 Range Checking
       3.2.1 Static Ranges
       3.2.2 Dynamic Ranges
       3.2.3 Rejected Values
   3.3 Averaging Sensor Data
   3.4 Time Averaging
   3.5 Heat Balances for Data Validation
4. Accuracy of Calculated Results
   4.1 Instrument Error
       4.1.1 Measurement Error
       4.1.2 Random Uncertainty
       4.1.3 Systematic Uncertainty
   4.2 Uncertainty of a Calculated Test Result
   4.3 Monte Carlo Method
       4.3.1 Definition of the Monte Carlo Method
       4.3.2 Probability Distributions
       4.3.3 Sampling from Probability Distributions
       4.3.4 Running the Monte Carlo Simulation
       4.3.5 Results of the Monte Carlo Simulation
5. Overall Power Plant Performance
   5.1 Equipment Performance versus Plant Performance
   5.2 Specification of Overall Power Plant Performance
   5.3 Overall Plant Expected Performance Models
       5.3.1 Curve-Based Method for Expected Plant Performance
       5.3.2 Model-Based Method for Expected Plant Performance
       5.3.3 Impact Method for Expected Plant Performance
   5.4 Degradation of the Overall Power Plant
6. Impacts of Degradation on Overall Plant Performance
   6.1 Definitions of Plant Impacts
   6.2 Gas Turbine Impacts
   6.3 Heat Recovery Steam Generator Impacts
   6.4 Steam Turbine Impacts
   6.5 Boiler Impacts
   6.6 Feedwater Heater Impacts
   6.7 Condenser Impacts
   6.8 Cooling Tower Impacts
   6.9 Inlet Air Filter Impacts
   6.10 Exhaust Pressure Loss Impacts
7. Gas Turbine Performance
   7.1 Overview
   7.2 Power Generation
   7.3 Airflow, Firing Temperature and Pressure Ratio
   7.4 Control Algorithms
   7.5 Correction Curves (Baseload Performance)
       7.5.1 Effect of Inlet Temperature
       7.5.2 Effect of Inlet Humidity
       7.5.3 Effect of Atmospheric Pressure or Altitude
       7.5.4 Effect of Inlet Pressure Loss
       7.5.5 Effect of Exit Pressure Loss
       7.5.6 Effect of Steam or Water Injection
   7.6 Part-Load Performance (Industrial Engines)
   7.7 Part-Load Correction Curves
       7.7.1 Under-Firing Correction
       7.7.2 Inlet Guide Vane Correction
       7.7.3 Part-Load Expected Heat Rate
   7.8 Aeroderivative Engine Performance
   7.9 Overall Gas Turbine Heat Balance
       7.9.1 Determination of Exhaust Gas Specific Heat
       7.9.2 Detailed Gas Turbine Heat Balance
       7.9.3 Tuning Detailed Gas Turbine Heat Balance
       7.9.4 Step-by-Step Solution of the Equations
       7.9.5 Simultaneous Solution of the Equations
       7.9.6 Combustion Mass Balance Analysis
       7.9.7 Specific Heat of a Mixture
   7.10 Model-Based Gas Turbine Heat Balance
   7.11 Physically-Based Models of Expected GT Performance
   7.12 Gas Turbine Performance Evaluation
   7.13 Theoretical Degradation Curves versus Time
   7.14 Experience with Measured Data from Operating GTs
   7.15 Performance Degradation and Engine Life
8. Heat Recovery Steam Generator Performance
   8.1 Overview
       8.1.1 Economizers
       8.1.2 Evaporators
       8.1.3 Blowdown
       8.1.4 Superheaters
   8.2 Duct Burner
   8.3 HRSG Efficiency and Effectiveness
   8.4 Expected HRSG Performance
       8.4.1 Effect of Duct Burner Firing
       8.4.2 Effect of Exhaust Gas Temperature
       8.4.3 Effect of Exhaust Gas Flow
       8.4.4 Effect of Steam Pressure
   8.5 HRSG Heat Balance Analysis
   8.6 Model-Based HRSG Heat Balance Analysis
   8.7 Expected Section-by-Section Performance
   8.8 Impact of Fouling on HRSG Performance
   8.9 HRSG Performance Evaluation
   8.10 Example Performance Analysis: Fouled HP Evaporator
   8.11 Example of Section-by-Section Expected HRSG Performance
   8.12 Conclusions and Recommendations
9. Steam Turbine Performance
   9.1 Overview
   9.2 Steam Turbine Configurations
       9.2.1 Inlet Section
       9.2.2 Condensing Section
       9.2.3 Back-Pressure Steam Turbines
       9.2.4 Extractions
       9.2.5 Controlled ("Automatic") Extraction
       9.2.6 Uncontrolled Extraction
       9.2.7 Admission
       9.2.8 Reheat
   9.3 Seals and Leaks
   9.4 Steam Turbine Thermal Performance
       9.4.1 Steam Turbine Efficiency and Heat Rate
       9.4.2 Pressure, Temperature and Flow Relationships
   9.5 Steam Turbine Heat Balance Analysis
       9.5.1 Combined-Cycle ST Heat Balance Analysis
       9.5.2 Rankine Cycle ST Heat Balance Analysis
   9.6 Curve-Based Expected Performance
       9.6.1 Rankine Cycle ST Correction Curves
       9.6.2 Combined Cycle ST Performance Curves
   9.7 Model-Based Expected Steam Turbine Performance
       9.7.1 Expected Performance of Overall ST
       9.7.2 Section-by-Section Expected ST Performance
   9.8 Building ST Expected Performance Models
   9.9 Steam Turbine Degradation
10. Boiler Performance
   10.1 Boiler Efficiency
   10.2 Theoretical Air
   10.3 Boiler Losses
   10.4 Flue Gas Loss
       10.4.1 Generalized Chemical Balance Method
       10.4.2 Products of Combustion Method
       10.4.3 Loss Due to Moisture
   10.5 Loss Due to Ash
   10.6 Loss Due to Radiation
   10.7 Credits for Heat Addition to Boiler
   10.8 Boiler Heat Balance Analysis
       10.8.1 Furnace Heat Balance Analysis
       10.8.2 Analysis of Boiler Convective Heat Exchangers
       10.8.3 Desuperheater Heat Balance
       10.8.4 Air Heater Heat Balance
       10.8.5 Simultaneous Solution of the Equations
   10.9 Model-Based Boiler Heat Balance Analysis
   10.10 Expected Boiler Performance
       10.10.1 Curve-Based Method for Expected Boiler Performance
       10.10.2 Model-Based Expected Boiler Performance
   10.11 Boiler Degradation
   10.12 Sootblowing Analysis
11. Air Heater Performance
   11.1 Overview
   11.2 Air Heater Heat-Balance Analysis
   11.3 Air Heater Expected Performance
   11.4 Air Heater Degradation
12. Feedwater Heater Performance
   12.1 Overview
   12.2 Feedwater Heater Heat-Balance Analysis
   12.3 Expected Feedwater Heater Performance
   12.4 Feedwater Heater Degradation
13. Deaerators, Drums and Open Heaters
14. Condenser Performance
   14.1 Overview
       14.1.1 ASME Method for Condenser Heat Transfer
       14.1.2 The HEI Method for Condenser Heat Transfer
   14.2 Condenser Heat Balance Analysis
       14.2.1 Overall Plant Energy Balance for Condenser Duty
       14.2.2 Steam Turbine Expansion Line Analysis
       14.2.3 Condenser Heat Balance Equations
       14.2.4 Condenser Cleanliness from Measured Data
       14.2.5 Validation of Condenser Heat Balance Data
   14.3 Condenser Expected Performance
       14.3.1 Predicting Expected Condenser Performance
   14.4 Condenser Degradation
   14.5 Diagnosing Condenser Performance Problems
15. Cooling Tower Performance
   15.1 Overview
   15.2 Cooling Tower Performance Curves
   15.3 Cooling Tower Heat Balance Analysis
   15.4 Expected Cooling Tower Performance
   15.5 Cooling Tower Degradation
16. Inlet and Exhaust Pressure Losses
   16.1 Overview
   16.2 Fitting the Pressure Loss Equation to Data
   16.3 Pressure Loss Degradation
17. Pump Performance
   17.1 Overview
   17.2 Extended Bernoulli Equation
   17.3 Pump Curves
   17.4 Affinity Laws
   17.5 Corrected Pump Performance
   17.6 Pump Flow Control
   17.7 Model-Based Pump Performance
   17.8 Pump Degradation
References and Links
Nomenclature
Appendix: Definition of Terms

Foreword

I developed an interest in performance monitoring in 1983 when I worked as a consultant to Pacific Gas and Electric (PG&E). I was asked to review measured data from the steam cycle of a nuclear power plant. The task was to evaluate feedwater heater performance by comparing plant measured data to a PEPSE™ computer model of the steam cycle (built by someone else at PG&E), and to identify any discrepancies that might indicate a performance problem.

I compared the measured feedwater heater TTDs (terminal temperature differences) to the predictions of the PEPSE™ model. The computer code predicted integer values for each of the feedwater heater TTDs. The measured data was within ±3 F of the predicted values from the computer code for all of the feedwater heaters, but did not match any of them. The fact that the predicted TTDs were integers indicated to me that the computer model was a design prediction of the plant performance, and that the TTDs were inputs to the analysis. Did the predicted results mean that some of the feedwater heaters were better than expected, and that some of them were fouled? Or was this computer prediction just a theoretical design model that did not necessarily represent reality at the plant? I did not know, so I decided to run some alternate computer calculations to see if I could learn more.

I looked at the measured drain flows from each feedwater heater. I knew that the TTDs and the drain flows were related to each other by mass and energy balances. The beauty of a computer code like PEPSE™ is that all these mass and energy balances are calculated automatically. I just needed to input plant measured data (TTDs), and PEPSE™ would calculate heat balance data consistent with my inputs. Unfortunately, the measured drain flows bore little resemblance to the computer-calculated values for these flows. Typically the calculated flow values differed from the measured values by 30% or more. I came to the conclusion that the measured drain flows were of little value.
A PG&E engineer confirmed that the drain flows were known to be inaccurate.

I decided to see what effect a TTD has on the overall steam-cycle performance. I chose a feedwater heater that appeared to be performing poorly (TTD measured three degrees F higher than predicted) and entered this measured TTD into the PEPSE™ input field for the chosen feedwater heater. I ran the PEPSE™ prediction and was astonished to see that the only predicted result that changed very much was the TTD for the feedwater heater that I had input to the analysis. The remainder of the steam cycle was almost unchanged. The feedwater temperature at the exit of every feedwater heater, except for the one feedwater heater that I changed, was exactly the same as in the original computer prediction.

This exercise taught me the difference between design analysis, where equipment operating data (such as a TTD) is input, and off-design analysis (also called predictive analysis), where the equipment performance capabilities (such as surface area and heat transfer coefficient at the design point) are input and the operating data is predicted. In my case, the computer analysis increased the size (heat transfer coefficient times surface area) of the feedwater heater following the degraded feedwater heater such that the fall-off in feedwater temperature caused by the degradation in one feedwater heater was exactly made up by the improved performance in the following feedwater heater. I refer to design analysis as running the "rubber" power plant because the heat balance code changes the size of plant equipment to meet the specifications (temperatures, flows, power levels and pressures) of the software user.

The concept of design analysis versus off-design analysis was not developed for application to performance monitoring, but it has turned out to be one of the key characteristics that makes heat balance codes useful as the calculation engines for on-line performance monitoring systems. Performance monitoring involves the comparison of current performance to expected performance. Design analysis can be used to match a computer heat-balance analysis to the current plant operating conditions (current performance), and off-design analysis can be used to predict the expected equipment performance (expected performance) given those current operating conditions.

Later in my career, when my company developed the GateCycle™ heat balance code, the concept of design versus off-design analysis was built directly into the code structure. We wanted it to be easy for a user to establish the design-point performance of a power plant in one calculation, and then switch to the off-design analysis to predict the power plant performance over a range of postulated operating conditions.

Several vendors of commercial heat balance products have built the concept of design versus off-design analysis into their products. The GT PRO™ and STEAM PRO™ products from Thermoflow, Inc. perform design analysis, and allow the user to transfer the results of the design analysis to the predictive (off-design) analysis that is done by the GT MASTER™ and STEAM MASTER™ computer codes.
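To make the design versus off-design distinction concrete, here is a minimal Python sketch for a single idealized feedwater heater, assuming condensing steam at a fixed saturation temperature and a simple effectiveness relation. It is not PEPSE™ or GateCycle™; the function names and numbers are illustrative only.

```python
import math

# Minimal single-feedwater-heater sketch of the design vs. off-design idea.
# The heater is idealized as a condensing shell (saturation temperature t_sat)
# heating a feedwater stream of flow m_fw (lbm/hr) and specific heat cp
# (Btu/lbm-F), so effectiveness = 1 - exp(-UA/(m_fw*cp)) and TTD = t_sat - t_fw_out.

def design_analysis(ttd_spec, t_sat, t_fw_in, m_fw, cp):
    """Design ("rubber plant") mode: the TTD is an input; solve for the UA
    the heater must have in order to deliver that TTD."""
    effectiveness = (t_sat - ttd_spec - t_fw_in) / (t_sat - t_fw_in)
    return -m_fw * cp * math.log(1.0 - effectiveness)

def off_design_analysis(ua, t_sat, t_fw_in, m_fw, cp):
    """Off-design (predictive) mode: the UA is an input; predict the TTD
    at whatever operating conditions are supplied."""
    effectiveness = 1.0 - math.exp(-ua / (m_fw * cp))
    t_fw_out = t_fw_in + effectiveness * (t_sat - t_fw_in)
    return t_sat - t_fw_out

# Establish the heater "size" from design-point data (illustrative numbers) ...
ua = design_analysis(ttd_spec=5.0, t_sat=400.0, t_fw_in=350.0, m_fw=2.6e6, cp=1.05)

# ... then predict the expected TTD at a different operating condition.
expected_ttd = off_design_analysis(ua, t_sat=395.0, t_fw_in=345.0, m_fw=2.4e6, cp=1.05)
print(f"UA from design point: {ua:.0f} Btu/hr-F, expected TTD off-design: {expected_ttd:.2f} F")
```

In design mode the TTD is an input and the heater size (UA) falls out; in off-design mode the UA is held fixed and the TTD is predicted at whatever conditions are supplied, which is the behavior needed for expected-performance calculations.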
PEPSE™ allows the user to perform design or off-design analysis through the option to either input a desired equipment output parameter (such as feedwater heater TTD), or detailed equipment characteristics (such as the number of tubes, the tube sizes, the tube material and surface type of a heat exchanger). The choice of inputs must be selected individually for each icon in the PEPSE™ model.

The engineer I reported to at PG&E just wanted me to compare the predicted temperatures to the measured temperatures and identify any potential problems. Unfortunately there were too many discrepancies, and I didn't know how to resolve them. I soon realized that running a design computer model of the plant and comparing it to measured data was not an adequate process for identifying equipment degradation. I describe this example to illustrate the point that it is not obvious how to use commercial heat balance codes to monitor performance, even when you are familiar with all of the inputs and outputs of the heat balance code. I spent the next two decades developing and implementing procedures for using heat balance analysis and commercial computer codes to monitor the performance of power plants and their equipment. This book documents what I learned.

Returning to the discussion of the performance monitoring evaluation at the PG&E nuclear power plant, I can now say what I should have done to evaluate the plant data back in 1983. First, I would assume that the PEPSE™ computer model, given to me by PG&E, was a design analysis model that matched a vendor heat balance or guarantee prediction of the power plant performance at full load. The TTDs in this model probably came from a heat-balance diagram delivered to PG&E by the plant vendor. This model would become the basis of predictive models for the expected performance of the plant equipment. The equipment performance characteristics from the design model, such as the UA (heat transfer coefficient multiplied by surface area) of each heat exchanger, and the design isentropic efficiency of each steam turbine section, would be the inputs to the predictive (off-design) models.

I would copy and rename the design PEPSE™ model, and then manually edit the inputs from design to off-design for each piece of plant equipment in the PEPSE™ model (these equipment inputs are changed for you automatically in GateCycle™ by selecting the off-design analysis mode). For a feedwater heater, the input would be changed from TTD to surface area plus the design value of heat transfer coefficient. If I did not know the actual surface area, I would choose a correlation for heat transfer coefficient and then iterate on the input value of surface area until the feedwater heater TTD matched the value used in the design analysis. The only requirement is that the product of surface area and heat transfer coefficient result in the desired TTD. I would then run this off-design PEPSE™ model to confirm that its prediction matches the results of the design model when the plant is running at the design operating conditions.

I would now have two PEPSE™ models: the original design model, and an expected plant performance model that matches the design model at the design plant operating conditions and will correctly predict plant performance at various plant operating conditions (such as changes in load or environmental conditions).
The original design model would not correctly predict plant performance over a range of plant operating conditions because the TTDs (and other parameters) are held constant. I will also need separate PEPSE™ expected performance models of each feedwater heater, each steam turbine section and the condenser. These models will be used to predict the expected performance (TTD, condenser pressure, steam turbine power) of each piece of plant equipment. Each of these PEPSE™ models would contain a single piece of plant equipment plus sources representing the flows into the equipment from other locations in the steam cycle, and sinks to receive the flows going out of the equipment.

I would use the following procedure to evaluate the performance of the feedwater heaters.

1. Perform Plant Heat Balance Analysis
Run the design PEPSE™ model of the overall plant with plant measured data (such as the TTD of each feedwater heater) as the input values to each piece of equipment in the plant. This is called the "heat balance analysis" of the power plant. The heat balance analysis yields a set of current plant operating data that matches the plant measured data, but is more complete than the measured data. This heat balance data will include all the steam turbine extraction flow rates and their enthalpies, and it will also include feedwater heater drain flow rates that are consistent with the extraction flows and the feedwater heater TTDs.

2. Calculate Expected Equipment Performance
Predict the expected performance of each feedwater heater using a separate PEPSE™ model for each feedwater heater. The inputs to each feedwater heater model are the mass flow rates and enthalpies of all three streams that flow into the feedwater heater: the extraction steam, the feedwater and the incoming drain water from the higher-pressure heaters. The design heat transfer characteristics (UA) of each feedwater heater are part of the input data for each feedwater heater. The expected feedwater heater outlet temperatures (from which TTD and DCA can be calculated) are outputs of this calculation.

3. Evaluate Degradation
Degradation is based upon the difference between the current performance and expected performance. The current performance of a feedwater heater is the measured TTD. The expected performance is the expected TTD, calculated for each feedwater heater. The difference between the measured TTD and expected TTD is an evaluation of degradation in the feedwater heater.
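As a rough illustration of steps 1 and 3 for a single feedwater heater, the sketch below backs an unmeasured extraction flow out of an energy balance and then evaluates degradation as measured minus expected TTD; the expected TTD of step 2 would come from an off-design model such as the one sketched earlier. The enthalpies, flows and function names are hypothetical placeholders, not plant data.

```python
# Illustrative sketch of the three-step procedure for one feedwater heater.
# Enthalpies are in Btu/lbm and flows in lbm/hr; all values are placeholders
# standing in for what a heat balance code would supply.

def extraction_flow_from_heat_balance(m_fw, h_fw_in, h_fw_out,
                                      m_drain_in, h_drain_in,
                                      h_extr_steam, h_drain_out):
    """Step 1 (heat balance): energy absorbed by the feedwater equals energy
    released by the extraction steam plus the incoming drains, which lets the
    unmeasured extraction flow be backed out of measured data."""
    q_feedwater = m_fw * (h_fw_out - h_fw_in)
    q_drains_in = m_drain_in * (h_drain_in - h_drain_out)
    return (q_feedwater - q_drains_in) / (h_extr_steam - h_drain_out)

m_extraction = extraction_flow_from_heat_balance(
    m_fw=2.6e6, h_fw_in=340.0, h_fw_out=395.0,
    m_drain_in=1.5e5, h_drain_in=440.0,
    h_extr_steam=1280.0, h_drain_out=355.0)

# Step 2 (expected performance) would come from the off-design model; here it
# is just a placeholder value, as is the measured TTD.
expected_ttd = 4.1   # deg F, from the expected-performance model
measured_ttd = 7.0   # deg F, from plant instrumentation

# Step 3 (degradation): the difference between current and expected TTD.
degradation = measured_ttd - expected_ttd
print(f"extraction flow {m_extraction:,.0f} lbm/hr, TTD degradation {degradation:.1f} F")
```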
The expected TTDs from the above performance monitoring procedure will be very different from the design-point TTDs that were used as inputs in the original PEPSE™ model. Visualize what would happen if one feedwater heater is degraded such that its TTD is three degrees higher than the design TTD. The plant heat balance analysis (step 1 above) will calculate a lower extraction flow for this degraded feedwater heater. This lower extraction flow will result in higher steam turbine pressure at the extraction location and at all the extraction locations downstream in the steam-turbine steam path. These higher extraction pressures result in higher steam saturation temperatures, which will change the TTDs of all the feedwater heaters receiving this higher pressure steam. The higher TTD from the degraded feedwater heater will result in a lower feedwater temperature at the inlet to the next (higher pressure) feedwater heater, and this will change both the TTD and the extraction steam flow to that higher pressure feedwater heater. The changes in extraction flows will change the steam turbine power and the exhaust energy from the steam turbine, which changes the condenser pressure.

These complex interrelationships between the feedwater heaters, the steam turbine and the condenser indicate that degradation at one point in the steam cycle will cause changes in the measured (and expected) performance all around the steam cycle. The expected TTD of any feedwater heater is dependent upon the performance of the remainder of the steam cycle equipment. It is necessary to perform a heat balance analysis (step 1 above) to quantify these interrelationships. Then an expected equipment performance calculation (step 2 above) can predict the expected output from each feedwater heater given the performance of other equipment in the plant.

In the case of the degraded feedwater heater, the expected TTD of the higher pressure feedwater heater will be higher when the lower pressure feedwater heater is degraded than it would be if the lower pressure feedwater heater were not degraded. If the monitoring system assumes a "target" TTD for a feedwater heater that doesn't depend upon the performance of the other feedwater heaters, it is assuming that a higher pressure feedwater heater will make up any degradation in the lower pressure feedwater heaters. This is equivalent to assuming that the "rubber" (design analysis) model of the plant is an accurate prediction of the expected equipment performance.

The one remaining problem with the performance analysis described above is that the expected performance prediction is based on a design model that might not represent the actual performance expected of the plant equipment. One way to resolve this is to obtain data from early in the plant operational history, when degradation can be considered to be zero. Use this plant data as the design data in the plant design analysis model instead of vendor design or guarantee data. This "tuning" or "base-lining" process can be repeated at any time during the life of the plant: use measured plant data as the plant design data such that the plant degradation is zero at the time of the measured data.

An on-line performance monitoring system should be thought of as a relative evaluation instead of an absolute evaluation. Because plant measured data does not normally come from calibrated, precision test instruments, the absolute magnitude of the results may not be accurate. However, on-line performance monitoring systems can be very good at detecting changes in performance. Because an on-line performance monitoring system produces relative results, the degradation of plant equipment must be tracked over time to identify changes that have occurred. For this reason, it is important to install a performance monitoring system early in the operational history of the plant. Any performance changes that occurred before the monitoring system was installed at the plant may be missed by the monitoring system.

The only way I would be able to tell my PG&E manager about degradation in the feedwater heaters would be to run earlier measured data through the performance monitoring calculation described above, and then compare the degradation calculated from the earlier time to the degradation calculated from the current plant measured data. The changes in calculated degradation are an accurate indication of changes in equipment performance, but the absolute values of the degradation may not be accurate.

Rodney R. Gay
1. Overview of Performance Monitoring

1.1 Concept of Performance Monitoring

1.1.1 "Where You Are" Versus "Where You Should Be"

Performance monitoring is the process of continuously evaluating the production capability and efficiency of a power plant and its equipment over time using measured plant data. Performance monitoring evaluations are repeated at regular intervals using data readily available from on-line instrumentation. This differs from a performance test, a one-time event that relies on precision instrumentation installed specifically for that test. The objective of performance monitoring is to continuously evaluate the degradation (decrease in performance) of the plant and its equipment in order to provide plant operators additional information to help them identify problems, improve performance, and make economic decisions about scheduling maintenance and optimizing plant operation.

A successful performance monitoring system can tell plant operators how much the plant performance has changed and how much each piece of equipment in the plant contributed to that change. This information enables operators to localize performance problems within the plant and to estimate the operational cost incurred because of the performance deficits. While it is expected that performance monitoring will help operators diagnose and repair faults in plant equipment, the diagnostic procedures to accomplish this are beyond the scope of this book.

To answer the question "How good is my performance?" one must compare the current capability of the power plant and its equipment to its expected capability. Thus, performance monitoring is a comparison of the current capability, "Where You Are", to the expected capability, "Where You Should Be". Production capability is a measure of the ability of equipment to produce the output that the equipment is designed to produce; it is not the current production. In other words, a plant that is designed to generate (produce) 600 MW might only be able to generate 550 MW on a hot day, but still be capable of generating 600 MW when operating at its design conditions. The objective of performance monitoring is to continuously evaluate this capability and monitor its change over time.

Degradation is defined as the shortfall in equipment performance caused by mechanical problems in the equipment (such as wear, fouling, and oxidation), but not by changes to set points under the control of the plant operators. For example, if plant operators increase the excess oxygen on a coal-fired boiler to reduce CO emissions when burning low-quality fuel, the boiler efficiency will decrease. The boiler capability has not changed: if the fuel and excess oxygen level were returned to their original values, the boiler efficiency would also return to its original value. Thus, the observed efficiency decrease in this example is not degradation, but is instead an opportunity for economic optimization. A second example is a gas turbine whose water-to-fuel injection ratio must be increased to meet more restrictive NOx emissions requirements. The engine power would increase and the heat rate would get worse (increase). These changes in performance do not represent degradation, just a change in operating conditions. Economic optimization is concerned with finding the plant operating mode and control set points that meet all constraints on plant operation (such as equipment protection and emissions limits) and maximize plant profits.
The current degradation of plant equipment is an important input to optimization analysis, and the current plant control set points are important inputs to degradation analysis, but the two are separate evaluations.

Performance monitoring involves two calculations: current production and expected production. The evaluation of performance degradation is a comparison between these two values. For example, a plant designed to produce 600 MW on a 59 F day may be expected to produce 550 MW on a 100 F day. If the plant meets its expected production of 550 MW on the 100 F day, then its performance is as expected (zero degradation) even though it did not perform at its design production level of 600 MW.

Table 1-1 lists the plant equipment types discussed in this book, the production objective(s) of each equipment type, and an output parameter that is a measure of each production objective. Any performance evaluation of the equipment listed in the table must relate the current production capability of the equipment to the expected production capability. Notice that the equipment types that consume fuel have two production objectives, and hence two measurements of performance. This is because output and efficiency are independent parameters for these equipment types. For fuel-consuming equipment, efficiency needs to be evaluated along with output production capability because it may be possible to achieve higher output by simply consuming more fuel. For other equipment types (non-fuel-consuming types such as heat exchangers and steam turbines), the input source of energy is fixed (that is, not determined by the performance of the equipment type being monitored) and therefore higher efficiency causes higher output. Thus, for these equipment types output and efficiency are not independent performance parameters.

Table 1-1 List of equipment types and their production objectives

Equipment                        Production Objective    Measured Output
Power Plant                      Electricity             Net Power (MW)
                                 Efficiency              Net Heat Rate
Gas Turbine                      Electricity             Power (MW)
                                 Efficiency              Heat Rate
Steam Turbine                    Electricity             Power (MW)
Boiler                           Steam Generation        Steam Flow(s), Temperature(s) and Pressure(s)
                                 Efficiency              Boiler Efficiency
Heat Recovery Steam Generator    Steam Generation        Steam Flow(s), Temperature(s) and Pressure(s)
Condenser                        Vacuum                  Condenser Shell Pressure
Cooling Tower                    Energy Rejection        Cooling Water Temperature to Condenser
Feedwater Heater                 Feedwater Heating       Feedwater Outlet Temperature

The performance of a power plant has two measures: power and heat rate. They are independent measures of performance in that the highest power is not necessarily achieved at the best (lowest) heat rate. A plant operator generally has the option to control the plant for maximum power output or to control for maximum efficiency. A performance evaluation of a power plant must include evaluations of both the power generation capability and the heat rate capability.

A gas turbine is like a power plant; in fact, a simple-cycle gas turbine is a power plant. Thus, both power and heat rate are independent performance parameters that must be evaluated when monitoring a gas turbine.

A boiler consumes fuel to generate steam. Both the steam generation capability and the boiler efficiency are important parameters of boiler performance, and both must be evaluated.

The job of a heat recovery steam generator (HRSG) is to convert the available exhaust gas energy into as much steam as possible.
When the plant is operating at full load, the temperature and pressure of the steam are controlled by the plant, and therefore are not independent parameters of HRSG performance. They represent requirements that the HRSG must meet. Improved HRSG effectiveness or efficiency results in increased steam generation. There is no opportunity to increase steam generation capability without increasing HRSG efficiency; thus, efficiency and steam generation are not independent parameters of performance. A performance monitoring system must compare the current value of HRSG steam generation or efficiency to its expected value.

A condenser's job is to condense all of the steam exhausted from the steam turbine at a pressure as low as possible. The need to condense all of the steam is a requirement that must be met. Condenser pressure is the measure of condenser performance: the lower the pressure, the better the performance. A performance monitoring system must compare the current value of this pressure to its expected value. Other parameters of condenser performance, such as cleanliness, are only important because they are an indication of the ability of a condenser to reduce steam turbine exhaust (condenser) pressure to its expected value.

A cooling tower must reject all of the steam condensation energy (condenser duty) to the cooling media (air or water). The quantity of energy to reject is a requirement that the cooling tower must meet. The measure of performance of a cooling tower is the cooling water temperature at the exit of the cooling tower (or at the inlet to the condenser). A lower value of this temperature indicates better performance. A performance monitoring system must compare the current value of this temperature to its expected value.

1.1.2 Performance Calculation Procedure

Performance monitoring involves a calculational procedure that is repeated at regular time intervals. The details of the calculation vary greatly from plant to plant, depending upon the measured data that is available, the plant type, and the degree of sophistication of the calculations. However, a performance monitoring calculational procedure always involves some or all of the following steps:

1. Acquire measured data.
2. Review, check and/or validate the raw measured data to find errors and omissions.
3. If possible, fix errors or omissions identified in the measured data.
4. Improve precision of the measured data by averaging and/or other techniques.
5. Compute fluid thermal properties, such as enthalpy and entropy, from measurements.
6. Use mass, energy and/or chemical balances to calculate data that is not measured, but can be computed from the measurements that do exist.
7. Compute current values for equipment parameters of performance such as heat rates, efficiencies, effectivenesses, temperature differences, and cleanliness.
8. Predict expected values for equipment parameters of performance.
9. Compute the corrected performance of the plant equipment.
10. Calculate the shortfall in performance (degradation), based upon the difference between the expected and current values of the performance parameters.
11. Estimate the effect (impact) that the equipment degradation has on plant performance and plant operating cost.
12. Perform plant optimization calculations to predict the most cost-effective way to run the degraded plant equipment.

A given performance monitoring system often will not perform all of these calculational steps, but the list is a fairly complete compilation of the calculations that can be and probably should be done in a comprehensive and successful performance monitoring system.
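One way to picture these steps is as a single calculation cycle that a monitoring system repeats at each time interval. The Python skeleton below is only a sketch of that flow; every function it calls is a hypothetical placeholder to be supplied by the plant's own data historian, heat balance code and expected-performance models.

```python
# Skeleton of one monitoring cycle following the steps listed above. All of
# the functions called here are placeholders; the skeleton only shows how the
# calculation steps chain together.

def run_monitoring_cycle(timestamp):
    raw = acquire_measured_data(timestamp)                # step 1
    data = validate_and_fix(raw)                          # steps 2-3: range checks, repairs
    data = time_average(data)                             # step 4
    props = fluid_properties(data)                        # step 5: enthalpies, entropies
    balanced = heat_balance(data, props)                  # step 6: fill in unmeasured values
    current = current_performance(balanced)               # step 7: heat rates, TTDs, etc.
    expected = expected_performance(balanced)             # step 8
    corrected = corrected_performance(current, balanced)  # step 9
    degradation = {key: expected[key] - current[key]      # step 10 (sign convention
                   for key in expected}                   #   depends on the parameter)
    impacts = plant_impacts(degradation)                  # step 11: MW, Btu/kW-hr, $/hr
    set_point_advice = optimize(balanced, degradation)    # step 12
    return degradation, impacts, set_point_advice
```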
1.1.3 Expected Performance: "Where You Should Be"

For performance monitoring to be meaningful, one must compare current performance to expected performance, and track that comparison over time. This process is equivalent to tracking degradation (the difference between expected and current performance) over time. Since performance monitoring is a continuous process, as opposed to a one-time event like a performance test, the performance evaluation will be performed over a variety of plant operating conditions. This makes the evaluation of expected performance the most challenging aspect of performance monitoring.

Figure 1-1 illustrates the concept of expected versus actual (degraded) power for a gas turbine engine. The baseload power of a gas turbine engine varies with inlet air temperature, as illustrated by the expected power line in this figure. It is assumed for the purposes of this discussion that ambient temperature is the only environmental parameter that is changing. Gas turbine vendors typically provide performance curves which show how performance will change with environmental conditions. A vendor performance curve can be used to compute the expected power line. Notice that one point on the expected power line is the rated power, which occurs at only one air inlet temperature (shown as T_reference in Figure 1-1).

When a degraded gas turbine is operated over a range of inlet air temperatures, the measured gas turbine power levels will likely lie along a line below the expected power line, as illustrated in Figure 1-1. The corrected power is the power that the actual (degraded) engine would produce if operated at the reference temperature. The difference between the rated and corrected power is the degradation of the engine from rated.

The procedure to calculate expected performance is to start with the expected performance at the reference operating conditions (the rated performance), and then use a model or models of equipment performance to predict the change in equipment performance when the equipment is operated at conditions different from the reference operating conditions. The model(s) of equipment performance can be very simple, such as table look-ups, or very complex, such as a physically based computer code.

Figure 1-1 Comparison of rated, expected, measured and corrected power for a gas turbine (GT power versus inlet temperature)

The expected performance line in Figure 1-1 is actually a simple example of a performance model of a gas turbine. This line shows how gas turbine power will change as the gas turbine inlet temperature changes. This line could be converted into a table look-up as part of a computerized performance model. Of course, any gas turbine performance monitoring system must also account for changes in other reference operating conditions such as inlet pressure loss, exhaust pressure loss, fuel properties, inlet pressure, inlet relative humidity, steam/water injection, inlet guide vane angle and firing temperature. Since these are independent parameters of gas turbine performance, separate models can be used for each condition, and the total power change is the product of the power changes predicted from the changes in each reference operating condition. Using curves to evaluate equipment performance is discussed later in this chapter under "Curve-Based Methods".
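As a simple illustration of both ideas, the sketch below stores an expected-power line as a table look-up and combines several independent, multiplicative correction factors into one overall correction. The curve points and factor values are invented for illustration and do not come from any vendor curve.

```python
import numpy as np

# Illustrative expected-power curve (the "expected power line" of Figure 1-1)
# stored as a table look-up. Points are made up for illustration.
inlet_temp_F   = np.array([0.0, 20.0, 40.0, 59.0, 80.0, 100.0])
expected_power = np.array([183.0, 178.0, 174.0, 170.0, 161.0, 150.0])   # MW at baseload

def expected_power_from_curve(t_inlet_F):
    """Linear interpolation in the vendor-style curve."""
    return float(np.interp(t_inlet_F, inlet_temp_F, expected_power))

def combined_correction(correction_factors):
    """Each independent operating condition contributes a multiplicative
    correction factor (1.0 = no effect); the total effect is their product."""
    total = 1.0
    for factor in correction_factors.values():
        total *= factor
    return total

# Expected power at 87 F inlet temperature, with additional corrections for
# inlet/exhaust pressure loss and water injection (factor values illustrative).
factors = {"inlet_dP": 0.996, "exhaust_dP": 0.998, "water_injection": 1.012}
expected = expected_power_from_curve(87.0) * combined_correction(factors)
print(f"expected baseload power: {expected:.1f} MW")
```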
An alternative model of gas turbine performance is a computer code that includes physically based mathematical models of the compressor, combustor and expander. Such a code would take the operating conditions as inputs and predict the gas turbine power and heat rate at those operating conditions. It would be necessary to adjust (tune) such a computer code so that it accurately predicts the gas turbine rated performance at the reference operating conditions. Then the performance monitoring system could input measured data into the computer code to predict the expected performance at the current measured operating conditions. Using physically based computer models to evaluate equipment performance is described later in this chapter under "Model-Based Performance Analysis".

1.1.4 Equipment Ratings

The rated performance of plant equipment must include a specification of all the external conditions and control settings that change equipment performance but are not part of the equipment itself. Table 1-2 lists all of the specifications that are required to state the rating of a gas turbine.

There are several ways to obtain the rating data for a plant and its equipment. For performance monitoring purposes, the choice is somewhat arbitrary since a monitoring system tracks changes in performance or degradation over time. If the monitoring system defines degradation as the fall-off in performance over time, the absolute value of the rating cancels out. Several ways to define equipment ratings are:

• Use vendor guarantees
• Use acceptance test (as-built) data for the plant and equipment
• Use plant measured data at the time the monitoring system is installed
• Baseline (tune) the ratings on a regular basis using plant measured data

A gas turbine will produce its rated power and heat rate only at the reference operating conditions listed. The values of the reference operating conditions are called the reference data. All of the data in Table 1-2 are related. Change any of the operating conditions (from their reference values), and the power and heat rate of the engine will change (from rated).

Table 1-2 Typical rating specifications for a gas turbine engine

Gas Turbine Rating Specification       Example Data
RATING:
  Gross Power                          170 MW
  Gross Heat Rate                      9400 Btu/kW-hr
REFERENCE OPERATING CONDITIONS:
  Ambient Temperature                  59 deg-F
  Ambient Pressure                     14.65 psia
  Ambient Specific Humidity            0.0065 lbm H2O/lbm air
  Inlet Pressure Loss                  4 in H2O
  Exhaust Pressure Loss                12 in H2O
  Steam/Water Injection                none
  Fuel Type                            Natural Gas
  Fuel Lower Heating Value             20200 Btu/lbm
  Inlet Guide Vane Angle               86 deg
  Firing Temperature                   2300 deg-F
  Inlet Cooling or Heating             none

The expected performance prediction for a gas turbine, or for any equipment type, requires both a set of rating specifications (which includes both the rated performance and the reference operating conditions), plus a model of performance that predicts how performance changes as the operating conditions change.
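A rating is therefore a natural pairing of rated performance with its reference operating conditions. The sketch below shows one hypothetical way to carry that pairing in code; the class and field names are illustrative and are not taken from any monitoring product.

```python
from dataclasses import dataclass

@dataclass
class RatingSpecification:
    """Rated performance plus the reference operating conditions at which the
    rating applies (an illustrative structure, not from any vendor tool)."""
    rated_performance: dict
    reference_conditions: dict

    def condition_deviations(self, current: dict) -> dict:
        """Difference of each current operating condition from its reference
        value: the quantities a correction-curve model would act on."""
        return {key: current[key] - ref
                for key, ref in self.reference_conditions.items() if key in current}

# Example data loosely following Table 1-2 (illustrative only).
gt_rating = RatingSpecification(
    rated_performance={"gross_power_MW": 170.0, "gross_heat_rate_Btu_per_kWh": 9400.0},
    reference_conditions={"ambient_temp_F": 59.0, "ambient_pressure_psia": 14.65,
                          "inlet_dP_inH2O": 4.0, "exhaust_dP_inH2O": 12.0},
)
print(gt_rating.condition_deviations({"ambient_temp_F": 87.0, "inlet_dP_inH2O": 5.1}))
```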
Table 1-3 lists the rating specifications for a typical heat recovery steam generator.

Table 1-3 Typical rating specifications for a heat recovery steam generator (HRSG)

Heat Recovery Steam Generator Rating Specification    Example Data
RATING:
  HP Steam Flow                        415,000 lb/hr
  IP Steam Flow                        70,000 lb/hr
REFERENCE OPERATING CONDITIONS:
  Exhaust Gas Flow                     3,250,000 lb/hr
  Exhaust Gas Temperature              1138 F
  Exhaust Gas Composition              3% H2O
  HP Drum Pressure                     1900 psia
  IP Drum Pressure                     400 psia
  LP Drum Pressure                     100 psia
  Inlet Feedwater Temperature          140 F
  HP Steam Temperature                 1100 F
  Duct Burner Fuel Flow                none
  Steam Extraction to Process          20,000 lb/hr
  Water Extraction to Process          30,000 lb/hr

Once again, the rating specifications for an HRSG indicate that the HRSG will produce the rated steam flows only if it is operating at the reference operating conditions. To predict HRSG expected performance, a monitoring system must be able to predict the change in HRSG performance as operating conditions change from their reference values. Table 1-4 gives typical rating specifications for a steam turbine.

Table 1-4 Typical rating specifications for a steam turbine/generator

Steam Turbine Rating Specification     Example Data
RATING:
  Gross Power                          190 MW
REFERENCE OPERATING CONDITIONS:
  Throttle Steam Flow                  930,000 lb/hr
  Throttle Steam Temperature           1000 F
  Throttle Steam Pressure              1800 psia
  Condenser Back Pressure              0.8 psia
  Reheat Steam Temperature             1000 F
  IP Extraction Flow                   none
  LP Admission Flow                    160,000 lb/hr

A steam turbine will generate its rated power only at the rated steam flow conditions and condenser pressure. Any change in these flow conditions or pressures will cause the steam turbine power to change.

Table 1-5 Typical rating specifications for a condenser

Condenser Rating Specification         Example Data
RATING:
  Shell (Steam) Pressure               0.8 psia
REFERENCE OPERATING CONDITIONS:
  Inlet Steam Flow                     930,000 lb/hr
  Inlet Steam Enthalpy                 1000 Btu/lb
  Cooling Water Flow                   6,000,000 lb/hr
  Cooling Water Inlet Temperature      80 F

A condenser is required to condense all of the incoming steam and transfer the energy released from condensation to the cooling water. The condenser duty, the cooling water flow and the cooling water inlet temperature are imposed upon the condenser by the performance of other equipment in the plant (external to the condenser). The condenser is designed to achieve its rated pressure at a given (reference) set of inlet flow conditions. Any change in the inlet steam or water flows will be expected to change the condenser pressure.

Table 1-6 Typical rating specifications for a coal-fired boiler

Boiler Rating Specification            Example Data
RATING:
  Main Steam Generation                2,560,000 lb/hr
  Boiler Efficiency                    89.5%
REFERENCE OPERATING CONDITIONS:
  Fuel Input Energy                    3374 mmBtu/hr
  Steam Drum Pressure                  2800 psig
  Steam Temperature                    1005 F
  Reheat Steam Temperature             1005 F
  Reheat Steam Flow                    2,275,000 lb/hr
  Reheat Steam Inlet Pressure          592 psig
  Fuel Higher Heating Value            11,495 Btu/lb
  Fuel Composition (C, H, N, S, O, H2O, Ash)   (64.2, 4.1, 2.5, 4.4, 0.8, 4.1, 19.9)
  Inlet Feedwater Temperature          475 F
  Inlet Air Temperature                80 F
  Inlet Air Relative Humidity          60%
Table 1-7 Typical rating specifications for a feedwater heater

Feedwater Heater Rating Specification  Example Data
RATING:
  Outlet Feedwater Temperature         420 F
  Outlet Drain Water Temperature       380 F
REFERENCE OPERATING CONDITIONS:
  Inlet Steam Flow                     120,000 lb/hr
  Inlet Steam Temperature              890 F
  Inlet Steam Pressure                 320 psia
  Inlet Feedwater Flow                 2,600,000 lb/hr
  Feedwater Inlet Temperature          370 F
  Inlet Drain Water Flow               150,000 lb/hr
  Inlet Drain Water Temperature        460 F

1.1.5 Corrected Performance: The Indicator of Degradation

For combined-cycle power plants, the expected performance varies greatly over time. This makes it difficult to track changes in performance, as the measured values of most performance parameters vary due to changes in plant operating conditions. One methodology to make the identification of performance changes over time easier is to "correct" the current performance to a standard operating condition, usually the reference operating conditions. To correct the performance means to account for the performance variations that would be expected due to the changes in environmental conditions and control set points. The corrected performance is the performance that would be expected if the current (degraded) engine were operating at the reference operating conditions. The virtue of corrected performance is that its expected value remains constant and equal to the rated value. Thus, any change in a corrected value represents a change in equipment performance capability.

Corrected power is a barometer of engine performance. It goes down when degradation increases and it goes up when degradation decreases. In fact, the degradation in performance from one point in time to another is equal to the change in corrected performance over that time range.
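With multiplicative correction factors of the kind sketched earlier, a hypothetical corrected power is simply the measured power divided by the product of the factors that describe the current operating conditions; the values below are illustrative only.

```python
def corrected_power(measured_power_MW, correction_factors):
    """Divide out the expected effect of off-reference operating conditions,
    leaving the power the degraded engine would produce at the reference
    conditions. Factors follow the multiplicative convention used earlier."""
    total = 1.0
    for factor in correction_factors.values():
        total *= factor
    return measured_power_MW / total

# Illustrative numbers: on a hot day the curves say the engine should be at
# roughly 0.93 of its rated output, so 151.9 MW measured corrects to roughly
# 163 MW at reference conditions.
factors = {"inlet_temperature": 0.935, "inlet_dP": 0.996, "exhaust_dP": 0.998}
print(f"corrected power: {corrected_power(151.9, factors):.1f} MW")
```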
The corrected power is a convenient plotting parameter because it shows degradation in the engine. Notice that the engine corrected power started at over 161 MW in July and degraded to approximately 158 MW by October, a loss of approximately 3 MW over a three-month period. The engine ‘overhaul in October improved the corrected power back up to approximately 162 MW. In other words, this plot shows that the overhaul improved the engine's power capability by 3 MW to 4 MW. Figure 1-3 Shows corrected condenser pressure over an eight-month period Figure 1-3 shows corrected condenser pressure at a combined-cycle power plant in the United Kingdom. Notice the slow increase in corrected pressure ‘over 150 days, indicating fouling of the condenser tubes and/or blockage in the waterboxes. Cooling water flow through the tubes also decreased about 4% during this time period (not shown on the figure). When the tubes and 1. Concept of Performance Monitoring page 28 waterboxes were cleaned during a plant outage, the corrected condenser pressure improved back to approximately the same level as the beginning of the trend, and the cooling water flow rate also recovered (not shown on the figure) 4.1.6 What is My Degradation? Degradation is the reduction in equipment performance capability that has occurred over time. It is a relative parameter; it compares equipment capability at one point in time to that at another time. Since the corrected gas turbine power is a prediction of the current rating of the engine, the difference in corrected power from one point in time to another is the degradation that has occurred over the time period. Thus, degradation may be defined as the change in corrected performance over time. For the value of degradation to be meaningful, the start time and end time of the degradation must be stated. If no time range is stated, it is usually assumed that the degradation is over the operational lifetime of the equipment, which is from the time the equipment was put into service to the present. Often when historical data is not available, degradation may be stated as the difference between the current equipment capability and its rated capability. This is equal to the difference between the rated performance and the corrected performance. Since degradation is defined as a change in performance over time, this definition of degradation is only true if the equipment actually achieved its rated performance at some point in time. Rated performance is often set equal to the vendor guarantee as opposed to a performance test at the beginning of equipment life. Thus, the equipment ‘may not have ever operated at its rated performance. Degradation will be defined throughout this book as the difference between corrected and rated performance. Ideally, the rated performance should be defined as the actual performance at some given point in time, but if sufficient plant data is not available it may be set equal to the vendor guarantee. The definition of degradation as a change over time instead of the change from vendor guarantee is significant to the concept of performance monitoring because a change over time is an aid in identifying changes in equipment performance, while a change from guarantee may be misleading. If the degradation is defined as the change from guarantee (a level of performance that the equipment may never have actually operated at), some plant equipment may show may show negative degradation, indicating that the equipment is performing better than the guarantee level. 
1.1.7 How Much is Degradation Costing Me?

Knowing the amount of degradation is important, but it's not the full story. In order to make decisions about which maintenance to perform, plant operators need to know how much the degradation is costing plant operation. For example, a reduction in gas turbine performance (power and heat rate) has an effect on overall combined-cycle plant performance, which can be calculated using an overall plant model. The power reduction in the gas turbine reduces plant power because both the gas turbine and the steam turbine power levels will change. The steam turbine power changes because the gas turbine exhaust flow and temperature normally change as a result of the gas turbine degradation. The heat rate increase of the gas turbine will cause the plant to consume more fuel per MW-hr of power produced. These effects on plant power and heat rate can then be converted to operating costs by applying a fuel cost to the extra fuel being burned, and/or a MW-hr cost to the power which is not being sold because of the degradation.

Here is a situation where the definition of degradation as a change in performance over time, as opposed to a change from vendor guarantee, is particularly important. Once the plant is accepted and goes into commercial operation, it is too late to worry about equipment guarantees. The best that the operators can be expected to do is to maintain plant performance at a level at which the plant actually operated in the past. Therefore, degradation is an estimate of the performance improvement that is possible, and any existing degradation can be looked upon as the source of an operational cost that is potentially avoidable.

Degradation normally is evaluated in different engineering units for each equipment type: gas turbine degradation is in MW while condenser degradation is in either psia or percent cleanliness. This makes it difficult to compare degradations calculated for different parts of the plant or for different equipment types. One way to make a meaningful comparison is to calculate the impacts of the degradation on overall plant power, heat rate and operating cost. The definition of a plant impact is the change in plant performance that would be realized if the degradation were to be returned to zero by some maintenance action.

For example, a condenser may have a degradation of 0.1 psia (6.9 mbar). This means that the condenser is operating at a pressure 0.1 psi (6.9 mbar) higher than it would operate if the degradation were zero. This degradation causes a reduction in steam turbine power, which is also a reduction in plant power. The impact of the condenser degradation on plant power is equal to the change in plant power caused by the degradation of the condenser.

The reduction in plant power due to the condenser degradation increases plant heat rate since fuel flow is not changed. Actually, fuel flow in a Rankine cycle plant with condenser degradation may decrease slightly because the increased condenser pressure will lead to a higher feedwater temperature entering the boiler, which will reduce boiler fuel consumption. Even so, the plant power always decreases and heat rate always increases when condenser pressure increases. The change in plant heat rate caused by the degradation in the condenser is called the impact of condenser degradation on plant heat rate. These changes in plant performance reduce electric sales revenues and increase fuel costs, resulting in a net operating cost to the plant. The change in plant revenues minus fuel costs is called the impact of condenser degradation on plant cost.

The idea behind the overall plant impacts is to convert all of the degradations in the plant to their respective costs on plant performance. Then these degradations can be compared and evaluated on a consistent (apples to apples) basis. Table 1-8 below illustrates the concept for a combined-cycle plant.

Table 1-8 Example of plant equipment degradations and their impacts on plant performance

                                     Impact on Plant Performance
Equipment         Degradation       Power (MW)   Heat Rate (Btu/kW-hr)   Operating Cost ($/hr)
Inlet Air Filter  1.1 in-H2O        0.3          16                      46
Gas Turbine       1.9 MW            2.2          15                      294
HRSG              12,000 lb/hr      1.5          91                      169
Steam Turbine     0.8 MW            0.8          31                      84
Condenser         0.1 psi           0.1          25                      56
Cooling Tower     2.1 F             0.2          13                      22
Total Plant                         5.1          191                     671
These changes in plant performance reduce electric sales revenues and increase fuel costs, resulting in a net operating cost to the plant. The change in plant revenues minus fuel costs is called the impact of condenser degradation on plant cost.

The idea behind the overall plant impacts is to convert all of the degradations in the plant to their respective costs on plant performance. Then these degradations can be compared and evaluated on a consistent (apples to apples) basis. Table 1-8 below illustrates the concept for a combined-cycle plant.

Table 1-8 Example of plant equipment degradations and their impacts on plant performance

  Equipment          Degradation     Impact on Plant Performance
                                     Power (MW)   Heat Rate (Btu/kW-hr)   Operating Cost ($/hr)
  Inlet Air Filter   1.1 in-H2O      0.3          16                      46
  Gas Turbine        1.9 MW          2.2          15                      294
  HRSG               12,000 lb/hr                                         169
  Steam Turbine      0.8 MW          0.8          31                      84
  Condenser          0.1 psi         0.1          25                      56
  Cooling Tower      2.1 F           0.2                                  22
  Total Plant                        5.1          191                     671

The inlet air filter in Table 1-8 has a pressure-loss degradation equal to 1.1 in-H2O. If this degradation were eliminated by replacing the air filters, the gas turbine inlet pressure would increase, resulting in a gas turbine power increase. The steam turbine power would also increase because of the increase in gas turbine exhaust energy. The total plant power would increase by 0.3 MW, which is defined as the impact of the air filter on plant power. This plant power increase would cause a plant heat rate decrease equal to 16 Btu/kW-hr. Overall these changes in plant power and heat rate would yield a net increase of 46 $/hr in plant operating profits (electric sales revenues minus fuel costs). Methods to calculate these impacts are reviewed in Chapter 6, "Impacts of Degradation on Overall Plant Performance".

The total plant power degradation is equal to the sum of the equipment impacts on plant power. In other words, the total of the equipment impacts on plant power is equal to the degradation in plant power, which is equal to the rated plant power minus the corrected plant power when the degradation is calculated from rated. For example, if the plant were rated at 400 MW and the total power degradation from rated were 5.1 MW, then the plant would be expected to now produce only 394.9 MW if operated at the plant reference operating conditions. This power (394.9 MW) is called the corrected plant power. In a similar manner, the corrected plant heat rate is equal to the rated plant heat rate plus the total of the equipment impacts on plant heat rate (191 Btu/kW-hr in Table 1-8). The current plant operating costs (electric sales revenues minus fuel costs at the reference operating conditions) are $671/hr higher than they would be if the plant were performing as rated and operating at the reference operating conditions.

1.1.8 Optimization: "Where You Could Be"

Once the degradation of the plant and its equipment is known, the plant operator is prepared to answer the question, "What is the best way to operate the plant so as to maximize plant profits?" The idea is to adjust the plant set-points that are under the control of the operator to make as much money as possible for the plant. The equipment degradation listed in Table 1-8 summarizes maintenance issues, but optimization is concerned with actions the operator can take to improve performance without maintenance.

An example of calculated optimization outputs for a combined-cycle power plant with two gas turbines is illustrated in Table 1-9.
Table 1-9 Example optimization outputs for a combined-cycle power plant

  Controllable Set-point            Current Value   Optimal Value   Cost Savings ($/hr)
  GT1 Power                         170 MW                          90
  GT2 Power                         150 MW          159 MW          88
  Inlet Chiller #1                  On              Off             2
  Inlet Chiller #2                  On              On              0
  Duct Burner #1                    On              On              0
  Duct Burner #2                    On              On              0
  Number of Cooling Tower Fans On   7               6
  Total Savings Possible

The Current Value column shows current plant operating data, and the Optimal Value column shows where the plant could operate if the operator took the appropriate control actions. Finally, the Cost Savings column estimates the increase in plant operational profit that would be achieved if the operator took the suggested actions. This screen is different from the degradation screen in Table 1-8 in that no maintenance actions are required, and the optimal operating conditions are achievable by operator action. No one knows if the degradation in Table 1-8 is fully recoverable, but the control actions suggested in Table 1-9 can be taken (assuming no environmental or other operational limit on plant operation is violated), and the cost savings achieved.

1.1.9 Controllable Loss Displays

Controllable loss displays are an alternate way to present the degradation and optimization data of Tables 1-8 and 1-9. These displays are most often used for Rankine cycle plants, where the expected or target values of plant performance parameters do not vary widely with plant operating conditions. Controllable loss displays show the current value of selected plant performance parameters, their target values, and the cost incurred by not operating the plant at these target values.

Figure 1-4 Example controllable loss display for a fossil (Rankine cycle) plant

The advantage of controllable loss displays is that they are a readily understandable summary of the plant performance status. If there is no degradation in plant equipment, the controllable loss display will show small losses, and vice versa. The disadvantage is that they give little information as to the location of plant performance problems. Controllable loss displays are a very useful way to summarize plant status; they inform the operator if there is a plant performance problem.

The target values for controllable loss displays are generally based upon expected overall plant performance with no equipment degradation anywhere in the plant. Due to the regenerative nature of a Rankine cycle, degradation in one area of the plant will likely show up as deviations in several controllable loss parameters calculated from measured data in other areas of the plant. Thus, controllable loss parameters do not report degradation specific to individual plant equipment, but instead report a departure in overall plant performance from the values that the performance parameters would have if the entire plant were "new and clean". For example, in order to achieve the target main steam temperature in a boiler, the economizers, the air preheater, and the feedwater heaters must all operate with their target performance. Degradation in any of these may cause the steam temperature to change.
A change in the steam temperature may change the throttle pressure, which might change the steam turbine efficiency and the condenser pressure. Thus, many of the controllable loss parameters are related to each other, and several will likely change when one of them changes. The target values used in controllable loss displays are a very different concept from the equipment degradation calculations described above, where the expected performance of each equipment type depends upon the operational conditions that the equipment is exposed to and is independent of the degradation of other equipment in the plant.

1.2 ASME Test Codes

ASME Performance Test Codes provide test procedures that yield results of the highest level of accuracy consistent with the best engineering knowledge and practice currently available. The test procedures were developed by balanced committees of professional individuals representing all concerned interests. The test codes specify procedures, instrumentation, equipment operating requirements, calculation methods, and uncertainty analysis. When tests are run in accordance with an ASME code, the test results will be of the highest quality and the lowest uncertainty available.

The focus of the ASME test codes is to provide test specifications appropriate for verification of compliance with guarantee or warranty performance. As such, the absolute accuracy of measured performance is stressed as opposed to ease of testing. In general it is very difficult to implement the ASME test code procedures as the basis of performance monitoring at an operating power plant. The following table lists the test codes that are most closely related to power plant performance monitoring.

Table 1-10 ASME Performance Test Codes closely related to performance monitoring

  ASME Test Code                Description
  PTC 1 - 1999                  General Instructions
  PTC 2 - 1980 (R197)           Code on Definitions and Values
  PTC 4.3 - 1968 (R191)         Air Heaters
  PTC 4.4 - 1981 (R2003)        Gas Turbine Heat Recovery Steam Generators
  PTC 6 - 1996                  Steam Turbines
  PTC 6A - 2000                 Appendix to PTC 6
  PTC 6 Report - 1985 (R1997)   Evaluation of Measurement Uncertainty in Performance Tests of Steam Turbines
  PTC 6S - 1988 (R195)          Procedures for Routine Performance Test of Steam Turbines
  PTC 8.2 - 1990                Centrifugal Pumps
  PTC 11 - 1984 (R198)          Fans
  PTC 12.1 - 2000               Closed Feedwater Heaters
  PTC 12.2 - 1998               Steam Surface Condensers
  PTC 12.3 - 1997               Deaerators
  PTC 19.1 - 1998               Test Uncertainty
  PTC 22 - 1997                 Performance Test Code on Gas Turbines
  PTC 23 - 1986 (R197)          Atmospheric Water Cooling Equipment
  PTC 46 - 1997                 Overall Plant Performance
  PTC PM - 1993                 Performance Monitoring Guidelines for Steam Power Plants

1.3 Performance Testing versus Online Monitoring

A performance test is a one-time evaluation of equipment performance that relies on precision instrumentation installed specifically for that test. The equipment being tested is operated at conditions as close to design and/or guarantee as possible. The objective of a performance test is to measure the absolute capability of the equipment. The tests are often done to verify vendor guarantees on new or upgraded equipment. The objective of performance monitoring is to detect changes in equipment performance (degradation) so that proper corrective action can be taken.
The absolute value of performance is not necessarily important to performance monitoring; instead, repeatability of results is most important, so that changes over time can be evaluated. The principal differences between testing and monitoring are summarized in Table 1-11 below.

Table 1-11 Comparison of performance testing and online monitoring

                             Performance Test                       Online Monitoring
  Objective                  Absolute Performance                   Detect Degradation
  Instrumentation Type       Precision Test Instruments             Whatever Is Available
  Measurement Requirement    Accuracy                               Repeatability
  Test Interval              One-Time Event                         Repeated Often
  Test Conditions            Equipment Isolated and at Full Load    Normal Plant Operation

The basic difference between performance monitoring and performance testing is that monitoring uses whatever instrumentation is continuously available at the plant to give the operators an indication of plant performance status. As such, monitoring data is usually not adequate for vendor guarantee testing, but is usually acceptable for tracking changes in equipment degradation. The fact that monitoring evaluations are repeated many times gives the engineer the opportunity to reject results that are not consistent with long-term trends.

The uncertainty of a measurement is considered to be the sum of two components called the bias and the random uncertainties. Accuracy is achieved only if both the bias and random uncertainties are small. However, repeatability is the long-term variation in the bias error. Although the relative contributions of random and bias errors are unknown for most instruments, the ASME Performance Test Code Committee has estimated the repeatability as one-half the overall instrument uncertainty. The conclusion is that even though installed plant instrumentation may not be adequate for precision tests, the repeatability of performance monitoring results often approaches the accuracy of precision tests. This means that degradation (change in performance) can be measured more accurately than absolute performance.

1.4 Curve Based Methods

1.4.1 Performance Curves

Performance monitoring involves a comparison of the expected (new and clean) equipment performance to its current (measured) performance. The current performance is usually directly measured or is calculated from measured data. The prediction of expected equipment performance requires both a measurement of equipment operating conditions and a method or model to use to predict how the equipment performance changes as operating conditions change. Curve-based methods are a simple and reliable way to predict equipment performance changes as long as the operating conditions have not changed too much from the reference conditions.

The basic concept behind curve-based methods is to assemble a set of performance or correction curves that plot the variation in a specific equipment performance parameter (such as power, heat rate or efficiency) when one of the operating conditions changes. The total fractional change in equipment performance is then computed by multiplying together the fractional changes for each operating condition, where each multiplying factor is generated using a separate correction curve.

Two equipment characteristics must be known in order to predict the expected performance of any plant equipment:

1. A rating specification for the equipment that includes both the rated performance and the reference operating conditions at which the rating applies.
2. A method or model of equipment performance, which could be in the form of performance curves, that can predict how the performance changes when any of the reference operating conditions change.

Table 1-12 is an example of the rating specification for a heat recovery steam generator (HRSG), and Figures 1-5 through 1-8 are example performance curves for that same heat recovery steam generator. These curves may come from vendor performance guarantee tables, a computer model of the HRSG, or from measured data. Each curve shows how equipment performance will change if only one of the equipment operating conditions changes. When generating a performance curve it is assumed that all other equipment operating conditions remain constant and equal to their reference values. Thus, Figure 1-5 shows the variation of HP steam flow and HRSG effectiveness as the gas turbine exhaust temperature varies, but only if the other operating conditions (exhaust gas flow, exhaust gas composition, drum pressures, inlet feedwater temperature, HP steam temperature, and duct burner fuel flow) remain equal to their reference values as stated in Table 1-12.

Table 1-12 Rating specification for the example heat recovery steam generator

  RATING:
    HP Steam Flow                 511,700 lb/hr
    LP Steam Flow                 88,300 lb/hr
    Effectiveness                 93.4
  REFERENCE OPERATING CONDITIONS:
    Exhaust Gas Flow              3,200,000 lb/hr
    Exhaust Gas Temperature       1135 F
    Exhaust Gas Composition       10% H2O
    HP Drum Pressure              1900 psia
    LP Drum Pressure              100 psia
    Inlet Feedwater Temperature   136 F
    HP Steam Temperature          1000 F
    Duct Burner Fuel Flow         0.00

Figure 1-5 Example HRSG performance (HP steam flow and HRSG effectiveness) versus changes in gas turbine exhaust gas temperature

Figure 1-6 Example HRSG performance (HP steam flow and HRSG effectiveness) versus changes in gas turbine exhaust gas flow rate

Figure 1-7 Example HRSG performance (HP steam flow and HRSG effectiveness) versus changes in high-pressure steam drum pressure

Figure 1-8 Example HRSG performance (HP steam flow and HRSG effectiveness) versus changes in duct burner fuel energy input

1.4.2 Expected Performance from Curves

The basic assumption behind the curve-based performance-prediction methodology is that the individual operating conditions impact equipment performance independently. When this assumption is true, the total impact on performance can be computed by combining the impacts of the individual parameters.

The methodology used to combine the individual impacts into a net impact on performance is to convert all the individual impacts into a fractional or percentage change in the performance parameter. The fractional change in HP steam flow when the exhaust temperature changes from the reference value, T_ref, to some value T_1, is

Fractional Change from T_ref to T_1:

  Fractional Change = Curve_T(T_1) / Curve_T(T_ref)                                   (1.1)

where
Curve_T(T_1) is the look-up value of the HP steam flow from the HRSG exhaust temperature performance curve, Figure 1-5, at temperature T_1

Curve_T(T_ref) is the steam flow at the reference exhaust temperature from the same performance curve

The expected HP steam flow is the combination of all the fractional changes from all of the parameters that affect the HP steam flow.

Expected HP Steam Flow from Performance Curves:

  W_HP,exp = W_HP,rated x [Curve_T(T_1)/Curve_T(T_ref)] x [Curve_W(w_exh,1)/Curve_W(w_exh,ref)]
                        x [Curve_P(P_1)/Curve_P(P_ref)] x [Curve_DB(w_DB,1)/Curve_DB(w_DB,ref)]      (1.2)

where

W_HP,exp is the expected value of the HP steam flow at the exhaust temperature T_1, exhaust flow w_exh,1, drum pressure P_1, and duct burner fuel flow w_DB,1

W_HP,rated is the rated value of the HP steam flow, which occurs at the reference operating conditions

Curve_T(T_1) is the value read from the exhaust temperature performance curve at temperature T_1

Curve_T(T_ref) is the value read from the exhaust temperature performance curve at temperature T_ref

Curve_W(w_exh,1) is the value read from the exhaust flow rate performance curve at flow rate w_exh,1

Curve_W(w_exh,ref) is the value read from the exhaust flow rate performance curve at flow rate w_exh,ref

Curve_P(P_1) is the value read from the drum pressure performance curve at pressure P_1

Curve_P(P_ref) is the value read from the drum pressure performance curve at pressure P_ref

Curve_DB(w_DB,1) is the value read from the duct burner fuel flow performance curve at fuel flow w_DB,1

Curve_DB(w_DB,ref) is the value read from the duct burner fuel flow performance curve at fuel flow w_DB,ref

As an example, consider what happens to the HRSG performance when the gas turbine exhaust conditions change, such as when the exhaust temperature into the sample HRSG changes from its reference value of 1135 F to 1100 F, and the exhaust flow reduces from 3200 klb/hr to 2800 klb/hr. The exhaust temperature performance curve (Figure 1-5) gives HP steam flow values of 511.7 klb/hr at the reference temperature (1135 F), and 480.6 klb/hr at an exhaust temperature equal to 1100 F. The exhaust flow performance curve (Figure 1-6) gives HP steam flow values of 511.7 klb/hr at the reference flow (3200 klb/hr), and 448.5 klb/hr at an exhaust flow equal to 2800 klb/hr. Thus, the expected HP steam flow at the new exhaust conditions is equal to:

  W_HP,exp = W_HP,rated x [Curve_T(1100)/Curve_T(1135)] x [Curve_W(2800)/Curve_W(3200)]
           = 511.7 x (480.6/511.7) x (448.5/511.7) ≈ 421 klb/hr                                       (1.3)

Notice that only two terms out of four possible change factors are included in the calculation, because the drum pressure and duct burner fuel flow did not change and their contributions to the calculation would equal unity (1.0).

The expected HRSG effectiveness (η_exp) at the new gas turbine exhaust conditions would be calculated in the same manner, except that the calculation must use curve look-up values for the effectiveness instead of the steam flow:

  η_exp = 93.4 x [Curve_T(1100)/Curve_T(1135)] x [Curve_W(2800)/Curve_W(3200)] = 92.8%                (1.4)
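A minimal sketch of this multiplicative curve-based calculation (Equations 1.2 and 1.3) is shown below, assuming each performance curve can be represented by linear interpolation through a few points. Only the 511.7, 480.6 and 448.5 klb/hr values come from the text; the remaining curve points are invented for illustration.

```python
# A minimal sketch of the curve-based calculation in Equations 1.2 and 1.3.
# Curve points other than the reference/example values quoted in the text are
# made up for illustration.

import numpy as np

def curve(x_points, y_points):
    """Return a look-up function for one performance curve (linear interpolation)."""
    return lambda x: float(np.interp(x, x_points, y_points))

# Exhaust-temperature and exhaust-flow curves for HP steam flow (klb/hr)
curve_T = curve([1050.0, 1100.0, 1135.0, 1170.0], [440.0, 480.6, 511.7, 540.0])
curve_W = curve([2400.0, 2800.0, 3200.0],         [390.0, 448.5, 511.7])

W_RATED = 511.7                 # klb/hr at the reference conditions
T_REF, W_EXH_REF = 1135.0, 3200.0

def expected_hp_flow(t_exh, w_exh):
    """Multiply the fractional change from each curve (Equation 1.2)."""
    return (W_RATED
            * curve_T(t_exh) / curve_T(T_REF)
            * curve_W(w_exh) / curve_W(W_EXH_REF))

# Reproduces the worked example: about 421 klb/hr at 1100 F and 2800 klb/hr
print(round(expected_hp_flow(1100.0, 2800.0), 1))
```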
1.4.3 Additive Performance Factors

Some operational parameters are not best represented by fractional changes in performance, but instead by incremental changes in performance. An addition of a quantity of energy to a system will likely cause the outputs of the system to increase by an additive amount that is proportional to the quantity of energy added. For example, a given amount of water or steam injected into a gas turbine will increase the gas turbine power by an increment that is proportional to the amount of steam/water injected, but is not closely related to the power level of the gas turbine without the steam/water injection. In this situation it would be non-intuitive to express this impact as a multiplier on the reference gas turbine power. Instead, this impact is better presented by adding an increment of power proportional to the amount of steam/water that is injected. This argument also applies to other situations, such as duct burner fuel energy added to an HRSG, or steam admission to or extraction from a steam turbine where the flow rate is not directly related to the throttle flow.

Note also that if the impact of water injection on gas turbine power is expressed using the water-to-fuel ratio as the independent parameter instead of a specified water injection flow rate, then the effect on performance is better represented as a multiplicative factor. This is because the water injection has been normalized back to rated conditions by dividing the quantity of water injection by the quantity of fuel flow. Additive correction factors generally are only used to represent discrete quantities being added to or taken from the equipment or system. Thus, gas turbine vendors have the option of expressing the effect of water injection on gas turbine performance as an additive factor (when the amount of water injection is plotted versus gas turbine power) or as a multiplicative factor (when water-to-fuel ratio is plotted versus gas turbine power). HRSG vendors have the same option for duct burner fuel energy. If the duct burner fuel energy were expressed as a fraction of the input exhaust gas energy, the performance effect would be multiplicative.

Additive changes in performance are computed by adding the increment in performance calculated from the performance curves to the reference performance value. For example, the performance increment when the duct burner fires at a level equal to some value, w_f, instead of firing at the reference duct burner firing level is:

  Performance Increment = Curve_DB(w_f) - Curve_DB(0)                                                 (1.5)

where Curve_DB(w_f) is the value from the HRSG performance versus duct burner firing curve (Figure 1-8) at the x-axis value equal to w_f. The rated HRSG performance occurs at a duct burner firing level equal to the reference value, which equals zero. Thus, Curve_DB(0) is the HRSG performance at the reference duct burner firing level. If the duct burner in the example HRSG fires at a level equal to 200 mmBtu/hr when all other operating conditions remain at their reference values, the expected HP steam flow would be:
  W_HP,exp(200) = W_HP,rated + {Performance Increment}
                = W_HP,rated + {Curve_DB(200) - Curve_DB(0)}
                = 511.7 + (704 - 511.7)
                = 704 klb/hr

1.4.4 Expected Performance from Curves

In summary, the expected equipment performance at actual operating conditions can be calculated from a set of performance curves of the equipment performance versus equipment operating conditions by the following formula.

Expected Performance from Performance Curves:

  Performance_exp = Performance_rated x Π [CurveValue(actual conditions)/CurveValue(reference conditions)]
                    + Σ [CurveValue(actual) - CurveValue(reference)]                                  (1.6)

where

Performance_exp is the expected equipment performance at the actual operating conditions, if the equipment performs with rated capability

Performance_rated is the expected or rated equipment performance at the reference operating conditions (expected equals rated at the reference operating conditions)

Π is a mathematical operator indicating that all the terms in the following brackets are to be multiplied together, one term for each performance curve, until terms from all the performance curves are included in the final product

Σ is a mathematical operator indicating that all the following terms are to be added together, one term for each additive performance increment, until terms from all the additive performance increments are included in the final sum

CurveValue(i) is the value read from the performance curve at operating condition i

Note that the above formula can be used to predict the performance at any set of operating conditions when the performance is known at any other set of operating conditions. Simply define the reference conditions to be equal to the operating conditions where the performance is known, and the rated performance to be equal to that known performance value.

Predicted Performance at Conditions (1) Given Test Performance at Conditions (2):

  Performance(1) = Performance(2) x Π [CurveValue(conditions 1)/CurveValue(conditions 2)]
                   + Σ [CurveValue(conditions 1) - CurveValue(conditions 2)]                          (1.7)

where

Performance(1) is the predicted equipment performance at the operating conditions (1) if the equipment performs with the same capability as the known or test performance at conditions (2)

Performance(2) is the known or test equipment performance at the operating conditions (2)

Π is a mathematical operator indicating that all the terms in the following brackets are to be multiplied together, one term for each performance curve, until terms from all the performance curves are included in the final product

Σ is a mathematical operator indicating that all the following terms are to be added together, one term for each additive performance increment, until terms from all the additive performance increments are included in the final sum

CurveValue(i) is the value read from the performance curve at operating condition i

If the example HRSG is operated at an inlet exhaust gas temperature of 1100 F and an inlet gas flow rate of 2800 klb/hr, and with the duct burner consuming 200 mmBtu/hr of fuel, the expected HP steam flow rate would be:

  W_HP,exp = W_HP,rated x [Curve_T(1100)/Curve_T(1135)] x [Curve_W(2800)/Curve_W(3200)] + {Curve_DB(200) - Curve_DB(0)}
           = 511.7 x (480.6/511.7) x (448.5/511.7) + (704 - 511.7) ≈ 613 klb/hr                       (1.8)
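A minimal sketch of this combined multiplicative-plus-additive prediction (Equation 1.6) is shown below, using only the curve points quoted in the text; the function and variable names are ours, not from any monitoring product.

```python
# A sketch of the general curve-based prediction in Equation 1.6: multiplicative
# factors for each normal performance curve plus additive increments for curves
# such as duct burner firing. Only the curve points quoted in the text are
# included, so the look-ups below are exact table values rather than full curves.

def expected_performance(rated, multiplicative, additive):
    """multiplicative, additive: lists of (lookup, actual, reference) tuples."""
    value = rated
    for lookup, actual, ref in multiplicative:
        value *= lookup(actual) / lookup(ref)     # fractional change per curve
    for lookup, actual, ref in additive:
        value += lookup(actual) - lookup(ref)     # additive increment per curve
    return value

curve_T  = {1100.0: 480.6, 1135.0: 511.7}.get     # HP steam flow vs exhaust temperature
curve_W  = {2800.0: 448.5, 3200.0: 511.7}.get     # HP steam flow vs exhaust flow
curve_DB = {0.0: 511.7, 200.0: 704.0}.get         # HP steam flow vs duct burner mmBtu/hr

w_hp = expected_performance(
    511.7,
    multiplicative=[(curve_T, 1100.0, 1135.0), (curve_W, 2800.0, 3200.0)],
    additive=[(curve_DB, 200.0, 0.0)])
print(round(w_hp, 1))    # about 613 klb/hr, matching Equation 1.8
```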
1.4.5 Correction Factors

The traditional method to account for operational and environmental effects on equipment performance is the correction factor method. This method was developed by equipment vendors to enable their customers to predict the performance of the vendor's equipment at various operating conditions, and to avoid the need to provide the physically based computer models of equipment performance from which the curves are usually derived.

The methodology is to apply independent correction factors for each operational and environmental effect. For example, gas turbine base-load power is known to be dependent upon inlet air temperature, inlet air pressure, inlet air humidity, inlet pressure loss, exhaust pressure loss, steam injection rate, water injection rate and fuel type. The gas turbine vendor would rate the engine power at a given set of these conditions, and provide correction curves for each of these operational and environmental effects. Each correction curve would quantify the change or percent change in engine performance that would result when the given operational or environmental condition changes. The basic assumption of this curve-based methodology is that the individual operating conditions have independent impacts on equipment performance. This means that the total impact on performance can be computed by combining the individual parameter impacts.

A correction curve is simply a normalized performance curve. The equipment output parameter (Y-axis on the performance curve) value is divided by its rated value. This forces the Y-axis value to equal unity (1.0) at the X-axis value equal to the reference value. The advantage of correction curves is that the value read directly from the plot is equal to the correction factor needed to predict performance at the reference conditions given performance at some other operating condition. There is no need for the user to divide by the rated performance value to obtain a correction factor. The disadvantage is that the absolute value of performance is not available from the curve.

Figure 1-9 Correction factor curve for the effect of exhaust gas temperature on HP steam flow; this curve is equal to the curve in Figure 1-5 divided by the rated HP steam flow

Correction factors are defined as the fractional change in performance from rated when an operational condition changes from the reference conditions. They are often used in performance testing to predict the equipment performance at the reference conditions when the performance was measured at conditions other than the reference conditions. This predicted performance at reference conditions is called the corrected performance.

Expected Performance at Test Conditions from Correction Factors:

  Performance_exp = Performance_rated x Π CorrectionFactors + Σ AdditiveCorrections                   (1.9)

where
Π is a mathematical operator indicating the product of all the following terms (each term multiplied by the next term)

Σ is a mathematical operator indicating the sum of all the following terms (each term added to the next term)

Performance_rated is the expected or rated performance at the reference operating conditions

Performance_exp is the expected performance at the test operating conditions if the equipment performs with rated capability

CorrectionFactors are the values read from the correction curves at the test operating conditions

AdditiveCorrections are the values read from the additive correction curves at the test operating conditions

The correction factor curves, just like performance curves, can be used to predict equipment performance at any operating condition given the performance at one other operating condition. The formula for the predicted equipment performance at operating conditions (1), given known performance at operating conditions (2), is below.

Predicted Performance at Operating Conditions (1):

  Performance(1) = Performance_rated x [Π CorrectionFactors(1) - Π CorrectionFactors(2)] + Performance(2)
                   + Σ AdditiveCorrections(1) - Σ AdditiveCorrections(2)                              (1.10)

where

Performance(1) is the predicted performance at operating conditions (1), if the equipment performs with the same capability as the known or test performance at operating conditions (2)

Performance(2) is the known or test performance at operating conditions (2)

Performance_rated is the rated performance at the reference operating conditions

CorrectionFactors(1) are the values read from the correction curves at operating conditions (1)

CorrectionFactors(2) are the values read from the correction curves at operating conditions (2)

AdditiveCorrections(1) are the values read from the additive correction curves at operating conditions (1)

AdditiveCorrections(2) are the values read from the additive correction curves at operating conditions (2)

Because the correction factor curves are based upon a rated performance value at reference operating conditions, the prediction of performance at some operating conditions (1) requires knowledge of both the rated performance and the performance at operating conditions (2).

1.4.6 Percent Change Correction Factors

Sometimes the variations in equipment performance with operating conditions are presented as a percentage change in performance versus the percentage change in the operating condition. These curves are fully normalized performance curves where the y-axis is equal to the change in equipment performance (equipment performance minus the rated performance) divided by the rated performance, and the x-axis is equal to the change in the operating condition (current operating condition minus the reference operating condition) divided by the reference condition. An example of such a performance curve is shown in Figure 1-10.

Figure 1-10 An HRSG percent change correction curve for the HP steam flow versus exhaust gas temperature

The use of these percentage change correction curves is essentially the same as that for correction curves, except that the correction factor must be calculated from the curve look-up value in the following manner:

  CorrectionFactor = 1 + ValueOnPercentageCorrectionCurve / 100                                       (1.11)
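A minimal sketch of the correction-factor form (Equations 1.9 and 1.11) is given below. The example factor values are assumed; they are chosen only to roughly correspond to the exhaust-temperature and exhaust-flow changes used earlier.

```python
# A sketch of the correction-factor method. Correction curves are normalized
# performance curves, so each factor equals 1.0 at its reference condition.
# The numerical factor values below are assumed for illustration.

def expected_at_test_conditions(rated, correction_factors, additive_corrections=()):
    """Equation 1.9: expected performance at the test conditions."""
    value = rated
    for cf in correction_factors:
        value *= cf
    return value + sum(additive_corrections)

def factor_from_percent_change(percent_change):
    """Equation 1.11: convert a percent-change curve reading to a correction factor."""
    return 1.0 + percent_change / 100.0

# Example: about -6.1% for the exhaust-temperature change and -12.3% for the
# exhaust-flow change, applied to the 511.7 klb/hr rating (assumed readings)
cf_T = factor_from_percent_change(-6.1)
cf_W = factor_from_percent_change(-12.3)
print(round(expected_at_test_conditions(511.7, [cf_T, cf_W]), 1))
```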
1.5 Model-Based Performance Analysis

The correction curves that equipment vendors supply to customers are based upon physically based computer models of the equipment performance, and are a convenient way for vendors to transmit the results of complex computer analysis to their customers. However, in this time of powerful computers on every desk, it is not necessary to simplify the analysis into a few curves. Why not use the computer codes directly to calculate expected and corrected performance?

Computer software programs like GateCycle™, Pepse™ and GTMaster™ contain complex, physically based models of equipment performance that can be used in place of correction curves. In fact, equipment vendors often use these computer codes to create the correction curves. Some advantages of using the computer codes (model-based analysis) instead of performance or correction curves are listed below:

* Interaction of varying operating conditions can be modeled.
* Physically based models can allow wide variations (far from reference) in operating conditions.
* Physically based models can compute impacts of parameters for which no curves are available.
* Physically based models give detailed information about the expected performance that is not available from curves. This additional information may help the engineer diagnose problems.

The individual equipment operating conditions may not have independent effects on equipment performance, which is an assumption of the curve-based method. In other words, the assumption that the overall effect of changes in all the operating conditions can be computed by multiplying the correction factors together may not be valid over a wide range of operating conditions. As long as each correction factor is near unity, the method works very well; but when correction factors get far from 1.0, their product may not represent the true performance change in the equipment.

Computer models can handle wide variations in environmental parameters and operational modes for which curves do not exist or which curves do not model accurately. In particular, as conditions change over a broad range, the interactions between environmental parameters become more and more important, and computer codes are often built specifically to handle these interactions.

Computer models can compute corrections for parameters for which the vendor may not have supplied correction curves. For example, if the gas turbine uses varying amounts of water injection or switches from natural gas to oil fuel, the exhaust gas composition will change. These changes are handled directly by computer models, but are seldom accounted for in correction curves.

The methodology for model-based performance analysis is (a schematic sketch follows the list):

1. Build a computer model of the equipment being monitored. The procedure to build such a model is specific to the software used.
2. Test the model versus vendor guarantee data and/or plant-measured data over a wide range of operating conditions. Correct the model where necessary.
3. At each performance-monitoring calculation interval, input the measured equipment operating conditions into the model.
4. Run the model and obtain the expected equipment performance as a model output.
5. Evaluate degradation by comparing the expected performance from the model to the measured performance.
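The sketch below shows steps 3 through 5 of this loop in schematic form. The run_hrsg_model function is a hypothetical stand-in for whatever heat-balance code is used; the linear surrogate inside it and the measured values are purely illustrative, not output from any vendor tool.

```python
# A schematic sketch of the model-based monitoring loop (steps 3 to 5 above).
# run_hrsg_model stands in for an off-design heat-balance model run; the linear
# surrogate and the measured tag values below are illustrative assumptions.

def run_hrsg_model(conditions):
    """Placeholder for an off-design model run returning expected performance."""
    return {"hp_steam_klbhr": 511.7
            + 0.89 * (conditions["exhaust_temp_F"] - 1135.0)        # assumed sensitivity
            + 0.158 * (conditions["exhaust_flow_klbhr"] - 3200.0)}  # assumed sensitivity

def evaluate_degradation(measured):
    expected = run_hrsg_model(measured)                 # steps 3 and 4
    return {name: expected[name] - measured[name]       # step 5: expected minus measured
            for name in expected}

measured_tags = {"exhaust_temp_F": 1100.0, "exhaust_flow_klbhr": 2800.0,
                 "hp_steam_klbhr": 408.0}               # assumed plant data
print(evaluate_degradation(measured_tags))              # positive value indicates degradation
```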
The following figures illustrate the use of physically based computer code analysis for the example heat recovery steam generator used in the section on performance curves. The computer model in Figure 1-11 was constructed to replicate the actual steam/water flow path in an existing HRSG. The actual surface areas of the tube bundles were obtained from vendor information and input into the computer model. Then the design-point heat transfer coefficient in each tube bank was adjusted so that the model prediction matches the rating specification for the HRSG. This resulted in a design-point model of the HRSG.

Figure 1-11 Computer model (screen from GateCycle™ computer code) of the example HRSG at reference operating conditions

Next, the predictions (off-design mode in GateCycle™) of the HRSG model were compared to vendor warranty data over a range of operating conditions to verify model accuracy. If necessary, corrections were made to the design-point model, and the verification process was repeated until the predictions of the HRSG model matched the vendor data to within one percent over the entire operating range of the HRSG. The resulting model can then be used to give predictions of the expected performance of the HRSG as operating conditions change.

Figure 1-11 shows the output of the model when the reference operating conditions from Table 1-12 are input to the computer model. Notice that the model predicts the rated steam flows to three digits of accuracy or better. In addition to predicting the rated steam flows, the model also computes the complete temperature distributions within the HRSG. Figure 1-12 is a plot of these temperature distributions.

Figure 1-12 Temperature profile (temperature, F versus heat transfer, Btu/hr) from GateCycle™ for the example HRSG at reference conditions

The upper straight line in Figure 1-12 is the exhaust gas temperature as the gas goes from the HRSG inlet to the stack. The lower set of straight lines is the corresponding steam/water temperature distribution. There is one steam/water straight line for each tube bundle modeled in the HRSG. The computer code only predicts the inlet and outlet conditions for each tube bundle, and does not predict temperature distributions within a tube bundle. A straight line is drawn from one predicted point to another. Notice on the left-hand side of the plot (at zero on the x-axis) the gas enters at a temperature of 1135 F, where the corresponding steam temperature is 1000 F. On the right-hand side of the plot the gas exit (stack) temperature is 206 F, while the inlet feedwater temperature is 136 F. The closer the gas exit temperature is to the feedwater inlet temperature, the higher the effectiveness of the HRSG.

Figure 1-13 Model-based prediction (from GateCycle™) of the HRSG performance at exhaust gas temperature equal to 1100 F and exhaust gas flow equal to 2800 klb/hr

Figure 1-13 shows the predicted HRSG performance when the exhaust gas inlet temperature and flow rate are changed to 1100 F and 2800 klb/hr respectively. Notice that when using model-based analysis, all operating conditions are input to the model and the output accounts for changes in all the operating conditions at once. Interactions between the inputs can be predicted only if all changes in operating conditions are input to the model.
Figure 1-13 shows the model-based prediction of HP steam flow and effectiveness. How do these compare to the curve-based method? Table 1-13 below compares the model-based results to the curve-based results for the situation where the exhaust gas temperature and flow change from 1135 F and 3200 klb/hr to 1100 F and 2800 klb/hr respectively.

Table 1-13 Comparison of results between curve-based and model-based methods for a change in HRSG inlet conditions

  Predicted Performance after exhaust gas
  flow & temperature change                   Curve-Based Method   Model-Based Method
  HP Steam Flow (klb/hr)                      420                  421
  HRSG Effectiveness (%)                      92.8                 92.9

Thus, the curve-based and the model-based methods yield approximately equal predicted performance values when exhaust gas temperature and flow change over a relatively narrow range. Changes in exhaust conditions of this size could be expected to occur as a result of ambient temperature changes on the order of 40 F.

Figure 1-14 Predicted HRSG performance (from GateCycle™) when exhaust gas temperature, exhaust gas flow, and duct burner fuel flow all change from reference

Now let's add a significant change in duct burner firing level, from zero at the reference conditions to the maximum possible for this HRSG (200 mmBtu/hr), and once again compare the predictions of the curve-based method to the model-based method.

Table 1-14 Comparison of results between curve-based and model-based methods for the high duct-burner firing situation

  Predicted Performance after exhaust gas flow &
  temperature & duct firing level all change      Curve-Based Method   Model-Based Method
  HP Steam Flow (klb/hr)                          613                  614
  HRSG Effectiveness (%)                          94.5                 95.1

Notice that the differences between the curve-based method and the model-based method begin to become important, at least for effectiveness, as the changes in operating conditions get larger. Since the curves for this example were calculated from the model, all of the differences in calculated results are due to the simplifying assumptions inherent in the curve-based method. In other words, the curve-based method is based upon the model, and is a simplification of the model that makes it possible to predict performance without needing to run the computer code.

2. Heat Balance Analysis

2.1 Overview

The term heat balance analysis in the context of performance monitoring describes the application of mass and energy balance equations to model power plant systems with the objective of determining detailed thermodynamic properties of the operating system. The heat balance analysis process takes measured data as input and outputs a complete set of thermodynamic data for each flow stream in the model, including both the measured data and data that were not measured. Heat balance analysis, in this context, is not a prediction of plant or equipment performance; it is instead a process of matching a mass and energy balance model of the system as closely as possible to measured data.

The objective of heat balance analysis for performance monitoring is to obtain a complete set of actual operating data for the system. This operating data will be used in subsequent equipment evaluations to determine the degradation in the plant equipment. Therefore, the heat balance analysis should not make assumptions about the performance capability of the equipment being modeled.
Mass and energy balances are true no matter what the performance capability of the system, and therefore do not require any assumptions about performance capability. Thus, mass and energy balance calculations are appropriate for heat balance analyses when applied to performance monitoring, but assumptions about the efficiency of a turbine or the effectiveness of a heat exchanger are not.

One often hears the term heat balance code used to refer to software programs that predict power plant performance. Heat balance codes typically perform mass and energy balance calculations; in addition, they often incorporate models of the physical characteristics of the equipment and apply them to predict the equipment performance over a range of operating conditions. A commercial heat balance code may or may not be useful for the type of heat balance analyses used in performance monitoring that is described in this chapter. In order for a commercial heat balance code to be appropriate, it must allow the user to input measured plant data as opposed to parameters of equipment performance. When matching a heat balance code to measured data it is generally correct and necessary to input values such as temperatures, flow rates, pressures and power levels. If the heat balance code requires the user to input values for isentropic efficiency or heat exchanger effectiveness, it may be making implicit assumptions about equipment degradation that negate the purpose of the heat balance analysis.

2.2 Local Heat Balances

The application of mass and energy balance calculations (also called simply "heat balances" in this book) is a method to infer data that is not or cannot be measured, and can also be used to improve the accuracy of existing power plant measured data. Heat balances are an essential component of any on-line performance-monitoring system, because plants rarely have sufficient instrumentation to adequately assess equipment performance. It is often necessary to infer data at locations where measurements are not normally made. Such locations include the extraction steam flow rates to feedwater heaters, the power generated in each steam turbine stage group, the exhaust flow from a gas turbine, the cooling water flow in the condenser, and the boiler flue gas properties. Heat balance analysis can often provide accurate estimates of data values at these locations. Some plant locations have more instrumentation than needed to determine a data value (redundant measured data). Heat balance analysis can verify and possibly improve the accuracy of redundant measured data.

Performance monitoring is a process where operating information (measured data) from the plant is processed so as to make judgments or reach conclusions about the current capabilities of the plant. Mass and energy balances are a way to add information about the plant. These balance equations should be based on as few assumptions as possible about the performance capability of the plant, and should instead establish relationships between the measurements that are always true. For example, heat balance equations should not contain assumptions about the heat transfer coefficients in heat exchangers or the efficiency of rotating equipment. Of course, some assumptions about the operational capability of the power plant must always be made. A mass balance equation for a pipe will generally assume that all of the water entering a pipe flows out the other end of the pipe.
This is equivalent to assuming that there is no leak between the inlet and the outlet of the pipe, and that the pipe is operating in steady state. The basic mass and energy balance equations for a system are stated below:

  Rate of Storage of Mass = Mass Inflow Rate - Mass Outflow Rate                                      (2.1)

  Rate of Storage of Energy = Energy Inflow Rate - Energy Outflow Rate                                (2.2)

Power plant performance monitoring is almost always done at steady-state conditions, where the system is not changing with time. At steady state, the storage terms in the above equations are equal to zero and the balance equations reduce to setting the inflow rates equal to the outflow rates.

Figure 2-1 Control volume for performing heat balance analysis

To perform a power plant heat balance analysis, an engineer first selects the area of the power plant around which the mass and energy balances are to be performed. The selected area is called the control volume. The balance equations include only the mass and energy flows into and out of the control volume: that is, flows which cross the control volume boundary. Flows that stay inside of or outside of the control volume are not part of the analysis. This makes the selection of the control volume very important. If a flow value is not measured or is measured inaccurately, it may be possible to select a control volume that completely surrounds or excludes the unknown flow and thereby eliminates the flow from the analysis. Any flow or energy transfer that crosses the boundary of the control volume is included in the balance equations; no other flows appear in the balance equations.

In Figure 2-1 a sample control volume, which looks something like an HRSG, is shown with arrows indicating the flows of mass and energy into and out of the control volume. The labels on the arrows indicate the type of flow at the location of the arrow:

Table 2-1 Types of flow associated with the arrows on the sample control volume in Figure 2-1

  Label        Type of Flow
  IN           Exhaust Gas In
  STACK        Exhaust Gas Out
  PH1          Preheat Water In
  PH2          Preheat Water Out
  FW           Feedwater
  HP           HP Steam
  IP           IP Steam
  LP           LP Steam
  RH           Cold Reheat Steam
  HRH          Hot Reheat Steam
  Power_pump   Power to Pump(s)
  E_loss       Heat Energy Lost

The mass balance equation for the sample HRSG control volume is as follows:

  Mass Inflow Rate = Mass Outflow Rate

  w_IN + w_PH1 + w_FW + w_RH = w_STACK + w_PH2 + w_HRH + w_HP + w_IP + w_LP                           (2.3)

where w is the mass flow rate at the location specified in the subscript. Notice that the power (Power_pump) and heat transfer energy (E_loss) terms do not appear in the mass balance because they do not involve the movement of mass across the boundary of the control volume.

The energy balance equation for the HRSG control volume is given below.

  Energy Inflow Rate = Energy Outflow Rate

  w_IN*h_IN + w_PH1*h_PH1 + w_FW*h_FW + w_RH*h_RH + Power_pump
    = w_STACK*h_STACK + w_PH2*h_PH2 + w_HRH*h_HRH + w_HP*h_HP + w_IP*h_IP + w_LP*h_LP + E_loss        (2.4)

where h refers to the enthalpy of the fluid at the location specified in the subscript. The term for power input to the system (Power_pump) only appears in the energy balance equation for the HRSG if a pump or some other power-consuming device is located within the boundary of the control volume and the electric current to power the pump crosses the boundary of the control volume.
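A minimal sketch of these two control-volume balances, written as steady-state residual checks, is shown below. Streams are keyed by the Table 2-1 labels; the helper names are ours, not from the text.

```python
# A minimal sketch of Equations 2.3 and 2.4 written as steady-state residual checks.
# Each stream is a (mass flow, enthalpy) pair keyed by its Table 2-1 label.

def mass_residual(streams_in, streams_out):
    """Equation 2.3: total inflow minus total outflow (zero with no leaks at steady state)."""
    return (sum(w for w, _h in streams_in.values())
            - sum(w for w, _h in streams_out.values()))

def energy_residual(streams_in, streams_out, power_in=0.0, heat_loss=0.0):
    """Equation 2.4: energy in (including pump power) minus energy out (including E_loss)."""
    e_in = sum(w * h for w, h in streams_in.values()) + power_in
    e_out = sum(w * h for w, h in streams_out.values()) + heat_loss
    return e_in - e_out

# Inflows are IN, PH1, FW, RH; outflows are STACK, PH2, HRH, HP, IP, LP, e.g.
# mass_residual({"IN": (3200e3, 290.0), ...}, {"STACK": (3200e3, 55.0), ...})
```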
To choose whether or not to include a pump within the HRSG control volume, consider where the feedwater temperature is measured. If the feedwater temperature is measured after the pump, the control volume can exclude the pump, so that the temperature into the control volume is the temperature downstream of the pump. If the feedwater temperature is only measured before the pump, it is better to establish the control volume so that it includes the pump, so that the feedwater temperature into the control volume is the temperature upstream of the pump.

The energy transferred out (E_loss) is due to heat transfer to the environment. If the energy lost to the environment were due to a leak, an additional mass flow term would be required to account for the leakage flow. The remaining terms in the energy balance equation are the same as the terms in the mass balance equation, except that each flow rate is multiplied by an enthalpy (h), which is the energy content per unit mass.

The mass balance equation (Equation 2.3) includes both gas and steam/water flows. It may be convenient to develop separate relationships for the gas and steam/water flows. To obtain a relationship including only the gas flows, set up a control volume which includes only the gas (shell) side of the HRSG. The boundary of this control volume would be along the outside of the pressurized tubes and drums in which the steam/water flows and along the inside of the metal walls that contain the gas flow within the HRSG. For this gas-side control volume the mass balance is:

  Mass Inflow Rate = Mass Outflow Rate

  w_IN = w_STACK                                                                                      (2.5)

The energy balance for the gas-side control volume is:

  Energy Inflow Rate = Energy Outflow Rate

  w_IN*h_IN = w_STACK*h_STACK + E_loss + Q_HRSG                                                       (2.6)

where Q_HRSG is the heat transfer rate from the gas side to the steam/water side of the HRSG, also called the duty of the HRSG. These equations illustrate that changing the location of the control volume will change the balance equations, and that can yield additional information from the analyses. In this case, the additional information is that the gas flow into the HRSG must equal the gas flow out of the HRSG, and that the difference in energy of the gas from the inlet to the stack is equal to the duty of the HRSG plus the heat transferred to the environment.

There are two ways to generate the equations for the steam/water mass and energy balances. One is to mathematically subtract the gas-side balance equations from the overall HRSG balance equations. The second is to use a control volume consisting of the steam/water flow paths through the HRSG. The mass balance equation for this steam/water control volume is:

  Mass Inflow Rate = Mass Outflow Rate

  w_PH1 + w_FW + w_RH = w_PH2 + w_HRH + w_HP + w_IP + w_LP                                            (2.7)

The energy balance equation for the steam/water control volume is:

  Energy Inflow Rate = Energy Outflow Rate

  w_PH1*h_PH1 + w_FW*h_FW + w_RH*h_RH + Power_pump + Q_HRSG
    = w_PH2*h_PH2 + w_HRH*h_HRH + w_HP*h_HP + w_IP*h_IP + w_LP*h_LP                                   (2.8)

These heat balances around an HRSG illustrate how an engineer can use heat balances to add information about the system. For example, if all the steam/water flows and temperatures are measured and the gas inlet and outlet temperatures are also measured, the HRSG heat balance equations can be used to calculate both the duty of the HRSG and the gas turbine exhaust gas flow rate.
To perform these calculations, first calculate the duty using the steam/water and pump power measurements:

  Q_HRSG = w_HRH*h_HRH - w_RH*h_RH + w_PH2*h_PH2 - w_PH1*h_PH1
           + w_HP*h_HP + w_IP*h_IP + w_LP*h_LP - w_FW*h_FW - Power_pump                               (2.9)

Then, use the gas-side energy balance (Equation 2.6) to get the exhaust gas flow rate:

  w_IN = (E_loss + Q_HRSG) / (h_IN - h_STACK)                                                         (2.10)

The energy loss to the environment, E_loss, is difficult to measure, and therefore is frequently estimated (assumed) to equal approximately 2% of the HRSG duty (heat transfer rate). In this case the gas turbine exhaust gas flow rate is:

  w_GT = 1.02 * Q_HRSG / (h_IN - h_STACK)                                                             (2.11)

Local heat balance analysis, when applied as in this example for an HRSG, adds useful information for performance analysis. The input/output methods presented in several of the ASME Performance Test Codes are developed from local mass and energy balances.

One of the primary difficulties with the application of local heat balance analysis to performance monitoring is the number of measurements required and the resulting accuracy of those measurements when obtained from available plant measured data. The calculation of the duty (Q_HRSG) of an HRSG from a complete energy balance requires the measurement of every inlet and outlet steam/water flow, temperature and pressure.
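A minimal sketch of the duty and exhaust-flow calculation in Equations 2.9 and 2.11 is given below, assuming the flows and enthalpies have been assembled into dictionaries keyed by the Table 2-1 labels; the function names are ours, not from the text.

```python
# A sketch of Equations 2.9 and 2.11: the HRSG duty from steam/water-side
# measurements, then the gas turbine exhaust flow from the gas-side temperature
# drop, with the environmental loss taken as 2% of the duty.

def hrsg_duty(w, h, power_pump):
    """Equation 2.9: heat transferred from the gas side to the steam/water side."""
    return (w["HRH"] * h["HRH"] - w["RH"] * h["RH"]
            + w["PH2"] * h["PH2"] - w["PH1"] * h["PH1"]
            + w["HP"] * h["HP"] + w["IP"] * h["IP"] + w["LP"] * h["LP"]
            - w["FW"] * h["FW"] - power_pump)

def exhaust_gas_flow(q_hrsg, h_gas_in, h_gas_stack, loss_fraction=0.02):
    """Equation 2.11: w_GT = (1 + loss_fraction) * Q_HRSG / (h_IN - h_STACK)."""
    return (1.0 + loss_fraction) * q_hrsg / (h_gas_in - h_gas_stack)
```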
The solution of this set of equations requires a level of mathematical sophistication beyond the scope of one-line expressions used for calculated tags in a control system or a plant data historian, Instead, the solution of an overall heat balance analysis usually requires the sophistication of a mathematical equation solver or a commercial heat-balance computer code. Figure 2-2 illustrates the concept of using multiple control volumes to model a combined-cycle plant system. The plant has been simplified for the purposes of this discussion, but enough detail is included to illustrate the concepts of overall plant heat balance analysis. page 70 Looking at the outermost control volume (the set of dotted lines that surround the plant) and applying conservation of energy, one can see that: Condenser Duty=Fuel Energy -GT Power Stack Energy -ST Power (2.12) Thus, the condenser duty can be computed from the fuel energy, plant power and the stack energy loss. However, the calculation of the stack energy loss requires knowledge of the stack temperature and the gas turbine exhaust gas flow rate. The gas turbine exhaust flow could be computed from the HRSG energy balance described above (Equations 2.9 and 2.11), but it can also be computed without the use of measured steamv/water flows by applying a gas turbine energy balance: Fuel Energy = Powergr + GT Exhaust Gas Energy = Powercr + Wes * heh and solving for the exhaust gas flow rate, Wen = (Fuel Energy - Powercr) / hea (2.13) This expression enables one to compute the exhaust gas flow rate from the measured gas turbine fuel flow, gas turbine power and gas turbine exhaust temperature if the specific heat or enthalpy of the exhaust gas is known as a function of temperature. The beauty of these relationships is that the condenser duty and exhaust gas flow rate (two quantities that cannot be directly measured) can be calculated from measurements that are normally acceptably accurate in a combined-cycle power plant; namely fuel flow, power, and gas temperatures, page 71 2. Heat Balance Analysis Stack Power Figure 2-2 Control volumes for Combined-Cycle Overall Plant Heat Balance Analysis Those familiar with gas turbine heat balances may be saying, “This method looks easy enough, but I know that a gas turbine heat balance is complicated. You need to guess the air flow rate in order to get the exhaust gas properties.” ‘The gas turbine heat balance is an iterative process that is outlined in Table 2-2: Several methods to solve the gas turbine heat balance are detailed in Chapter 7 “Gas Turbine Performance”. 2. Heat Balance Analysis page 72 Table 2-2 Overall Gas Turbine Heat Balance Analysis Procedure Gas Turbine Heat Balance Iteration To Get Exhaust Flow 1. | Guess the inlet airflow. 2. | Compute the air/fuel ratio, or use a chemical mass balance to get the exhaust gas composition. 3. | Get the exhaust gas specific heat or enthalpy from table look-up or gas properties routine. 4. | Compute the exhaust gas flow rate from Equation 212, 5. _ | Compare the exhaust gas flow minus the fuel flow to the guessed inlet air flow. If the two air flows (from steps 1. and 5.) are not equal go back to step 1 Such an iterative solution process is probably too complex to implement as a calculated tag in a control system or daia historian, and can even be very challenging to implement in a spreadsheet environment. Note, too, that this is just considering the gas turbine heat balance. 
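For readers who want to see how compact this iteration becomes once it leaves the calculated-tag environment, the following is a minimal Python sketch of the Table 2-2 procedure. The exhaust-gas specific-heat correlation is only a placeholder for a proper gas-properties routine, the fuel energy is assumed to be on a heating-value basis consistent with the exhaust enthalpy reference, and all numbers and names are illustrative.

```python
# Sketch of the Table 2-2 iteration (illustrative units: lb/hr, Btu/lb, kW, deg F).
def exhaust_cp(t_avg_f, fuel_air_ratio):
    # Placeholder correlation; a real application uses a gas-properties
    # routine keyed to the exhaust composition from the chemical balance.
    return 0.245 + 1.2e-5 * t_avg_f + 0.05 * fuel_air_ratio   # Btu/(lb-F), assumed form

def gt_exhaust_flow(fuel_flow, lhv, power_kw, t_exh, t_ref=59.0, tol=1.0):
    fuel_energy = fuel_flow * lhv                  # Btu/hr
    power = power_kw * 3412.14                     # Btu/hr
    air_flow = 40.0 * fuel_flow                    # step 1: guess the inlet air flow
    for _ in range(50):
        far = fuel_flow / air_flow                 # step 2: fuel/air ratio
        h_exh = exhaust_cp(0.5 * (t_exh + t_ref), far) * (t_exh - t_ref)   # step 3
        w_exh = (fuel_energy - power) / h_exh      # step 4: gas turbine energy balance
        new_air = w_exh - fuel_flow                # step 5: implied inlet air flow
        if abs(new_air - air_flow) < tol:
            return w_exh
        air_flow = new_air                         # repeat from step 1 with the new guess
    raise RuntimeError("air-flow iteration did not converge")

w_exhaust = gt_exhaust_flow(fuel_flow=60_000.0, lhv=21_500.0,
                            power_kw=170_000.0, t_exh=1_100.0)
```

Because the exhaust-gas properties depend only weakly on the air/fuel ratio, a successive-substitution loop of this kind normally converges in a handful of passes.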
Overall plant heat balances involve many more equations and unknowns, and these unknowns must be determined by some iterative process. The rewards of overall plant heat balance are available only to those willing and able to make the leap to the iterative mathematical analyses provided by the application of heat balance computer programs and/or equation solvers.

Once the exhaust gas flow is known, the condenser duty can be calculated from the overall plant energy balance, Equation 2.12. The cooling water flow that runs through the tube side of the condenser can then be computed from an energy balance on the tube side of the condenser:

Cooling Water Flow Rate = Condenser Duty / (Specific Heat * Temperature Rise of the Cooling Water)   (2.14)

The HRSG duty can be calculated from the gas-side energy change from the HRSG inlet to the stack (Equation 2.6). The energy change of the exhaust gas is accurately indicated by the temperature measurements at the gas turbine discharge and at the HRSG stack.

Equation 2.8 is the steam/water energy balance for the HRSG. This equation includes all of the steam and water flows into and out of the HRSG, but the high-pressure steam flow is the most important of these flows. From an energy perspective, the HP steam energy represents 70% to 90% of the total HRSG heat transfer. The reheat steam flow mostly derives from this same HP steam flow, so the determination of the reheat steam flow is improved if the HP steam flow is known more accurately. Assuming that HP steam is at least three-fourths of the total steam energy, the accurate measurement of this flow is three times more important to an accurate energy balance on the HRSG than are all the other steam flows combined. In other words, a 1% error in the measurement of this HP steam flow will cause the same size HRSG energy balance error as a 3% or 4% measurement error in either the IP or LP steam flows. Therefore, a modeling technique that improves the precision of the HP steam flow measurement is valuable, even if the IP and LP steam flow rates still must come from measurements.

The HRSG steam/water side energy balance, Equation 2.8, can be solved for the HP steam flow, resulting in:

w_HP = {w_CRH*h_CRH + w_FW*h_FW + w_CND*h_CND + Power_pump + Q_HRSG - w_HRH*h_HRH - w_EXT*h_EXT - w_IP*h_IP - w_LP*h_LP} / h_HP   (2.15)

All the terms on the right-hand side of this equation must be measured except the HRSG duty (Q_HRSG), which comes from the overall plant energy balance plus the HRSG gas-side energy balance. The beauty of the method is that the heat balances used to get the HRSG duty do not involve measured flows (except plant fuel flow). If measured power levels, temperatures and pressures are accurate to 2%, the HRSG duty will be accurate to 2%. Since the energy contribution of the other steam flows (IP and LP) is small relative to Q_HRSG, even a 4% measurement error in these other steam flows will not add much to the uncertainty in the HP steam flow, w_HP, and the uncertainty of the HP steam flow will still be on the order of 2%.

The HP steam flow that is calculated from the heat balance is called the heat-balance value of HP steam flow. This heat-balance value can be compared to the measured value of HP steam flow. The comparison can serve either as a confirmation of the heat balance analysis or, more likely, as a diagnostic on the accuracy of the measured HP steam flow.
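As a concrete illustration of Equation 2.15, the short sketch below evaluates the heat-balance value of the HP steam flow. The stream names, flow rates and enthalpies are illustrative placeholders; in practice the enthalpies come from steam tables at the measured temperatures and pressures, and the duty comes from the gas-side balance as described above.

```python
# Sketch of Equation 2.15 (illustrative flows in lb/hr, enthalpies in Btu/lb).
def hp_steam_flow(q_hrsg, pump_power, inflows, other_outflows, h_hp):
    """inflows / other_outflows: {name: (flow, enthalpy)}; the HP steam
    outflow itself is excluded and solved for."""
    energy_in = q_hrsg + pump_power + sum(w * h for w, h in inflows.values())
    energy_out = sum(w * h for w, h in other_outflows.values())
    return (energy_in - energy_out) / h_hp

w_hp = hp_steam_flow(
    q_hrsg=675e6,                                   # from the gas-side balance, Eq. 2.6
    pump_power=2.0e6,
    inflows={"cold_reheat": (350e3, 1305.0),
             "feedwater":   (500e3,  228.0),
             "condensate":  (120e3,  180.0)},
    other_outflows={"hot_reheat": (380e3, 1520.0),
                    "extraction": ( 10e3, 1190.0),
                    "ip_steam":   ( 60e3, 1450.0),
                    "lp_steam":   ( 55e3, 1280.0)},
    h_hp=1490.0)                                    # HP superheater outlet enthalpy
```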
Heat Balance Analysis page 74 Now that the steam flow into the steam turbine is known, we can get the steam turbine discharge steam enthalpy by using heat balances on the control volume surrounding the steam turbine. We can then estimate the overall steam turbine efficiency by using the enthalpy-drop method In summary, overall plant heat balances for combined-cycle power plants ‘can add data thiat were not measured and improve the accuracy of selected measured data. The additional data added by the heat balance analysis are essential for the expected equipment performance evaluations that follow the heat balance analysis. age 75 2, Heat Balance Analysis, Table 2-3 Summary of Overall Plant Heat Balance Inputs and Outputs for ‘Combined-Cycle Power Plant Overall Plant Heat Balance Analysis for Combined-Cycle Plants Required Measurements Calculated from Heat Balance Ambient Conditions: Temperature, Pressure and Humidity Condenser Duty Steam Turbine Power Condenser Cooling Water Flow Fuel Input Energy Gas Turbine Exhaust Gas Flow Fuel Chemical Composition and Heating Value Gas Turbine Exhaust Enthalpy Gas Turbine Exhaust Temperature Firing Temperature of Gas Turbine Compressor discharge temperature and pressure First Stage Nozzle Flow Area of GT Steam/Water Injection Rate to Gas Turbine | Compressor Efficiency Gas Turbine Power Expander Efficiency HIRSG Stack Temperature HRSG Duty IP Steam Flow HP Steam Flow fiom HRSG LP Steam Flow Steam Turbine Discharge Enthalpy Steam/Water Extraction Flows from HRSG (to process or deaerator heating) ‘Steam Turbine Isentropic Efficiency Preheater Water Flow Preheater Outlet Water Temperature All Steanv Water Temperatures & Pressures Condenser Cooling Water Inlet Temperature Condenser Cooling Water Outlet Temperature 2. Heat Balance Analysis page 76 2.4 Combined-Cycle Heat Balance Using Commercial Software GateCycle™ is one of several commercial “heat balance” codes that can perform a heat balance analysis of a combined-cycle power plant. Figure 2-3 shows a screen from a GateCycle™ model of a combined-cycle power plant consisting of two industrial gas turbines with HRSG’s and a single steam turbine. Figure 2-3 Model of Combined-Cycle Power Plant in GateCycle™ Software GateCycle™ and most other heat balance codes do not allow the user to input the power generation directly, but instead require inputs such as the steam turbine efficiency or exit enthalpies, which are then used to calculate the power. Since power is an important measured parameter that the heat balance analysis must match, when using GateCycle™ for heat balance analyses the user must set-up what that program calls a macro (a user- defined control calculation) that iterates upon a selected plant variable, such as LP steam turbine efficiency, until the measured power is matched. The HRSG models built using heat balance codes are typically more detailed than the measured data can support. That is, a complete heat page 77 2. Heat Balance Analysis balance analysis around the HRSG would require measured steam/water temperatures at the outlet of each economizer and superheater icon in order to have a complete set of inputs for the analysis without the need to make assumptions about the performance of any of the icons (tube banks in the HRSG). If some the economizer or superheater outlet steam/water temperatures are not measured, then the heat balance analysis must make assumptions such as the effectiveness of the superheater or the outlet subcooling of the economizer. 
‘The incorporation of assumptions such as these into the configuration of the heat balance analysis will mean that any degradation that has actually occurred in the icons with assumed performance will, according to the results of the heat balance analysis, be shifted to other equipment in the plant. For example, if the HP economizer tubes in the operating plant were to foul and reduce the effectiveness of the economizer, but the heat balance analysis was configured assuming no fouling in the economizer; then in the results from the heat balance analysis, the degradation would shift from the economizer to the evaporator. The calculated overall (or sectional) HRSG degradation would still be correct; and, as long as the performance monitoring system reports overall HRSG degradation and not the degradation for each icon in the HRSG, there will be no error in the reports from the performance monitoring system. In GateCycle™, the HRSG model can be configured so that steam generation in the HP evaporator is an input to the calculation. However, there is only cone HP steam generation that would be consistent with the measured values of IP steam flow, LP steam flow and the stack temperature. An iteration can be used to vary the input steam generation in the HP evaporator until the measured stack temperature is achieved (the IP and LP steam flows, temperatures and pressures are fixed inputs). A GateCycle™ macro is used to accomplish this iteration. The third and most complex iteration required in the application of a GateCycle™ heat balance analysis for a combined-cycle power plant is the ‘gas turbine heat balance. In the gas turbine heat balance, the measured fuel flow, compressor discharge conditions, exhaust temperature, and power are input to an iteration to determine the compressor efficiency, expander efficiency, firing temperature, exhaust gas flow rate, first stage nozzle flow area, and exhaust enthalpy. This detailed gas turbine heat balance analysis uses three GateCycle™ icons and is preferred to the overall gas turbine heat balance described as described in Table 2-2, which uses a single control volume for mass and energy balances. The detailed gas turbine heat balance yields more detailed output information than does the single control volume gas turbine heat balance analysis. 2, Heat Balance Analysis, page 78 A GateCycle™ detailed gas turbine heat analysis model (for a typical industrial or frame gas turbine configuration) is illustrated in Figure 2-4, This model includes a macro to iterate on air inlet flow rate until the user- input power is matched by the results from the heat balance model, case: ore POWER: 168.08 Gas Turbine Heat Balance Analysis Figure 2-4 Detailed Gas Turbine Heat Ealance Analysis Model in GateCycle™ Since the gas turbine heat balance is an iterative calculation and the overall plant heat balance s also iterative, itis efficient and safer (meaning there is a higher probability of convergence) to solve the two problems in separate GateCycle™ models. To do this, first run the gas turbine heat balance and then transfer the results as input data into the gas turbine (type GTDATA} icon in the overall plant heat balance model. The overall plant heat balance can then be executed without fear of the gas turbine iteration causing convergence problems in the overall plant heat balance. page 79 2. 
Heat Balance Analysis Table 2-4 Macros Required in GateCycle™ Heat Balance Analysis of Combined- Cycle Plant Macros Required in GateCycle™ Heat Balance Analysis for Combined-Cycle Plant Measured Value to Match Iteration Implemented in a Macro Overall Plant Power Heme on LP Turbine Efficiency Stack Temperature Ienite on HP Steam Generation Gas Turbine Power Iterate on Gas Turbine Airflow First Stage Inlet Flow Area (optional) | Iterate on Gas Turbine Fuel Flow 2.5 Rankine-Cycle Overall Plant Heat Balance The application of heat balance analyses to Rankine cycle power plants is often more mathematically complex than that for combined-cycle power plants because the flow streams in a Rankine plant are more interconnected than in a combined-cycle plant. In addition, there is generally little ‘measured data available inside the boiler of a Rankine cycle plant. This forces the heat balance model to make assumptions about the performance of some of the equipment in the boiler. One useful and attainable objective of Rankine cycle heat balance analysis, is to generate a complete set of thermoéynamic data about the boiler such that the boiler efficiency from the ASME loss method is equal to the boiler efficieney from the input/output method when heat balance data is used in the calculations. The result is a set of operating boiler data that is more complete than the measured data and that is consistent with the available ‘measurements. {A second objective is to improve the accuracy of the boiler fuel flow ‘measurement. The heat balance analysis requires measurement of the feedwater flow and/or the main steam flow; the fuel energy input to the system is then determined by the heat balance. The result is a fuel flow estimate that is at approximately the same accuracy as the feedwater flow measurement. A third objective of the Rankine cycle heat balance analysis is to generate input and output flow information about each piece of equipment in the plant that can be used to evaluate the degradation for each piece of 2, Heat Balance Analysis page 80 equipment. The boiler heat balance data will enable the performance of the walls, superheaters, and economizer in the boiler to be estimated. The additional information about the steam cycle (such as the flow rates and enthalpies of all flows leaving the steam turbine) will enable the degradation in the steam turbine and feedwater heaters to be evaluated. The heat balance analysis does not determine the degradation: the heat balance analysis provides a more complete set of current plant operating data so that degradation can be determined in subsequent calculations. Degradation is the difference between current or actual performance and the expected or rated performance. The heat balance analysis provides additional information about the current performance of plant equipment, and it provides input data for the expected performance prediction. The difference between the current and expected performance determines degradation. Ash Figure 2-5 Control Volumes for Heat Balence Analysis of Rankine Cycle Plant Figure 2-5 illustrates the control volumes used for heat balance analysis of Rankine cycle power plants. The diagram is simplified in that it does not show all of the flow streams, and the number of feedwater heaters has been reduced to two closed heaters and one open heater. In a complete power page 81 2. 
Heat Balance Analysis plant analysis, a separate control volume would be used for each feedwater heater in the power plant, regardless of the number of feedwater heaters. In this example, the main steam superheating has been modeled as two tube banks (SH and SH2). The primary superheater (SH) is in the radiative section of the boiler and the secondary superheater (SH2) is in the convective section of the boiler. A similar modeling convention has been used for the reheat tube banks (RH and RH2). In actual practice, it may be difficult to determine performance for each of these tube banks because of limited measured data. If the steam temperatures at the exit of each tube bank are measured, the heat balance can determine the effectiveness or cleanliness of each tube bank. If fewer measurements exist, the heat balance analysis must assume the performance of tube banks where the exit steam temperature is not known. This makes it impossible to determine degradation in those tube banks. Any degradation that actually occurs in tube banks where performarice is assumed to be as rated will appear instead as degradation in other (downstream) tube banks. Boiler heat balance analysis is discussed in more detail in Chapter 10, “Boiler Performance”. The objective of a heat balance analysis is to determine the thermodynamic properties of the mass and energy flows into and out of the control volumes ‘used to model the power plant. A set of measured data is input to the heat balance analysis and a complete set of heat balance data for the system is output. Table 2-5 summarizes the inputs and outputs for a Rankine cycle power plant heat balance. Often the number and location of the measurements available for use are such that the heat balance is mathematically over-specified in some areas of the plant. This means that there is more measured data than needed by the heat balance analysis. An example is that if the feedwater flow into the boiler (minus the desuperheating spray flows) is measured along with the main steam flow. If boiler blowdown is zero, then the heat balance equations would state that the feedwater flow must equal the main steam flow. If the measured values of the feedwater flow and the steam flow are not equal to each other, then there is no solution to the heat balance equations that matches both measurements. In this ease, the performance engineer is forced to use only one of these measurements as an input to the heat balance analysis or, alternatively, to use both measurements to improve the accuracy of the calculated results through the heat balance optimization procedure described in Chapter 10. 
2, Heat Balance Analysis page 82 ‘Table 2-5 Inputs and Outputs for Heat Balance Analysis of a Rankine Cycle Power Plant Overall Plant Heat Balance Analysis for Rankine-Cycle Plants Required Measurements Calculated from Heat Balance Ambient Air Temperature, Pressure & Humidity Boiler Efficiency Steam Turbine Power Boiler Fuel and Air Flows Plant Auxiliary Power Reheat Steam Flow Main Steam Flow to Steam Turbine ‘Steam Turbine Extraction and Discharge Flows Main Steam Desuperheat Spray Flow ‘Steam Turbine Extraction and Discharge Enthalpies Reheat Steam Desuperheat Spray Flow LP Steam Turbine Inlet Flow Main Steam Temperature and Pressure HP Turbine Power and Efficiency Hot Reheat Steam Temperature and Pressure IP turbine Power and Efficiency Cold Reheat Steam Temperature and Pressure LP turbine Power and Efficiency ‘Steam Turbine Extraction and Discharge Pressures and Temperatures Feedwater Heater Effectiveness IP Steam Turbine Exit Temperature Main Steam Superheat Effectiveness Feedwater Heater Outlet Feedwater Water ‘Temperatures Reheat Steam Superheat Effectiveness Feedwater Heater Outlet Drain Water ‘Temperatures Water Wall Effectiveness Economizer Outlet Water Temperature Economizer Effectiveness Fuel Composition Air Heater Effectiveness Fuel Heating Value ‘Air Heater Leakage Unburned Carbon in Ash ‘Condenser Cleanliness page 83 2. Heat Balance Analysis, ‘Oxygen and/or CO; at Air Heater Gas Inlet Condenser Duty ‘Oxygen and CO, at Air Heater Gas Outlet Condenser Cooling Water Flow Gas Temperature at Air Heater Inlet Flue Gas Flow and Composition Gas Temperature at Air Heater Outlet Air Temperature at Air Heater Outlet Power of Fans in Boiler Condenser Pressure Condenser Cooling Water Inlet Temperature Condenser Cooling Water Outlet Temperature 2. Heat Balance Analysis age 84 Table 2-6 Optional Measurements and Modeling Assumptions for a Rankine Cycle Power Plant Heat Balance Overall Plant Heat Balance Analysis for Rankine-Cyele Plants Optional Measurements | Modeling Assumption Required If Parameter Is Not Measured Economizer Exit Water Correlation for Economizer Temperature Heat Transfer Coefficient Secondary Main Steam Correlation for Superheater Superheater Exit Temperature | Heat Transfer Coefficient Secondary Reheat Steam Correlation for Superheater ‘Superheater Exit Temperature | Heat Transfer Coefficient Steam Turbine Extraction Use Overall Steam Turbine Temperatures Efficiency to Get Extraction Temperatures Leakage Steam Flow from HP | Seal Leakage Model to IP Turbine Radiation Loss from Boiler to | Assume a Percent Loss or Environment use Table Look-up Versus Lead Pump Inlet and Outlet ‘Assume a Pump Efficiency Temperatures & Pressures Water Wall Exit Steam Quality | Assume a Wall Exit Steam or Boiler Recirculation Ratio | Quality page 8 Heat Balance Analysis 2.6 Rankine-Cycle Heat Balance Using Commercial Software TR Figure 2-6 Model of Rankine-cycle power plant in GateCycle™ software ‘Commercial heat balance codes such as GateCycle™, PEPSE™, VirtualPlant™, and SteamPro™ can perform heat balance analysis of Rankine cycle power plants at the level of detail needed for performance monitoring applications. Figure 2-6 shows a screen from GateCycle™ displaying a sample heat balance mode! of a Rankine cycle power plant. This GateCycle™ model replicates closely the control volume schematic illustrated on Figure 2-5. 
It is important to configure the models built using commercial heat balance codes in a manner such that the inputs and outputs (see Table 2-5) are appropriate for heat balance analysis as used here. Heat balance analysis as described in this chapter is not a prediction of plant performance; it is instead a mass and energy balance of the power plant that matches the measured data. 2. Heat Balance Analysis, page 86 The use of commercial heat balance codes to perform heat balance analysis normally involves an iterative procedure where selected inputs to the heat balance code are changed until a desired measured result is achieved. For example, the steam turbine power is normally measured, but itis usually not an input to a commercial heat balance code. Thus, the user must iterate on (guess values for) an input to the code (such as LP steam turbine efficiency) until the desired result (measured value for steam turbine power) is output by the heat balance code. Most commercial heat balance codes enable the user to set up such iterations so that they are automatically executed as part of the plant model. GateCycle™ calls these macros, and PEPSE™ calls these controls: in both cases, user-specified iterations can be configured as part of the plant model such that the iterations run automatically when the heat balance code is executed.. page $7 2. Heat Balance Analysis, 2. Heat Balance Analysis page §8 3. Data Validation 3.1 Definition of Data Validation Data validation is the process by which the quality of raw input data is evaluated and improved if possible. Data validation, when applied to power plant performance monitoring, has the following objectives © Identify data that is not or cannot be correct ‘© Reject data that is in error * Provide replacement values for data that is determined to be in error Improve the accuracy of input data ‘* Determine the accuracy of the resulting input data Data validation can be viewed as a decision-making process where tests are applied to the raw input data, and individual data values are either accepted or rejected. If the raw input data is rejected, replacement values may be supplied. In either case, the quality of the resulting input data may be improved if possible. Finally, the ideal data validation system will estimate the accuracy of the resulting input data. 3.2 Range Checking 3.2.1 Static Ranges The most often-used data validation technique is range checking. Range checking is simply testing the raw data value against minimum and maximum values and rejecting the data value if itis lower than the minimum or higher than the maximum. If the data value is rejected, it may be replaced with a default value or removed from the set of data that is input to the analysis. The minimum and maximum values (ranges) used in range-checking algorithms have a direct impact on the ability of the rangé-checking algorithm to identify incorrect data. The first step in establishing the minimum and maximum values is to determine the objective of the range checking. If the objective is simply to detect and reject data from failed sensors, a minimum value lower than ary reasonable measured value and a ‘maximum value higher than any reasonable measured value can be used. This is because failed sensors usually report data values that are far from the expected measured result. That is, the failed sensor data usually goes to zero page 89 3. Duta Validation or negative, or it goes to a very high value, beyond any physically reasonable value of the quantity being measured. 
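A minimal sketch of such a static range check is shown below; the tag, limits and default value are hypothetical, and the limits are deliberately generous so that only failed sensors are rejected.

```python
# Sketch of a static range check; the limits and default value are illustrative.
def range_check(value, minimum, maximum, default):
    """Return (value_to_use, is_valid)."""
    if minimum <= value <= maximum:
        return value, True
    return default, False          # rejected: substitute the default value

# Example: a compressor inlet temperature tag reporting a failed-sensor value.
t_inlet, valid = range_check(value=-9999.0, minimum=-60.0, maximum=150.0,
                             default=59.0)   # deg F
```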
3.2.2 Dynamic Ranges If the objective of a data validation process is to detect sensor drift or partial failure, then the ranges (minimum and maximum values) should be derived from a model which produces a prediction of the expected value of the sensor. In this case, the ranges result from a prediction, and therefore usually change over time. Ranges that change over time are called dynatnic, ranges. The optimal technology to predict the expected data value from a sensor is a computer-based model of the overall power plant expected performance, implemented in a commercial heat-balance computer code. The methodology to establish dynamic ranges using an overall plant predictive model is to first acquire measured values for plant operating conditions such as ambient conditions, load, fuel type, and process steam requirements. Then, predict the overall plant expected performance using the plant model. This model should also predict the expected values for all the remaining data sensors in the plant. ‘An interval of acceptable values around the expected sensor value must be established for each sensor. This interval may be expressed as either a percentage of the expected value or as an absolute change in value (delta) from the expected value. Then the minimum and maximum values for the ranges used in range checking will be at the extremes of the interval of acceptable values. Ifa percentage is used, the minimum value will equal the expected value minus the percentage of the expected value, and the maximum value will equal the expected value plus the percentage of the expected value. If an absolute change in value is used, then the minimum, value is the expected minus the change and the maximum value is the ‘expected value plus the change. ‘The interval of acceptable values must be chosen carefully. If the interval is too small the algorithm will reject data representative of the degradation that the monitoring system is trying to detect. If the interval is too large the algorithm will fail to detect sensor drift. A simpler alternative to using an overall plant model is to establish relationships among sensors that are local to one another. Many different techniques have been used to establish these relationships. The most frequently used techniques are based on the application of the laws of 3. Data Validation page 90 thermodynamics and the development cf correlations between related data values. Examples of the relationships among sensor values derived ftom application of the laws of thermodynamics are as follows. The cooling water outlet temperature from a heat exchanger should be lower that the inlet hot fluid temperature and higher than the inlet cooling water temperature. The water temperature out of an air-cooled cooling tower must be greater than the ambient wet-bulb temperature and less than the hot cooling water from the condenser. The gas temperatures down the path of an HRSG should decrease after each heat exchanger and increase after a fired duct burner. The measured steam temperatures should be equal to or above the saturation temperature. The steam temperature in a steam drum or condenser should equal the saturation temperature, unless there are non-condensable gases (air) in the system, Often the relationship between sensor values is too complex to model or is not known. In this case a relationship can be developed by correlating the expected value of a sensor with the histcric values of related sensors. 
The engineer who develops such a correlation needs to identify which sensors a given sensor is related to and dependent upon, and then acquire the appropriate amount of historic data for all the related sensors. Various methodologies, including neural networks, may be used to create the correlations. The resulting relationship (correlation) can be used as a predictive model of the expected data value from the sensor. 3.2.3 Rejected Values When a value is rejected by the range check, some action must be taken. Some actions that may be taken are: * Do nothing, but record a warning about data accuracy © Replace the data with a static default value * Replace the data with the expected value of the sensor calculated from a model * Halt execution of the monitoring system calculation, wait for the next time step ‘The action to be taken may depend upon the importance of the measured data value to the performance-monitoring calculation, Some input data values are vital to the calculated results, but others have little impact. For example, the inlet air temperature to a gas turbine and the gas turbine power are critical parameters, and there is little use in running the performance- page 91 3. Data Validation monitoring calculations if these data values are incorrect. However, the inlet air humidity to the compressor has almost no effect on the performance- ‘monitoring calculations, and so a static default value can be used to replace an incorrect inlet air humidity data value. Generally, for a robust implementation of performance-monitoring calculations, it is recommended that all rejected data values be replaced with reasonable default values appropriate for the sensor so that mathematical problems in the ensuing calculations can be minimized. Some data values, even if they are not normally important to the calculated results, can lead to mathematical or programmatic problems such as divide by zero, failed table look-ups, negative flow rates, or negative heat transfer rates. Since a monitoring system has no plant control or equipment protection role during normal operation of the power plant, it is usually of litle consequence if the calculations are not executed at a particular time step. Thus, the procedure of choosing the action to not run the calculations, when data vital to the calculated results fails a range check, can be an effective way to avoid errors in the calculations. It is usually better to not report a degradation value at a given time step than to report a value that is incorrect. Similar logic can also be used to stop the monitoring calculations during plant operational modes where the monitoring calculations are not appropriate, such as during start-up or at low plant loads. 3.3 Averaging Sensor Data Often there is more than one sensor available to measure a single quantity such as the stack temperature, the gas turbine exhaust temperature, or the compressor inlet temperature, but when only one data value is needed or used in the performance calculations to represent the measurement. In this situation, the use of multiple sensors can improve both the reliability and the accuracy of the measured data. ‘There are two ways that multiple sensors can be used to improve data quality. The first is to allow a measurement to survive the failure of a single sensor without losing validity. The second reason is to improve the accuracy, Multiple readings at various locations are often more accurate than a single reading, especially if the average of a flow stream is desired. 
‘The exhaust temperature of a gas turbine varies across the exhaust duct, and temperature measurements are taken at multiple points, and it is the average exhaust temperature that is most appropriate for use in calculating the overall energy balance. Multiple exhaust gas temperature sensors provide enough data to calculate a meaningful average temperature. 3. Data Validation page 92 There are many schemes used to identify and account for failed sensors in a multiple-sensor situation, and then compute an average of the accepted sensor values. The following algorithm has been used successfully in the EfficiencyMap™ performance monitoring system, and is called the “Smart Average”. TM EfficiencyMap™ Smart Average Procedure: Acquire data values from all-sensors that measure the same plant characteristic 2. Range check each of the data values against minimum and maximum. values Discard any data values that fail the range check Ifall data values fail the range check, provide a default data value Calculate the median (midway point) of the remaining data values awh we Discard any data values for which the absolute value of the data value minus the median is greater than the required precision, where the required precision is an input parameter for each measurement 7. Compute the average (mean) of the remaining values This procedure will use all data values which pass the test criteria. This procedure rejects obviously incorrect values via the minimum and maximum values used for range checking, and also rejects data values that are inside the range-check limits but disagree with the other data values by ‘more than the value of the required precision. The required-precision value should be determined for each set of sensors based upon the estimated accuracy of the sensors and the expected variation of the quantity being measured across the field of measurement. The required precision should be small enough so that sensors that are badly out of calibration are rejected, but large enough so that calibrated sensors that are measuring different locations in the flow field are not rejected. A reasonable estimate for the required precision is to double the expected variation (sensor accuracy plus variations due to location) in sensor readings when multiple sensors are in the flow field. As an example, the average exhaust temperature from a gas turbine is a number typically accurate to within a few degrees Fahrenheit; however, the required precision used to average these exhaust temperatures should be at least 50 F (28 C) because the actual gas temperatures normally vary by as much as 15 F to 25 F around the annulus of the exhaust duct. page 93 3. Data Validation, In another example, the appropriate required precision for the inlet air temperature to a gas turbine should be only 3 to 5 degrees Fahrenheit because the inlet air temperature normally does not vary much across the inlet flow area, and the measured temperature is typically accurate to within approximately one degree Fahrenheit (0.55 C). 3.4 Time Averaging Averaging both measured (input) data and calculated (output) data over time can be a very effective method to improve both the accuracy and the reliability of a performance-monitoring system. An online performance- monitoring system has the advantage that many data sets can be evaluated over long periods of time. Reviewing trends of measured and calculated data versus time helps engineers to evaluate when and if performance changes actually occurred. 
The standard deviation of a measured data value due to the random fluctuations can be reduced by taking multiple measured data values (readings) over a period of time and then averaging the data values. The standard deviation of the time-averaged data value is related to the standard deviation of a single measured data value by the following formula. Standard Deviation of a Measured Value Se 3.1 Un GA) where ‘Syis the standard deviation of ¢ single measured data value Sz is the standard deviation of the average (mean) of a set of data values taken over time and then averaged Nis the number of data values used to determine the time-averaged value. Time averaging of data values from a data acquisition system can dramatically reduce the uncertainty in a measurement due to random error. However, time-averaging does not reduce the measurement error due to the systematic uncertainty (bias) in the messurement. The systematic error is constant over the time interval of the time-averaged data values, and is not affected by the averaging. Performance-monitoring calculations are based upon steady-state predictions of plant and equipment performance. Time-averaging the 3. Data Validation page %4 measured data can smooth out transients and improve the accuracy of the calculations. ASME test procedures typically call for time-averaged data as input to the calculations. ‘Some measurements vary in such a manner that they must be averaged in order to get a value that is representative of steady-state conditions. An example is the feedwater flow rate into an HRSG, which is controlled by water-level sensors in the steam drums of the HRSG. The water levels in these drums change slowly relative to other parameters in the system, causing the feedwater flows to lag the corresponding steam flows by several minutes. The feedwater flows into some HRSG’s have been observed to behave ina cyclic manner with the length of a cycle on the order of ten to twenty minutes. The amplitude of the cycle is ten to twenty percent of the value of the average feedwater flow rate. Thus, even when a combined-cycle power plant is operating at base load, a measured feedwater flow value at one point in time can vary by ten percent from the sum of the steam flows at that same point in time. In such a case, it may be necessary to average the feedwater flow measurements over a time interval as long as twenty minutes. ‘The author has found that time averaging of measured data is typically not used in most on-line performance monitoring implementations even though it is generally agreed that it can improve accuracy. One reason for not using time averaging in an on-line performance monitoring system is the response time of the monitoring system. Some power plant operators ask for less than one minute update rates on the performance calculations. Such a requirement makes it impossible to do much time averaging. A second reason is that time-averaging increases the complexity of an already complex system, and many engineers do not think that the small improvement in results is worth the effort, especially when it is difficult to quantify the improvement in results. Note also that the time averaging of the outputs from a performance monitoring system can always be used as an aid to interpreting the results, and may alleviate much of the need for time-averaging of the input data. Figure 3-1 displays calculated gas turbine corrected power values calculated from data taken from an operating gas turbine over a six-month period. 
These corrected power values were generated by an on-line performance monitoring system (EfficiencyMap™) that calculates the gas turbine corrected power at approximately five-minute intervals, No time-averaging was used in the calculation of gas turbine corrected power; however, the data was filtered such that the plot shows corrected power values only when page 95 3. Data Validation the gas turbine was operating at baseload, In this plot, the darker black data Points on the plot are the corrected power values when the engine was ‘operating at baseload, and the lighter grey data points on the plot were calculated using EfficiencyMap™'s smart-averaging process over the last 24 corrected power values, It is obvious from the plot that the time-averaged corrected power values have less scatter than do the corrected power values that were not time-averaged. Figure 3-1 Gas Turbine Corrected Power calculated from data taken only when the engine was at baseload. Darker points are the corrected power values; lighter Points are the EfficiencyMap™ “smart average" of the last 24 corrected power values. 3.5 Heat Balances for Data Validation Often there is redundancy in the measured plant data and a performance monitoring system can use that redundancy to improve the accuracy of the measured data values. Redundancy exists when more measurements are available than are necessary to determine the quantity being measured, One example of a redundancy is when several sensors are used to measure the same physical quantity, such as multiple thermocouples used to measure the 3. Data Validation inlet air temperature. In this situation, the sensor-averaging techniques described above can be used to improve the measurement accuracy. A less obvious example of redundancy is when mass and energy balance relationships can be applied to measured data in order to calculate a parameter that is measured. For example, the measured HP steam flows from each of the HRSG’s in a combined-cycle power plant are redundant with the measured throttle steam flow into the steam turbine if all the HP steam is going to the steam turbine. That is because the sum of the measured HP steam flows should equal the measured steam turbine throttle flow. If all these values are measured, the redundancy can be used to improve the accuracy of the measurements, and to estimate the uncertainty in the ‘measurements. Consider the case where there are two measured HP steam flows from two HRSG's, and a measured steam turbine throttle flow, as illustrated on figure 3-2. HRSG #1 HP Steam 106 KPPH, 1000 F, 1000 psia ‘Throttle Steam 220 KPPH, 1020 F, 1000 psia 99 KPPH, 1040 F, 1000 psia, TIRSG #2 HP Steam Figure 3-2 Example of Redundant HP Steam Flow Measurements In this example, it is easy to see that all of the measured data cannot be correct. According to the flow measurements, the total measured inflow from the HRSG is 205 KPPH (thousands of pounds per hour) of steam while the measured outflow to the steam turbine is 220 KPPH. The measurements violate the principle of conservation of mass which states that the inflows page 97 3. Data Validation must sum to equal the outflow. The measurements also violate conservation of energy since the total energy into the system is greater than the outflow of energy. Some of the measured data values must be incorrect. 
Conservation of Mass for Figure 3-2:

w_HRSG1 + w_HRSG2 = w_throttle   (3.2)

Conservation of Energy for Figure 3-2:

w_HRSG1*h_HRSG1 + w_HRSG2*h_HRSG2 = w_throttle*h_throttle   (3.3)

where

w is the steam mass flow rate

h is the enthalpy of the steam

One way to calculate a set of validated data that satisfies the conservation of mass and energy is to "believe" the HRSG measurements, and use mass and energy balances to calculate the throttle flow conditions. In fact, in this example, there are three different sets of measured data that could be generated by believing the data at two of the measurement locations and calculating the data at the third location. Any one of these sets of data might be correct if the only test for correctness is to satisfy mass and energy balances. It would then be difficult to know which of these possible sets of validated measurements to use in the monitoring system. Some way to test and rank them would be desirable.

The first three rows of results on Table 3-1 show the mass flow rates calculated when two of the three measurement sets are believed and the third is calculated from mass and energy balances. The problem with all three of these sets of validated data is that in each case only two of the three possible measurements were used, and the third measurement was ignored. If all three sensors are of equal quality it might be better to find a set of validated data that uses all three of the sets of measured data values as inputs, using them to generate a set of validated data values that are some sort of weighted average of the sensor values while still satisfying the mass and energy balances.

Table 3-1 Possible Sets of Validated Measurements that Satisfy Conservation of Mass and Energy (uncertainties set equal to 1.0 for all sensors)

Validation Assumption | Validated HRSG #1 Flow | Validated HRSG #2 Flow | Validated Throttle Flow | Least-Squares Error
1. Believe HRSG #1 and HRSG #2 Measurements | 106.0 | 99.0 | 205.0 | 225.
2. Believe HRSG #1 and Throttle Measurements | 106.0 | 114.0 | 220.0 | 227.
3. Believe HRSG #2 and Throttle Measurements | 121.0 | 99.0 | 220.0 | 238.
4. Believe All Temperatures, Equal Uncertainties on Flow Sensors | 107.5 | 107.5 | 215.0 | 99.5
5. Optimal with Equal Uncertainties | 111.0 | 104.0 | 215.0 | 75.2

One way to accomplish this "weighted averaging" of the sensor values is to find the set of validated data that satisfies mass and energy balances and is as close as possible to all the measurements, using the concept of "least-squares error". The set of validated data values that minimizes the difference between the measured data values and the validated data values, based on the calculated least-squares error, is the preferred solution. This criterion of "least-squares error" is calculated using the following expression.

LSE = Sum over all sensors of [ (X_validated - X_measured) / S_sensor ]^2   (3.4)

where

LSE is the least-squares error for the set of measurements and validated data

X_measured is the measured data value for a given sensor

X_validated is the validated data value for a given sensor that satisfies conservation of mass and energy

S_sensor is the estimated uncertainty for measured data from a given sensor

and the sum is taken over all the sensors (measurements) being validated.

This formulation of the least-squares error uses the sensor uncertainty to weight the importance of each sensor in finding the validated results.
If a particular sensor is known to be more accurate than the other sensors, a small value of the uncertainty for that sensor will cause the validated results to be relatively closer to the measured value for that sensor than to the values measured by other sensors which have larger uncertainties.

Another factor to take into consideration when determining the optimal set of validated data from a performance-monitoring system is that certain classes of sensors are known to be more accurate than other sensors. In this case the data validation procedure can be constrained to "believe" these sensors and then adjust the remaining sensors so as to satisfy mass and energy balances. For example, the temperature measurements are typically more accurate than are the flow measurements. The data validation procedure can be forced to "believe" a more accurate measurement by adding the constraint expressed in Equation 3.5 to the set of equations to be solved.

For measurements to be believed by the least-squares validation:

X_validated = X_measured   (3.5)

To force the least-squares validation procedure to believe all three of the temperature measurements (Assumption 4 in Table 3-1 above), three equations of the form of Equation 3.5 were added to the mass and energy balance equations. These three equations are:

T_HRSG1,validated = T_HRSG1,measured   (3.6)

T_HRSG2,validated = T_HRSG2,measured   (3.7)

T_throttle,validated = T_throttle,measured   (3.8)

The fourth set of results on Table 3-1 shows the validated flow rates that result from believing all of the temperature measurements and then finding the three flow rates that minimize the least-squares error. Notice that with this assumption (believing all three temperatures), the two HRSG inlet steam flows are forced by the mass and energy balances to be equal, because the outlet (throttle) steam temperature is midway between the two inlet steam temperatures. The only way that this can happen, according to conservation of mass and energy, is if the inlet flows are equal.

The optimal solution to the data validation problem is the solution which finds the set of validated data that is as close as possible to all of the measurements (minimizes the least-squares error) and still satisfies the mass and energy balances. The validated flows for the optimal solution are shown on the last row of validated results on Table 3-1. If all of the sensors are of equal accuracy, this set of validated data is better than any of the other possible sets of validated results because it makes use of all of the sensors, and weights all of the sensors equally (absolute uncertainties set equal to 1.0 for all sensors) in the least-squares error.

This method of least-squares data validation can be applied anywhere in the power plant where redundant sensors exist. If the gas turbine exhaust gas flow is known from a measurement or from a heat-balance analysis of the gas turbine, then there are typically multiple redundancies in the HRSG data. The inlet feedwater flows are usually measured, and the outlet steam flows are also measured. These flows are redundant. In addition, there is redundancy because the gas-side energy loss must equal the energy gain of the steam/water side. Least-squares data validation can be used to determine more accurate values for the HRSG steam flows based upon measurements of feedwater flow and exhaust-gas inlet and outlet temperatures.
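The same least-squares reconciliation can be posed with any general-purpose constrained optimizer. The sketch below uses Python with SciPy and the simplified steam enthalpy model h = Hfg + Cp*(T - Tref); the constants and the pairing of flows and temperatures follow the LINGO™ model presented below in Figure 3-3.

```python
# Sketch of the least-squares validation for the Figure 3-2 example using SciPy.
import numpy as np
from scipy.optimize import minimize

CP, TREF, HFG = 0.5, 550.0, 1200.0                  # constants from the LINGO model
h = lambda t: HFG + CP * (t - TREF)                 # simplified steam enthalpy

# measured w1, w2, w3 (KPPH) and T1, T2, T3 (deg F), paired as in Figure 3-3
meas = np.array([106.0, 99.0, 220.0, 1040.0, 1000.0, 1020.0])
unc = np.ones(6)                                    # equal uncertainties

def lse(x):                                         # Equation 3.4
    return float(np.sum(((x - meas) / unc) ** 2))

constraints = [
    {"type": "eq", "fun": lambda x: x[0] + x[1] - x[2]},                  # mass, Eq. 3.2
    {"type": "eq",                                                        # energy, Eq. 3.3
     "fun": lambda x: (x[0]*h(x[3]) + x[1]*h(x[4]) - x[2]*h(x[5])) / 1e3},
]
validated = minimize(lse, x0=meas, method="SLSQP", constraints=constraints).x
```

With equal uncertainties on all six sensors, the solution should land near the optimal row of Table 3-1, with validated flows of roughly 111, 104 and 215 KPPH. Additional equality constraints of the form of Equation 3.5 can be appended to reproduce the believe-all-temperatures case of Assumption 4.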
Ina Rankine-cycle power plant, the measured feedwater drain flows are typically redundant with the measured feedwater flow rate and the outlet feedwater temperature from each feedwater heater, when mass and energy balances are applied to each of the feedwater heaters. As an example of the methodology required to implement the least-squares data-validation procedure described above, the two shaded boxes (Figures 3- 3 and 3-4) which follow present a mathematical model containing Equations 3.2 through 3.8, and show a set of results that are a solution to the mathematical model. The first shaded box (Figure 3-3) contains the complete set of equations required to implement the least-squares data-validation procedure that generated the results on Table 3-1. The equations are in the format required by the LINGO™ optimization equation solver. LINGO™ is described in more detail in Chapter 7, “Gas Turbine Performance”, and in Chapter 10, “Boiler Performance”. The second shaded box (Figure 3-4) contains a results report generated by LINGO™ as the solution to the set of equations presented in the first shaded box. page 101, 3. Data Validation ILINGO Model £6 Apply Least-Squares Mass-and-Energy-Balance Data- Validation to Flow Redundancy Data! MODEL: IMINIMIZE THE LEAST-SQUARES ERROR OBJECTIVE FUNCTION; min = ((wi-wi_mes) /Wi_UNC)"2 + ((w2-w2_mes)/W2_UNC)~2 + ((w3- w3_més) /W3_0NG)*2 + ((T1-T1_mes)/T1_UNG)*2 # ((12- 32 mes) (B2)UNC) *2 +. ((T3-T3/mes}/73_UNC) “2; \Conservation of Mass; wt 2 = 37 'Consezvation of Energy: wit (HEgsCp* (M-Tref)) + w2* (HEg#Cp* (72-Tret) }=w3" lHEg#Cp* (73 Tref)); lUse these equations to force belief in a given measurement; 't2=t2_mes; 't3=t3_mes; twi=wismes; ! CONSTANTS; cp=0.53 Tref=550.; Hfg=1200.7 !UNCERTAINTIES; wiune = 1. w2-une = 1. w3-une Tloune r2_une T3-une te re Li Le IMBASUREMENTS ; w1_mes w2Lmes w3omes Times 72/mes 73mes END Figure 3-3 Model used to solve the redundant steam flow data validation analysis using the LINGO™ equation solver 3. Data Validation page 102 LINGO RESULTS REPORT Rows= 3 Vars= 6 No. integer vare= 0 Nonlinear rows= 2 Nonlinear vars= 6 Nonlinear constraints= 1 Nenzeros= 16 Constraint non2= 9 Density=0.762 No. <: No. =: 2No. >: 0, Obj=MIN Single cole= optimal solution found at step: 22 Objective value! 75.27935, variable Value Reduced Cost a 110.9605, =0.14156938-04 Wi_MES 106-0000 9.000000 wove 3.000000 60000000 we 104.0403, 60000000 W2_MES 93.0000 9.000000 1.000000 0.000000 215.0008, -0.22308098-05 220.0000, 0.000000 1.000000 00000000 1039.79 0, 0000000 1040.000 0.000000 1.000000 00000000 999.7924 0.7487616E-05 1000..000 ‘0.000000 1.000000 0,0090000 1020.429 -0.35073878-05 1020000, ‘0000000 1.000000, 0.000000 0.S00a000 0. 0000000 550.0000 00000000 1200.00 0.000000 Figure 3-4 Output results from the LINGO™ model in Figure 3-3, page 103 3. Data Validation, 3. Data Validation page 104 4. Accuracy of Calculated Results 4.1 Instrument Error 4.1.1 Measurement Error ‘The terms and methods used in this book to describe the uncertainty in ‘measurements and in the results calculated from those measurements are consistent with the terms and methods described in “ASME PTC 19.1-1998, Test Uncertainty”, which is a supplement to the ASME performance test codes. Every measurement may be considered to equal the true value of the quantity being measured plus an error. The accuracy of the measurement is the closeness of the measured value to the true value. Normally that accuracy is expressed mathematically in terms of the measurement uncertainty. 
Uncertainty is the interval around the measured value that contains the true value for a given confidence level.

Measurement Uncertainty with 95% Confidence Level:

TrueValue = X ± U_95   (4.1)

where

TrueValue is the true value of the quantity being measured

X is the measured value

U_95 is the uncertainty in the measurement, with 95% probability that the true value lies within an interval equal to the measurement plus or minus the uncertainty

It is convenient to divide the error in the measurement into two components called the random error (or precision) and the systematic error (or bias). Every measurement will fluctuate about a mean; the error in the measurement due to these fluctuations is called random error. Another term for the random error is precision. The mean of the measurements will differ from the true value by an error amount called the systematic error. Another term for the systematic error is bias.

Measurement Error:

δ = β + ε   (4.2)

where

δ is the total measurement error

β is the systematic measurement error (bias)

ε is the random measurement error (precision)

4.1.2 Random Uncertainty

One reason to divide the error into random and systematic components is that the uncertainty due to random error may be estimated by inspection of the scatter in the measurements. The standard deviation of a set of measurements can be used to estimate the uncertainty in the measurement due to random error. The standard deviation of a data sample can be calculated by the following formula.

Standard Deviation of Measured Data Values:

S_X = sqrt[ Sum for k=1 to N of (X_k - Xbar)^2 / (N - 1) ]   (4.3)

where

S_X is the standard deviation of the set of measured values

X_k is a measured value with index k

Xbar is the mean (average) of the measured values

N is the total number of measured values

k is an index to keep track of the measurements; the first measurement has k equal to one, the second measurement has k equal to two, and so forth

and the sum is taken over all values of k from one to N.

The mean or average of the measurements is calculated as follows.

Mean of Measurements:

Xbar = (1/N) * Sum for k=1 to N of X_k   (4.4)

In typical on-line monitoring situations a computer-based data acquisition system is used, and the measurement of a variable may be only a single data value or a single average of data values taken over a short period of time (milliseconds). If the period of time is short relative to the time period of the random variations in the process, then this single average of data values should be treated as if it were a single data value. The random uncertainty of this single data value from a computer-based data acquisition system can be estimated by taking a set of data values over a longer time frame (over which the true value is expected to remain constant) and calculating the standard deviation of that set of data values.

Sometimes computer-based data acquisition systems will take a data value at a fixed time interval over a medium length of time (minutes), and then average these data values to get a time-averaged data value to use as current input to a performance monitoring calculation.
4.1.3 Systematic Uncertainty

Systematic error is constant as measurements are repeated and therefore cannot be quantified by observation of a set of measurements. Thus the systematic uncertainty, B, must be estimated based on engineering judgment and knowledge of the equipment being used. Often the instrument manufacturers provide an estimate of the likely systematic uncertainty to be expected from a given sensor.

The total measurement uncertainty can be calculated from knowledge of the random and systematic measurement uncertainties as follows.

Total Measurement Uncertainty:

    U95 = 2 √[ (B/2)² + (SX̄)² ]    (4.6)

where
    U95 is the total uncertainty in the measurement, with 95% probability that the true value lies within an interval equal to the measurement plus or minus the uncertainty
    B is the systematic uncertainty (bias) in the measurement
    SX̄ is the random uncertainty (precision) in the measurement, from Equation 4.5

4.2 Uncertainty of a Calculated Test Result

To understand how the measurement uncertainty associated with each sensor used in a performance test affects the uncertainty of a result calculated from the measurements, one must know how much a given change in a measurement will change the calculated result. The ratio of the change in a result to a unit change in a measured value used in the calculation of the result is called the sensitivity.

The sensitivity of a calculated result to a change in a measured value can be found by the following procedure. First calculate the result value, R(1), using the measurement value, X(1). Then change the measurement by a small amount to the new measured value, X(2), and calculate a new result, R(2), based upon the changed measured value. The sensitivity is the change in the result divided by the change in the measurement.

Sensitivity of a Calculated Result to a Change in a Measured Value:

    θ = [R(2) − R(1)] / [X(2) − X(1)]    (4.7)

where
    θ is the sensitivity of the result to the measurement
    X is a measured value used as input to the calculation of the result
    R is the calculated result associated with the given value of the measurement

Equation 4.7 is strictly valid only if the calculated result depends upon a single measurement, or if all the measurements are independent of each other in the calculation of the result. Neither of these cases is typically true in a performance monitoring calculation, where the calculated result is usually a non-linear combination of many measurement variables. Fortunately, the error introduced by this first-order approximation is usually small.
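The sensitivity procedure of Equation 4.7 is easy to automate. The sketch below is illustrative only and is not from the original text; the heat-rate function and all numerical values are hypothetical stand-ins for a real performance-calculation module.

    def sensitivity(calc, x_meas, delta=None):
        # Equation 4.7: perturb the measurement slightly and difference the results
        if delta is None:
            delta = 1e-4 * abs(x_meas) or 1e-6
        return (calc(x_meas + delta) - calc(x_meas)) / delta

    # Hypothetical "calculated result": plant heat rate from measured fuel flow
    def heat_rate(fuel_flow_lb_hr, power_kw=500_000.0, lhv_btu_lb=20_800.0):
        return fuel_flow_lb_hr * lhv_btu_lb / power_kw      # Btu/kW-hr

    theta = sensitivity(heat_rate, 211_500.0)               # Btu/kW-hr per lb/hr of fuel
    print(theta)

In practice, calc would be the full performance calculation, and the perturbation would be repeated once for each measured input to build the set of sensitivities used below.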
Using this first-order approximation, the calculated result from a set of measurements is the result that would be obtained from a given set of measurements plus the change in the result that is caused by the change in any of the measurements.

Result of a Multivariable Calculation:

    R(X1, X2, X3, …) = R(μX1, μX2, μX3, …) + θX1 (X1 − μX1) + θX2 (X2 − μX2) + θX3 (X3 − μX3) + …    (4.8)

where
    R is the test result calculated at given values of the measurements
    θXi is the sensitivity of the result to measurement i
    μXi is the given value of measurement i where the result is known
    Xi is the value of measurement i where the value of the result is desired

If all the variables, Xi, are independent in the calculation of the result, then the standard deviation of the result depends upon the standard deviations of the measurements in the following way.

Standard Deviation of Result:

    SR = √[ Σ (θXi SXi)² ]    (4.9)

where
    SR is the standard deviation of a calculated result that depends upon the measured values of n parameters
    θXi is the sensitivity of the calculated result to measured parameter i
    SXi is the standard deviation of the measured values of parameter i

The total uncertainty of the calculated result depends upon both the random and the systematic uncertainties of the measured parameters. If the systematic uncertainty of each of the measurements is known, then the systematic uncertainty of the result can be estimated as follows.

Systematic Uncertainty of Result:

    BR = √[ Σ (θXi BXi)² ]    (4.10)

where
    BR is the systematic uncertainty of a calculated result that depends upon the measured values of n parameters
    θXi is the sensitivity of the calculated result to measured parameter i
    BXi is the systematic uncertainty of the measured values of parameter i

The total uncertainty of a calculated result depends upon the standard deviation and the systematic uncertainty of that result.

Uncertainty of a Calculated Result:

    U95,R = 2 √[ (BR/2)² + (SR)² ]    (4.11)

where
    U95,R is the total uncertainty in the result, with 95% probability that the true value lies within an interval equal to the result plus or minus the uncertainty
    BR is the systematic uncertainty (bias) in the result
    SR is the random uncertainty (precision) in the result

4.3 Monte Carlo Method

4.3.1 Definition of the Monte Carlo Method

The first-order (linear) method described above for estimating the uncertainty in a calculated result, given the uncertainty in the measurements, is an approximation. This approximation may be inaccurate when applied to the situation where a complex, non-linear heat balance computer code is used to process the measurements and calculate the results.

There is a way to correctly propagate the measurement uncertainties through a computer-based model, and it can be done to any desired level of accuracy. The method is called the Monte Carlo method, and it involves running many sets of simulated measured data through the computer model and compiling the calculated results. Each input measured data value is varied according to its own probability distribution using a random number generator. After many calculations, each using a different set of input (simulated measured) data, the probability distribution of the results will be an accurate representation of the uncertainty in the calculated result, given the probability distributions of the inputs. See Figure 4-1.

It is called the Monte Carlo method because it is a lot like gambling (Monte Carlo is a world-famous gambling location) in that random numbers generate the input to each calculation. As in gambling, the results from many trials are determined by the probability of occurrence of each possible outcome. The more trials run, the more accurate the probability distributions of the outputs.

Figure 4-1 Overview of the Monte Carlo Method (inputs X1 ± U1, X2 ± U2, … are picked according to their probability distributions, run through the performance calculation module, and the outputs R1 ± UR1, R2 ± UR2, … are compiled)
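Before turning to the details of the Monte Carlo sampling, the first-order propagation of Section 4.2 can be collected into a few lines of code for comparison. The sketch below is not from the original text; the function name and the numerical sensitivities and uncertainties are invented for illustration.

    import math

    def result_uncertainty(sensitivities, random_uncs, systematic_uncs):
        # Equations 4.9 through 4.11: first-order propagation to a calculated result
        s_r = math.sqrt(sum((th * s) ** 2 for th, s in zip(sensitivities, random_uncs)))
        b_r = math.sqrt(sum((th * b) ** 2 for th, b in zip(sensitivities, systematic_uncs)))
        u95 = 2.0 * math.sqrt((b_r / 2.0) ** 2 + s_r ** 2)
        return s_r, b_r, u95

    # Hypothetical two-measurement example: sensitivities and uncertainties are invented
    print(result_uncertainty([0.0416, -1.2], [150.0, 0.5], [300.0, 1.0]))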
4.3.2 Probability Distributions

Each measured data value from a power plant has its own probability distribution. The Monte Carlo method involves randomly sampling (picking) simulated measured values from their probability distributions and inputting sets of these measured values to the performance monitoring calculation.

The probability distributions of the measured data are normally not known; but, for purposes of Monte Carlo simulation, the distributions are often assumed to be either uniform or normal. The uniform distribution is such that all values between the minimum possible value, a, and the maximum possible value, b, have equal probabilities of occurrence. The mean of the uniform distribution is the average of a and b. The standard deviation of a set of values sampled from the uniform distribution is the interval between a and b divided by the square root of twelve (3.464). The interval from a to b contains 100% of the data sampled from the uniform probability distribution. Figure 4-2 is a plot of the uniform probability distribution function. While the uniform distribution does not appear in nature, it is easy to implement in a Monte Carlo simulation, and the results are often easy to interpret.

Figure 4-2 Plot of the Uniform Probability Distribution Function

The normal distribution is the bell-shaped curve that we are all familiar with. Figure 4-3 is a plot of the normal probability distribution function. The mean, μ, and the standard deviation, σ, of the normal probability distribution are such that the interval μ ± 2σ contains approximately 95% of all values sampled from the distribution. This level of uncertainty is often used in the statement of uncertainty for a measurement or for a calculated result.

The normal probability distribution function:

    f(x) = [1 / (σ √(2π))] exp[ −(x − μ)² / (2σ²) ]    (4.12)

where
    f(x) is the probability density, so that f(x) dx is the probability of sampling a value in the interval x to x + dx
    μ is the mean of the distribution
    σ is the standard deviation of the distribution

Figure 4-3 Plot of the Normal Probability Distribution Function

4.3.3 Sampling from Probability Distributions

Sampling simulated measured data from a probability distribution requires the use of a random number generator. A random number generator is a computer algorithm that outputs a number with a uniform probability distribution between the values of zero and one. Each time the random number generator is called, it outputs a new number.

If a value equal to ζ is retrieved from a random number generator, then a value, X, can be computed from the following formula, where X has a uniform probability distribution between the values a and b.

Sampled Value with Uniform Probability Distribution:

    X = a + (b − a) ζ    (4.13)

where
    X is a sampled value with a uniform probability distribution between a and b
    ζ is a random number between 0 and 1
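As a small illustration of Equation 4.13 (not part of the original text), the sketch below draws uniform samples and numerically confirms the mean and standard deviation quoted above; the interval limits are invented.

    import random
    import statistics

    def sample_uniform(a, b):
        # Equation 4.13: map a random number in [0, 1) onto the interval [a, b)
        return a + (b - a) * random.random()

    values = [sample_uniform(90.0, 110.0) for _ in range(100_000)]
    print(statistics.fmean(values))     # close to (a + b)/2 = 100
    print(statistics.pstdev(values))    # close to (b - a)/sqrt(12) = 5.77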
Sampling from a normal probability distribution can be accomplished using the Box-Muller transformation. The Box-Muller transformation states that if ζ1 and ζ2 are two uniformly and independently distributed numbers between zero and one, then X1 and X2 have a normal distribution with a mean equal to zero and a standard deviation equal to one, where X1 and X2 are defined by Equations 4.14 and 4.15.

Values with Normal Probability Distributions with mean equal to 0.0 and standard deviation equal to 1.0:

    X1 = √(−2 ln ζ1) cos(2π ζ2)    (4.14)

and

    X2 = √(−2 ln ζ1) sin(2π ζ2)    (4.15)

To obtain a simulated measured value with a mean equal to μ and a standard deviation equal to σ, one needs to generate two random values, ζ1 and ζ2, and then use either of the following two formulas (Equation 4.16 or 4.17) to calculate the simulated measured value.

Values with Normal Probability Distributions with mean equal to μ and standard deviation equal to σ:

    X = μ + σ √(−2 ln ζ1) cos(2π ζ2)    (4.16)

and/or

    X = μ + σ √(−2 ln ζ1) sin(2π ζ2)    (4.17)

where
    X is a sampled value with a normal probability distribution with mean μ and standard deviation σ
    ζ1 is a random number uniformly distributed between 0 and 1
    ζ2 is a second random number between 0 and 1

4.3.4 Running the Monte Carlo Simulation

Each run of the Monte Carlo simulation involves the following steps:

1. Generate a set of random numbers, one or two random numbers for each simulated measured value to be input to the performance calculation module.
2. Calculate a set of input values to the performance calculation using the sampling formula (Equation 4.13 or Equation 4.16) appropriate for the probability distribution chosen for each input value.
3. Execute the performance calculation using the input values from step 2.
4. Record (store) the results of the performance calculation for future processing.
5. Return to step 1 and generate a new set of calculated results; if enough executions have been completed, go to step 6.
6. Calculate the mean and standard deviation of each calculated result variable.

4.3.5 Results of the Monte Carlo Simulation

The uncertainty of a calculated result can be quantified by calculating the standard deviation of the calculated result from the result values stored from all of the Monte Carlo simulations performed. First calculate the mean using Equation 4.18; then calculate the standard deviation of each result variable using Equation 4.19.

Mean of Result Variable:

    R̄ = (1/N) Σ Rk    (4.18)

where
    Rk is the calculated result value from the kth simulation
    R̄ is the mean (average) of the result values
    N is the total number of simulations executed
    k is an index to keep track of the simulations; the first simulation has k equal to one, the second simulation has k equal to two, and so forth
    Σ is a symbol indicating that the expression which follows is to be summed for all values of k from one to N

Standard Deviation of the Calculated Result Variable:

    SR = √[ Σ (Rk − R̄)² / (N − 1) ]    (4.19)

If the result variable has a normal distribution (which it probably does not), then the uncertainty of the result with a 95% confidence level is plus or minus two times the standard deviation. This is because the standard deviation of a sample of data values is an estimate of the standard deviation of the underlying probability distribution; the more data values in the sample, the more accurate the estimate.
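The whole procedure of Section 4.3.4 fits in a short loop. The sketch below is illustrative only and is not from the original text: performance_calculation is a hypothetical stand-in for a real performance-monitoring module, and the means and standard deviations of the simulated inputs are invented.

    import math
    import random
    import statistics

    def sample_normal(mu, sigma):
        # Equation 4.16: Box-Muller sampling with mean mu and standard deviation sigma
        z1 = 1.0 - random.random()      # shift to (0, 1] so log(z1) is always defined
        z2 = random.random()
        return mu + sigma * math.sqrt(-2.0 * math.log(z1)) * math.cos(2.0 * math.pi * z2)

    def performance_calculation(fuel_flow_lb_hr, power_kw):
        # Hypothetical stand-in for the real module: heat rate in Btu/kW-hr
        return fuel_flow_lb_hr * 20_800.0 / power_kw

    results = []
    for _ in range(10_000):                            # steps 1 through 5 of Section 4.3.4
        fuel = sample_normal(211_500.0, 1_500.0)       # simulated measured inputs
        power = sample_normal(500_000.0, 2_500.0)
        results.append(performance_calculation(fuel, power))

    mean_r = statistics.fmean(results)                 # Equation 4.18
    s_r = statistics.stdev(results)                    # Equation 4.19
    print(f"heat rate = {mean_r:.0f} +/- {2 * s_r:.0f} Btu/kW-hr (95%, if near-normal)")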
The uncertainty of the calculated standard deviation of the calculated result variable can be estimated from the number of simulations performed.

Uncertainty of the Standard Deviation of the Calculated Result Variable:

    USR / SR ≈ 1 / √(N − 1)    (4.20)

where USR is the uncertainty in the calculated standard deviation of the result and N is the number of simulations executed. If only two Monte Carlo simulations are executed, then there is 100% uncertainty in the calculated standard deviation of the result. But if one hundred and one simulations are executed, the standard deviation of the result is known to within 10%.

5. Overall Power Plant Performance

5.1 Equipment Performance versus Plant Performance

Overall plant performance evaluation is a comparison of the expected plant performance to the measured plant performance. A unique characteristic of overall plant performance is that many pieces of equipment in the power plant must all operate with rated performance capability in order for the plant to perform as expected. That is, the expected overall plant performance is based upon the assumption that all equipment in the plant is operating at its rated capability.

It is important to understand that the expected equipment performance calculated in the performance evaluation of each piece of plant equipment is not the same as the performance of that same piece of equipment calculated in the evaluation of overall expected plant performance. The operating conditions imposed on plant equipment at the current (actual) plant operation (taking into account the degradation of the surrounding equipment) are not the same as the expected operating conditions if all plant equipment performs as rated. Since expected equipment performance is evaluated at the current (actual) operating conditions of each piece of equipment, the expected equipment performance will not match the equipment performance when the entire power plant is operating as expected (with rated capability). The sum of the expected powers generated by each piece of equipment in the power plant is not equal to the expected power of the overall power plant. The example below illustrates this situation for a steam turbine that is operating in a degraded power plant system.

Consider a steam turbine whose throttle steam flow comes from a steam generator (a boiler for a Rankine cycle, or an HRSG for a combined cycle) located upstream of the steam turbine. At the reference plant operating conditions, the rated steam flow is 700,000 lb/hr, and the rated steam turbine power is 100 MW. If the steam generation degrades such that the throttle steam flow becomes 600,000 lb/hr at the reference operating conditions, the expected steam turbine power would be evaluated at the actual steam flow of 600,000 lb/hr instead of the rated steam flow of 700,000 lb/hr, and would equal approximately 85 MW. Thus, at the plant reference operating conditions, the expected steam turbine power used to evaluate the steam turbine performance would be 85 MW, but the steam turbine power calculated in the expected overall power plant performance evaluation would be 100 MW. If, under these degraded steam flow conditions, the steam turbine actually produced 85 MW at the plant reference operating conditions, the degradation of the steam turbine would be zero, because the expected steam turbine power equals the actual steam turbine power. The degradation of the steam generator would be 100,000 lb/hr, and the power degradation of the overall plant due to the degradation of the steam generator would be 15 MW.
If the steam turbine in this example were the only power generating equipment in the power plant, the expected power of the steam turbine (85 MW) would not equal the expected power of the overall power plant (100 MW). This illustrates the concept stated above: the sum of the expected powers generated by each piece of equipment in the power plant is not equal to the expected power of the overall power plant.

5.2 Specification of Overall Power Plant Performance

A power plant is typically warranted to produce a guaranteed amount of electric power and a guaranteed plant heat rate if operated at the guarantee operating conditions. In this book, the operating conditions where the plant performance guarantee is specified are called the reference operating conditions, and the plant performance at these conditions is called the rated performance. Table 5-1 illustrates typical parameters used to rate the performance of a power plant. A power plant can be expected to produce rated power and rated heat rate only at the reference operating conditions.

Table 5-1 Rating Specification for a Power Plant

    Power Plant Rating Specification              Example Data
    RATING:
      Net Power                                   500 MW
      Net Heat Rate                               8800 Btu/kW-hr
    REFERENCE OPERATING CONDITIONS:
      Ambient Air Temperature                     … °F
      Ambient Air Pressure                        14.6 psia
      Ambient Relative Humidity                   60%
      Fuel Type                                   Natural Gas
      Fuel Lower Heating Value                    20,800 Btu/lbm
      Load Level or Operational Mode              Base
      Process Steam Requirement:
        Flow, Pressure and Temperature            100,000 lb/hr, 200 psia, 300 °F
      Process Return Water Temperature            140 °F
      River or Ocean Temperature                  N/A

The rated power and rated heat rate are the expected performance values for the power plant at the reference operating conditions. If any of the operating conditions change, the expected power and the expected heat rate will change. The plant vendor will often guarantee plant performance at a set of selected operating conditions other than the reference operating conditions. This set of selected plant performance specifications is typically part of the plant "thermal kit". For a combined-cycle power plant the plant performance may be guaranteed on a hot day, on a cold day, with various process steam requirements, with an alternate fuel (oil instead of gas), with one gas turbine out of service, and/or with duct burners in operation.

A performance monitoring system must predict the expected plant power and heat rate at any possible operating condition and then compare that expected performance to the actual performance. The plant guarantee points form the basis for the validation of an expected-performance model of the power plant. Any model used to predict the expected performance of the plant must match the guaranteed plant performance at all operating conditions where those guarantees are stated.

Figure 5-1 Variation of Combined-Cycle Plant Performance with Ambient Temperature

Figure 5-1 illustrates the variation in overall plant performance that can be expected for a combined-cycle power plant as the ambient temperature changes. The large change in plant power as the ambient air temperature changes is due primarily to the increase in gas turbine power at lower inlet temperature.
The variation of condenser pressure with ambient temperature also contributes to the plant power change, but its effect is much smaller than the effect of the change in gas turbine power. The primary driving force for the variation of power with ambient conditions in a combined-cycle power plant is the increase in air mass flow rate into the gas turbines at lower ambient temperatures. To a good approximation, the air flows into a gas turbine at constant velocity. The density of air, since it acts like an ideal gas, varies directly with pressure and inversely with absolute temperature. As the air temperature goes down, the air density goes up, and the inlet mass flow rate to the gas turbine goes up.

Figure 5-2 Variation of Rankine-Cycle Plant Performance with Ambient Temperature

The performance changes of a Rankine-cycle power plant with changes in ambient temperature are relatively less pronounced and are sometimes ignored in Rankine-cycle performance-monitoring systems. It is difficult to generalize the performance changes of Rankine-cycle plants with ambient temperature. The data in Figure 5-2 illustrate performance changes that might be expected for a Rankine-cycle power plant as ambient conditions change, but the precise performance changes will be plant specific. The change in condenser pressure and the cooling tower performance are important parameters in the Rankine-cycle plant performance changes with ambient temperature. One reason these effects are so plant specific is that boiler forced-draft fans and induced-draft fans may reach limits at either low or high ambient temperatures, depending upon the boiler design.

5.3 Overall Plant Expected Performance Models

There are three methods to predict expected overall plant performance given the plant operating conditions:

• Curve-Based Method: use performance or correction curves
• Model-Based Method: use a computer-based model of the power plant
• Impact Method: use the total of the equipment impacts on plant power

5.3.1 Curve-Based Method for Expected Plant Performance

Performance or correction curves may be used to predict expected overall plant power and heat rate as long as the parameters that impact plant performance are independent of each other. Figures 5-1 and 5-2 show performance curves illustrating the change in plant performance as ambient temperature changes for two different types of power plants.

The basic idea behind curve-based performance methods is to assemble a set of performance or correction curves that plot the variation in a plant performance parameter (power or heat rate) when one of the operating conditions changes. The total fractional change in plant performance is then computed by multiplying together the fractional changes for each operating condition, where each multiplying factor is generated from a separate correction curve.

Curve-Based Method for Expected Plant Performance:

    Power(expected) = Power(rated) × ∏ [ CurveValue(actual conditions) / CurveValue(reference conditions) ]    (5.1)

    HeatRate(expected) = HeatRate(rated) × ∏ [ CurveValue(actual conditions) / CurveValue(reference conditions) ]    (5.2)

where
    ∏ is a mathematical operator indicating that all the terms in the following brackets are to be multiplied together, one term for each plant performance curve, until terms from all the plant performance curves are included in the final product
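The logic of Equations 5.1 and 5.2 is straightforward to code once the correction curves are available. The sketch below is illustrative only and is not from the original text; the curve names, the curve points and the operating conditions are all invented, and a real system would use the plant's own correction curves.

    def interp(x, points):
        # Piecewise-linear interpolation through a list of (x, y) points sorted by x
        (x0, y0), *rest = points
        for x1, y1 in rest:
            if x <= x1:
                return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
            x0, y0 = x1, y1
        return y0   # flat beyond the last point

    # Hypothetical power correction curves, each normalized to 1.0 at its reference condition
    power_curves = {
        "ambient_T_F":    [(20, 1.08), (59, 1.00), (80, 0.94), (100, 0.88)],
        "ambient_P_psia": [(14.2, 0.97), (14.6, 1.00), (14.9, 1.02)],
    }
    reference = {"ambient_T_F": 59.0, "ambient_P_psia": 14.6}

    def expected_power(rated_power_mw, actual_conditions):
        # Equation 5.1: multiply the rated power by one curve ratio per operating condition
        factor = 1.0
        for name, curve in power_curves.items():
            factor *= interp(actual_conditions[name], curve) / interp(reference[name], curve)
        return rated_power_mw * factor

    print(expected_power(500.0, {"ambient_T_F": 85.0, "ambient_P_psia": 14.4}))   # MW

Equation 5.2 would be implemented the same way with a second set of curves for heat rate.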
A curve-based performance-prediction method becomes inaccurate when multiple parameters (operating conditions) change the plant performance and the performance change is large (greater than five percent). This happens because the operating conditions are not truly independent of each other, but this independence is assumed when a separate curve is used for each operating condition.

Consider the effects that ambient temperature and relative humidity (or wet-bulb temperature) have on overall plant performance. In a combined-cycle power plant the ambient temperature strongly affects gas turbine power, steam turbine power and condenser duty, which changes with plant load. The wet-bulb temperature (for a wet cooling tower) has a strong impact on condenser pressure, but the magnitude of this impact depends upon condenser duty, which in turn depends upon ambient temperature.

The result is that the change in plant performance caused by a change in wet-bulb temperature is not independent of ambient temperature. If a curve of plant power versus wet-bulb temperature is used, the curve will not account for changes in ambient temperature. It may be possible to produce a family of wet-bulb temperature curves, one for each ambient temperature. This two-dimensional curve would surely be better than using a single curve for wet-bulb temperature. Unfortunately, even two-dimensional curves do not always solve the problem.

The ambient air pressure will also affect plant load in a manner that is not independent of ambient temperature or wet-bulb temperature. Perhaps a set of ambient temperature curves is required for each ambient pressure, and then a set of wet-bulb temperature curves for each ambient temperature. The fuel type, process steam loads, return water fraction, and return water temperature will all impact the plant performance in ways that are not independent of ambient conditions.

The basic conclusion is that a curve-based method will generally work well for small changes in overall plant performance, when the expected plant performance is within five percent of the rated plant performance. For application in situations resulting in larger changes in plant performance, a curve-based method is probably not the best choice.

5.3.2 Model-Based Method for Expected Plant Performance

The model-based method overcomes the problems associated with the curve-based method because the effects of each operating condition can be combined with the effects of the other operating conditions in a single calculation. The disadvantages of the model-based method are the high degree of plant information and knowledge required to create an accurate model, and the level of engineering skill and effort required to implement and maintain the model.

Figure 5-3 Overall Plant Model for a Combined-Cycle Power Plant
When applying the model-based method, the power plant model must be "tuned" so that it predicts the rated plant performance when run at the reference operating conditions. The model should be validated against the guarantees at those operating points, other than the reference operating conditions, where the plant performance is guaranteed.

Note that to generate an accurate plant model for model-based methods of performance prediction, it is usually not enough to simply generate a physical model of the key plant equipment (turbines, pumps, heat exchangers, piping, etc.). It will probably also be necessary to recognize and understand the operational limits on the plant equipment and how the plant is controlled to respond to those limits, and to implement these limits and controls when building the plant model.
