
Sustainability in the

Process Industry
About the Authors
Jiří Klemeš, DSc, is one of the key personalities
of the world-leading Centre of Excellence in Process
Integration at the University of Manchester Institute
of Science and Technology in the United Kingdom.

Ferenc Friedler, DSc, is a leading research figure at

the University of Pannonia in Hungary.

Igor Bulatov is a researcher at the Centre for Process

Integration at the University of Manchester in the
United Kingdom.

Petar Varbanov is a senior lecturer at the Centre for

Process Integration and Intensification (CPI2) at the
University of Pannonia in Hungary.
Sustainability in the
Process Industry
Integration and Optimization

Jiří Klemeš
Ferenc Friedler
Igor Bulatov
Petar Varbanov

New York Chicago San Francisco

Lisbon London Madrid Mexico City
Milan New Delhi San Juan
Seoul Singapore Sydney Toronto
Copyright © 2011 by The McGraw-Hill Companies, Inc. All rights reserved. Except as permitted under
the United States Copyright Act of 1976, no part of this publication may be reproduced or distributed
in any form or by any means, or stored in a database or retrieval system, without the prior written
permission of the publisher.

ISBN: 978-0-07-160555-7

MHID: 0-07-160555-X

The material in this eBook also appears in the print version of this title: ISBN: 978-0-07-160554-0,
MHID: 0-07-160554-1.

All trademarks are trademarks of their respective owners. Rather than put a trademark symbol after
every occurrence of a trademarked name, we use names in an editorial fashion only, and to the benefit
of the trademark owner, with no intention of infringement of the trademark. Where such designations
appear in this book, they have been printed with initial caps.

McGraw-Hill eBooks are available at special quantity discounts to use as premiums and sales
promotions, or for use in corporate training programs. To contact a representative please e-mail us at

Information contained in this work has been obtained by The McGraw-Hill Companies, Inc.
(McGraw-Hill) from sources believed to be reliable. However, neither McGraw-Hill nor its authors
guarantee the accuracy or completeness of any information published herein, and neither McGraw-Hill
nor its authors shall be responsible for any errors, omissions, or damages arising out of use of this
information. This work is published with the understanding that McGraw-Hill and its authors are
supplying information but are not attempting to render engineering or other professional services. If
such services are required, the assistance of an appropriate professional should be sought.


This is a copyrighted work and The McGraw-Hill Companies, Inc. (McGraw-Hill) and its licensors
reserve all rights in and to the work. Use of this work is subject to these terms. Except as permitted
under the Copyright Act of 1976 and the right to store and retrieve one copy of the work, you may
not decompile, disassemble, reverse engineer, reproduce, modify, create derivative works based upon,
transmit, distribute, disseminate, sell, publish or sublicense the work or any part of it without
McGraw-Hill's prior consent. You may use the work for your own noncommercial and personal use; any other
use of the work is strictly prohibited. Your right to use the work may be terminated if you fail to comply
with these terms.


THE WORK IS PROVIDED "AS IS." McGRAW-HILL AND ITS LICENSORS MAKE NO GUARANTEES OR WARRANTIES
AS TO THE ACCURACY, ADEQUACY OR COMPLETENESS OF OR RESULTS TO BE OBTAINED FROM USING THE
WORK, AND EXPRESSLY DISCLAIM ANY WARRANTY, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO
IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. McGraw-Hill and its licensors do not warrant or guarantee
that the functions contained in the work will meet your requirements or that its operation will be
uninterrupted or error free. Neither McGraw-Hill nor its licensors shall be liable to you or anyone else
for any inaccuracy, error or omission, regardless of cause, in the work or for any damages resulting
therefrom. McGraw-Hill has no responsibility for the content of any information accessed through
the work. Under no circumstances shall McGraw-Hill and/or its licensors be liable for any indirect,
incidental, special, punitive, consequential or similar damages that result from the use of or inability to
use the work, even if any of them has been advised of the possibility of such damages. This limitation
of liability shall apply to any claim or cause whatsoever whether such claim or cause arises in contract,
tort or otherwise.
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xv
Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxi

1 Introduction and Definition of the Field . . . . . . . . 1

1.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Energy Efficiency . . . . . . . . . . . . . . . . . . . . . . . . 3
1.3 Screening and Scoping: Auditing,
Benchmarking, and Good Housekeeping . . . 5
1.4 Balancing and Flowsheeting Simulation
as a Basis for Optimization . . . . . . . . . . . . . . . . 7
1.5 Integrated Approach: Process Integration . . . 7
1.6 Optimal Process Synthesis and Combinatorial
Graphs . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.7 How to Apply the Process Integration and
Optimization Technology . . . . . . . . . . . . . . . . . 9

2 Process Integration . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.1 Introduction: The Need for Process
Integration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.2 What Is Process Integration? . . . . . . . . . . . . . . . 12
2.3 History and Development of Process Integration 12
2.4 Pinch Technology and Targeting Heat
Recovery: The Thermodynamic Roots . . . . . . 14
2.5 Supertargeting: Full-Fledged HEN Targeting 15
2.6 Modifying the Pinch Idea for HEN Retrofit . . 16
2.7 Mass Exchange and Water Networks . . . . . . . 17
2.8 Benefits of Process Integration . . . . . . . . . . . . . 18
2.9 The Role of PI in Making Industry Sustainable 20
2.10 Examples of Applied Process Integration . . . . 20
2.11 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22

3 Process Optimization . . . . . . . . . . . . . . . . . . . . . . . . . 23
3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
3.2 Model Building and Optimization: General
Framework and Workflow . . . . . . . . . . . . . . . . 24

vi Contents

3.3 Optimization: Definition and Mathematical

Formulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
3.3.1 What Is Optimization? . . . . . . . . . . . . . 25
3.3.2 Mathematical Formulation of
Optimization Problems . . . . . . . . . . . . 25
3.4 Main Classes of Optimization Problems . . . . 26
3.5 Conditions for Optimality . . . . . . . . . . . . . . . . 28
3.5.1 Conditions for Local Optimality . . . . 28
3.5.2 Conditions for Global Optimality . . . 28
3.6 Deterministic Algorithms for Solving Continuous
Linear Optimization Problems . . . . . . . . . . . . . 29
3.7 Deterministic Algorithms for Solving Continuous
Nonlinear Optimization Problems . . . . . . . . . . 29
3.7.1 Search Algorithms for Nonlinear
Unconstrained Problems . . . . . . . . . . . 30
3.7.2 Algorithms for Solving Constrained
Nonlinear Problems . . . . . . . . . . . . . . . 31
3.8 Deterministic Methods for Solving
Discrete Problems . . . . . . . . . . . . . . . . . . . . . . . . 31
3.9 Stochastic Search Methods for Solving
Optimization Problems . . . . . . . . . . . . . . . . . . . 32
3.10 Creating Models . . . . . . . . . . . . . . . . . . . . . . . . . 33
3.10.1 Conceptual Modeling . . . . . . . . . . . . . 34
3.10.2 Mathematical Modeling of Processes:
Constructing the Equations . . . . . . . . 35
3.10.3 Choosing an Objective Function . . . . 37
3.10.4 Handling Process Complexity . . . . . . 38
3.10.5 Applying Process Insight . . . . . . . . . . 40
3.10.6 Handling Model Nonlinearity . . . . . . 41
3.10.7 Evaluating Model Adequacy and
Precision . . . . . . . . . . . . . . . . . . . . . . . . . 43

4 Process Integration for Improving

Energy Efficiency . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
4.1 Introduction to Heat Exchange and
Heat Recovery . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
4.1.1 Heat Exchange Matches . . . . . . . . . . . . 46
4.1.2 Implementing Heat Exchange
Matches . . . . . . . . . . . . . . . . . . . . . . . . . 47
4.2 Basics of Process Integration . . . . . . . . . . . . . . . 47
4.2.1 Process Integration and Heat
Integration . . . . . . . . . . . . . . . . . . . . . . . 47

4.2.2 Hierarchy of Process Design . . . . . . . . 47

4.2.3 Performance Targets . . . . . . . . . . . . . . . 48
4.2.4 Heat Recovery Problem Identification 48
4.3 Basic Pinch Technology . . . . . . . . . . . . . . . . . . . 50
4.3.1 Setting Energy Targets . . . . . . . . . . . . . 51
4.3.2 The Heat Recovery Pinch . . . . . . . . . . 54
4.3.3 Numerical Targeting: The Problem
Table Algorithm . . . . . . . . . . . . . . . . . . 56
4.3.4 Threshold Problems . . . . . . . . . . . . . . . 60
4.3.5 Multiple Utilities Targeting . . . . . . . . . 61
4.4 Extended Pinch Technology . . . . . . . . . . . . . . . 69
4.4.1 Heat Transfer Area, Capital Cost, and
Total Cost Targeting . . . . . . . . . . . . . . . 69
4.4.2 Heat Integration of Energy-Intensive
Processes . . . . . . . . . . . . . . . . . . . . . . . . 71
4.4.3 Process Modification . . . . . . . . . . . . . . 80
4.5 HEN Synthesis . . . . . . . . . . . . . . . . . . . . . . . . . . 81
4.5.1 The Pinch Design Method . . . . . . . . . . 81
4.5.2 Superstructure Approach . . . . . . . . . . 93
4.5.3 A Hybrid Approach . . . . . . . . . . . . . . . 95
4.5.4 Key Features of the Resulting
Networks . . . . . . . . . . . . . . . . . . . . . . . . 96
4.6 Total Site Energy Integration . . . . . . . . . . . . . . 96
4.6.1 Total Site Data Extraction . . . . . . . . . . 97
4.6.2 Total Site Profiles . . . . . . . . . . . . . . . . . . 97
4.6.3 Heat Recovery via the Steam System . . 99
4.6.4 Power Cogeneration . . . . . . . . . . . . . . . 101
4.6.5 Advanced Total Site Optimization and
Analysis . . . . . . . . . . . . . . . . . . . . . . . . . 102
5 Mass Integration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
5.1 Water Integration . . . . . . . . . . . . . . . . . . . . . . . . 105
5.2 Minimizing Water Use and Maximizing
Water Reuse . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
5.2.1 Legislation . . . . . . . . . . . . . . . . . . . . . . . 106
5.2.2 Best Available Techniques . . . . . . . . . . 107
5.2.3 Water Footprint . . . . . . . . . . . . . . . . . . . 108
5.2.4 Minimizing Water Usage and
Wastewater . . . . . . . . . . . . . . . . . . . . . . 111
5.3 Introduction to Water Pinch Analysis . . . . . . . 113
5.4 Flow-Rate Targeting with the Material
Recovery Pinch Diagram . . . . . . . . . . . . . . . . . . 116

5.5 MRPD Applied to Fruit Juice Case Study . . . . 117

5.6 Water Minimization via Mathematical
Optimization . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
5.6.1 Introduction to Mathematical
Optimization . . . . . . . . . . . . . . . . . . . . . 118
5.6.2 Illustrative Example: A Brewery Plant 120
5.7 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122

6 Further Applications of Process Integration . . . . . 123

6.1 Design and Management of Hydrogen
Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
6.2 Oxygen Pinch Analysis . . . . . . . . . . . . . . . . . . . 125
6.3 Combined Analyses, I: Energy-Water,
Oxygen-Water, and Pinch-Emergy . . . . . . . . . 126
6.3.1 Simultaneous Minimization of
Energy and Water Use . . . . . . . . . . . . . 126
6.3.2 Oxygen-Water Pinch Analysis . . . . . . 128
6.3.3 Emergy-Pinch Analysis . . . . . . . . . . . . 130
6.4 Combined Analysis, II: Budget-Income-Time,
Materials Reuse-Recycling, Supply Chains,
and CO2 Emissions Targeting . . . . . . . . . . . . . . 131
6.4.1 Budget-Income-Time Pinch Analysis . 131
6.4.2 Materials Reuse-Recycle and Property
Pinch Analysis . . . . . . . . . . . . . . . . . . . 133
6.4.3 Pinch Analysis of Supply Chains . . . . 136
6.4.4 Using the Pinch to Target CO2
Emissions . . . . . . . . . . . . . . . . . . . . . . . . 138
6.4.5 Regional Resource Management . . . . 139
6.5 Heat-Integrated Power Systems:
Decarbonization and Low-Temperature Energy 142
6.5.1 Decarbonization . . . . . . . . . . . . . . . . . . 142
6.5.2 Low-Temperature Energy . . . . . . . . . . 143
6.6 Integrating Reliability, Availability, and
Maintainability into Process Design . . . . . . . . 144
6.6.1 Integration . . . . . . . . . . . . . . . . . . . . . . . 144
6.6.2 Optimization . . . . . . . . . . . . . . . . . . . . . 146
6.7 Pressure Drop and Heat Transfer Enhancement
in Process Integration . . . . . . . . . . . . . . . . . . . . 146
6.8 Locally Integrated Energy Sectors and
Extended Total Sites . . . . . . . . . . . . . . . . . . . . . . 148
6.9 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149

7 Process Optimization Frameworks . . . . . . . . . . . . . 151

7.1 Classic Approach: Mathematical Programming 151
7.2 Structural Process Optimization: P-Graphs . . 153
7.2.1 Process Representation via P-Graphs 154
7.2.2 The P-Graph's Significance for
Structural Optimization . . . . . . . . . . . 155
7.2.3 The P-Graph's Mathematical Engine:
MSG, SSG, and ABB . . . . . . . . . . . . . . . 157
7.3 Scheduling of Batch Processes: S-Graphs . . . . 159
7.3.1 Scheduling Frameworks: Suitability
and Limitations . . . . . . . . . . . . . . . . . . . 159
7.3.2 S-Graph Framework for Scheduling . . 161

8 Combined Process Integration and Optimization 165

8.1 The Role of Optimization in Process Synthesis 165
8.2 Optimization Tools for Efficient
Implementation of PI . . . . . . . . . . . . . . . . . . . . . 166
8.3 Optimal Process Synthesis . . . . . . . . . . . . . . . . 167
8.3.1 Reaction Network Synthesis . . . . . . . . 167
8.3.2 Optimal Synthesis of Heterogeneous
Flowsheets . . . . . . . . . . . . . . . . . . . . . . . 169
8.3.3 Synthesis of Green Biorefineries . . . . . 171
8.3.4 Azeotropic Distillation Systems . . . . . 173
8.4 Optimal Synthesis of Energy Systems . . . . . . 176
8.4.1 Simple Heat Integration . . . . . . . . . . . . 176
8.4.2 Optimal Retrofit Design . . . . . . . . . . . 177
8.5 Optimal Scheduling for Increased Throughput,
Profit, and Security . . . . . . . . . . . . . . . . . . . . . . . 179
8.5.1 Maximizing Throughput and Revenue 179
8.5.2 Heat-Integrated Production Schedules 180
8.6 Minimizing Emissions and Effluents . . . . . . . 183
8.7 Availability and Reliability . . . . . . . . . . . . . . . . 186
8.8 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 190

9 Software Tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191

9.1 Overview of Available Tools . . . . . . . . . . . . . . . 191
9.2 Graph-Based Process Optimization Tools . . . 191
9.2.1 PNS Solutions . . . . . . . . . . . . . . . . . . . . 191
9.2.2 S-Graph Studio . . . . . . . . . . . . . . . . . . . 193
9.3 Heat Integration Tools . . . . . . . . . . . . . . . . . . . . 195
9.3.1 SPRINT . . . . . . . . . . . . . . . . . . . . . . . . . . 195

9.3.2 HEAT-int . . . . . . . . . . . . . . . . . . . . . . . . 195

9.3.3 STAR . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195
9.3.4 SITE-int . . . . . . . . . . . . . . . . . . . . . . . . . . 198
9.3.5 WORK . . . . . . . . . . . . . . . . . . . . . . . . . . . 198
9.3.6 HEXTRAN . . . . . . . . . . . . . . . . . . . . . . . 199
9.3.7 SuperTarget . . . . . . . . . . . . . . . . . . . . . . 200
9.3.8 Spreadsheet-Based Tools . . . . . . . . . . . 200
9.4 Mass Integration Software: WATER . . . . . . . . 201
9.5 Flowsheeting Simulation Packages . . . . . . . . . 202
9.5.1 ASPEN . . . . . . . . . . . . . . . . . . . . . . . . . . 202
9.5.2 HYSYS and UniSim Design . . . . . . . . . 203
9.5.3 gPROMS . . . . . . . . . . . . . . . . . . . . . . . . . 204
9.5.4 CHEMCAD . . . . . . . . . . . . . . . . . . . . . . 205
9.5.5 PRO/II . . . . . . . . . . . . . . . . . . . . . . . . . . . 206
9.6 General-Purpose Optimization Packages . . . . 206
9.6.1 GAMS . . . . . . . . . . . . . . . . . . . . . . . . . . . 206
9.6.2 MIPSYN . . . . . . . . . . . . . . . . . . . . . . . . . 207
9.6.3 LINDO . . . . . . . . . . . . . . . . . . . . . . . . . . 208
9.6.4 Frontline Systems . . . . . . . . . . . . . . . . . 209
9.6.5 ILOG ODM . . . . . . . . . . . . . . . . . . . . . . 209
9.7 Mathematical Modeling Suites . . . . . . . . . . . . . 210
9.7.1 MATLAB . . . . . . . . . . . . . . . . . . . . . . . . 210
9.7.2 Alternatives to MATLAB . . . . . . . . . . . 211
9.8 Other Tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 211
9.8.1 Modelica . . . . . . . . . . . . . . . . . . . . . . . . . 211
9.8.2 Emerging Trends . . . . . . . . . . . . . . . . . . 212
9.8.3 Balancing and Flowsheeting Simulation
for Energy-Saving Analysis . . . . . . . . . 215
9.8.4 Integrating Renewable Energy into
Other Energy Systems . . . . . . . . . . . . . 216

10 Examples and Case Studies . . . . . . . . . . . . . . . . . . . . 219

10.1 Heat Pinch Technology . . . . . . . . . . . . . . . . . . . 219
10.1.1 Heat Pinch Technology: First Problem 219
10.1.2 Heat Pinch Technology:
Second Problem . . . . . . . . . . . . . . . . . . . 224
10.2 Total Sites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 226
10.2.1 Total Sites: First Problem . . . . . . . . . . . 226
10.2.2 Total Sites: Second Problem . . . . . . . . . 231
10.3 Integrated Placement of Processing Units
and Data Extraction . . . . . . . . . . . . . . . . . . . . . . 234

10.4 Utility Placement . . . . . . . . . . . . . . . . . . . . . . . . 238

10.4.1 Utility Placement: First Problem . . . . 238
10.4.2 Utility Placement: Second Problem . . 243
10.5 Water Pinch Technology . . . . . . . . . . . . . . . . . . 247
10.5.1 Water Pinch Technology: First Problem 247
10.5.2 Water Pinch Technology:
Second Problem . . . . . . . . . . . . . . . . . . 249

11 Industrial Applications and Case Studies . . . . . . . 253

11.1 Energy Recovery from an FCC Unit . . . . . . . . 253
11.2 De-bottlenecking a Heat-Integrated Crude-Oil
Distillation System . . . . . . . . . . . . . . . . . . . . . . . 256
11.3 Minimizing Water and Wastewater in a
Citrus Juice Plant . . . . . . . . . . . . . . . . . . . . . . . . 262
11.4 Efficient Energy Use in Other Food and
Drink Industries . . . . . . . . . . . . . . . . . . . . . . . . . 268
11.5 Synthesis of Industrial Utility Systems . . . . . . 271
11.6 Heat and Power Integration in Buildings and
Building Complexes . . . . . . . . . . . . . . . . . . . . . . 275
11.7 Optimal Design of a Supply Chain . . . . . . . . . 277
11.8 Scheduling a Large-Scale Paint Production
System . . . . . . . . . . . . . . . . . . . . . . . . . . . . 279

12 Typical Pitfalls and How to Avoid Them . . . . . . . . 281

12.1 Data Extraction . . . . . . . . . . . . . . . . . . . . . . . . . . 283
12.1.1 When Is a Stream a Stream? . . . . . . . . 284
12.1.2 How Precise Must the Data Be at
Each Step? . . . . . . . . . . . . . . . . . . . . . . . 285
12.1.3 How Can Considerable Changes in
Specific Heat Capacities Be Handled? 286
12.1.4 What Rules and Guidelines Must Be
Followed to Extract Data Properly? . . 287
12.1.5 How Can the Heat Loads, Heat
Capacities, and Temperatures of an
Extracted Stream Be Calculated? . . . . 289
12.1.6 How Soft Are the Data in a Plant or
Process Flowsheet? . . . . . . . . . . . . . . . . 290
12.1.7 How Can Capital Costs and Operating
Costs Be Estimated? . . . . . . . . . . . . . . . 290
12.2 Integration of Renewables: Fluctuating
Demand and Supply . . . . . . . . . . . . . . . . . . . . . 292
12.3 Steady-State and Dynamic Performance . . . . . 292

12.4 Interpreting Results . . . . . . . . . . . . . . . . . . . . . . 293

12.5 Making It Happen . . . . . . . . . . . . . . . . . . . . . . . 293

13 Information Sources and Further Reading . . . . . . 295

13.1 General Sources of Information . . . . . . . . . . . . 295
13.1.1 Conferences . . . . . . . . . . . . . . . . . . . . . . 295
13.1.2 Journals . . . . . . . . . . . . . . . . . . . . . . . . . 297
13.1.3 Service Providers . . . . . . . . . . . . . . . . . 297
13.1.4 Projects . . . . . . . . . . . . . . . . . . . . . . . . . . 301
13.2 Heat Integration . . . . . . . . . . . . . . . . . . . . . . . . . 301
13.2.1 Conferences . . . . . . . . . . . . . . . . . . . . . . 301
13.2.2 Journals . . . . . . . . . . . . . . . . . . . . . . . . . 301
13.2.3 Service Providers . . . . . . . . . . . . . . . . . 301
13.2.4 Projects . . . . . . . . . . . . . . . . . . . . . . . . . . 303
13.3 Mass Integration . . . . . . . . . . . . . . . . . . . . . . . . . 304
13.3.1 Conference . . . . . . . . . . . . . . . . . . . . . . . 304
13.3.2 Journals . . . . . . . . . . . . . . . . . . . . . . . . . 304
13.3.3 Service Providers . . . . . . . . . . . . . . . . . 305
13.3.4 Projects . . . . . . . . . . . . . . . . . . . . . . . . . . 306
13.4 Combined Analysis . . . . . . . . . . . . . . . . . . . . . . 306
13.4.1 Conferences . . . . . . . . . . . . . . . . . . . . . . 306
13.4.2 Journals . . . . . . . . . . . . . . . . . . . . . . . . . 307
13.4.3 Service Providers . . . . . . . . . . . . . . . . . 307
13.4.4 Projects . . . . . . . . . . . . . . . . . . . . . . . . . . 308
13.5 Optimization for Sustainable Industry . . . . . . 309
13.5.1 Conferences . . . . . . . . . . . . . . . . . . . . . . 309
13.5.2 Journals . . . . . . . . . . . . . . . . . . . . . . . . . 310
13.5.3 Service Providers . . . . . . . . . . . . . . . . . 310
13.5.4 Projects . . . . . . . . . . . . . . . . . . . . . . . . . . 311
14 Conclusions and Further Information . . . . . . . . . . 313
14.1 Further Reading . . . . . . . . . . . . . . . . . . . . . . . . . 313
14.1.1 Books and Key Articles . . . . . . . . . . . . 313
14.1.2 Lecture Notes and Online Teaching
Resources . . . . . . . . . . . . . . . . . . . . . . . . 315
14.2 Development Trends . . . . . . . . . . . . . . . . . . . . . 316
14.2.1 Top-Level Analysis . . . . . . . . . . . . . . . . 316
14.2.2 Maintenance Scheduling,
Maintainability, and Reliability . . . . . 316
14.2.3 Hybrid Energy Conversion Systems . . . 317
14.2.4 Integration of Renewables and Waste 317

14.2.5 Better Utilization of Low-Grade Heat 319

14.2.6 Energy Planning That Accounts for
Carbon Footprint . . . . . . . . . . . . . . . . . 319
14.3 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 320

Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 321

Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 345

This book describes and analyzes an amalgamation of two
effective ways to considerably improve the efficiency and
sustainability of processing industries: Process Integration
and optimization. It is the result of collaborative efforts by two
groups: researchers at the Centre for Process Integration, the
University of Manchester, UK (a renowned center of excellence in
this field), and the Faculty of Information Technology at the
University of Pannonia, Hungary, as represented by the Centre for
Process Integration and Intensification (CPI2) and the Centre for
Advanced Process Optimization. The University of Pannonia centers
are highly regarded for their achievements in optimization, and the
P-graph and S-graph approaches to process optimization originated
at this university.
The book should provide support for graduate and postgraduate
students worldwide as well as for Continuing Professional
Development (CPD) courses and for practitioners from various fields
of the processing industry. Its chapters analyze a number of problems
of practical significance and also suggest various options for solving
them to the benefit of modern society. The book provides a wealth of
material for postgraduate teaching and further professional training.
It is supported by the expertise stemming from the authors' work as
well as by a pool of case studies of varying complexity that have been
collected over the years. This wide-ranging material presented here
has been selected and refined over years of postgraduate teaching,
further CPD courses, and training for the industry. It includes eight
industry-based demonstration case studies and nine testing examples
with the solutions developed. An unbiased evaluation and overview
of the software tools available for learning, teaching, and industrial
applications are also included. Text discussions are complemented
by many figures to clarify details and enhance understanding. The
book contains 14 chapters and a comprehensive bibliography.
Chapter 1 is devoted to introducing and defining the field, and it
also includes a basic assessment of energy efficiency. It starts with
screening and scoping, which include auditing, benchmarking, and
recommendations for good housekeeping. Next it describes an

xvi Preface

important tool: balancing and flowsheeting simulation as a basis for

optimization. It is on this basis that an integrated approach to
optimization, Process Integration, is introduced. This approach is then
connected to optimal process synthesis and combinatorial graphs.
The important question of how to apply the Process Integration and
Optimization Technology arises and is dealt with. This is further
tackled and analyzed in the following chapters.
Chapter 2 deals with the basic outline and definitions of Process
Integration (PI). It begins with a historical and methodological
introduction, briefly reviewing refinements and extensions of PI over
its years of development. The thermodynamic roots of the Pinch
Technology and of targeting heat recovery are introduced, followed
by one of the key graphical constructions in the PI methodology:
Composite Curves (CCs) for targeting process heat recovery.
Supertargeting, or targeting for a full-fledged Heat Exchanger
Network (HEN), is the next logical step. These tools are used to assess
modifications of the Pinch idea for HEN retrofitting. Although PI
was initially based on Heat Integration, an important spin-off was
the development of integration for mass exchange and water
networks. The chapter concludes with remarks on the role of PI in
making industry sustainable.
Chapter 3 introduces the other key methodology used for
sustainable process design and synthesis: optimization. It presents a
general framework and workflow of model building and follow-up
optimization, including models that incorporate "black boxes" or
"gray boxes." It is critical to understand both the meaning and the
mathematical formulation of optimization. Toward this end, the
following questions are answered: What is optimization? What are
the main classes of optimization problems? How are optimization
problems formulated mathematically? What are the conditions for
local or global optimality? This is followed by introducing
deterministic algorithms for solving continuous linear and nonlinear,
constrained and unconstrained, optimization problems. The most
popular optimization methods and algorithms that employ stochastic
search are also reviewed. The middle part of Chapter 3 is devoted to
model creation. It includes a detailed description of conceptual
modeling: extracting data about the operating units, identifying
network and topology data, constructing equations to represent the
processes, and finally choosing the right objective function. The last
part of this chapter discusses how to handle complexity and
nonlinearity as well as how to apply process insight when evaluating
model adequacy and precision.
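The question "What is optimization?" admits a compact mathematical answer. As a reference point (written here in standard notation rather than in the book's own symbols), a general optimization problem can be stated as:

```latex
\begin{aligned}
\min_{x,\,y}\quad & f(x, y) \\
\text{s.t.}\quad  & h(x, y) = 0, \\
                  & g(x, y) \le 0, \\
                  & x \in \mathbb{R}^{n},\; y \in \{0, 1\}^{m}
\end{aligned}
```

With $y$ absent and $f$, $g$, $h$ all linear this is a linear program (LP); nonlinear $f$ or $g$ gives a nonlinear program (NLP); and including the binary variables $y$ yields MILP or MINLP formulations, which correspond to the main problem classes surveyed in Section 3.4.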
Chapter 4 covers the core topic from which the development of PI
began: improving the energy efficiency of individual processes and
the PI extension into Total Sites. The chapter starts by introducing
heat exchange, heat recovery, and heat exchange matches so that
readers will be prepared for the remaining chapter content. The
strategic view of Process Integration is outlined; this includes an
overview of the hierarchy of process design, the meaning of
performance targets, and the practical issue of identifying heat
recovery problems from process flowsheets. These three topics are
closely interrelated. Without properly applying the process design
hierarchy, performance targets cannot be put to practical use.
Conversely, the process design hierarchy would be difficult to
apply without employing performance targets and estimating upper
bounds on process performance or lower bounds on total cost.
Meaningful and practically useful heat-integrated designs cannot be
obtained without appropriate identification of the heat recovery
problem, a process referred to as data extraction. The chapter
proceeds to describe the use of Composite Curves to set heat recovery
targets, the Problem Table Algorithm for numerical targeting, and
the Heat Recovery Pinch. These tools and concepts form the basis of
Pinch Technology, defining the thermodynamic capabilities of the
heat recovery problems. The more advanced aspects are discussed
next; these include threshold problems, targeting multiple utilities
via the Grand Composite Curve, and establishing targets for heat
transfer area, capital cost, and total cost. There is a short overview of
options for modifying the core process (which defines the heat
recovery problem) that highlights the usefulness of Pinch Technology
and targeting for improving its energy efficiency. The chapter then
focuses on the synthesis of Heat Exchanger Networks; the approach
mainly follows the Pinch Design Method, but there is discussion of
the superstructure-based and hybrid methods used for HEN
synthesis. The next step is Total Site Integration, which provides the
necessary knowledge for energy recovery over complete industrial
complexes and sites.
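As a concrete illustration of the numerical targeting described above, the Problem Table Algorithm can be sketched in a few dozen lines. The following Python function is an illustrative implementation, not the book's own code; the stream data in the usage note are the classic four-stream example from the Pinch literature (CP values in MW/°C, ΔTmin = 10°C).

```python
def problem_table(streams, dt_min):
    """Problem Table Algorithm: minimum hot/cold utility and pinch.

    streams: list of (supply_T, target_T, CP); a stream is hot if
    supply_T > target_T, cold otherwise.
    Returns (Q_hot_min, Q_cold_min, pinch_shifted_T).
    """
    # Shift hot streams down and cold streams up by dt_min/2
    shifted = []
    for ts, tt, cp in streams:
        if ts > tt:
            shifted.append((ts - dt_min / 2, tt - dt_min / 2, cp, "hot"))
        else:
            shifted.append((ts + dt_min / 2, tt + dt_min / 2, cp, "cold"))

    # Temperature interval boundaries, highest first
    temps = sorted({t for s in shifted for t in s[:2]}, reverse=True)

    # Net heat surplus of each interval: (sum CP_hot - sum CP_cold) * dT
    net = []
    for hi, lo in zip(temps, temps[1:]):
        cp_hot = sum(cp for a, b, cp, kind in shifted
                     if kind == "hot" and a >= hi and b <= lo)
        cp_cold = sum(cp for a, b, cp, kind in shifted
                      if kind == "cold" and b >= hi and a <= lo)
        net.append((cp_hot - cp_cold) * (hi - lo))

    # Cascade the surpluses from the top; the largest deficit sets Q_hot_min
    cascade = [0.0]
    for q in net:
        cascade.append(cascade[-1] + q)
    q_hot_min = max(0.0, -min(cascade))

    # Feasible cascade: the zero point marks the Heat Recovery Pinch
    feasible = [q_hot_min + c for c in cascade]
    q_cold_min = feasible[-1]
    pinch = temps[min(range(len(feasible)), key=lambda i: feasible[i])]
    return q_hot_min, q_cold_min, pinch
```

For the classic example with streams (250→40, CP 0.15), (200→80, 0.25), (20→180, 0.20), (140→230, 0.30) and ΔTmin = 10°C, this yields QH,min = 7.5 MW, QC,min = 10 MW, and a pinch at the shifted temperature of 145°C (hot 150°C / cold 140°C).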
Chapter 5 deals with an extension of PI known as Mass Integration,
the most widely used instance of which is Water Integration (WI).
The chapter begins with a description of the methodology and bases
for minimizing water use and maximizing water reuse, and the
importance of legislatively imposed constraints is discussed. Best
available techniques are analyzed and recommended for usage, and
the concept of a water footprint is described. At this point, the stage
is set for the main task: minimizing freshwater usage and wastewater
effluents. For this, the methodology of Water Pinch Analysis is
introduced. Also described is a related Mass Integration and
targeting technique, the material recovery Pinch diagram. The
chapter concludes with water minimization using the mathematical
optimization approach. Both the WI and the mathematical
approaches to optimization are illustrated with case studies.
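For fixed stream concentrations, the mathematical-optimization route to water minimization typically reduces to a linear program: allocate the available water sources to the demands so that freshwater use is minimized, subject to flow and contaminant-load constraints. The sketch below uses made-up source/demand data (not taken from the book's case studies) and SciPy's `linprog` solver:

```python
from scipy.optimize import linprog

# Hypothetical data: two reusable sources, two demands (flows in t/h,
# contaminant concentrations in ppm). Freshwater is assumed contaminant-free.
# Sources: S1 = 60 t/h at 200 ppm, S2 = 50 t/h at 50 ppm
# Demands: D1 = 50 t/h at <= 20 ppm, D2 = 100 t/h at <= 50 ppm

# Variables: x = [FW->D1, FW->D2, S1->D1, S1->D2, S2->D1, S2->D2]
c = [1, 1, 0, 0, 0, 0]  # minimize total freshwater intake

# Each demand must receive exactly its required flow
A_eq = [[1, 0, 1, 0, 1, 0],
        [0, 1, 0, 1, 0, 1]]
b_eq = [50.0, 100.0]

# Contaminant load into each demand bounded by Cmax * flow;
# each reusable source limited to its available flow
A_ub = [[0, 0, 200, 0, 50, 0],   # load into D1 <= 20 ppm * 50 t/h
        [0, 0, 0, 200, 0, 50],   # load into D2 <= 50 ppm * 100 t/h
        [0, 0, 1, 1, 0, 0],      # S1 availability
        [0, 0, 0, 0, 1, 1]]      # S2 availability
b_ub = [20.0 * 50.0, 50.0 * 100.0, 60.0, 50.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              method="highs")  # variables default to >= 0
print(f"Minimum freshwater: {res.fun:.1f} t/h")  # 82.5 t/h for these data
```

The same structure scales to many sources, demands, and contaminants; when mixing concentrations themselves become decision variables, the balances turn bilinear and the problem becomes an NLP, as discussed in Section 5.6.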
Chapter 6 addresses further PI opportunities that have arisen as
the methodology was developed. These include: Hydrogen Networks
Design and Management; Oxygen Pinch Analysis; combined analyses
(energy-water, oxygen-water, Pinch-emergy, budget-income-time,
materials reuse-recycling); supply chains; CO2 emissions targeting;
regional resource management; heat-integrated power systems with
decarbonization and low-temperature energy systems; and the
integration of reliability, availability, and maintainability with
process design. An example is given of a real-life PI problem involving
pressure-drop considerations during heat transfer enhancement.
Several recent applications are mentioned, including a Locally
Integrated Energy Sector and extended Total Sites with multiple
energy carriers.
Chapter 7 presents an overview of process optimization from the
perspectives of Mathematical Programming (MPR) and the P-graph.
The main features of these frameworks are analyzed, and it is shown
that the P-graph is better suited than MPR for solving combinatorial
optimization problems and, in particular, problems involving the
synthesis of process networks. Optimization of process scheduling is
the next topic. The most popular models and representations of
process schedules are analyzed, and an efficient tool for obtaining
them is introduced: the S-graph.
Chapter 8 presents an integrated view of PI and optimization. It
discusses how to efficiently apply them jointly in process synthesis
and how to combine them. The chapter presents a number of examples
of the P-graph and S-graph frameworks applied to combinations of
PI and optimization. These applications are grouped thematically:
(1) optimal process synthesis, including examples on reaction networks,
green biorefineries, and azeotropic distillation; (2) synthesis of
general energy systems involving Heat Integration and optimal
retrofit; (3) optimal scheduling for maximizing throughput and
revenue; (4) minimizing emissions via optimal synthesis of advanced
energy conversion systems using Fuel-Cell Combined Cycles; and
(5) availability and reliability features.
Chapter 9 reviews the software tools for process modeling,
integration, and optimization. The engineering field of sustainable
design is complex in terms of scales and relationships, which makes
information technology and computer software essential for solving
problems, preferably with a user-friendly interface. The chapter
reviews a wide spectrum of tools, as follows: (1) graph-based process
optimization (process network synthesis solutions implementing
P-graphs and the S-Graph Studio software); (2) Energy and Mass
Integration tools designed to optimize the implementation of Heat
Integration (SPRINT, HEAT-int, HEXTRAN, SuperTarget, spreadsheet-
based tools), Total Site Integration (STAR, SITE-int), power generation
and combined heat and power (STAR, WORK), and water systems
integration (WATER); (3) process flowsheeting and simulation
packages developed or supported by the major players (Aspen Plus,
HYSYS and UniSim, gPROMS, CHEMCAD, PRO/II); (4) general-
purpose optimization systems (GAMS, MIPSYN); (5) computer
algebra systems; and (6) other tools.
Chapter 10 is a collection of examples and case studies that
support the material presented in previous chapters. This selection
of problems was collected by the authors over decades of teaching
and consulting. Most of the problems have step-by-step solutions
that feature comments and guidance for mastering the methodology.
The examples are organized in several thematic groups: basic Heat
Integration, Total Sites, integrated placement of energy-intensive
processes, placing utilities, and Water Pinch Technology.
Chapter 11 contains advanced examples that are based on
industrial case studies, most of which were performed and published
by the authors. The case studies presented include: (1) retrofit of a
fluid catalytic cracking unit process that featured a large-scale heat
recovery network; (2) de-bottlenecking of a heat-integrated crude-oil
distillation process; (3) water system optimization of a citrus juice
plant; (4) efficient water use in other food industries; (5) synthesis of
industrial utility systems; (6) heat and power integration in buildings
and building complexes; (7) optimal supply chain design; and
(8) scheduling a large-scale paint production system.
Chapter 12 distills some important and rare expertise: typical
pitfalls and how to avoid them. It addresses possible (and probable)
difficulties encountered during problem formulation and data
extraction. The chapter raises seemingly trivial yet fundamental
questions: When is a stream a stream? How precise must the data be
at each step? How can extreme changes in specific heat capacity be
handled? What rules and guidelines must be followed to properly
extract data? How can the heat loads, heat capacities, and temperatures
of an extracted stream be calculated? How soft are the data in a
plant or process flowsheet? How can capital costs and operating costs
be estimated? The provided answers are based on long-term
experience and could help readers to solve the right problem, that
is, a problem that accurately reflects the reality of the process
under consideration. The chapter also addresses the integration of
renewables that exhibit fluctuating demand and supply rates as well
as steady-state and dynamic performance considerations. The chapter
concludes with recommendations for interpreting results and
suggestions on the successful advocacy and implementation of
sustainable and optimal design.
Chapter 13 consists solely of information on sources for further
reading and information gathering. The various sources of
information about optimization and integration in the process
industry are arranged as follows: (1) general sources of information,
(2) Heat Integration, (3) Mass Integration, (4) combined analysis, and
(5) optimization for sustainable industry. Each topic is subdivided
into sections on conferences, journals, service providers, and other
information sources.
Chapter 14 presents conclusions and sources of further
information. Suggestions for further reading point to books and
lecture notes containing applications and details that could not be
adequately described in a book of this size. The book ends with a
comprehensive bibliography that provides details of works cited in
the text and serves also as a source of information and directions for
further study.
This book is intended to increase awareness of the principal
methodologies that could contribute to improving energy and water
efficiency in processing plants while reducing their environmental
impact. Many additional illustrative examples could well have been
discussed, but space limitations required that the authors select the
most important features from a variety of fields and processing
industries; even so, hundreds of references are included for seekers
of more information. If the authors have managed to raise the interest
and awareness of readers, then this book has fulfilled its intentions.

Jiří Klemeš
Ferenc Friedler
Igor Bulatov
Petar Varbanov

Acknowledgments
We acknowledge the support from the European
Community-funded project entitled "Integrated Waste to
Energy Management to Prevent Global Warming," or
INEMAGLOW. We would like to thank Prof. Robin Smith and
Mr. Simon Perry from the University of Manchester for numerous
discussions about the finer points of PI and optimization
methodology. We also acknowledge with much gratitude the input
of all collaborators who contributed to this book. Their dedication,
timely responses, and willingness to accept editorial comments and
suggestions are greatly appreciated. We received invaluable help
from the staff of the Faculty of Information Technology at the
University of Pannonia in Hungary: Dr. Rozália Pigler-Lakner,
Dr. István Heckl, Dr. Botond Bertók, Dr. Zoltán Süle, Mr. Máté
Hegyháti, and Ms. Adrienn Sas. Substantial contributions were also
made by Hon Loong Lam and Zsófia Fodor, Ph.D. students
at CPI2.
Special thanks go to colleagues and close collaborators
worldwide who shared their latest methodologies and case studies.
It would be impossible to list all those involved, but we should like
to mention a few: Prof. L. T. Fan, Department of Chemical
Engineering, Kansas State University, Manhattan, Kansas, USA; Dr.
Dominic C. Y. Foo, Department of Chemical and Environmental
Engineering, University of Nottingham, Malaysia Campus; Prof.
Zdravko Kravanja, Faculty of Chemistry and Chemical Engineering,
University of Maribor, Slovenia; Prof. Valentin Plesu, Centre for
Technology Transfer for the Process Industries, Department of
Chemical Engineering, University POLITEHNICA, Bucharest; and
Prof. Petr Stehlík, Brno University of Technology, Institute of
Process and Environmental Engineering, UPEI VUT Brno, the
Czech Republic.
The authors appreciate the editing and production efforts of the
following people: Taisuke Soda, Michael Penn, Stephen Smith,
Richard Ruzycka, and Michael Mulcahy of McGraw-Hill, and
Aloysius Raj and the staff at Newgen.
It has been a great pleasure for us to work with all of you.

Introduction and Definition of the Field

1.1 Introduction
In recent years there has been increased interest in the development
of renewable, non-carbon-based energy sources in order to combat
the increasing threat of carbon dioxide (CO2) emissions and subsequent
climatic change. More recently, the fluctuations and often large
increases in the prices of oil and gas have further increased interest
in employing alternative, lower-carbon or non-carbon-based energy
sources. These cost and environmental concerns have led to increases
in the industrial sector's efficiency of energy use, although the use of
renewable energy sources in major industry has been sporadic at
best. In contrast, domestic energy supply has moved more positively
toward the integration of renewable energy sources; this movement
includes solar heating, heat pumps, and wind turbines. However,
there have been only limited and ad hoc attempts to design a
combined energy system that includes both industrial and residential
buildings, and few systematic design techniques have been marshaled
toward the end of producing a symbiotic system.
This book provides an overview of the Process Integration and
optimization methodologies and their application to improving the
energy efficiency of not only industrial but also nonindustrial energy
users. An additional aim is to evaluate how these methodologies can
be adapted to include the integration of waste and renewable energy
sources.

Industrial production requires a considerable and continuous
supply of energy delivered from natural resources, principally in
the form of fossil fuels such as coal, oil, and natural gas. The increase
in our planet's human population and its growing nutritional
demands have resulted in annual increases in energy consumption.
Furthermore, many nations have accelerated their development in
the last 10 years, and countries (such as China and India) with large
populations have seen significant increases in energy demands.
This growing energy consumption has also resulted in unsteady
climatic and environmental conditions in many areas because of
increased emissions of CO2, NOx, SOx, dust, black carbon, and
combustion process waste (Klemeš et al., 2005a; Klemeš, Bulatov,
and Cockerill, 2007). It has become increasingly important to ensure
that the production and processing industries take advantage of
recent developments in energy efficiency and in the use of
nontraditional energy sources (Houdková et al., 2008; Lam,
Varbanov, and Klemeš, 2010). The additional cost is related to the
amount of emitted CO2 and often takes the form of a centrally
imposed tax. A workable solution to this problem would be to
reduce emissions and effluents by optimizing energy consumption,
increasing the efficiency of materials processing, and also increasing
the efficiency of energy conversion and consumption (Klemeš et al.,
Although major industry requires large supplies of energy to
meet production, it is not the only sector of the world economy that is
increasing its demands for energy. The particular characteristics of
the other sectors (e.g., transport, residential) make optimizing for
energy efficiency and cost reduction more difficult than in traditional
processing industries, such as oil refining, where continuous mass
production concentrated in a few locations offers an obvious potential
for large energy savings (Al-Riyami, Klemeš, and Perry, 2001). In
contrast, for example, agricultural production and food processing
are distributed over large areas, and these activities are not continuous
but rather structured in seasonal campaigns. Hence, energy demands
in this sector are related to specific and limited time periods, so the
design of efficient energy systems to meet this demand is more
problematic than in traditional, steady-state industries.
This chapter proceeds by first outlining the field of energy
efficiency, including its scope, actors, and main features. The next
step is to describe energy-saving techniques generally and then to
specify an integrated approach: Heat Integration. An increasingly
prominent issue is assessing and minimizing emissions and the
carbon footprint. The carbon footprint (CFP) is defined by the U.K.
Parliamentary Office for Science and Technology as "the total amount
of CO2 and the other greenhouse gases emitted over the full life cycle
of a process or product" (POST, 2006). There have been numerous
studies (see, e.g., Albrecht, 2007; Fiaschi and Carta, 2007) that
emphasize the carbon neutrality of renewable sources of energy.
However, even renewable energy sources make some contribution to
the overall carbon footprint, and assessment studies frequently do
not account for this. The carbon footprint should also be incorporated
into any product's life-cycle assessment (LCA); see, for example,
Masruroh, Li, and Klemeš (2006).
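The POST definition can be turned into simple arithmetic: each greenhouse gas emitted over the full life cycle is weighted by its global warming potential (GWP) and summed into a CO2-equivalent total. The sketch below assumes commonly cited 100-year GWP factors and a purely hypothetical stage inventory; it illustrates only the aggregation step, not a complete LCA.

```python
# Illustrative carbon-footprint aggregation following the POST definition:
# life-cycle greenhouse gas emissions are converted to a CO2-equivalent
# total using 100-year global warming potentials (GWP). The GWP figures are
# commonly cited values; the stage inventory is purely hypothetical.

GWP_100 = {"CO2": 1.0, "CH4": 25.0, "N2O": 298.0}  # kg CO2e per kg of gas

def carbon_footprint(stages):
    """Sum greenhouse gas masses (kg) over all life-cycle stages into kg CO2e."""
    return sum(mass_kg * GWP_100[gas]
               for emissions in stages.values()
               for gas, mass_kg in emissions.items())

life_cycle = {  # hypothetical inventory, kg of gas per functional unit
    "raw materials": {"CO2": 120.0, "CH4": 0.4},
    "processing":    {"CO2": 310.0, "N2O": 0.05},
    "distribution":  {"CO2": 45.0},
}

print(carbon_footprint(life_cycle))  # ~ 499.9 kg CO2e
```

Renewable routes typically shrink the per-stage entries rather than eliminate them, which is why, as noted above, even renewable energy sources contribute a nonzero total.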

1.2 Energy Efficiency

The task of saving energy, especially at a time of rising energy
costs, demand, and carbon emissions, must be taken seriously by
all communities and industries. Society is driven by the economics
of individual situations, and no section of society worldwide can
be expected to save energy at any cost. Thus, energy-saving
measures must be considered within the context of such issues
as environmental factors, legislatively imposed constraints, and
pressure from conscientious consumers. The simplest and most
obvious technique involves energy auditing and applying good
housekeeping measures. In many cases even these simple measures
are not fully understood or completed in sufficient detail. To
undertake a worthwhile energy audit, correct measurements are
necessary. Also, because in many cases energy demand is not
constant and instead fluctuates considerably, the monitoring of
energy consumption has to be performed over specific (or extended)
periods of time. Recommended monitoring techniques are described
by various sources: utility companies, such as SEMPRA ENERGY
(2009); governmental agencies, such as the U.S. Department of
Housing and Urban Development (2009); and international groups,
such as the International Energy Agency (Mandil, 2005).
Improvements in energy efficiency must often be achieved by
more complex means, such as those associated with improved
design and operation. It is of paramount importance that all
energy-related processes operate with maximum efficiency and
minimum energy input. These systems should also ensure that
they are fueled as much as possible by low-value inputs or recycled
wastes, such as process outputs, for example, off-gases and
hot-water waste (AEA Technology, 2000). To ensure that systems are
designed to be as efficient as best practice allows, optimization
methods are frequently employed for grassroots design, retrofit,
control, and intelligent support systems for processes, plants, and
buildings. One technology that has a strong reputation for
improving energy efficiency through better design is Pinch
Technology (Linnhoff and Vredeveld, 1984), which has been in use
for more than 20 years. This technology, through feedback from
practical applications and industry professionals, has been
continuously developed and expanded (Klemeš et al., 1997; Smith,
2005; Klemeš, Smith, and Kim, 2008). Details on the successful
applications of Pinch Technology in various industrial sectors are
described in Chapter 11.
The sustainability of energy systems can also be considerably
improved by making use of renewable energy sources (e.g., biofuels,
wind, water power), which significantly reduce the generation of
greenhouse gases. Implementing Combined Heat and Power (CHP)
systems (AEA Technology, 2000), rather than a separate power system
and heat system, can also substantially improve the efficiency of
energy supply. In addition, the overall situation can be improved by
certain fast-advancing technologies: heat pumps, compact heat
exchangers, fuel cells (FCs), and intensified technologies. Some of
these approaches are not yet fully commercialized but are gradually
becoming available. Some examples discussed in AEA Technology
(2000) are as follows:
Advanced gas turbines for both utility and industrial
applications, including cogeneration (CHP).
Fuel cells are electrochemical devices that may be fueled by
hydrogen, methane, or other organic fuels. High-temperature
FCs (MCFC and SOFC) can also use cleaned and conditioned
synthesis gas directly. These systems produce high-grade
heat (above 500°C) in addition to electrical power, and they
are well suited to cogeneration. It is estimated that FCs
typically emit 25 percent less CO2 than a gas turbine. Yet
further advances are required before their full application
becomes economically practical. One option is to integrate
CHP and FCs (Varbanov et al., 2006).
Dividing-wall distillation technology (Triantafyllou and Smith,
1992; Hernández and Jiménez, 1999). This technology
involves the separation of three components (or groups of
components) in a mixture. In the past this would have
required two distillation columns, with heating and cooling
provided for each column. The dividing-wall technology
combines the separation process into a single vessel to yield
energy savings of about 30 percent and capital savings of
about 25 percent (MW Kellogg, 1998).
Compact heat exchangers are generally made of thin metallic
plates rather than tubes. The plates form complex and small
flow passages that result in a large surface area for heat
transfer per unit volume. Multistream versions of these
exchangers can incorporate 12 or more streams. Compact
heat exchangers can yield energy savings and also reduce the
costs of capital and installation. In a case study at a U.K.
refinery, potential capital savings ranged from 69 to 84 percent
(EEO, 1993).
Cogeneration is being increasingly applied in most sectors. For
example, many oil refineries satisfy a large portion of their power
demands by on-site generation, with the balance being supplied by
externally purchased electricity. Usually all or almost all heating
needs are met by on-site generation of heat carriers (hot oil, steam,
flue gases). CHP generation and even tri-generation (simultaneous
production of heat, power, and cooling) offer an opportunity to
reduce greenhouse gas emissions from the combined power grid and
refinery system by utilizing fuel heat content more completely than
do most existing power generation technologies. The improved
utilization rate is achieved by recovering the heat left in the exhaust
streams of the various power generating facilities (gas turbines,

1.3 Screening and Scoping: Auditing, Benchmarking, and Good Housekeeping

Over the years, screening and scoping tools have had a considerable
effect on reducing the costs of energy and of treating effluents,
thereby improving plant profit margins. For example, energy audits
performed on various food and drink processes have resulted in cost
savings of 15 to 30 percent and in attractive returns on investment
(NRCan, 2007; U.S. DOE, 2007). Because profit margins are generally
small in this sector, efficient management of energy is crucial for
increasing profits while simultaneously reducing the production
plants' environmental impacts.
The Carbon Trust (2009) has suggested the following steps for
reducing energy consumption and thus improving energy efficiency.
An analogous approach can be used for optimizing the use of water
and wastewater:

Good housekeeping: (1) improving staff attitude and awareness;

(2) locating heat leakages; (3) preventive maintenance;
(4) insulation; (5) justifying use of heating, cooling, and
lighting; (6) prevention or reduction of fouling; and
(7) monitoring and control.
Energy audits: (1) examining records of energy cost and
consumption; (2) producing an energy balance sheet;
(3) providing high-quality data on energy consumption and
costs; (4) collecting and processing data regularly
(recommended for the analysis and review of energy
information); and (5) establishing a benchmark of energy
consumption based on other organizations or accepted standards.
Energy Efficiency Environmental Management System, ISO
14001: (1) the management system is a network of interrelated
elements; (2) these elements include responsibilities,
authorities, relationships, functions, processes, procedures,
practices, and resources; (3) the management system
establishes policies and objectives and also develops ways of
applying the policies to achieve the objectives.
Responsible use of energy: in (1) energy procurement,
(2) metering and billing, (3) performance measurement,
(4) policy development, (5) assigning energy management
responsibility, (6) energy surveying and auditing, (7) training
and education, and (8) capital project management.

The steps listed here have been found by the U.K. Energy Efficiency
Best Practice Programme (EEBPp, 2002) to be central to any resource
and waste management program. Energy screening and scoping
audits fall into three main categories whose use (either individually
or in combination) is based on the required depth of the study, the
process to be analyzed, and the plant size: (1) the walk-through audit,
which provides a quick snapshot of certain opportunities; (2) the
detailed audit, which conducts an in-depth analysis of specific
components; and (3) the Process Integration audit, which analyzes the
plant as a whole and takes a systematic look at all processing steps and
their interconnections.
Audits may be performed by plant personnel, by external experts,
or by teams with members from both groups. Although there are
many possible options and levels of detail, typical activities include
the following (NRCan, 2007):

1. Determination of the production base case and the
reference period.
2. Collection of total energy consumption and cost (information
usually available from fuel records, electricity invoices, etc.).
3. Development of a process flow chart that shows materials,
energy inputs, and energy outputs for the main processing steps.
4. For the largest consumers, collection of energy data from
the plant metering devices, control systems, and process
flow diagrams (if current operating conditions are close to
design data).
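The arithmetic behind activities 2 and 3 is straightforward: metered energy and cost are tallied by carrier into a simple balance sheet, and dividing by production gives a specific energy consumption (SEC) that can be set against a benchmark, as in the Carbon Trust audit steps listed earlier. All carriers, figures, and the benchmark value below are hypothetical.

```python
# A minimal sketch of audit activities 2 and 3: tallying metered energy
# inputs into a balance sheet and comparing the specific energy consumption
# (SEC) with a benchmark. All figures, carriers, and the benchmark value are
# hypothetical.

records = [  # (carrier, consumption in GJ, cost in EUR) for the reference period
    ("natural gas", 5200.0, 46800.0),
    ("electricity", 1300.0, 39000.0),
    ("steam import", 800.0, 12000.0),
]
production_t = 2500.0        # tonnes of product in the same period
benchmark_gj_per_t = 3.2     # hypothetical sector benchmark

total_gj = sum(gj for _, gj, _ in records)
total_cost = sum(cost for _, _, cost in records)
sec = total_gj / production_t  # specific energy consumption, GJ per tonne

print(f"Total energy: {total_gj:.0f} GJ, cost: {total_cost:.0f} EUR")
print(f"SEC: {sec:.2f} GJ/t vs benchmark {benchmark_gj_per_t} GJ/t")
```

A plant-level figure like this only flags whether deeper study is warranted; the walk-through, detailed, and Process Integration audits described above then locate where the gap to the benchmark arises.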

In terms of the time required for (and other costs of) identifying
opportunities to save energy, one efficient approach is the top-level
analysis (Varbanov et al., 2004). This procedure accounts for investment
limits or allowances and identifies economically justifiable energy-
saving opportunities, listing them in the order of their expected
economic return. Measurements can be performed using portable
instruments (flow rate, temperature, humidity, etc.) to determine the
overall plant production of steam, refrigeration, compressed air, and
hot water. Interviews with key personnel and operators can also
provide valuable information about plant operations.

1.4 Balancing and Flowsheeting Simulation as a Basis for Optimization

Balancing reconciliation and flowsheeting simulation tools are
frequently used for sustainability design and savings analysis; in
fact, they have become the main tools in a process engineer's toolbox.
These tools help engineers to develop complete material and energy
models based on measurements and/or design values and
mathematical models. Consequently, these simulation tools play an
important role in the technical and economic decision-making
activities related to the planning and/or design stage of processes
under development and to the operation of existing equipment.
A number of computer-based systems have been developed to
help process engineers calculate energy and mass balances. However,
ongoing development costs have left only a few of these systems on
the market: those whose positions have been secured by a substantial
number of sales. An early overview of flowsheeting simulation was
presented by Klemeš (1977). The balancing, data validation, and
reconciliation technology involves a set of procedures incorporated
into a software tool. Process data reconciliation has become the main
method for monitoring and optimizing industrial processes as well
as for component diagnosis, condition-based maintenance, and online
calibration of instrumentation. According to Heyen and Kalitventzeff
(2007), the main goals of this technology are: (1) to detect and correct
deviations and errors of measured data so that they satisfy all balance
constraints; (2) to use knowledge about the process system and
structure along with measured data to compute unmeasured data
wherever possible, especially key performance indicators (KPIs); and
(3) to determine the postprocessing accuracy of measured and
unmeasured data, including KPIs. More information about available
software tools is provided in Chapter 9.
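Goal (1) above, correcting measured data so that they satisfy all balance constraints, is classically posed as a weighted least-squares problem: adjust each measurement as little as its uncertainty allows, subject to the linear balances A x = 0. The sketch below applies the closed-form solution to a single hypothetical splitter balance F1 = F2 + F3; industrial reconciliation tools solve far larger, often nonlinear, versions of the same problem.

```python
import numpy as np

# A minimal data-reconciliation sketch: measured flows are adjusted as little
# as possible (weighted by measurement variance) so that the linear mass
# balances A @ x = 0 hold exactly. Closed-form weighted least squares; the
# flowsheet (one splitter, F1 = F2 + F3) and all numbers are hypothetical.

def reconcile(m, sigma, A):
    """Return x minimizing sum(((x - m) / sigma)**2) subject to A @ x = 0."""
    V = np.diag(np.asarray(sigma) ** 2)            # measurement covariance
    return m - V @ A.T @ np.linalg.solve(A @ V @ A.T, A @ m)

m = np.array([100.0, 60.0, 35.0])    # measured F1, F2, F3 (t/h)
sigma = np.array([2.0, 1.0, 1.0])    # standard deviations of the meters
A = np.array([[1.0, -1.0, -1.0]])    # balance constraint: F1 - F2 - F3 = 0

x = reconcile(m, sigma, A)
print(x)        # ~ [96.667, 60.833, 35.833]
print(A @ x)    # balance residual ~ [0.]
```

Note how the least accurate meter (F1, with the largest sigma) absorbs most of the correction, which is exactly the behavior the variance weighting is meant to produce.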

1.5 Integrated Approach: Process Integration

Heat Integration is the first part of Process Integration, which
provides the design foundation for CHP systems, refrigeration, air
conditioning, and heat-pump systems. Process Integration is equally
applicable to small, medium, and large industrial sites (e.g.,
power stations and oil refineries engaged in the production of
petrochemicals). The technology answers one of the major challenges
in the design of heating and cooling systems, namely the complexity
of energy and power integration, via a mapping strategy based on
thermodynamically derived upper bounds on the systems thermal
and power performance. The efficient use of available heating and
cooling resources for serving complex systems of various sizes
and designations can significantly reduce energy consumption and
emissions. This methodology can also be used to integrate renewable
energy sources such as biomass, solar photovoltaic (PV), and solar

thermal into the combined heating and cooling cycles. Since 1995, the
energy consumption of European Community (EC) member countries
has risen by 11 percent, to 1637 Mt (megatons) of oil
equivalent (Eurostat, 2007). This increase in energy consumption
contrasts with the trend of the EC population, which is growing at
only about 0.4 percent annually (Eurostat, 2007). The overall share of
total energy consumption by industry is declining in most countries.
However, domestic energy consumption is rising. In the United
Kingdom, for example, residential consumption rose from 35.6 Mt
(oil equivalent) in 1971 to 48.5 Mt in 2001, an increase of 36 percent,
despite increases in energy efficiency (DTI, 2006).
Process Integration Technology (Pinch Technology) has been
extensively used in the processing and power generating industry
for more than 30 years. It was pioneered by the Department of Process
Integration, UMIST (now the Centre for Process Integration, CEAS,
the University of Manchester), in the late 1980s and 1990s. Heat
Integration is introduced in Chapter 2 and is described in more detail
in Chapter 4. Water and mass integration is covered in Chapter 5, and
recent developments in the field are reviewed in Chapter 6.

1.6 Optimal Process Synthesis and Combinatorial Graphs

Process synthesis is a complex engineering activity that involves
process modeling (e.g., chemical engineering) as well as combinatorial
challenges. Although the basic process modeling has reached a
considerable level of maturity, the combinatorial aspects of the
engineering problem still leave significant room for improvement.
One innovative approach to process synthesis is to exploit the
combinatorial nature of network optimization. This approach is used
by the process-graph (P-graph) framework, which explicitly defines
sets of process materials and operations and then uses efficient
combinatorial algorithms to build a rigorous network superstructure
that can be reduced to the optimal network topology. This is different
from the Mathematical Programming (MPR) approach, where the
combinatorial aspects are modeled by algebraic equations and the
structural features are blended with the underlying process models.
These approaches are covered in Chapters 3, 7, and 8.
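The combinatorial nature of the problem can be illustrated with a toy brute-force search over subsets of operating units: a candidate network must produce the desired product while consuming nothing except raw materials and its own intermediates. This enumeration merely stands in for, and is far less efficient than, the polynomial P-graph algorithms (MSG, SSG) discussed in Chapter 7; the unit names and flowsheet are hypothetical.

```python
from itertools import combinations

# Toy brute-force illustration of combinatorial process-network synthesis:
# each operating unit maps a set of input materials to a set of outputs, and
# a candidate network is a unit subset that produces the desired product
# while consuming only raw materials and materials produced within the
# subset. Units and flowsheet are hypothetical; the P-graph framework
# replaces this enumeration with efficient algorithms (see Chapter 7).

units = {
    "reactor":   ({"A", "B"}, {"C"}),
    "separator": ({"C"}, {"P", "W"}),
    "recycler":  ({"W"}, {"B"}),
}
raw = {"A", "B"}      # raw materials
product = "P"         # desired product

def feasible(subset):
    produced = set().union(*(units[u][1] for u in subset))
    consumed = set().union(*(units[u][0] for u in subset))
    return product in produced and consumed <= raw | produced

networks = [s for r in range(1, len(units) + 1)
            for s in combinations(sorted(units), r) if feasible(s)]
print(networks)  # [('reactor', 'separator'), ('reactor', 'recycler', 'separator')]
```

Even this three-unit toy shows the structural choice an optimizer must make (recycle the waste stream W, or not); real problems with dozens of units make exhaustive search hopeless, which is the motivation for the rigorous superstructure methods described above.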
The P-graph framework has been successfully applied to and
demonstrated on several cases of energy system design. For example,
Varbanov and Friedler (2008) explored FC-based systems in a case
study that evaluated energy conversion systems to reduce CO2
emissions via Fuel-Cell Combined Cycle (FCCC) subsystems that
utilize biomass and/or fossil fuels. The combinatorial complexity of
the problem is efficiently handled by using the P-graph framework and
algorithms. The authors developed a methodology for synthesizing
cost-optimal FCCC configurations that accounts for the system
carbon footprint. Their results show that the high energy efficiency
of such systems when using renewable fuels makes them economically
viable for a wide range of conditions. This study and other case
studies (and guided applications) are provided in Chapters 10
and 11.

1.7 How to Apply the Process Integration and Optimization Technology

The crucial topic of applications is addressed in Chapter 12, and
Chapters 13 and 14 discuss sources of further information for those
who would like to learn more or who seek qualified help from
leading researchers and providers worldwide. Although every effort
has been made to ensure that the provided information is as
comprehensive as possible, it cannot be fully exhaustive. The field of
Process Integration and optimization is developing rapidly, and
every month brings important advancements. For this reason, the
reader is strongly encouraged to remain up-to-date by reading about
and exploring new developments.
Process Integration

2.1 Introduction: The Need for Process Integration

Energy and water saving, global warming, and greenhouse gas
emissions have become major technological, societal, and political
issues. These issues are of strategic importance because they are all
closely related to energy supply. Numerous studies have been
performed on the subject of improving energy efficiency while
reducing emissions of greenhouse gases, volatile organic compounds,
and other pollutants.
In response to these industrial and societal requirements, several
novel methodologies have emerged since 1970. They include process systems
engineering (Sargent, 1979; Sargent, 1983) and Process Integration
(Linnhoff et al., 1982; Linnhoff et al., 1994), followed by a number of
works from the UMIST group. Both disciplines are served by dedicated
conferences such as ESCAPE (European Symposium on Computer
Aided Process Engineering), which was facilitated by the European
Federation of Chemical Engineering Working Party on Computer
Aided Process Engineering (CAPE, 2009), and PRES (Conference on
Process Integration, Modelling and Optimisation for Energy Saving
and Pollution Reduction; PRES, 2009), which is supported on an annual
basis by chemical and chemical engineering societies (e.g., Hungarian
Chemical Society, Czech Society of Chemical Engineering, Italian
Association of Chemical Engineering, Canadian Society for Chemical
Engineering). It has gradually become evident that resource inputs and
effluents of industrial processes are often connected to each other.
Examples of this connection include the following:

1. Reducing external heating utility is usually accompanied by

an equivalent reduction in the cooling utility demand
(Linnhoff and Flower, 1978; Linnhoff et al., 1982; Linnhoff
et al., 1994); obviously, this also tends to reduce the CO2
emissions from the corresponding sites.
2. Reducing wastewater effluents usually leads to reduced
freshwater intake (Wang and Smith, 1994; Bagajewicz, 2000;
Thevendiraraj et al., 2003).

Reducing the consumption of resources is typically achieved by
increasing internal recycling and reuse of energy and material
streams instead of fresh resources and utilities. Projects for improving
process resource efficiencies can offer economic benefits and also
improve public perceptions of the company undertaking them.
However, motivating, launching, and carrying out such projects
requires proper optimization studies that are based on adequate
models of the process plants.

2.2 What Is Process Integration?

Process Integration (PI) is a family of methodologies for combining
several processes to reduce consumption of resources and/or to
reduce harmful emissions. It started mainly as Heat Integration (HI),
stimulated by the energy crisis of the 1970s (Hohmann, 1971; Linnhoff
and Flower, 1978; Linnhoff, Mason, and Wardle, 1979; Linnhoff et al.,
1982; Linnhoff and Hindmarsh, 1983; Linnhoff and Vredeveld, 1984).
This energy-saving methodology has been used extensively in the
processing and power generating industry over the last 30 years.
Heat Integration examines the potential for improving and
optimizing the heat exchange between heat sources and heat sinks in
order to reduce the amount of external heating and cooling required,
thereby reducing costs and emissions. A systematic, rule-based
design procedure has been developed that yields the maximum
energy-saving design for a given system.
There are several definitions of HI (using Pinch Technology);
most refer to the thermal combination of steady-state process streams
or batch operations for achieving heat recovery via heat exchange.
More broadly, the definition of PI, as adopted by the International
Energy Agency (Gundersen, 2000), is as follows: "systematic and
general methods for designing integrated production systems,
ranging from individual processes to total sites, with special
emphasis on the efficient use of energy and reducing environmental
effects."

2.3 History and Development of Process Integration

It is remarkable that PI continues to interest researchers even 35 years
after its emergence. HI, which developed as the first part of PI,
deals with the integration of heat in Heat Exchanger Networks (HENs).
This methodology has been shown to have considerable application
potential for complete chemical processing sites, reducing overall
energy demand and emissions across the site and thus leading to a
more effective and efficient site utility system. PI and its HI subset
have also been successfully applied to the cogeneration of
heat and shaft power. Further details are available elsewhere
Process Integration 13
(Linnhoff et al., 1982; Linnhoff et al., 1994; Shenoy, 1995; Smith, 2005;
El-Halwagi, 2006; Kemp, 2007; Klemeš, Smith, and Kim, 2008).
One of the first works in this field was Hohmann's (1971) PhD
thesis, which introduced a systematic thermodynamics-based
reasoning for evaluating the minimum energy requirements for a
given HEN synthesis problem. In the late 1970s this work was
continued by Linnhoff and Flower, who used Hohmann's foundation
to develop the basis of Pinch Technology, now considered the
cornerstone of HI. As is often the case with a pioneering innovation,
this work was difficult to publish. Yet the authors' strong commitment
eventually led to the publication of their ideas in Linnhoff and
Flower (1978), which has since become the most cited paper in the
history of chemical engineering. Similar work (Umeda et al., 1978;
Umeda, Harada, and Shiroko, 1979) was independently published in
Japan, but it was Linnhoff (supported by teams from UMIST and
later Linnhoff March Ltd.) who pushed the new concept through
academia and industry. The publication of the first "red book" by
Linnhoff et al. (1982) played a key role in the dissemination of the HI
methodology. This user's guide to Pinch Analysis detailed the most
common process network design problems, including HEN synthesis,
heat recovery targeting, and selecting multiple utilities.
These methodologies were developed and pioneered by the
Department of Process Integration, UMIST (now the Centre for
Process Integration, CEAS, the University of Manchester) in the late
1980s and 1990s (Linnhoff et al., 1982; Linnhoff and Vredeveld, 1984;
Linnhoff et al., 1994; Klemeš et al., 1997; Smith et al., 2000; Smith,
2005). A second edition of Linnhoff's user's guide was published by
Kemp (2007). Applications of HI in the food industry were presented
in Klemeš and Perry (2007a) and in Klemeš, Smith, and Kim (2008).
Tan and Foo (2007) successfully applied the Pinch Analysis approach
to carbon-constrained planning for the energy sector, and Foo, Tan,
and Ng (2008) applied the cascade analysis technique to carbon-
footprint-constrained energy planning.
Another important part in process design and optimization is
the synthesis phase of process flowsheets. From the earliest stages of
PI there have been attempts to combine it with optimization (see, e.g.,
Giammatei, 1994). Such combining is usually performed after the
targeting phase mentioned previously. Ideally, the structure of the
entire process, and the configurations of the operating units within
it, should be simultaneously designed and optimized, because the
performance of each unit influences the others. The main source of
complexity in this synthesis is the problem's dual nature of being
both continuous and discrete. There are several known methods for
performing the task, including heuristic, evolutionary, and
superstructure-based approaches. Two major classes of methods for
process synthesis are heuristic and algorithmic (or Mathematical
Programming) methods. Hybrid methods have also been proposed;
these approaches incorporate heuristic rules as well as Mathematical
Programming (MPR). Much as with any decision making, process
synthesis (or design) is an activity for which no past experience can be
ignored, especially when it comes to localized details of the design.
The most popular approach is to create a superstructure for the
network being designed and then choose the best possible solution
network from the superstructure options.

2.4 Pinch Technology and Targeting Heat Recovery:

The Thermodynamic Roots
Furman and Sahinidis (2002) compiled a comprehensive review
tracing the development of HEN research over time.
Their study shows that there was only mild interest in heat recovery
and energy efficiency until the early 1970s, by which time just a few
works in the field had appeared. But between the oil crises of 1973-1974
and 1979 there were significant advances made in HI. Although
capital cost remained important, the major focus was on saving
energy and reducing related costs. It is exactly this focus that
resulted in attention being paid to energy flows and to the energy
quality represented by temperature. The result was the development
of Pinch Technology, which is firmly based on the first and second
laws of thermodynamics (Linnhoff and Flower, 1978).
In this way HEN synthesis, one of the most important and
common tasks of process design, has become the starting point for
the PI revolution in industrial systems design. HENs in industry are
used mainly to save on energy cost. For many years the HEN design
methods relied mostly on heuristics, as necessitated by the large
number of permutations in which the necessary heat exchangers
could be arranged. Masso and Rudd (1969) is a pioneering work that
defines the problem of HEN synthesis; the paper proposes an
evolutionary synthesis procedure that is based on heuristics. An
alternative HEN synthesis method is described in Zhelev et al.
(1985), one that exploits synergies between heuristics and
combinatorics. In this approach, several suboptimal networks are
synthesized, and in most cases they are less integrated (consisting of
more than one subnetwork). A complete timeline and thorough
bibliography of HEN design and optimization works is provided in
Furman and Sahinidis (2002). The paper covers many more details,
including the earliest known HEN-related scientific article: Ten
Broeck (1944).
The discovery of the Heat Recovery Pinch concept (Linnhoff
and Flower, 1978) was a critical step in the development of HEN
synthesis. The main idea behind the formulated HEN design
procedure was to obtain, prior to the core design steps, guidelines
and targets for HEN performance. This procedure is possible
thanks to thermodynamics. The hot and cold streams for the
process under consideration are combined to yield (1) a Hot
Composite Curve representing, collectively, the process heat
sources (the hot streams); and (2) a Cold Composite Curve
representing the process heat sinks (the cold streams). For a
specified minimum allowed temperature difference ΔTmin, the two
curves are combined in one plot (see Figure 4.7), providing a clear
thermodynamic view of the heat recovery problem.
The overlap between the two Composite Curves represents the
heat recovery target. The overlap projection on the heat exchange
axis represents the maximum amount of process heat being internally
recovered. The vertical projection of the overlap indicates the
temperature range where the maximum heat recovery should take
place. The targets for external (utility) heating and cooling are
represented by the nonoverlapping segments of the Cold and Hot
Composite Curves, respectively. The methodology is described in
more detail in Chapter 4.
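The targeting step described above can also be carried out numerically with the Problem Table Algorithm of Linnhoff and Flower (1978), which yields the same utility targets as the Composite Curves without graphical construction. The sketch below is illustrative only: the function name and data layout are our own, and the stream data mentioned afterward are the standard four-stream textbook example rather than a case from this chapter.

```python
def pinch_targets(streams, dt_min):
    """Problem Table Algorithm: minimum hot/cold utility and shifted Pinch.

    streams: list of dicts with supply temperature 'Ts', target temperature
    'Tt' (both degC), and heat capacity flowrate 'CP' (heat units per K).
    Hot streams are recognized by Ts > Tt.
    """
    # Shift hot streams down and cold streams up by dt_min/2
    shifted = []
    for s in streams:
        shift = -dt_min / 2 if s['Ts'] > s['Tt'] else dt_min / 2
        shifted.append((s['Ts'] + shift, s['Tt'] + shift, s['CP']))

    # Shifted temperature boundaries, hottest first
    temps = sorted({t for ts, tt, _ in shifted for t in (ts, tt)}, reverse=True)

    # Cascade the net heat surplus/deficit down the temperature intervals
    cascade = [0.0]
    for hi, lo in zip(temps, temps[1:]):
        net = 0.0
        for ts, tt, cp in shifted:
            if max(ts, tt) >= hi and min(ts, tt) <= lo:  # spans the interval
                duty = cp * (hi - lo)
                net += duty if ts > tt else -duty  # hot adds, cold removes
        cascade.append(cascade[-1] + net)

    hot_utility = -min(cascade)  # smallest addition making the cascade feasible
    feasible = [q + hot_utility for q in cascade]
    cold_utility = feasible[-1]
    pinch_shifted_temp = temps[feasible.index(min(feasible))]
    return hot_utility, cold_utility, pinch_shifted_temp
```

For the classic four-stream problem (hot streams 250→40°C with CP = 0.15 and 200→80°C with CP = 0.25; cold streams 20→180°C with CP = 0.2 and 140→230°C with CP = 0.3; ΔTmin = 10°C), this returns hot and cold utility targets of 7.5 and 10 heat units and a shifted Pinch temperature of 145°C, matching the well-known graphical result.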

2.5 Supertargeting: Full-Fledged HEN Targeting

After obtaining targets for utility demands of a HEN, the next logical
step is to estimate targets for capital and total costs. Capital costs in
HENs are determined by many factors, of which the most significant
is the total heat transfer area and its distribution among the heat
exchangers. Townsend and Linnhoff (1984) proposed a procedure for
estimating HEN capital cost targets by using the Balanced Composite
Curves, which are obtained by adding utilities to the Composite
Curves obtained previously (see Figure 4.7). The HEN heat transfer
area target is computed from the enthalpy intervals in the Balanced
Composite Curves by using the heat transfer coefficients given in the
HEN problem specification (assuming vertical heat transfer and
spaghetti-type topology). Improvements to this procedure that
have been proposed involve one or more of the following factors:

1. Obtaining more accurate surface area targets for HENs that exhibit nonuniform heat transfer coefficients (Colberg and Morari, 1990; Jegede and Polley, 1992; Zhu et al., 1995; Serna-González, Jiménez-Gutiérrez, and Ponce-Ortega).
2. Accounting for practical implementation factors, such as construction materials, pressure ratings, and different heat exchanger types (Hall, Ahmad, and Smith, 1990).
3. Accounting for additional constraints such as safety and prohibitive distance (Santos and Zemp, 2000).
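The Townsend and Linnhoff (1984) target behind this procedure can be written per enthalpy interval k of the Balanced Composite Curves; the notation below is introduced here for illustration, with q the duty of a stream within interval k and h its film heat transfer coefficient:

```latex
A_{\min} \;=\; \sum_{k} \frac{1}{\Delta T_{\mathrm{LM},k}}
\left[ \sum_{i \in \mathrm{hot}} \frac{q_{i,k}}{h_i}
     + \sum_{j \in \mathrm{cold}} \frac{q_{j,k}}{h_j} \right]
```

Here ΔT_LM,k is the log-mean temperature difference across interval k under the vertical heat transfer assumption. With strongly nonuniform film coefficients the formula loses accuracy, which is exactly what improvement 1 above addresses.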
Cost estimation usually has a significant impact on a project's
predicted profitability. Taal and colleagues (2003) summarized the
common methods used for cost estimation of heat exchange
equipment and the sources of energy price projections. This paper
showed the importance of choosing the right cost estimation method
and a reliable source of energy price forecast when retrofit projects
are evaluated for viability. Both retrofit and grassroots design projects
require accurate cost estimates in the projects early stages so that the
correct choices are made. There are several methods available, most
of which lead to FOB (Free On Board) cost estimates. As for operational
costs, the range of possible future energy prices may prove crucial
when the margins of a proposed design or retrofit project are
analyzed. This is especially true for a large project that has a long
payback period and consumes a lot of energy. Such projects are
typical of plants in the chemical, petrochemical, refinery, and
paper industries. Taal et al. (2003) include a brief review of oil and
natural gas price projections reported by centers of excellence in the
energy economics field. There are numerous forecasts, sometimes
contradictory, but general trends can be observed. Some general
insights into oil and gas price and production behavior are mentioned,
with references to more detailed sources.

2.6 Modifying the Pinch Idea for HEN Retrofit

Bochenek, Jezowski, and Jezowska (1998) compared the approaches
of optimization versus simulation for retrofitting flexible HENs.
This is an important work that merits further follow-up
research. Zhu, Zanfir, and Klemeš (2000) proposed a heat transfer
enhancement methodology for HEN retrofit design, from which HI
could benefit substantially. This approach is worthy of wider
implementation, especially in the context of retrofit studies.
Heat Exchanger Network retrofit is a special case of optimization.
In retrofit problems, one must accommodate an existing network
with existing heat exchangers that are already paid for. This
circumstance substantially alters the economics of the problem as
compared with a new design. One example of this approach to
retrofit was the paper of Tjoe and Linnhoff (1986), which suggested
identifying heat exchangers with cross-Pinch heat transfer and,
where appropriate, attempting to replace such heat exchangers with
others that do not transfer heat across the Pinch, thereby reducing
energy consumption.
However, an ideal new design does not account for the existing
HEN equipment and topological constraints. One method for
overcoming these drawbacks is the Network Pinch (Asante,
1996; Asante and Zhu, 1997), which uses continuous nonlinear
optimization to identify the bottlenecking heat exchangers within
the existing network. The network bottleneck occurs at a heat
exchanger that constrains load shifting and thus limits further
improvement in heat recovery. At least one of the heat exchanger's
sides exhibits the minimum allowed temperature difference between
the involved streams, and the corresponding point in the network's
temperature-load plot is referred to as the Network Pinch. To
overcome this Network Pinch, the network structure must be
modified. Possible modifications include the relocation of an existing
heat exchanger, the addition of a new exchanger, or a change in the
stream splitting arrangement. To identify the most promising
modifications, the Network Pinch method uses MPR guided by
thermodynamic insights. Nonlinear optimization is employed to
evaluate the capital-energy trade-offs and to produce a new optimal
structure. This approach allows retrofit to be carried out one step at
a time, leaving the designer in control to accept or reject suggested
modifications at each step.
There have been some successful practical applications of these
targeting and retrofit methodologies. Pleşu, Klemeš, and Georgescu
(1998) demonstrated the wide applicability of PI in the Romanian
oil refining and petrochemical industry. In one of the first
comprehensive retrofit case studies, Hassan, Klemeš, and Pleşu
(1999) presented a PI analysis and retrofit suggestions for a fluid
catalytic cracking (FCC) plant. Pinch Technology and its recent
extensions offer an effective and practical method for designing the
HEN for new and retrofit projects. Al-Riyami, Klemeš, and Perry
(2001) demonstrated a HI retrofit analysis for the HEN of an FCC
plant. Their study found significant room for improvement in the
heat recovery process, and the new network was designed using
the Network Pinch method.

2.7 Mass Exchange and Water Networks

Water is widely used in the processing industries as an important
raw material. It is also frequently used as a utility (e.g., steam or
cooling water) and as a mass transfer agent (e.g., for washing or
extraction). Large amounts of high-quality water are consumed in
many industries that face strict requirements for product quality and
the associated manufacturing safety issues. The processing industry
is characterized by complex design and operation of storage and
distribution systems for water management. Today's industrial
processes and systems that use water are subjected to increasingly
stringent environmental regulations on the discharge of effluents.
Increases in population and in quality of life have led to increased
demand for freshwater. The rapid pace of these changes has made
more urgent the need for improved water management. Adopting
techniques to minimize both water consumption and wastewater
discharge can considerably reduce demand for freshwater and also
the amount of effluent generated by processing.
For these reasons, the success of HI has inspired researchers to
apply the Pinch and PI concepts to other areasin particular, to
mass exchange networks (El-Halwagi and Manousiouthakis, 1989).
Wang and Smith (1994) developed a method for industrial water
networks as a special case of mass exchange networks (see Figure 5.1).
Their main objective was to minimize the consumption of freshwater
and the disposal of wastewater simultaneously by maximizing the
reuse of internal water, again exploiting the idea of recycling and
reusing valuable streams and materials in order to save resources
and reduce emissions. Wastewater can be further reduced by
applying additional techniques for water regeneration that enable
further reuse or recycling. For the case of a single contaminant,
translating the method of Pinch Analysis to water minimization is
straightforward: the water Composite Curve is constructed as a
plot of contaminant concentration versus contaminant load.
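For this single-contaminant case, the freshwater target implied by the limiting Composite Curve can be computed directly. The sketch below assumes contaminant-free freshwater and uses a data layout of our own; the four-operation data mentioned afterward are the standard textbook illustration of the Wang and Smith (1994) method, not a case from this chapter.

```python
def min_freshwater(operations):
    """Minimum freshwater target for a single contaminant (limiting composite).

    operations: list of (c_in_max, c_out_max, mass_load) tuples with limiting
    concentrations in ppm (g per tonne of water) and mass load in kg/h.
    Freshwater is assumed contaminant-free. Returns the target in t/h.
    """
    # Concentration boundaries, lowest first
    concs = sorted({c for cin, cout, _ in operations for c in (cin, cout)})

    target = 0.0
    cum_load = 0.0  # cumulative contaminant load picked up so far, kg/h
    for lo, hi in zip(concs, concs[1:]):
        for cin, cout, load in operations:
            if cin <= lo and cout >= hi:  # operation spans this interval
                cum_load += load * (hi - lo) / (cout - cin)
        # kg/h divided by g/t gives 1000 t/h; the Pinch is where this peaks
        target = max(target, cum_load / hi * 1000)
    return target
```

For the classic four-operation example (loads of 2, 5, 30, and 4 kg/h over limiting concentration ranges 0-100, 50-100, 50-800, and 400-800 ppm), this returns the well-known freshwater target of 90 t/h, with the Water Pinch at 100 ppm.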
Extending the Water Pinch Analysis to multiple-contaminant
problems is a complicated and difficult procedure. The principal
issue concerns determining which contaminant to use on the Y axis
when plotting the Composite Curves. Several approaches have been
proposed. One option is to employ MPR, in which case Water Pinch
serves as a preliminary visualization tool (Doyle and Smith, 1997).
Foo, Manan, and Tan (2005) applied Water Pinch Analysis to
synthesize optimal water recovery networks for batch processes.
These authors introduced a numerical technique (time-dependent
water cascade analysis) that has the advantage of clearly depicting
the time-dependent nature of batch water networks. Majozi (2005)
also employed mathematical modeling, the mixed integer nonlinear
programming (MINLP) approach, to devise an effective technique
for wastewater minimization in batch processes. Here, too, wastewater
was minimized via application of reuse and recycle.

2.8 Benefits of Process Integration

Heat recovery targeting for HEN synthesis problems is based on
Composite Curves described in Section 2.4 (Linnhoff, Mason, and
Wardle, 1979). The Composite Curves plot is a visual tool that
summarizes the important energy-related properties of a process in
a single view (see Figure 4.7). It was the resulting recognition of the
thermodynamic relationships and limitations in the underlying
heat recovery problem that led to development of the Pinch Design
Method (Linnhoff and Hindmarsh, 1983), which is capable of
producing maximally efficient heat recovery networks. As already
discussed in this chapter, PI has been considerably expanded in
scope since these initial applications. It is now also used for HEN
retrofits, both water and combined water-and-energy minimization,
as well as to minimize total site energy consumption, a process
that includes Combined Heat and Power, locally integrated energy
sectors, integration of renewables and waste-to-energy techniques,
and combinatorial tools (P-graph and S-graph; see Chapter 7). In
addition, recent applications have extended the PI approach to
regional energy and emissions planning, financial planning, batch
processes, and the targeting of other constrained resources, such as
land, renewable energy, and emissions. In the wake of the initial
breakthrough of Pinch Analysis for HEN synthesis, all these new PI
applications follow the same simple logic: target setting should
precede designing. In the most straightforward cases, such as HEN
synthesis for Maximum Energy Recovery (MER) and water network
synthesis for maximum water reuse, the targets can be interpreted
as indicators of what a rigorous application could actually achieve.
However, the applicability and benefits of PI are not limited to
these straightforward cases. In fact, the target setting can be applied
in various contexts and still yield enormous benefits in terms of
reduced computational and project development time. Klemeš,
Kimenov, and Nenov (1998) described several applications of Pinch
Technology within the food industry, work that was further
developed in Klemeš et al. (1999). This research showed that Pinch
Technology can provide benefits far beyond oil refining and
petrochemicals.
The most important property of thermodynamically derived
heat recovery targets is that they cannot be improved upon by any
real system. Composite Curves play an important role in process
design; for HEN synthesis algorithms, they provide strict MER
targets. For process synthesis based on MPR, the Composite Curves
establish relevant lower bounds on utility requirements and capital
cost, thereby narrowing the search space for the subsequent
superstructure construction and optimization.
The preceding observation highlights an important character-
istic of process optimization problems, and specifically those that
involve process synthesis and design. By strategically obtaining
key data about the system, it is possible to evaluate processes based
on limited information, before too much time (or other resources)
is spent on the study. This approach follows the logic of oil
drilling projects: potential sites are first evaluated in terms of key
preliminary indicators, and further studies or drilling commence
only if the preliminary evaluations indicate that the revenues
could justify further investment. The logic of this approach
was systematically formulated by Smith in his books on PI for
process synthesis (Smith, 1995; Smith, 2005) and by Daichendt and
Grossmann (1997), whose paper integrated hierarchical decomposition
and MPR to solve process synthesis problems.

2.9 The Role of PI in Making Industry Sustainable

Sustainability in the design and operation of industrial processes is
becoming a key issue of process systems engineering. There are MPR
tools for designing chemical processes that minimize costs (and thus
maximize profits). These MPR formulations typically are highly
complex; therefore, in practical applications, the original problem is
usually decomposed into smaller problems that are solved separately.
There are several definitions of sustainability. It can be defined as
the capacity to sustain the viability of a given system. In this context,
sustainable development implies that current actions should not harm
future generations and that measurable indicators are needed to
ensure compliance. For industrial processes, the relevant indicators
are the rates of resource intake and effluent emissions. These factors
are usually expressed in terms of footprints (De Benedetto and
Klemeš, 2009), for example, the carbon footprint (CFP) and/or water
footprint (WFP) of a process. The sustainability of an industrial process
depends on minimizing these footprints. For synthesis and design
problems, the measured indicators should apply to the proposed
system's complete life cycle as part of a life-cycle assessment (LCA).
In short, PI has a direct impact on improving the sustainability
of a given industrial process. All PI techniques are geared toward
reducing the intake of resources and minimizing the release of
harmful effluents, goals that are directly related to the corresponding
footprints. Hence, employing PI and approaching the targeted
values will help minimize those footprints.
Footprint considerations, however, are only part of an industry's
ultimate indicator, that is, its economic performance. This metric
takes the form of either cost or profit, depending on the selected system
boundaries. Thus, no matter how environmentally attractive a given
project may be, it will probably not be implemented unless doing so is
economically feasible. There are several ways to merge the objectives
of sustainability and profitability. The two most popular are
(1) expressing footprints in strictly economic terms and then folding
them into the cost or profit objective function; and (2) employing
multiobjective optimization, whereby footprint and economic
indicators are evaluated in parallel as objectives to be achieved.
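The two options can be contrasted in a few lines. The designs, the carbon price, and all numbers below are hypothetical, purely to show the mechanics:

```python
CARBON_PRICE = 50.0  # $/t CO2 -- an assumed, illustrative price

# Hypothetical competing designs: annualized cost ($/y) and
# carbon footprint (t CO2/y)
designs = {
    "A": {"cost": 1.20e6, "cfp": 9000.0},
    "B": {"cost": 1.45e6, "cfp": 3500.0},
}

def monetized_cost(d):
    """Option 1: fold the footprint into a single economic objective."""
    return d["cost"] + CARBON_PRICE * d["cfp"]

def dominates(d1, d2):
    """Option 2 (multiobjective): d1 Pareto-dominates d2 if it is no worse
    in both objectives and strictly better in at least one."""
    no_worse = d1["cost"] <= d2["cost"] and d1["cfp"] <= d2["cfp"]
    strictly_better = d1["cost"] < d2["cost"] or d1["cfp"] < d2["cfp"]
    return no_worse and strictly_better
```

Under this carbon price, design B wins the monetized comparison ($1.625M versus $1.65M per year), while neither design dominates the other in the multiobjective sense, so both stay on the Pareto front for a decision maker to weigh.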

2.10 Examples of Applied Process Integration

In this section, four examples illustrate the potential of PI to reduce
not only resource demand but also operational and capital costs.
The problem areas vary widely, yet in each case the application of
the PI techniques yields significant benefits. Only the main problem
points and outcomes are described here; see Chapter 11 for more details.

Petrochemicals: Fluid Catalytic Cracking Process

The FCC unit is a major process element in oil refineries, and
improvements in yield and efficiency have been attempted over the
years in response to various external drivers. Retrofitting the HEN
associated with an FCC usually leads to improved energy recovery
and thus to reduced energy use and/or increased throughput. In this
example, the HEN of the FCC process includes a main column and a
gas concentration section (Al-Riyami, Klemeš, and Perry, 2001). The
particular FCC plant considered had 23 hot streams and 11 cold
streams, and the associated cost and economic data required for the
analysis were specified by the refinery owners. The task was to
analyze the existing process and then propose a HEN retrofit plan for
improving energy recovery. Incremental area efficiency was used for
the targeting stage of the retrofit design, which was carried out using
the Network Pinch method (Asante and Zhu, 1997) consisting of a
diagnosis stage and an optimization stage. In the retrofit, four heat
exchangers were added and one existing exchanger was removed.
The resultant design produced energy savings of 8.955 MW or 74
percent. This translated into a 27 percent decrease in the plant's utility
bill, for an annual savings of $2,388,600. The modified HEN required
an investment of $3,758,420, so the payback period was less than 19 months.
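A quick arithmetic check of the reported figures confirms the payback period:

```python
# Reported FCC retrofit figures from the case study above
investment = 3_758_420      # $
annual_savings = 2_388_600  # $/y

payback_months = investment / annual_savings * 12
# about 18.9, i.e. under 19 -- consistent with the reported payback
```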

Energy Integration of a Hospital Complex

Herrera, Islas, and Arriola (2003) studied a hospital complex that
included an institute, a general hospital, a regional laundry center, a
sports center, and some other public buildings. The diesel fuel used
to generate the steam required for the complex amounted to
75 percent of its total energy consumption and 68 percent of its total
energy cost ($396,131 in 1999). The Pinch Analysis that was performed
estimated the minimum need for external heating at 388.64 kW.
Because the actual heating consumption was 625.28 kW, there was
a potential energy savings of 38 percent.

Sunflower Oil Production

As reported by Klemeš, Kimenov, and Nenov (1998), an oil production
process operated with a minimum temperature difference of 65°C at
the Process Pinch was analyzed. The external heating required
by the system was provided by two types of hot utilities, and the
required external cooling was provided by two cold utilities. The
study resulted in increased heat recovery, reducing the minimum
temperature difference to 8-14°C. This reduced the hot and cold utility
requirements and eliminated the need for steam and cooling
water, which considerably simplified the overall design.

A Whisky Distillery
In a study by Smith and Linnhoff (1988), the authors found that steam
was being used below the Process Pinch; this resulted in unnecessarily
high utility usage by the distillery. The steam in question resulted
from the use of a heat pump, and so the steam below the Process
Pinch was eliminated by reducing the heat pump's size. Although
this approach required that the steam now be used for process
heating above the Process Pinch, overall energy costs were still
reduced owing to the reduced compressor duty.

2.11 Summary
This chapter offered an introduction to and an overview of the field
of PI, its HI roots, the expansion of its application areas, and the
evolution of process optimization methods toward combining
PI techniques and insights with the tools of MPR. The chapter
discussed the philosophy of PI as well as its contribution to
sustainable development and the economic efficiency of projects.
There is a wealth of useful information on these topics, and a
number of excellent sources of further information are described in
Chapters 13 and 14.
Process Optimization

This chapter deals with process optimization: its definition,
goals, and application areas within sustainable industrial
process design and integration. The aim is to provide
information on how to formulate sustainability tasks as optimization
problems and on what tools to employ for solving them. The chapter
begins with a brief description of the general framework for model
building and optimization; this is followed by basics of optimization
problems and their classes as well as descriptions of the most common
algorithms for solving optimization problems. Finally, the chapter
discusses how to build models efficiently, how to handle complexity,
and how to ensure model adequacy and sufficient precision. The
details of computational implementations of optimization solvers
and other software tools are given in Chapter 9.

3.1 Introduction
Building and operating industrial processes entail costs and
environmental impacts. Emissions and effluents include gaseous
waste streams, which may contain CO2, SOx, and NOx; wastewater and
various aqueous streams; and flue gases. When attempting to
improve the environmental and economic performance of process
systems, it is important to keep in mind that the processing paths,
which connect the various system inputs and outputs, usually
interact with each other. Therefore, minimizing resource demands
and environmental impacts is greatly facilitated by properly modeling
the process systems and then deciding which designs and operating
policies to pursuein what priority and to what extent.
Maintaining a balance between model accuracy and simplicity is
necessary in order to derive meaningful results with minimal
computational expense. A system model can be created for different
purposes. A lumped steady-state model (i.e., one that neglects
variations in time and space) will contain only algebraic equations.
To simplify the problem, steady-state models assume that the
operating units are "black boxes" or "gray boxes" (a black box is a model
that represents an empirical process in terms of its input, output, and
transfer parameters but does not describe any internal physics; a gray
box is a model that incorporates a physical representation, although
some of the physics is approximated; see, e.g., Hangos and Cameron,
2001). This approach is appropriate for optimizing complex systems.
If time variability has to be accounted for, then steady-state modeling
can be applied to a set of operating periods, each of which is
characterized by its own fixed parameters.
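As a concrete illustration of this multi-period, black-box approach, the toy model below treats a heater as a single algebraic input-output relation; all names and numbers are invented for the sketch:

```python
def black_box_heater(feed_kg_s, t_in_c, duty_kw, cp_kj_kg_k=4.2):
    """Lumped steady-state energy balance: outlet temperature from a duty.
    No internal physics -- just input, output, and a transfer parameter."""
    return t_in_c + duty_kw / (feed_kg_s * cp_kj_kg_k)

# Time variability handled as separate operating periods, each with its
# own fixed parameters (illustrative values)
periods = {
    "winter": {"feed": 2.0, "t_in": 10.0, "duty": 420.0},
    "summer": {"feed": 2.0, "t_in": 25.0, "duty": 294.0},
}
outlet = {name: black_box_heater(p["feed"], p["t_in"], p["duty"])
          for name, p in periods.items()}
# Both periods reach the same 60 degC outlet with different duties
```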

3.2 Model Building and Optimization: General

Framework and Workflow
A good process model should contain a thorough conceptual
description of the involved phenomena, unit operations, actions,
events, and so forth. Usually this description involves text, flowsheets,
and structural diagrams. Additionally, IT-domain diagrams, for
example UML diagrams, can be used (UML is a specification of
the Object Management Group; UML, 2010). The UML diagrams
include class, object, package, use case, sequence, collaboration,
statechart, component, and deployment diagrams.
A good process model should also contain a sufficiently precise
mathematical description. The mathematical relationships are used to
reflect not only physical laws but also technological constraints and
company rules. Mathematical models include algebraic equations of
some form (i.e., equalities or inequalities) and may be supplemented
with dynamic modeling, which uses differential equations to capture
variations in time, as well as states and actions to express operational
procedures and other dynamic relationships of algorithmic nature.
Structural information is also an essential feature of process network
models. When translated to Mathematical Programming (MPR)
models, such information is expressed by integer (mostly binary)
variables. An efficient alternative to representing superstructures
with binary variables is the P-graph and its related framework
(Friedler et al., 1992b), discussed in Chapter 7.
An efficient computational implementation of the mathematical
description may take the form of a stand-alone compiled application
(e.g., PNS Editor, 2010) or may be modeled within a popular
environment for process and mathematical calculations. Examples
include MATLAB (MathWorks, 2009), Scilab (2009), simulation and
optimization tools tailored for the process industry (AspenTech,
2009c), Modelica (2009a; OpenModelica, 2010), Honeywell UniSim
(Honeywell, 2010), and the open-source DWSIM (2010). All model
components have to be well synchronized to provide appropriate
user interfaces and sufficient visual aids to help understand the
process and the optimization results.
Models often include only the computational implementation
with some mathematical descriptions, but a much better practice is to
start with the concepts before deriving the mathematical relationships
Process Optimization 25
and then finally implementing the model computationally. In the
course of formulating the mathematical and implementation
components, it may be necessary to make some corrections to the
conceptual and/or mathematical components.

3.3 Optimization: Definition and Mathematical Formulation

3.3.1 What Is Optimization?
Optimization can be applied to different tasks: new system design,
the synthesis of a new processing network, and the retrofit design
and operational improvements in heat exchanger, reactor, and
separation networks. Optimization is employed to find the best
available option. An objective function consists of a performance
criterion to be maximized or minimized. The system properties that
determine this function are of two types:

1. Parameters: a set of characteristics that do not vary with
respect to the choice to be made
2. Variables: a set of characteristics that are allowed to vary

Some of the variables are specified by the decision maker or are
manipulated by the optimization tools; these are referred to as
specifications or decision variables. The remaining variables are termed
dependent variables, and their values are determined by the
specifications and the system's internal relationships. The objective
function can be formulated in terms of a single variable or a
combination of dependent and decision variables. The value of the
objective function can be changed by manipulating the decision
variables.

3.3.2 Mathematical Formulation of Optimization Problems

Optimization tasks in industry include increasing heat recovery,
maximizing the efficiency of site utility systems, minimizing water
use and wastewater discharge, and other tasks. The formulations
that are used to solve such optimization problems are known as
mixed integer nonlinear programs (MINLPs). However, they are
frequently linearized to yield the more tractable mixed integer linear
programs (MILPs), and some can be further simplified and solved
via linear programming (LPR). In general, optimization problems
can be formulated as summarized in Table 3.1.
The continuous and discrete domains, together with the
constraints, define the feasible region for the optimization. This
region contains the set of options from which to choose. The value of
the function F depends on the values of the decision variables.

Minimize (or maximize) F(x, y)         Objective function, performance criterion
where x ∈ Rⁿ (continuous variables)    Continuous domain
      y ∈ Zⁿ (integer variables)       Discrete domain
subject to h(x, y) = 0                 Equality constraints
           g(x, y) ≤ 0                 Inequality constraints

TABLE 3.1 Generic Optimization Problem

Continuous variables are used to model properties (e.g., flow rates

and chemical concentrations) that vary gradually within the feasible
region. Integer variables are used to model the status (ON versus
OFF) of operating devices as well as the selection/exclusion of options
for operating units in synthesis problems. The problem formulation
alone can admit many combinatorially infeasible sets of integer
variable values, which the optimization solver must then analyze.
Especially for larger problems, it is a good idea to eliminate these
infeasible combinations from the search space or to build into the
optimization solver a mechanism for avoiding them
(Friedler et al., 1996).
The type of the objective function F dictates which extremum (the
minimum or the maximum) to seek. Common performance
criteria are to minimize the process cost or to maximize the profit.
Because some process subsystems (e.g., water networks) do not usually
generate useful product streams, no revenue is directly realized, and
so the total annualized cost is minimized instead as the
objective function. For complete production systems and supply
chains the objective is usually to maximize the profit. Thus, additional
variables (reflecting sales and customer behavior) and their
relationships may be added to the formulation.
Equality constraints stem not only from material and energy
balances but also from constitutive relations that normalize the
stream compositions to unity. The balances include those for total
flow rates, balances of the chemical components, and energy balances
of heat exchangers, boilers, and turbines. Inequality constraints stem
from limitations on concentrations, flow rates, temperatures,
pressures, throughput, and so forth. One example of a constitutive
relation is calculation of the fluid heat capacity flow rate from its
mass flow rate and specific heat capacity.
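The generic formulation of Table 3.1 can be illustrated with a small continuous instance. The following Python sketch assumes SciPy is available; the objective and constraint numbers are invented purely for demonstration.

```python
# Minimal instance of the generic problem: minimize F(x) subject to
# h(x) = 0 and g(x) >= 0 (SciPy's "ineq" convention). All numbers are
# illustrative only.
from scipy.optimize import minimize

F = lambda x: (x[0] - 1.0)**2 + (x[1] - 2.0)**2           # objective
h = {"type": "eq",   "fun": lambda x: x[0] + x[1] - 2.0}  # equality constraint
g = {"type": "ineq", "fun": lambda x: x[0]}               # inequality: x0 >= 0

res = minimize(F, x0=[0.0, 0.0], constraints=[h, g])
print(res.x)  # approximately [0.5, 1.5]
```

The unconstrained minimum (1, 2) violates the equality constraint, so the solver returns the nearest feasible point, approximately (0.5, 1.5).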

3.4 Main Classes of Optimization Problems

This section discusses methods that can be applied to detect
optimality and solve optimization tasks. Choosing a particular
method should be based on a clear understanding, and toward this
end it is useful to bear in mind three aspects of optimization
problems:

1. An optimization problem is convex if it minimizes (or

maximizes) a convex objective function and if all the
constraints are convex (Williams, 1999).
2. An optimization problem is linear if its objective function
and all the constraints are linear; note that all linear
optimization problems are also convex (Williams, 1999).
3. A variable that can assume only integer (whole number)
values is an integer variable; integer variables that are
constrained only to values of 0 or 1 are binary variables.

Optimization problems are classified in terms of the following
specific features:

Objective type: Minimization or maximization problems.

Presence of constraints: Unconstrained versus constrained.
Most practical tasks involve the formulation of constrained
optimization problems.
Problem convexity: Convex versus nonconvex.
Linear and nonlinear problems: This aspect depends on the
nature of the objective function and/or the constraints. Most
process optimization problems are bilinear (i.e., they contain
products of two optimization variables): for example, componentwise
mass balances involve products of mass flow rates and concentrations,
and enthalpy balances involve products of mass flow rates and
enthalpies.
Absence of integer variables: In such cases the entire problem is
continuous and so linear programming (LPR) or nonlinear
programming (NLP) models are employed.
Presence of integer variables: In such cases the problem is
referred to as integer. Integer problems are further
subdivided into pure integer programming (IP) models,
which involve only integer variables; mixed integer linear
programming (MILP) models, which involve linear
relationships with both integer and continuous variables;
and mixed integer nonlinear programming (MINLP)
models, which involve nonlinear relationships with both
integer and continuous variables.

Further details on these classifications and their properties can

be found in Floudas (1995) and Williams (1999). The most important

factors are linearity and the existence of integer variables. There are
two main reasons for the significance of these factors:

1. Continuous problems are solved by using simpler methods

that incorporate gradient or other search mechanisms. The
presence of integer variables introduces combinatorial
complexity caused by the need to build search trees that
branch through the integer variables.
2. Nonlinearity introduces a different type of complexity.
Whereas linear problems (LPR and MILP) have been shown
to be convex (Floudas, 1995; Williams, 1999), the convexity of
nonlinear problems (NLP and MINLP) must be evaluated on
a case-by-case basis. Such problems are generally assumed to
be nonconvex until proven otherwise.
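The convexity question can be settled numerically for twice-differentiable objectives: a function is convex exactly when its Hessian is positive semidefinite everywhere. A short sketch (assuming NumPy; the two quadratics are invented examples) shows why the bilinear terms mentioned above destroy convexity:

```python
# Convexity check via Hessian eigenvalues: x^2 + y^2 is convex, while
# the bilinear term x*y (typical of componentwise mass balances) has an
# indefinite Hessian and is therefore nonconvex.
import numpy as np

H_convex   = np.array([[2.0, 0.0], [0.0, 2.0]])  # Hessian of x^2 + y^2
H_bilinear = np.array([[0.0, 1.0], [1.0, 0.0]])  # Hessian of x*y

print(np.linalg.eigvalsh(H_convex))    # eigenvalues 2, 2: PSD, convex
print(np.linalg.eigvalsh(H_bilinear))  # eigenvalues -1, 1: indefinite
```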

3.5 Conditions for Optimality

One of the most popular methods for handling optimization
formulations is the use of MPR solvers. They usually take
standardized input in the form of matrices of variables, parameters,
and equations, after which they explore the search space, using local
search and gradients, to reach an optimal solution.

3.5.1 Conditions for Local Optimality

When employing local search (e.g., gradient-based) algorithms, the
desired extremum (minimum or maximum) needs to be located and
proven. The optimality condition usually employed by solver
algorithms is based on the mathematical definition of an extremum.
For finding a minimum, the definition requires the existence of a
point in the search space such that any small deviation from that
point, in any direction within the search space, will result in an
increase of the objective function value or in keeping it the same:

F(x*, y*) ≤ F(x, y),  ∀(x, y) in the vicinity of (x*, y*)    (3.1)

Further details are given in specialized textbooks on optimization

(see, e.g., Edgar and Himmelblau, 1988; Floudas, 1995; Luenberger
and Ye, 2008). Rigorous definitions can also be found by searching
the Web for KKT optimality conditions (aka Karush-Kuhn-Tucker
conditions, an extension of the method of Lagrange multipliers).
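Condition (3.1) can be spot-checked numerically for a simple function. The sketch below (plain Python; the function and the candidate point are invented) samples random perturbations around a candidate minimum and verifies that none of them reduces the objective:

```python
# Numerical spot check of the local-minimum condition (3.1) at
# (x*, y*) = (1, 2) for F(x, y) = (x - 1)^2 + (y - 2)^2.
import random

F = lambda x, y: (x - 1.0)**2 + (y - 2.0)**2
x_star, y_star = 1.0, 2.0

random.seed(0)
ok = all(
    F(x_star + random.uniform(-1e-3, 1e-3),
      y_star + random.uniform(-1e-3, 1e-3)) >= F(x_star, y_star)
    for _ in range(1000)
)
print(ok)  # True: no sampled perturbation improves the objective
```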

3.5.2 Conditions for Global Optimality

Additional conditions are applied to attain global optimality when
using local search algorithms, which in this case require the problem
to be convex. For linear problems (e.g., MILP), the locally optimal
solution is guaranteed to be global, in other words, the best possible.
Another advantage of the linear model is that linear solvers, unlike
nonlinear ones, do not require initialization with a feasible
solution. Nonlinear problems are much more difficult to solve, and
attaining feasibility of the solutions is a significant issue. Any result
obtained guarantees global optimality only if the optimization
problem is shown to be convex (Williams, 1999).

3.6 Deterministic Algorithms for Solving Continuous Linear Optimization Problems
The best-known algorithm for LPR problems is the simplex algorithm
(Dantzig, 1951; Dantzig, Orden, and Wolfe, 1954). It solves LPR
maximization problems by constructing a feasible solution at a vertex
of the search polyhedron and then walking along the edges of the
polyhedron to other vertices with successively higher objective
function values. Although this algorithm is efficient in general, in
some cases it may require exponential time to find the optimum.
Khachiyan (1979) proposed a method that finds a solution in
polynomial time even in the worst case. Today, LPR
problems are usually solved using one of two methods:

1. Revisions of the simplex algorithm: This algorithm has been

developed almost continuously over the years since its initial
formulation, starting with the revised Simplex Method of
Dantzig and Orchard-Hays (1953). A thorough consideration
of the simplex algorithm and its computational techniques is
given in Maros (2003b). Modern LPR solvers use algorithms
that flexibly solve both continuous and integer linear problems.
2. Interior point methods: In contrast to the simplex algorithm,
which finds the optimal solution by traversing points on the
boundary of the feasible region, interior point methods move
through the interior of the feasible region. One such method
(see Mehrotra, 1992) is used by MATLAB.
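Both solution routes are exposed, for example, by SciPy's interface to the HiGHS solvers. The small LP below is invented for illustration: maximize 3x + 2y subject to x + y ≤ 4, x ≤ 2, with x, y ≥ 0.

```python
# The same LP solved by a dual simplex method and an interior point
# method; both reach the optimum x = 2, y = 2 with objective value 10.
from scipy.optimize import linprog

c = [-3.0, -2.0]                 # negated: linprog minimizes by convention
A_ub = [[1.0, 1.0], [1.0, 0.0]]
b_ub = [4.0, 2.0]

simplex  = linprog(c, A_ub=A_ub, b_ub=b_ub, method="highs-ds")   # dual simplex
interior = linprog(c, A_ub=A_ub, b_ub=b_ub, method="highs-ipm")  # interior point
print(simplex.fun, interior.fun)  # both about -10.0 (maximum value 10)
```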

3.7 Deterministic Algorithms for Solving Continuous Nonlinear Optimization Problems
Nonlinear programming is used to solve continuous nonlinear
problems. A common feature of all deterministic search methods is
that, starting from a current solution point, they perform an iterative
search for a better feasible solution within the vicinity of the current

one, repeating this action until no further solution improvement is

possible. This strategy means that the methods find only local
optima, so there is no guarantee of global optimality. Although
modern optimization software packages are capable of solving
constrained problems, they are mostly based on the search techniques
developed for unconstrained optimization.

3.7.1 Search Algorithms for Nonlinear Unconstrained Problems

Many methods have been developed for performing local search for
the minimum of an objective function (Luenberger and Ye, 2008).
Deterministic search algorithms mainly assume that the objective
function is minimized and use the concept of descent: a series of
iterations is performed, aiming to reduce the objective function
value at each iteration. A thorough review of the deterministic search
algorithms for optimization and their performance, as applied to
chemical process models, has been provided elsewhere, for example, by
Klemeš and Vašek (1973). Such methods include pattern search, the
Rosenbrock method, and conjugate gradients.
All descent-based algorithms rely on a common general strategy.
First, an initial solution point is chosen. Then, a rule-based direction
of movement and a step size are determined and executed. At the
new point, a new pair of direction and step size are determined, and
the process is repeated until the desired convergence is achieved.
The main difference between the various search algorithms is in the
rule applied for determining the iteration steps.
When the search domain is a given line, the process of determining
the function minimum is known as a line search. Higher-dimensional
problems are solved by executing a sequence of successive line
searches. The two simplest line search algorithms are the Fibonacci
method (e.g., Avriel and Wilde, 1966) and the golden section method
(e.g., Kiefer, 1953). Both algorithms search over a fixed interval and
assume that the function is unimodal within it. The golden section
algorithm has a linear convergence rate of dₖ₊₁/dₖ = 0.618. Another line
search algorithm is the Newton method, which is based on the
additional assumption of function smoothness; this technique
achieves even faster convergence than the golden section one. There
are also other line search algorithms that are grouped together as
curve-fitting methods.
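The golden section method described above can be sketched in a few lines. This is a minimal, generic implementation (not tied to any particular package) that assumes the function is unimodal on the bracketing interval:

```python
# Golden section line search: the bracket [a, b] shrinks by the factor
# 0.618 at every iteration until it is smaller than the tolerance.
import math

def golden_section(f, a, b, tol=1e-6):
    inv_phi = (math.sqrt(5.0) - 1.0) / 2.0   # 0.618...
    c = b - inv_phi * (b - a)
    d = a + inv_phi * (b - a)
    while (b - a) > tol:
        if f(c) < f(d):          # minimum lies in [a, d]
            b, d = d, c
            c = b - inv_phi * (b - a)
        else:                    # minimum lies in [c, b]
            a, c = c, d
            d = a + inv_phi * (b - a)
    return 0.5 * (a + b)

x_min = golden_section(lambda x: (x - 2.0)**2, 0.0, 5.0)
print(round(x_min, 4))  # 2.0
```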
Another important search algorithm, one that is applicable to
multidimensional function domains, is the method of steepest
descent, or gradient method. Its update rule derives the new step
based on the gradient information at the current point, identifying
the direction of the fastest decrease of the objective function. This is
one of the oldest and best-known function minimization methods.
Hence it is often used as a benchmark against which the performance
of other methods is measured. See Luenberger and Ye (2008) for
additional information.
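A bare-bones steepest descent iteration, on an invented quadratic objective and with a fixed step size (practical implementations choose the step by a line search), looks as follows:

```python
# Steepest descent on F(x) = x1^2 + 10*x2^2: repeatedly step against
# the gradient. The fixed step size must be small enough for stability.
import numpy as np

grad = lambda x: np.array([2.0 * x[0], 20.0 * x[1]])  # gradient of F

x = np.array([5.0, 5.0])   # starting point
step = 0.05
for _ in range(500):
    x = x - step * grad(x)
print(x)  # very close to the minimizer [0, 0]
```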

3.7.2 Algorithms for Solving Constrained Nonlinear Problems

Many algorithms for constrained problems proceed in two main
steps: (1) The constrained problems are transformed into equivalent
unconstrained ones, and then (2) the modified problems are subjected
to unconstrained search algorithms. Groups of such algorithms
include primal methods, penalty and barrier methods, and primal-
dual methods.
The primal algorithms search for the optimal solution directly
through the feasible region. For a problem with n variables and
m equality constraints, primal methods work within the (n - m)-
dimensional feasible space. Each point in the process is feasible, and
the value of the objective function constantly decreases. Thus, a
feasible solution is guaranteed even if the search is interrupted before
it finishes. Most primal methods do not rely on special problem
structure (such as convexity), so they are applicable to general NLP
problems. The weaknesses of primal algorithms are their requirement
that an initial feasible solution be found or specified and their
slowness (or even failure) to converge in the case of nonlinear
constraints.
Penalty and barrier methods utilize unconstrained approximations
of the constrained optimization problem. Penalty methods add a
penalty term to the objective function, which results in a high cost if
constraints are violated. Barrier methods instead add a term that
favors points interior to the feasible region over those near the
boundary. There are two major issues with applying these methods.
First, it is important to ensure the accuracy with which the
unconstrained NLP problem being solved approximates the true,
constrained one. Second, it is possible that the penalty or barrier term
may dominate the objective function, skewing the problem to such a
degree that the feasible solutions found are not actually optimal.
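The penalty idea can be sketched on an invented equality-constrained problem: minimize x1² + x2² subject to x1 + x2 = 1, whose solution is (0.5, 0.5). Assuming SciPy, each unconstrained subproblem adds a quadratic penalty with a growing weight mu:

```python
# Quadratic penalty method sketch: as the penalty weight mu grows, the
# unconstrained minimizers approach the constrained optimum (0.5, 0.5).
from scipy.optimize import minimize

def penalized(mu):
    obj = lambda x: x[0]**2 + x[1]**2 + mu * (x[0] + x[1] - 1.0)**2
    return minimize(obj, x0=[0.0, 0.0]).x

for mu in (1.0, 100.0, 10000.0):
    print(mu, penalized(mu))  # drifts toward [0.5, 0.5] as mu grows

x = penalized(10000.0)
```

Note how a large mu makes the subproblem increasingly ill-conditioned, which is exactly the accuracy/domination trade-off discussed above.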

3.8 Deterministic Methods for Solving Discrete Problems

All integer programming problems (pure IP, MILP, and MINLP)
possess combinatorial features. The first solution methods used
cutting planes (i.e., added constraints to force integrality), and various
algorithms have been devised on this basis. The effective approach
has been to divide the problem into a number of smaller problems, a
method known as branch and bound. This is a strategy of problem
decomposition: it aims to partition the feasible region into more
manageable subdivisions and then, if required, into further subdivided
partitions. The method was proposed by Land and Doig (1960) for
solving LPR problems and has since evolved to a point where it can

be used to solve IP and MIP problems. The original large MIP problem
is divided into a number of subproblems, called nodes, which form an
enumeration tree. The algorithm starts from a main node and
progresses toward the so-called leaves, adding nodes to the current
solution or discarding them as necessary. In this process, an important
role is played by the bounding function, which (in the case of objective
minimization) provides a lower bound on the remaining part of the
problem under the current branch. Thus, if the lower bound on the
current subproblem node is higher than the current best solution,
then the algorithm can safely discard (prune) the node and all its
subnodes. More information on IP solving algorithms can be found
in Nemhauser and Wolsey (1999).
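The branching-and-bounding logic can be sketched for a small pure integer LP, using the LP relaxation as the bounding function. The sketch assumes SciPy's linprog; the instance (maximize 5x1 + 4x2 subject to 6x1 + 4x2 ≤ 24 and x1 + 2x2 ≤ 6, with x1, x2 nonnegative integers) is invented:

```python
# Minimal branch-and-bound sketch for a pure integer LP. Each node is a
# set of variable bounds; the LP relaxation supplies the bound used for
# pruning, and branching splits on the first fractional variable.
import math
from scipy.optimize import linprog

c = [-5.0, -4.0]                        # negated: linprog minimizes
A_ub = [[6.0, 4.0], [1.0, 2.0]]
b_ub = [24.0, 6.0]

best_val, best_x = -math.inf, None
nodes = [((0.0, None), (0.0, None))]    # root node: x1, x2 >= 0

while nodes:
    node_bounds = nodes.pop()
    rel = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=node_bounds, method="highs")
    if not rel.success or -rel.fun <= best_val:
        continue                        # infeasible node, or pruned by bound
    frac = [i for i, v in enumerate(rel.x) if abs(v - round(v)) > 1e-6]
    if not frac:                        # integral: update the incumbent
        best_val, best_x = -rel.fun, [int(round(v)) for v in rel.x]
        continue
    i = frac[0]                         # branch on a fractional variable
    lo, hi = node_bounds[i]
    down = list(node_bounds); down[i] = (lo, math.floor(rel.x[i]))
    up   = list(node_bounds); up[i]   = (math.ceil(rel.x[i]), hi)
    nodes.extend([tuple(down), tuple(up)])

print(best_x, best_val)  # the integer optimum: x = [4, 0], value 20
```

Note how the LP relaxation at the root, x = (3, 1.5) with value 21, overestimates the integer optimum; pruning discards any branch whose relaxation bound cannot beat the incumbent.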

3.9 Stochastic Search Methods for Solving Optimization Problems
An alternative to deterministic algorithms is offered by those that
employ stochastic sampling of the search space and hill climbing:
pushing the algorithm toward lower objective function values (assuming
minimization) by allowing temporary increases in the objective function
value. The
main advantage of stochastic search methods is that their search is
not confined to the neighborhood of a given current solution, which
means there is a higher probability of finding the global optimum.
Such techniques have proven to be successful in applications (for
example, process design and synthesis studies) where the
computational requirement for single samplings is modest and the
time frame for producing a solution is more relaxed. Their major
limitation is that many iterations are required in order to assure some
degree of optimality. This is because the algorithms blindly evaluate
even infeasible combinations of variable values, especially for the
simulated annealing variants.
Simulated annealing (SA) is the most prominent stochastic search
method (Kirkpatrick, Gelatt, and Vecchi, 1983). An interesting
application is the GAPinch toolbox for MATLAB (Prakotpol and
Srinophakun, 2004), which implements a genetic algorithm (GA)
search to solve MINLP network synthesis problems involving water
reuse. Another promising development is the use of the GA
framework to optimize schedules and supply chains (Shopova and
Vaklieva-Bancheva, 2006). Ant colony optimization (Zecchin et al.,
2006) and Tabu search (Cunha and Ribeiro, 2004) have been applied to
process design and operation. Other examples of stochastic search
methods include the following:

1. McKay, Willis, and Barton (1997) used genetic programming

(GP) to identify steady-state models of integrated chemical processes.
2. Manolas et al. (1997) applied GA techniques to the optimal
operation of industrial utility systems.
3. Castell et al. (1998) described a novel GA application to
optimize process layout.
4. Genetic algorithms have been proposed for the synthesis of
Heat Exchanger Networks (HENs) and other processing
systems (Ravagnani et al., 2005; Xiangkun et al., 2007; Fieg,
Xing, and Jeżowski, 2009); the paper by Xiangkun and
colleagues presents a hybrid GA-SA method.
5. Ahmad et al. (2008) proposed that SA be used for synthesizing
Heat Exchanger Networks. This approach employs a
completely evolutionary strategy, starting from a trivial
network topology that connects all hot streams to coolers
and all cold streams to heaters.
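The core SA loop is compact. In the sketch below (plain Python; the multimodal test function, cooling schedule, and step size are all invented for illustration), worse candidate moves are accepted with probability exp(-delta/T), and a few random restarts guard against an unlucky starting point:

```python
# Simulated annealing on f(x) = x^2 + 10*sin(x), which has several
# local minima; the global minimum lies near x = -1.31 with f = -7.95.
import math, random

f = lambda x: x * x + 10.0 * math.sin(x)

def anneal(x, T=10.0, cooling=0.995, iters=3000):
    best = x
    for _ in range(iters):
        cand = x + random.uniform(-0.5, 0.5)        # random neighbor
        delta = f(cand) - f(x)
        if delta < 0 or random.random() < math.exp(-delta / T):
            x = cand                                # accept the move
            if f(x) < f(best):
                best = x
        T = max(1e-4, cooling * T)                  # cool down
    return best

random.seed(1)
best = min((anneal(random.uniform(-5.0, 5.0)) for _ in range(6)), key=f)
print(best, f(best))  # typically close to the global minimum
```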

3.10 Creating Models

The procedure for building a process model is illustrated in Figure 3.1.
The modeling begins with accumulating sufficient information
about the processin order to develop an understanding of the
elements and the relationships between themand proceeds to
formulating a mathematical description of the process that is imple-
mented on a computational platform. A distinctive characteristic of
the procedure is its iterative nature. The mathematical modeling



[Figure 3.1 is a flowchart: conceptual modeling feeds mathematical
modeling, which feeds the computational implementation; "Need
corrections?" decision points return the flow to the earlier steps.]

FIGURE 3.1 Model creation procedure.


often leads to changes in the conceptual model; the result is an

iterative feedback loop, as shown in the figure. A similar correction
loop is also present at the output of the implementation block. The
discussion that follows addresses only those activities in Figure 3.1
that involve conceptual and mathematical modeling.

3.10.1 Conceptual Modeling

Conceptual modeling involves collecting and organizing essential
information about the phenomena in the process under consideration.
This step is often referred to as data extraction (Williams, 1999). The
process operating units are described along with the relevant features
of their behavior. The important constraints for the units' capacities
and other limitations are identified and added to the description, as
is topological information about the process network. The main
purpose of this description is to serve as an interface between the
process operators and the modelers. Therefore, it is important that
the description be concise enough that modelers can efficiently grasp
the workings of the complete system. That being said, the description
must contain sufficient detail to complete the study.

Extracting Data About the Operating Units

When a production process is modeled for the purpose of Process
Integration, the data extraction involves four main steps:

1. Description of the process operating units and their

interconnections; creation of a flowsheet that reflects this
information.
2. Identification of the heating and cooling needs of the process
through use of the flowsheet and related data about the
operating units.
3. Definition of the Heat Integration process streams: identifying
for each stream the values for its heat load, as well as the
supply and target temperatures. Some process streams may
need to be segmented. This is done if the specific heat capacity
of a given process stream varies significantly within the
interval between its supply and its target temperature.
4. Analysis of the collected data.

If the goal is to minimize water use and wastewater discharge,

then the various water-using operations are analyzed and their
relevant properties are recorded in some standard form. The most
popular way to express the water requirements (Wang and Smith,
1994; Kuo and Smith, 1997) of an operation is in terms of the limitations
on (1) inlet and outlet concentrations of different contaminants and
(2) flow rates of the water to be consumed. This topic is discussed in
more detail in Chapter 12.

Network and Topology Data Identification

Obtaining network-related information is needed to account for
constraints that are related to the system topology and to the limits
imposed by operating units on the suitability of various process
streams to serve as inputs or outputs. In water networks, such
considerations include the acceptability (or unacceptability) of using
the water output from some operations as inputs for other operations.
For instance, the final washing of sugar crystals in sugar production
would require pure water, and for this the outputs from other water-
using operations would be unacceptable. On the other hand, used
water from blanching might be perfectly acceptable for the initial
washing or rinsing of fruits. This type of information is used to
formulate additional constraints on the compatibility of different
process streams. When supplied to automated process optimization
algorithms, these constraints serve to eliminate a number of infeasible
combinations of process units. When building pure MPR models
(Williams, 1999), network-related information is transformed into
explicit mathematical constraints involving expressions with binary
selection variables. When using the graph-theoretic approach and/or
the P-graph framework (Friedler et al., 1993) to construct a process
model, such information is explicitly encoded in the P-graph building
blocks (materials and operations) and is then used by algorithms that
generate only those topologies that are combinatorially feasible.

3.10.2 Mathematical Modeling of Processes: Constructing the Equations
After the conceptual basis has been established, it is time to begin
constructing the explicit mathematical formulations of the problem.
The standard procedure in this regard is first to build a
superstructure, one that incorporates all possible options and
combinations of operating units, and then to reduce the superstructure via
optimization techniques. In this context, a superstructure is the union
of several feasible flowsheets (see Figure 6.2 for an example of a water
reuse network superstructure). When this union includes all possible
flowsheets, the superstructure is called the maximal structure
(Friedler et al., 1993) or the hyperstructure (Papalexandri and
Pistikopoulos, 1996).
There are two basic approaches to formulating the superstructure
and subjecting it to optimization-based reduction:

1. Explicit formulation of a superstructure by the design engineer,

followed by translation of that structure into an integer programming
model: The generated problem is then solved by the
corresponding MPR algorithm. Popular codes for solving
MILP problems are OSL (GAMS, 2009) and CPLEX (ILOG,
2009); both are included in such commercial optimization
software packages as GAMS (2009). If the model does not

involve choices between structural options and involves

decisions on flow rates and capacities only, then continuous
optimizers (LPR or NLP) can be used.
2. Automated generation of the maximal superstructure, followed by
enumeration of all feasible network topologies using the P-graph
framework. The unit operations can be described in terms of
(a) the input to the automated procedure, (b) the compatible
connections between them, and (c) the corresponding process
and cost information. More details are provided in
Chapter 7.

The construction of a process network model begins with

formulation of the mass balances. The following key points should
be kept in mind:

Mass balances can be of two types: overall and component-

wise. Overall balances are performed over the total contents
of the input and output streams of a process unit. For steady-
state models of continuous processes, this content is usually
expressed as a mass-based flow rate (e.g., in units of kg/s,
kg/h, or t/h). Componentwise mass balances reflect the mass
conservation principle between the inlet and outlet streams
for individual chemical components (or pseudocomponents).
For a given operating unit, the set of all componentwise mass
balances is exactly sufficient for completely characterizing
the material flows into and out of the unit (adding the overall
balance could lead to an overspecified system of equations).
However, if using the overall material balance is critical to
the system model, then the overall balance can be used in
place of one of the componentwise balances.
In some cases, tracking all chemical species in various process
streams is not necessary. This is true for water networks in
which mass balances are written for the water flow rates and
for the analyzed contaminants. However, an incomplete list
of the material species contained in the water streams will
naturally result in incomplete mass balances (Smith, 2005).
When complete results are necessary, rigorous simulations of
the optimized system must be performed.
The componentwise mass balances of stream mixers and the
more complex operating units involve bilinear terms that
reflect products of the stream flow rates and the component
concentrations. In this case, the result is an NLP or MINLP
problem. If either the concentrations or the flow rates are fixed,
then the model could be linear (an LPR or MILP problem),
which would make for an easier computation that might
guarantee global optimality. Although such an approach is
often viable, additional analyses are required to ensure that
the linearization does not result in an inadequate (inaccurate)
model. Similar problems also exist for Heat Exchanger
Networks, where nonisothermal mixing leads to equations
containing bilinear terms: products of mass flow rates and
enthalpies.
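The bilinear structure of a mixer balance is easy to see in a small numerical sketch (plain Python; the stream data are invented): the componentwise balance multiplies each flow rate by its concentration.

```python
# Two-inlet stream mixer: overall and componentwise (one contaminant)
# mass balances. The outlet concentration is the flow-weighted average,
# i.e., a ratio of bilinear flow * concentration products.
flows = [10.0, 4.0]    # inlet mass flow rates, kg/s
conc  = [50.0, 200.0]  # inlet contaminant concentrations, ppm

F_out = sum(flows)                                       # overall balance
c_out = sum(f * c for f, c in zip(flows, conc)) / F_out  # componentwise
print(F_out, c_out)  # 14.0 kg/s at about 92.9 ppm
```

When both the flows and the concentrations are optimization variables, the products f * c are precisely what makes the resulting model an NLP or MINLP.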
Process-specific constraints are also included in the optimization

problem. The mass balances are supplemented in the model by lower
and upper bounds on the stream flow rates and component
concentrations. Another source of constraints is the temperature
feasibility of interequipment connections. For example, water coming
from a blanching operation may be too hot to be used for washing
fresh fruits and so may require cooling (perhaps by mixing with a
colder stream) before this use; alternatively, this may be rejected as an
unacceptable connection, leading to a forbidden match. In MPR, such
constraints usually contain integer variables.
A frequent problem when synthesizing process networks is
obtaining extremely low flow rates for some interconnections. When
the model being optimized accounts for the complete capital costs, this
problem is less likely to appear. However, if capital costs are
underestimated (or disregarded) for some reason, then operating costs
may dominate, resulting in degenerate solutions with impractically
small flow rates. Practical solutions require reasonably accurate
estimates of the capital cost, especially if there are fixed costs. A more
straightforward option is to stipulate a lower bound on all network
flow rates.
Another important part of creating a process model is identifying the energy needs of various operating units. It is crucial to account for the heating and cooling needs of the process operations, and these are established by formulating enthalpy balances (Linnhoff et al., 1982). Most process streams need to be transported between the operating units and also moved through them, and such transport involves overcoming certain pressure drops (for fluids) and performing mechanical work (for solids). These operations require mechanical shaft power, which can be supplied by direct-drive machines or electrical motors; these elements define the power requirements of a process. In many cases, additional equations are needed (e.g., constitutive relations as well as calculations of reaction rates and equilibria).

3.10.3 Choosing an Objective Function

The objective function to choose depends on the goal of the
optimization. It is possible to choose from a number of criteria,
including (1) maximizing profit; (2) minimizing operating cost; (3) minimizing total annualized cost (TAC); (4) minimizing consumption of certain resources or consumption per unit of product; and (5) minimizing the system's total environmental footprint or the footprint per unit of product.
It is frequently necessary to optimize more than one criterion.
There are three main approaches to this task (Ehrgott, 2005):

1. Choose one criterion for formulating the objective function; then add the other criteria as constraints to the problem.
2. Combine all the criteria into one objective function by
summing them up, where each criterion is weighted with a
given coefficient.
3. Perform a multicriteria optimization, accounting explicitly
for the conflicts between the chosen objectives (criteria).
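The first two approaches can be illustrated on a toy selection problem; the three candidate designs and their cost and footprint figures below are invented for the sketch:

```python
# Candidate designs: name -> (annual cost, environmental footprint); illustrative data
designs = {"A": (100.0, 9.0), "B": (120.0, 5.0), "C": (160.0, 2.0)}

def best_by_cost(max_footprint):
    """Approach 1: optimize one criterion (cost), add the other as a constraint."""
    feasible = {k: v for k, v in designs.items() if v[1] <= max_footprint}
    return min(feasible, key=lambda k: feasible[k][0])

def best_by_weighted_sum(w_cost, w_footprint):
    """Approach 2: combine the criteria into one weighted objective."""
    return min(designs, key=lambda k: w_cost * designs[k][0] + w_footprint * designs[k][1])

print(best_by_cost(6.0))                # "B": cheapest design within the footprint limit
print(best_by_weighted_sum(1.0, 0.0))   # "A": pure cost minimization
print(best_by_weighted_sum(1.0, 20.0))  # "C": a heavy footprint weight shifts the choice
```

The third approach (explicit multicriteria optimization) would instead return the whole set of non-dominated trade-offs rather than a single design.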

3.10.4 Handling Process Complexity

Process synthesis and process design tasks, when performed on real-life, industrial-scale problems, tend to involve a substantial number of operating units. Examples can be found in many areas:

Synthesizing Heat Exchanger Networks involves a large number of possible combinations of potential heat exchangers. Thermodynamic and process-related constraints usually reduce this number, but even then the complexity remains high.
Water subsystem design is no exception, and problems with 20 or more water-using operations are common (Bagajewicz, 2000; Thevendiraraj et al., 2003). This number leads to high levels of combinatorial complexity. In a superstructure, each water-using operation (and each intermediate water main) defines at least one mixer. If the number of water-using operations is denoted by Nop, then there can be no fewer than Nop corresponding binary variables in the network superstructure, and the number of combinations of binary variable values to be examined by the corresponding MIP solver would equal 2^Nop. Thus, for 20 operations there would be more than a million (10^6 = 1,000,000) possible combinations.

When using MPR superstructure models directly, the number of binary variables is dictated by the number of candidate operating units. In the worst case, the solution algorithm will have to examine the entire search space, which grows exponentially with the number of binary variables. One modeling strategy that reduces the search space by several orders of magnitude is to use the Maximal Structure Generation (MSG) and Solution Structures Generation (SSG) algorithms of the P-graph framework (Friedler et al., 1993). These algorithms effectively discard all infeasible combinations of the binary selection variables and retain only the feasible ones.
Another popular technique in process design and software
development is modularization or encapsulation. The complexity
management efforts in information technology and process modeling
led to the development of the concepts of object-oriented modeling (see,
e.g., Modelica, 2009a; UML, 2010) and object-oriented programming
(e.g., C++, C#, Java, Delphi). With these modeling concepts, a number
of related objects or operations can be grouped together and
represented as a single object or operation. Similarly, flowsheeting
and simulation software may offer the option of representing a
distillation column as a single operating unit at the level of the entire
flowsheet while still allowing simulation of the column at the local
level; such a facility is available, for example, in gPROMS (PSE, 2009)
and HYSYS (HYSYS, 2010).
The key to object-oriented thinking is the principle of information
hiding, according to which every object should conceal as many details
as possible about its own functionality and provide to other objects
only the information relevant to interactions with them. Hence, the
interface describes the bundle of information items that are defined
as being available to an external object. An example of applying
information hiding in water network design is the abstraction of the
detailed information about water-using operations into only three
relevant pieces of information for each operation and contaminant:
the limiting inlet and outlet concentrations and the limiting flow rate (or contaminant load). This bundle of information thus constitutes the operation's interface to the water minimization problem.
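In an object-oriented model this abstraction might look as follows; the class, its units, and the washing-operation numbers are hypothetical, chosen only to illustrate information hiding:

```python
class WaterUsingOperation:
    """Hides operational detail; exposes only the water-minimization interface."""

    def __init__(self, name, c_in_max, c_out_max, load_kg_h):
        self.name = name
        self._c_in_max = c_in_max    # limiting inlet concentration [ppm], hidden field
        self._c_out_max = c_out_max  # limiting outlet concentration [ppm], hidden field
        self._load = load_kg_h       # contaminant mass load [kg/h], hidden field

    def interface(self):
        """The information bundle made available to external objects."""
        return {"c_in_max": self._c_in_max,
                "c_out_max": self._c_out_max,
                "mass_load": self._load}

    def limiting_flow_rate(self):
        """Limiting water flow rate [t/h]: mass load divided by the allowed
        concentration rise ([kg/h] / [ppm] * 1000 gives t/h)."""
        return self._load * 1000.0 / (self._c_out_max - self._c_in_max)

washing = WaterUsingOperation("washing", c_in_max=0.0, c_out_max=100.0, load_kg_h=2.0)
print(washing.limiting_flow_rate())  # 20.0 t/h
```

Everything else about the operation (equipment type, schedule, internal configuration) stays hidden behind the interface.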
Efficiently managing the model and problem complexity is greatly
facilitated by the practice of documenting the complete modeling
process and optimization results, including their interpretation. It is
best to document all stages, starting from the conceptual modeling
and ending with the computational implementation and obtained
results. All this documentation should be systematic so that the
reasoning and results can be clearly understood and traced back to
their roots. This style of documentation is extremely useful and makes
the work of teams and individual engineers smoother and more
efficient in the long run.
Another important tool for managing complexity is targeting,
which reveals limitations in the underlying design or operation task
to which the optimization is being applied. With proper targeting it
is possible to obtain an upper bound on system performance and/or
a lower bound on system cost. In fact, it is also possible to calculate
practically achievable targets:

For water systems, current targeting practices mainly yield the first type of estimate: the maximum possible amount of water reduction.
For HEN synthesis, the Maximum Energy Recovery (MER) targets can be established, and HENs achieving them also feature reasonable costs. However, this is seldom the global minimum for the total annualized cost (i.e., the sum of annual operating costs and annualized investment costs).

The logic behind evaluating a system in terms of the upper bound on its performance is this: if the best possible system performance is still insufficient to satisfy the specified requirements, then no further time and effort should be spent on designing that system. The next section gives more details on the use of targeting to manage design problem complexity.

3.10.5 Applying Process Insight

Mathematical tools are absolutely necessary for optimizing the
design and operation of industrial processes. However, meaningful
and applicable results are obtained only when process insight is used
to guide the model building, the optimizing, and the interpretation
of results. Each optimization problem has its own particular features.
For example:

The processing of fruits for canning imposes different water requirements and practices from those for other types of food processing.
When designing HENs, the various underlying process operations that need cooling or heating should be properly examined for data extraction. In some cases, process knowledge may aid in lumping different heating and cooling needs together, thereby simplifying the flowsheet. Process knowledge is also employed when partitioning a process stream whose heat capacity flow rate varies widely into segments.

Every specific requirement discovered in the iterative process of model improvement should be thoroughly documented and implemented in the model, for instance, in the form of constraints or simplifying assumptions.
A good practice is to perform targeting for the desired application: a heat recovery problem, water management, a separator network, or reactor network design. Targeting provides information on potential performance. Targeting procedures have been developed and well tested for a number of applications and domains (see Chapter 2 for a review).
The benefits of using variants of Pinch Analysis or other targeting approaches are twofold. First, the designer can estimate the best possible
performance of the system by using simple models and calculations,
even before using rigorous design procedures, saving valuable time.
The obtained targets can be used in preliminary sensitivity studies
to determine which operations and units should be included in the
design and which should be left out. This approach can greatly
simplify network design if the number of candidate operations is
large and the targets can be used as guides in the network synthesis.
Usually the engineer aims either to achieve the targets exactly or to
approach them closely with the final design. If the targeting model is
too idealized, then the estimates produced will serve as loose
performance or cost bounds, not as tight bounds. Yet in many cases
this strategy results in a simple design procedure and a nearly
optimal outcome.
Second, if the targeting model is exact, as in Pinch Analysis for
Heat Integration (Linnhoff and Flower, 1978), or if it at least captures
all key factors at the corresponding design stage, as with Regional
Energy Clustering (Lam, Varbanov, and Klemeš, 2010), then the
targeting procedure also provides a convenient partitioning of the
original design space. This makes it easier to decompose the problem
and to simplify the remaining actions. A good example of partitioning
the design space is the division above/below the Pinch in Heat
Integration (see Chapter 4).

3.10.6 Handling Model Nonlinearity

As discussed in Section 3.5, convex problems are guaranteed to
produce globally optimal results when solved with deterministic
algorithms that employ local search. In contrast, a nonconvex
optimization problem is difficult to solve, and its solution is not
guaranteed to be a global optimum. All linear MP models are convex
(Williams, 1999). With nonlinear models, however, the problem
convexity must be established on a case-by-case basis. Nonlinear
models hinder the computation process of the solvers (e.g., by
requiring that feasible initial solutions be provided), and they often
result in poor numerical convergence. This is why engineers usually
seek ways to obtain linear models in some form. Crucial factors in this task are preserving the model's precision and validity.

Trading Off Precision and Linearization

Sometimes it is possible to linearize relationships that are inherently
nonlinear. This can be done, for instance, by replacing a single
nonlinear relationship with two or more linear ones that, together,
approximate the original function over the required range. This
technique is known as piecewise linearization. For example, it can be
applied when the available cost functions (for piping, distillation
columns, or heat exchangers) are too complex. The result of this
approach is a small reduction in the overall model precision and an
increase in the number of integer variables in the model; thus,
combinatorial complexity is increased but computational complexity
(due to nonlinearity) is reduced. The principal advantage is the
resulting linearity of the model, which almost always makes it easier
to solve than the original one. Caution must be exercised in the
process of linearization: the loss of precision must be kept as small as possible so that the resulting model remains an adequate representation of the underlying process. Another pitfall is a
potentially unacceptable increase in combinatorial complexity, which
can result if too many linear segments are used to approximate the
original relationship.
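As a sketch, consider a power-law cost curve (the coefficient and the 0.6 exponent are illustrative assumptions) approximated by straight segments between breakpoints; adding breakpoints reduces the interpolation error at the price of more segments (and, in an MILP, more integer variables):

```python
def cost(q):
    """Illustrative nonlinear (concave) equipment cost curve."""
    return 30.0 * q ** 0.6

def piecewise(q, breakpoints):
    """Linear interpolation of cost() between consecutive breakpoints."""
    for lo, hi in zip(breakpoints, breakpoints[1:]):
        if lo <= q <= hi:
            frac = (q - lo) / (hi - lo)
            return cost(lo) + frac * (cost(hi) - cost(lo))
    raise ValueError("q is outside the approximated range")

def max_error(breakpoints, samples=200):
    """Largest deviation of the piecewise approximation over the range."""
    lo, hi = breakpoints[0], breakpoints[-1]
    qs = [lo + (hi - lo) * i / samples for i in range(samples + 1)]
    return max(abs(piecewise(q, breakpoints) - cost(q)) for q in qs)

coarse = [1.0, 50.5, 100.0]                      # 2 linear segments
fine = [1.0 + 99.0 * i / 8 for i in range(9)]    # 8 linear segments
print(max_error(coarse) > max_error(fine))       # True: more segments, less error
```

The approximation is exact at every breakpoint, so the trade-off is entirely between the number of segments and the error between them.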

Discretization of Continuous Process Variables

Another approach to avoiding nonlinearity is to define a number of fixed levels for some variables and then to substitute the original nonlinear variables in the model with linear combinations of integer variables and parameters (where the parameters are derived from the original variables). In this way, the bilinear terms in the mass balances of contaminants can be reduced to purely linear expressions. In its consequences and pitfalls, this technique is similar to piecewise linearization.
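A sketch of the idea for a bilinear contaminant-load term c · F (concentration times flow rate): the concentration is restricted to a few fixed levels c_k, binaries y_k select exactly one level, and the product collapses into a sum of linear terms with constant coefficients. The levels and flow value below are illustrative:

```python
levels = [50.0, 100.0, 150.0]        # fixed concentration levels c_k [ppm] (parameters)

def linearized_load(level_flows, selection):
    """sum_k c_k * F_k, linear because each c_k is a constant parameter.
    F_k = F * y_k, and exactly one binary y_k equals 1."""
    assert sum(selection) == 1, "exactly one concentration level must be selected"
    return sum(c_k * f_k for c_k, f_k in zip(levels, level_flows))

F = 12.0                              # total flow rate [t/h]
selection = [0, 1, 0]                 # binaries: the 100 ppm level is chosen
level_flows = [F * y for y in selection]   # F_k = F where y_k = 1, else 0

print(linearized_load(level_flows, selection) == 100.0 * F)  # matches c * F exactly
```

In a full model the products F · y_k are themselves enforced by linear inequalities rather than computed directly, so no bilinear term remains.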

Other Techniques
There are other approaches and techniques for coping with nonlinear
models. Two of them are of particular interest to practical process
optimization: successive MILP (SMILP), which is used for model
decomposition and solving; and model reformulation.
Successive MILP can be applied to many process engineering optimization models, as long as the nonlinearities are not too strong. One example is the optimization of utility systems, which consist mainly of a set of steam headers combined with steam turbines, gas turbines, boilers, and letdowns. Most of the nonlinearities in such systems are tied to the enthalpy balances of the steam headers, the steam turbines, and the letdowns. The computational difficulties
imposed by the nonlinearities can be overcome by first fixing the
values of some system properties during optimization (e.g., enthalpies
of steam mains), thereby producing a linear optimization model, and
following this with a rigorous simulation after each optimization
step. The linear optimization steps are repeated, followed again by
simulation, and so on until convergence is achieved; see Figure 3.2.
This procedure converges rapidly when applied to the optimization
of existing utility systems: usually five iterations at most are required
to reach reasonably small error levels.
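The loop structure can be sketched as follows; the one-variable "optimize" and "simulate" functions are toy stand-ins (with invented numbers) for the MILP solver and the rigorous simulator:

```python
def optimize(h_fixed):
    """Stand-in for the linear step: with the header enthalpy fixed,
    the steam flow for a given duty is a linear calculation."""
    duty = 50_000.0            # required heat duty [kW], illustrative
    return duty / h_fixed      # steam flow [kg/s]

def simulate(flow):
    """Stand-in for the rigorous simulation: recompute the header
    enthalpy that actually results from the chosen flow."""
    return 2800.0 - 2.0 * flow  # [kJ/kg], invented correlation

h = 2700.0                      # initial guess for the header enthalpy [kJ/kg]
for iteration in range(20):
    flow = optimize(h)          # "MILP" step with enthalpy held constant
    h_new = simulate(flow)      # update the enthalpy by simulation
    if abs(h_new - h) < 1e-3:   # convergence check
        break
    h = h_new

print(iteration + 1)            # number of iterations used (a handful)
```

Even this toy fixed-point version settles within a few passes, mirroring the rapid convergence described above.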
Model reformulation refers to the symbolic transformation of the original nonlinear equations into another set of equations that are equivalent but linear. The resulting set usually contains more equations than the initial one. One such reformulation technique, known as the Glover transformation (Floudas, 1995), can transform equations containing the product of a continuous and a binary variable. The essence of the technique is to replace each term that is a product of a continuous variable and a binary variable with an additional continuous variable and an additional set of linear inequality constraints.
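For a product z = x·y with y binary and x continuous on [L, U], the transformation introduces a new continuous variable z and four linear inequalities. A quick check with illustrative bounds that the inequalities pin z to x·y:

```python
def glover_feasible(z, x, y, L, U):
    """The four linear inequalities that replace z = x * y
    (y binary, L <= x <= U)."""
    return (L * y <= z <= U * y) and (x - U * (1 - y) <= z <= x - L * (1 - y))

L, U, x = 2.0, 10.0, 7.0

# y = 1: the inequalities force z = x exactly
print(glover_feasible(7.0, x, 1, L, U), glover_feasible(6.9, x, 1, L, U))  # True False

# y = 0: the inequalities force z = 0 exactly
print(glover_feasible(0.0, x, 0, L, U), glover_feasible(0.1, x, 0, L, U))  # True False
```

Because every inequality is linear in z, x, and y, the reformulated model contains no product terms.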



FIGURE 3.2 SMILP procedure for solving nonlinear optimization problems.

3.10.7 Evaluating Model Adequacy and Precision

Once the model is built, the next step is validation. This process boils
down to evaluating how precisely the model predicts real-life
phenomena as well as how adequately it represents the modeled
system (Steppan, Werner, and Yeater, 1998; Montgomery, 2005). If the
model turns out to be imprecise or inadequate, then the reasons for
these shortcomings must be discovered and addressed. This iterative
process is similar to debugging during software development.
It is generally accepted that residuals (and their plots) are
sufficient for assessing whether a given model accurately predicts
the underlying process. The residual plots can be used to minimize or
even eliminate stochastic errors. In addition, parity plots are helpful
in exposing any systematic errors in the model.
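A minimal residual check on invented data: an ordinary least-squares line is fitted, and the residuals are then inspected for systematic structure (for an OLS fit with an intercept, the residuals sum to zero by construction, so any remaining pattern points to model inadequacy):

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 7.8, 10.1]            # invented measurements near y = 2x
a, b = fit_line(xs, ys)
residuals = [y - (a + b * x) for x, y in zip(xs, ys)]

print(abs(sum(residuals)) < 1e-9)           # True: no overall bias in an OLS fit
```

Plotting these residuals against the predictions is the graphical counterpart of a parity plot.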
The final check is to analyze the model's variance (Steppan, Werner, and Yeater, 1998; Montgomery, 2005). In essence, this means
determining whether the empirically derived coefficients and the
models predictions have any statistical significance. This is
performed by means of a standard procedure for the Analysis of
Variance (ANOVA).
Process Integration for Improving Energy Efficiency

Heat recovery is widely applied in industrial processes and has an extensive historical record. However, systematic methods for performing heat recovery are relatively new when compared with the age of modern industry.

4.1 Introduction to Heat Exchange and Heat Recovery

In industry, large amounts of thermal energy are used to perform
heating. Examples of this can be found in crude oil preheating before
distillation, preheating of feed flows to chemical reactors, and heat
addition to carry out endothermic chemical reactions. Similarly, some processes, such as condensation, exothermal chemical reactions, and product finalization, require that heat be extracted, which results in process cooling. There are several options for utility heating; these include steam, hot mineral oils, and direct-fired heating. Steam is the most prevalent option because of its high
specific heating value in the form of latent heat. Utility cooling
options include water (used for moderate-temperature cooling when
water is available), air (used when water is scarce or not economical
to use), and refrigeration (when subambient cooling is needed). Heat
recovery can be used to provide either heating or cooling to processes.
Heat recovery may take various forms: transferring heat between
process streams, generating steam from higher-temperature process
waste heat, and preheating (e.g., air for a furnace, air or feed water for
a boiler using waste heat).
Heat transfer takes place in heat exchangers, which can employ
either direct mixing or indirect heat transfer via a wall. Direct heat
exchange is also referred to as nonisothermal mixing because the
temperatures of the mixed streams are different. Mixing heat
exchangers are efficient at transferring heat and usually have low capital cost. In most industries, the bulk of the heat exchange must
occur without mixing the heat-exchanging streams. In order to
exchange only heat while keeping the streams separate, surface heat
exchangers are employed. In these devices, heat is exchanged through
a dividing wall. Because of its high thermal efficiency, the counter-
current stream arrangement is the most common with surface
heat exchangers. To simplify the discussion, counter-current heat
exchangers are assumed unless stated otherwise. In terms of
construction types, the traditional shell-and-tube heat exchanger is
still the most common. However, plate-type and other compact heat
exchangers are gaining increased attention. Their compactness, together with significant improvements in their resistance to leaking, has made them preferable in many cases.

4.1.1 Heat Exchange Matches

A hot process stream can supply heat to a cold one when paired in
one or several physical heat exchangers arranged in parallel or
sequence. Each such pairing is referred to as a heat exchange match.
The form of the steady-state balance equations for heat exchange
matches that is most convenient for Heat Integration calculations is
based on modeling a match as consisting of hot and cold sides, as
shown in Figure 4.1. The hot and cold parts each have a simple, steady-state enthalpy balance that involves just one material stream and one heat transfer flow.
The main components of the model are (1) calculations of the
heat transfer flows accounted for by the enthalpy balances and
(2) estimation of the necessary heat transfer area. For the latter, both
the log-mean temperature difference and the overall heat transfer
coefficient are employed. The enthalpy balance of the hot and cold
parts, and the kinetic equation of the heat transfer, may be written
as follows:

QHE = mhot (hin,hot - hout,hot)   (4.1)

FIGURE 4.1 Process flow diagram of a heat exchange match, showing its hot and cold parts.


QHE = mcold (hout,cold - hin,cold)   (4.2)

QHE = U A ΔTLM   (4.3)

where QHE [kW] is the heat flow across the whole heat exchanger, U [kW/(m²·°C)] is the overall heat transfer coefficient, A [m²] is the heat transfer area, and ΔTLM [°C] is the logarithmic-mean temperature difference. More information can be found in Shah and Sekulić (2003), Tovazshnyansky et al. (2004), and Shilling et al. (2008).
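Equations (4.1) through (4.3) translate directly into a small sizing calculation; the duty, overall coefficient, and terminal temperature differences below are illustrative values, not data from the text:

```python
from math import log

def lmtd(dt1, dt2):
    """Logarithmic-mean of the two terminal temperature differences
    of a counter-current heat exchanger."""
    if abs(dt1 - dt2) < 1e-9:
        return dt1                       # limiting case of equal differences
    return (dt1 - dt2) / log(dt1 / dt2)

def required_area(q_kw, u, dt1, dt2):
    """Heat transfer area from Eq. (4.3): A = QHE / (U * dT_LM)."""
    return q_kw / (u * lmtd(dt1, dt2))

# Illustrative match: QHE = 2080 kW, U = 0.5 kW/(m2 C),
# terminal temperature differences of 40 C and 20 C
print(round(required_area(2080.0, 0.5, 40.0, 20.0), 1))   # area in m2
```

Note how a smaller temperature difference enlarges the computed area, the trade-off that later motivates the choice of a minimum allowed temperature difference.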

4.1.2 Implementing Heat Exchange Matches

The heat exchange matches are often viewed as being identical to
heat exchangers, but this is not always the case. A given heat exchange
match may be implemented by devices of different construction or by a combination of devices; for example, two heat exchangers in sequence may implement a single heat exchange match. The
distinction between the concept of a heat exchange match and its
implementation via heat exchangers is important because of capital
cost considerations.

4.2 Basics of Process Integration

4.2.1 Process Integration and Heat Integration
A historical review of the field was given in Chapter 2. Initially,
attention was focused on reusing any waste heat generated on
different sites. Each surface heat exchanger was described with only
a few steady-state equations, and the thermal energy saved by reusing
waste heat led to reductions in the expense of utility resources. This
approach became popular under the names Heat Integration (HI) and
the more general term Process Integration (PI). In this context, HI
means integrating different processes to achieve energy savings.
Engineers realized that integration could also reduce the consumption
of other resources as well as the emission of pollutants. Heat and
Process Integration came to be defined more widely in response to
similar developments in water reuse and wastewater minimization.

4.2.2 Hierarchy of Process Design

Process design has an inherent hierarchy that can be exploited for
making design decisions. This hierarchy may be represented by the
so-called onion diagram (Linnhoff et al., 1982), as shown in Figure 4.2.
The design of an industrial process starts with the reactors or other
key operating units (the onion's core). This is supplemented and
served by other parts of the process, such as the separation subsystem
(the next layer) and the Heat Exchanger Network (HEN) subsystem.

FIGURE 4.2 The onion diagram.

The remaining heating and cooling duties, as well as the power demands, are handled by the utility system.

4.2.3 Performance Targets

The thermodynamic bounds on heat exchange can be used to estimate
the utility usage and heat exchange area for a given heat recovery
problem. The resulting estimates of the process performance are a
lower bound on the utility demands and a lower bound on the
required heat transfer area. These bounds are known as targets because the heat recovery estimates are achievable in practice and usually minimize the total cost of the HEN being designed.

4.2.4 Heat Recovery Problem Identification

For efficient heat recovery in industry, the relevant data must be
identified and presented systematically. In the field of Heat
Integration, this process is referred to as data extraction. The heat
recovery problem data are extracted in several steps:

1. Inspect the general process flowsheet, which may contain heat recovery exchangers.
2. Remove the recovery heat exchangers and replace them with equivalent virtual heaters and coolers.
3. Lump all consecutive heaters and coolers.
4. The resulting virtual heaters and coolers represent the net
heating and cooling demands of the flowsheet streams.
5. The heating and cooling demands of the flowsheet streams are then listed in a tabular format, where each heating demand is referred to as a cold stream and, conversely, each cooling demand as a hot stream.

This procedure is best illustrated by an example. Figure 4.3 shows a process flowsheet involving two reactors and a distillation column. The process already incorporates two recovery heat exchangers. The utility heating demand of the process is H = 1760 kW, and the utility cooling demand is C = 920 kW.
The necessary thermal data has to be extracted from the initial
flowsheet. Figure 4.4 shows the flowsheet after steps 1 through 4. The
heating and cooling demands of the streams have been consolidated
by removing the existing exchangers, and the reboiler and condenser
duties have been left out of the analysis for simplicity (although these
duties would be retained in an actual study). It is assumed that any
process cooling duty is available to match up with any heating duty.
Applying step 5 to the data in Figure 4.4 produces the data set in
Table 4.1. By convention, heating duties are positive and cooling ones
are negative. (The subscripts S and T denote supply and target
temperatures for the process streams.)
The last column of Table 4.1 gives the heat capacity flow rate (CP).
For streams that do not change phase (i.e., from liquid to gas or vice
versa), CP is defined as the product of the specific heat capacity and
the mass flow rate of the corresponding stream:

CP = mstream cp,stream   (4.4)



FIGURE 4.3 Data extraction: Example process flowsheet (after CPI, 2004 and 2005).



FIGURE 4.4 Data extraction: Heating and cooling demands (after CPI, 2004 and 2005).

Stream   Type   TS [°C]   TT [°C]   ΔH [kW]   CP [kW/°C]
H1       Hot    182       78        2080      20
H2       Hot    138       34        4160      40
C3       Cold   52        100       3840      80
C4       Cold   30        120       3240      36

TABLE 4.1 Data Set for Heat Recovery Analysis

The CP can also be calculated using the following simple equation:

CP = ΔH / (TT - TS)   (4.5)

When phase transition occurs, the latent heat is used instead of CP to calculate the stream duties. Some more problems related to data extraction are discussed in Chapter 12.
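Equation (4.5) can be checked directly against the stream data of Table 4.1:

```python
def heat_capacity_flow_rate(dh_kw, t_supply, t_target):
    """CP = dH / (TT - TS), Eq. (4.5); magnitudes are used here, so the
    result is positive for both hot and cold streams."""
    return abs(dh_kw / (t_target - t_supply))

# (TS, TT, dH) for the four streams of Table 4.1
streams = {"H1": (182.0, 78.0, 2080.0), "H2": (138.0, 34.0, 4160.0),
           "C3": (52.0, 100.0, 3840.0), "C4": (30.0, 120.0, 3240.0)}

cps = {name: heat_capacity_flow_rate(dh, ts, tt)
       for name, (ts, tt, dh) in streams.items()}
print(cps)   # CP values of 20, 40, 80, and 36 kW/C, as listed in the table
```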

4.3 Basic Pinch Technology

The main strategy of Pinch-based Process Integration is to identify
the performance targets before starting the core process design
activity. Following this strategy yields important clues and design
guidelines. The most common hot utility is steam. Heating with steam is usually approximated as a constant-temperature heating utility. Cooling with water is nonisothermal because the cooling effect results from sensible heat absorption into the water stream, which raises the water temperature.

4.3.1 Setting Energy Targets

Heat Recovery between One Hot and One Cold Stream
The Second Law of thermodynamics states that heat flows from higher-temperature to lower-temperature regions. As follows from Eq. (4.3), the heat transfer area required in a heat exchanger is inversely proportional to the temperature difference between the streams. In HEN design, the minimum allowed temperature difference (ΔTmin) is the lower bound on any temperature difference to be encountered in any heat exchanger in the network. The value of ΔTmin is a design parameter determined by exploring the trade-offs between more heat recovery and the larger heat transfer area requirement. Any given pair of hot and cold process streams may exchange as much heat as allowed by their temperatures and the minimum temperature difference.
Consider the two-stream example shown in Figure 4.5(a). The amount of heat recovery is 10 MW, which is achieved by allowing ΔTmin = 20°C. If ΔTmin = 10°C, as in Figure 4.5(b), then it is possible to squeeze out one more megawatt of heat recovery. To obtain the heat recovery targets for a practical HEN design problem, this principle needs to be extended to handle multiple streams.

Evaluation of Heat Recovery for Multiple

Streams: The Composite Curves
The analysis starts by combining all hot streams and all cold
streams into two Composite Curves or CCs (Linnhoff et al., 1982).
For each process there are two curves: one for the hot streams (Hot
Composite Curve, HCC) and another for the cold streams (Cold
Composite Curve, CCC). Each Composite Curve (CC) consists of a

FIGURE 4.5 Thermodynamic limits on heat recovery: (a) heat recovery at ΔTmin = 20°C; (b) heat recovery at ΔTmin = 10°C.


temperature-enthalpy (T-H) profile, representing the overall heat availability in the process (the HCC) and the overall heat demands of the process (the CCC). The procedure of HCC construction is illustrated in Figure 4.6 using the data from Table 4.1. All temperature
intervals are formed by the starting and target temperatures of the
hot process streams. Within each temperature interval, a composite
segment is formed consisting of (1) a temperature difference equal
to that of the interval and (2) a total cooling requirement equal to
the sum of the cooling requirements of all streams within the
interval. This is achieved by summing up the heat capacity flow
rates of the streams crossing the interval. Next, the composite
segments from all temperature intervals are combined to form the
HCC. Construction of the CCC is entirely analogous.
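The interval bookkeeping behind the HCC can be sketched directly from the two hot streams of Table 4.1; the computed segment duties (1760, 3600, and 880 kW, totalling 6240 kW) are the composite segments shown in Figure 4.6:

```python
# Hot streams from Table 4.1: (supply temperature, target temperature, CP)
hot_streams = [(182.0, 78.0, 20.0), (138.0, 34.0, 40.0)]

# Interval boundaries are all supply and target temperatures
boundaries = sorted({t for ts, tt, _ in hot_streams for t in (ts, tt)})

segments = []
for t_lo, t_hi in zip(boundaries, boundaries[1:]):
    # Sum the CPs of the hot streams that span this temperature interval
    cp_sum = sum(cp for ts, tt, cp in hot_streams if tt <= t_lo and t_hi <= ts)
    segments.append((t_lo, t_hi, cp_sum * (t_hi - t_lo)))

print(segments)   # [(34.0, 78.0, 1760.0), (78.0, 138.0, 3600.0), (138.0, 182.0, 880.0)]
```

Plotting these segments end to end against temperature gives the Hot Composite Curve; the cold streams are treated the same way for the CCC.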
The Composite Curves are combined in the same graph in order
to identify the maximum overlap, which represents the maximum
amount of heat that could be recovered. The HCC and CCC for the
example from Table 4.1 are shown together in Figure 4.7.
Both CCs can be moved horizontally (i.e., along the H axis), but
usually the HCC position is fixed and the CCC is shifted. This is
equivalent to varying the amount of heat recovery and (simultaneously)
the amount of required utility heating and cooling. Where the curves
overlap, heat can be recuperated between the hot and cold streams.
More overlap means more heat recovery and smaller utility
requirements, and vice versa. As the overlap increases, the temperature
differences between the overlapping curve segments decrease. Finally,
at a certain overlap, the curves reach the minimum allowed temperature
difference, Tmin. Beyond this point, no further overlap is possible. The
closest approach between the curves is termed the Pinch point (or
simply the Pinch); it is also known as the heat recovery Pinch.
FIGURE 4.6 Constructing the Hot Composite Curve: (a) the hot streams plotted separately (CP1 = 20 kW/°C, CP2 = 40 kW/°C); (b) the composite hot stream.

It is important to note that the amount of the largest overlap (and thus the maximum heat recovery) would be different if the minimum
allowed temperature difference is changed for the same set of hot and cold streams. The larger the value of ΔTmin, the smaller the possible maximum heat recovery. Specifying the minimum utility heating, the minimum utility cooling, or the minimum temperature difference fixes the relative position of the two Composite Curves and hence the maximum possible amount of heat recovery. The identified heat recovery targets are not absolute; they are relative to the specified value of ΔTmin. If that value is increased, then the minimum utility requirements also increase and the potential for maximum recovery drops; see Figure 4.8.
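These targets can be reproduced with a minimal Problem Table (heat cascade) sketch over shifted temperatures, using the streams of Table 4.1; for ΔTmin = 10°C it returns the utility targets shown in Figure 4.7, and for 20°C those of Figure 4.8:

```python
def energy_targets(hot, cold, dt_min):
    """Problem Table Algorithm sketch: returns (QH_min, QC_min) in kW."""
    # Shift hot streams down and cold streams up by dt_min / 2;
    # cold CPs are stored with a negative sign (heat demand)
    shifted = ([(ts - dt_min / 2, tt - dt_min / 2, cp) for ts, tt, cp in hot]
               + [(ts + dt_min / 2, tt + dt_min / 2, -cp) for ts, tt, cp in cold])
    bounds = sorted({t for ts, tt, _ in shifted for t in (ts, tt)}, reverse=True)
    cascade = [0.0]
    for t_hi, t_lo in zip(bounds, bounds[1:]):
        # Net CP in the interval: hot streams add, cold streams subtract
        net_cp = sum(cp for ts, tt, cp in shifted
                     if min(ts, tt) <= t_lo and t_hi <= max(ts, tt))
        cascade.append(cascade[-1] + net_cp * (t_hi - t_lo))
    qh_min = -min(cascade)           # hot utility making all cascade flows non-negative
    qc_min = cascade[-1] + qh_min    # heat leaving the bottom of the cascade
    return qh_min, qc_min

hot = [(182.0, 78.0, 20.0), (138.0, 34.0, 40.0)]    # Table 4.1 hot streams
cold = [(52.0, 100.0, 80.0), (30.0, 120.0, 36.0)]   # Table 4.1 cold streams

print(energy_targets(hot, cold, 10.0))   # (1168.0, 328.0)
print(energy_targets(hot, cold, 20.0))   # (1568.0, 728.0)
```

The temperature at which the cascade hits its minimum is the Pinch, so the same calculation also locates the Pinch point.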

FIGURE 4.7 The HCC and CCC at ΔTmin = 10°C: QC,min = 328 kW, Qrec = 5912 kW, QH,min = 1168 kW.

FIGURE 4.8 Variation of heat recovery targets with ΔTmin: at ΔTmin = 10°C, QC,min = 328 kW, Qrec = 5912 kW, QH,min = 1168 kW; at ΔTmin = 20°C, QC,min = 728 kW, Qrec = 5512 kW, QH,min = 1568 kW.


The appropriate value for ΔTmin is determined by economic trade-offs. Increasing ΔTmin results in larger minimum utility demands and increased energy costs; choosing a higher value reflects the need to reduce heat transfer area and its corresponding investment cost. Conversely, if ΔTmin is reduced, then utility costs go down but investment costs go up. This trade-off is illustrated in Figure 4.9.

4.3.2 The Heat Recovery Pinch

The heat recovery Pinch has important implications for the HEN
being designed. As illustrated in Figure 4.10, the Pinch sets the
absolute limits for heat recovery within the process.





FIGURE 4.9 Trade-off between investment and energy costs as a function of ΔTmin, showing the optimum ΔTmin.


FIGURE 4.10 Limits for process heat recovery set by the Pinch.
The Pinch point divides the heat recovery problem into a net heat
sink above the Pinch point and a net heat source below it (Figure 4.11).
At the Pinch point, the temperature difference between the hot and cold streams is exactly equal to ΔTmin, which means that at this point the streams are not allowed to exchange heat. As a result, the heat
sink above the Pinch is in balance with the minimum hot utility
(QH,min) and the heat source below the Pinch is in balance with the
minimum cold utility (QC,min), while no heat is transferred across the
Pinch via utilities or via process-to-process heat transfer.
No heat can be transferred from below to above the Pinch,
because this is thermodynamically infeasible. However, it is feasible
to transfer heat from hot streams above the Pinch to cold streams
below the Pinch. All cold streamseven those below the Pinch
could be heated by a hot utility; likewise, the hot streams (even above
the Pinch) could be cooled by a cold utility. Although these
arrangements are thermodynamically feasible, applying them would
cause utility use to exceed the minimum, as identified by the Pinch
Analysis. This is a fundamental relationship in the design of heat
recovery systems.
What happens if heat is transferred across the Pinch? Recall that
it is possible to transfer heat only from above to below the Pinch. If,
say, XP units of heat are transferred across the Pinch (Figure 4.12),
then QH,min and QC,min will each increase by the same amount in order
to maintain the heat balances of the two problem parts. Any extra
heat that is added to the system by the hot utility must then be taken
away by the cold utility, in addition to the minimum requirement.
FIGURE 4.11 Partitioning the heat recovery problem (zero cross-Pinch transfer).

FIGURE 4.12 More in, more out.

Cross-Pinch process-to-process heat transfer is not the only way
by which a problem's thermodynamic Pinch partitioning can be
violated. This could also happen if the external utilities are placed
incorrectly. For example, any utility heating below the Pinch will
create a need for additional utility cooling in that part of the system
(Figure 4.12). Conversely, any utility cooling above the Pinch will
create a need for additional utility heating. The implications of the
Pinch for heat recovery problems can be distilled into the following
three conditions, which must hold if the minimum energy targets for
a process are to be achieved.

1. Heat must not be transferred across the Pinch.
2. There must be no external cooling above the Pinch.
3. There must be no external heating below the Pinch.

Violating any of these rules will lead to an increase in energy utility
demands. The rules are applied explicitly in the context of HEN
synthesis by the Pinch Design Method (Linnhoff and Hindmarsh,
1983) and also before a HEN retrofit analysis to identify causes of
excessive utility demands by a process. Other HEN synthesis
methods, if they achieve the minimum utility demands, also
conform to the Pinch rules (though sometimes only implicitly).

4.3.3 Numerical Targeting: The Problem Table Algorithm

The Composite Curves are a useful tool for visualizing heat recovery
targets. However, they can be time-consuming to draw for problems
that involve many process streams. In addition, targeting that relies
solely on such graphical techniques cannot be very precise. The
process of identifying numerical targets is therefore usually based
on an algorithm known as the Problem Table Algorithm (PTA). Some
Process Integration for Improving Energy Efficiency 57
MPR-oriented authors employ the equivalent transshipment model
(Cerd et al., 1990). The steps are as follows:

1. Shift the process stream temperatures.
2. Set up temperature intervals.
3. Calculate interval heat balances.
4. Assuming zero hot utility, cascade the balances as heat flows.
5. Ensure positive heat flows by increasing the hot utility as needed.

The algorithm will be illustrated using the sample data in Table 4.2.

Step 1
Because the PTA uses temperature intervals, it is necessary to set up
a unified temperature scale for the calculations. If the real stream
temperatures were used, then some of the heat content would be left
out of the recovery. This problem is avoided by obtaining shifted
stream temperatures (T*) for the PTA calculations. The hot streams are
shifted to be colder by ΔTmin/2 and the cold streams are shifted to be
hotter by ΔTmin/2. If the shifted temperatures (T*) of a cold and a hot
stream (or their parts) are the same, then their real temperatures are
still actually ΔTmin apart, which allows for feasible heat transfer. This
operation is equivalent to shifting the Composite Curves toward
each other vertically, as illustrated in Figure 4.13. The last two
columns in Table 4.2 show the shifted process stream temperatures.
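The shifting step can be sketched in a few lines of Python. This is a sketch using the stream data of Table 4.2; the tuple layout and function name are illustrative assumptions, not code from the book:

```python
# Shift stream temperatures for the Problem Table Algorithm:
# hot streams become DTmin/2 colder, cold streams DTmin/2 hotter.
DT_MIN = 10.0  # degC

# (stream no., type, supply T, target T, CP [kW/degC]) -- Table 4.2
streams = [
    (1, "cold", 20.0, 180.0, 20.0),
    (2, "hot", 250.0, 40.0, 15.0),
    (3, "cold", 140.0, 230.0, 30.0),
    (4, "hot", 200.0, 80.0, 25.0),
]

def shift(streams, dt_min):
    """Return the streams with shifted supply/target temperatures T*."""
    shifted = []
    for no, kind, ts, tt, cp in streams:
        d = -dt_min / 2 if kind == "hot" else dt_min / 2
        shifted.append((no, kind, ts + d, tt + d, cp))
    return shifted

for s in shift(streams, DT_MIN):
    print(s)  # e.g., stream 2 becomes (2, 'hot', 245.0, 35.0, 15.0)
```

The output reproduces the last two columns of Table 4.2.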

Step 2
Temperature intervals are formed by listing all shifted process
stream temperatures in descending order (any duplicate values are

FIGURE 4.13 Temperature shifting to ensure feasible heat transfer (Shifted Composite Curves).


No.  Type  TS [°C]  TT [°C]  CP [kW/°C]  TS* [°C]  TT* [°C]
1    Cold  20       180      20          25        185
2    Hot   250      40       15          245       35
3    Cold  140      230      30          145       235
4    Hot   200      80       25          195       75

TABLE 4.2 Problem Table Algorithm Example: Process Streams Data (ΔTmin = 10°C)

Interval    Stream        ΔTinterval  |ΣCPH − ΣCPC|  ΔHinterval  Surplus/
temp. [°C]  population    [°C]        [kW/°C]        [kW]        Deficit
245–235     2             10          15             150         Surplus
235–195     2, 3          40          15             600         Deficit
195–185     2, 3, 4       10          10             100         Surplus
185–145     1, 2, 3, 4    40          10             400         Deficit
145–75      1, 2, 4       70          20             1400        Surplus
75–35       1, 2          40          5              200         Deficit
35–25       1             10          20             200         Deficit

TABLE 4.3 Problem Table Algorithm for the Streams in Table 4.2

considered just once). This action creates temperature boundaries

(TBs), which form the temperature intervals for the problem. For the
example in Table 4.2, the TBs are 245°C, 235°C, 195°C, 185°C, 145°C,
75°C, 35°C, and 25°C.

Step 3
The heat balance is calculated for each temperature interval. First,
the stream population of the process segments falling within each
temperature interval (the first two columns of Table 4.3) is identified.
Next, the sums of the segment CPs (heat capacity flow rates) in each
interval are calculated; then that sum is multiplied by the interval
temperature difference (i.e., the difference between the TBs that
define each interval). This calculation is also illustrated in Table 4.3.

Step 4
The Problem Heat Cascade shown in Figure 4.14 has a box allocated
to each temperature interval; each box contains the corresponding
interval enthalpy balances. The boxes are connected with heat flow
arrows in order of descending temperature. The top heat flow
represents the total hot utility provided to the cascade, and the
bottom heat flow represents the total cold utility. The hot utility flow
is initially assumed to be zero and this value is combined (summed)
with the enthalpy balance of the top cascade interval to produce the
value for the next lower cascade heat flow. This operation is repeated
for the lower temperature intervals and connecting heat flows until
the bottom heat flow is calculated, resulting in the cascade shown in
Figure 4.14(a).

Step 5
The resulting heat flow values in the cascade are examined, and a
feasible heat cascade is obtained; see Figure 4.14(b). From the
cascaded heat flows, the smallest value is identified; if it is
nonnegative (i.e., positive or zero), then the heat cascade is
thermodynamically feasible. If a negative value is obtained, then a
positive utility flow of the same absolute value has to be provided at
the topmost heat flow, after which the cascading described in Step 4
is repeated. The resulting heat cascade is guaranteed to be feasible
and provides numerical heat recovery targets for the problem. The
topmost heat flow represents the minimum hot utility, the bottommost
heat flow represents the minimum cold utility, and the TB with zero
heat flow represents the location of the (heat recovery) Pinch. It is
often possible to obtain more than one zero-flow temperature
boundary, each representing a separate Pinch point.

(a) Initial cascade         (b) Feasible cascade
245°C:    0 kW              245°C:  750 kW
   ΔH = +150 kW                ΔH = +150 kW
235°C:  150 kW              235°C:  900 kW
   ΔH = −600 kW                ΔH = −600 kW
195°C: −450 kW              195°C:  300 kW
   ΔH = +100 kW                ΔH = +100 kW
185°C: −350 kW              185°C:  400 kW
   ΔH = −400 kW                ΔH = −400 kW
145°C: −750 kW              145°C:    0 kW (Pinch)
   ΔH = +1400 kW               ΔH = +1400 kW
75°C:   650 kW              75°C:  1400 kW
   ΔH = −200 kW                ΔH = −200 kW
35°C:   450 kW              35°C:  1200 kW
   ΔH = −200 kW                ΔH = −200 kW
25°C:   250 kW              25°C:  1000 kW

FIGURE 4.14 Heat Cascade for the process data in Table 4.2.

4.3.4 Threshold Problems

Threshold problems feature only one utility type, either hot or
cold. They are important mostly because they often exhibit no utility–
capital trade-off below a certain value of ΔTmin, since the minimum
utility demand (hot or cold) becomes invariant; see Figure 4.15.

FIGURE 4.15 Threshold HEN design cases: (a) heat recovery with hot and cold utilities; (b) more heat recovery, no hot utility; (c) no increase in heat recovery; (d) utility substitution.

Typical examples of threshold Heat Integration problems involve

high-temperature fuel cells, which usually have large net cooling
demands but no net heating demands (Varbanov et al., 2006;
Varbanov and Klemeš, 2008). An essential feature that distinguishes
threshold problems is that, as ΔTmin is varied, demands for only one
utility type (hot or cold) are identified over the variation range; in
contrast, pinched problems require both hot and cold utilities over
this range. When synthesizing HENs for threshold problems, one
can distinguish between two subtypes (see Figure 4.16):

1. Low-threshold ΔTmin: Problems of this type should be treated
exactly as Pinch-type problems.
2. High-threshold ΔTmin: For these problems, it is first necessary
to satisfy the required temperature for the no-utility end
before proceeding with the remaining design (using the tick-
off heuristic; see Figure 4.53).

4.3.5 Multiple Utilities Targeting

Utility Placement: Grand Composite Curve
In many cases, more than one hot and one cold utility are available
for providing the external heating and cooling requirements after
energy recovery and it is necessary to find and evaluate the

cheapest and most effective combination of the available utilities
(Figure 4.17).

FIGURE 4.16 Threshold problems: (a) low-threshold ΔTmin; (b) high-threshold ΔTmin.
To assist with this choice and to enhance the information
derived from the HCC and CCC, another graphical construction
has been developed, known as the Grand Composite Curve (GCC)
(Townsend and Linnhoff, 1983). The heat cascade and the PTA
(Linnhoff and Flower, 1978) offer guidelines for the optimum
placement of hot and cold utilities, and this allows one to determine
the heat loads associated with each utility. For the previous sections,
the assumption has been that only one cold and one hot utility are
available, albeit with sufficiently low and high temperatures to
satisfy the cooling and/or heating demands of the process. However,
most industrial sites feature multiple heating and cooling utilities
at several different temperature levels (e.g., steam levels, refrigeration
levels, hot oil circuit, furnace flue gas). Each utility has a different
unit cost. Usually the higher-temperature hot utilities and the
lower-temperature cold utilities cost more than the ones with
temperatures closer to the ambient. This fact underscores the need
to choose a mix that results in the lowest utility cost. The general
objective is to maximize the use of cheaper utilities and to minimize
the use of more expensive utilities. For example, it is usually
preferable to use low-pressure (LP) instead of high-pressure (HP)
steam and to use cooling water (CW) instead of refrigeration. The
Composite Curves plot in Figure 4.7 provides a convenient view for
evaluating the process driving force and the general heat recovery
targets. However, the CCs are not useful for identifying targets
when multiple utility levels are available; the GCC is used for
this task.

FIGURE 4.17 Choices of hot and cold utilities: fuel fired in the boiler house and power plant produces power and steam levels via steam and gas turbines; processes and building complexes draw heating from these levels and from furnaces, and cooling from cooling utilities and refrigeration (amended after CPI 2004 and …).

Construction of the Grand Composite Curve

The GCC is constructed using the Problem Heat Cascade (Figure 4.14).
The heat flows are plotted in the T-H space, where the heat flow at
each temperature boundary corresponds to the X coordinate and the
temperature to the Y coordinate (Figure 4.18).
The GCC can be directly related to the Shifted Composite Curves
(SCCs), which are the result of shifting the CCs toward each other by
ΔTmin/2 so that the curves touch at the Pinch; see Figure 4.19.
At each temperature boundary, the heat flow in the Problem Heat
Cascade and GCC corresponds to the horizontal distance between
the SCCs.
The GCC has several fundamental properties that facilitate an
understanding of the underlying heat recovery problem. The parts
with positive slope (i.e., running uphill from left to right) indicate
that cold streams dominate (Figures 4.18 and 4.19). Similarly, the
parts with negative slope indicate excess hot streams. The shaded
areas in the GCC plot, which signify opportunities for process-to-
process heat recovery, are referred to as heat recovery pockets.

Utility Placement Options

The GCC shows the hot and cold utility requirements of the process in
terms of both enthalpy and temperature. This allows one to distinguish
between utilities at different temperature levels. There are typically

FIGURE 4.18 Constructing the GCC for the streams in Table 4.2: the feasible cascade heat flows (750, 900, 300, 400, 0, 1400, 1200, and 1000 kW) are plotted against the shifted temperature boundaries (245, 235, 195, 185, 145, 75, 35, and 25°C), with the Pinch at 145°C, the hot utility at the top, and the cold utility at the bottom.

FIGURE 4.19 Relation between the GCC (left) and the SCCs (right) for the streams in Table 4.2.

FIGURE 4.20 Using the GCC to target for single and multiple steam levels: (a) a single (HP) steam level with cooling water; (b) multiple (HP and MP) steam levels with cooling water.

utilities at several different temperature levels available on a site;
for example, there may be a supply of both high-pressure (HP) and
medium-pressure (MP) steam. As indicated previously, it is desirable to
maximize the use of cheaper utilities and to minimize the use of more
expensive ones. Utilities at higher temperature and/or pressure are
usually more expensive; see Figure 4.20. Therefore, MP steam is used
first, extending from the Y axis until it touches the GCC (creating ΔTmin
at this point), which maximizes its usage. Only then is HP steam used.
When a utility line or profile touches the GCC, a new Pinch point
is created, termed a Utility Pinch (the MP steam line touching the
GCC in Figure 4.20). Each additional steam level creates another
Utility Pinch and increases the complexity of the utility system.
Higher complexity has several negative consequences, including
increased capital costs, greater potential for leaks, reduced safety,
and more maintenance expenses. Therefore limits are typically
placed on the number of steam levels.
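One way to target an intermediate steam level numerically is to read it off the feasible heat cascade: moving a load from the top of the cascade down to the steam's shifted temperature is feasible only while every heat flow at or above that temperature stays nonnegative. A sketch using the Table 4.2 cascade; the 190°C MP level and the min-flow rule as coded here are illustrative assumptions:

```python
# Maximum duty of a steam level placed at shifted temperature t_steam:
# the load is limited by the smallest cascade heat flow at or above t_steam.
bounds = [245.0, 235.0, 195.0, 185.0, 145.0, 75.0, 35.0, 25.0]
flows = [750.0, 900.0, 300.0, 400.0, 0.0, 1400.0, 1200.0, 1000.0]

def flow_at(t):
    """Linearly interpolate the cascade heat flow at shifted temperature t."""
    for (t1, f1), (t2, f2) in zip(zip(bounds, flows), zip(bounds[1:], flows[1:])):
        if t2 <= t <= t1:
            return f2 + (f1 - f2) * (t - t2) / (t1 - t2)
    raise ValueError("temperature outside cascade range")

def max_steam_duty(t_steam):
    above = [f for t, f in zip(bounds, flows) if t >= t_steam]
    return min(above + [flow_at(t_steam)])

mp = max_steam_duty(190.0)   # MP steam at T* = 190 degC
hp = flows[0] - mp           # the remainder falls to HP steam
print(mp, hp)  # 300.0 450.0
```

Here the MP duty is capped at 300 kW by the kink at 195°C, which becomes a Utility Pinch; the remaining 450 kW must come from HP steam.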
Higher-temperature heating demands are satisfied by
nonisothermal utilities. These include hot oil and hot flue gas, both
of which maintain their physical phase (liquid and gaseous) across
a wide range of temperatures. The operating costs associated with
such utilities are largely dependent on furnace efficiency and on the
intensity and efficiency of the pumping or fan blowing. When
targeting the placement of a nonisothermal hot utility, its profile is
represented by a straight line,1 which runs from the upper right to
the lower left in the graph of Figure 4.21. The line's starting point
corresponds to the utility supply temperature and also to the
rightmost point for the utility's heating duty. The utility use endpoint
corresponds either to the zero of the ΔH axis (in which case all
utility heating is covered by the current nonisothermal utility) or
to the rightmost point on the ΔH axis for other, cheaper hot
utilities.

FIGURE 4.21 Properties of nonisothermal hot utility placement (CP1 > CP2).

1 This linear representation assumes an approximately constant specific heat
capacity of the corresponding stream.

As plotted in the figure, the nonisothermal utility's termination
point corresponds to the ambient temperature. The distance from
this point to the zero of the ΔH axis represents the thermal losses
from using the utility. The heat capacity flow rate of the nonisothermal
utility is targeted by making the utility line as steep as
possible, thereby minimizing its CP and the corresponding losses
(Figure 4.21). Its supply temperature is usually fixed at the maximum
allowed by the furnace and the heat carrier composition; the
remaining degree of freedom corresponds to the utility's exact CP.
Smaller CP values result in steeper slopes and smaller losses.
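The minimum CP follows from requiring the utility line to stay at or above every GCC vertex it must serve. A sketch in shifted temperatures, using the above-Pinch GCC vertices of the Table 4.2 example; the 300°C supply temperature is an assumption:

```python
# Minimum heat capacity flow rate (CP) of a hot oil utility covering the
# whole hot utility demand: the oil line, drawn from (q_total, t_supply)
# with slope 1/CP, must stay at or above every GCC vertex above the Pinch.
gcc_above_pinch = [(245.0, 750.0), (235.0, 900.0), (195.0, 300.0),
                   (185.0, 400.0), (145.0, 0.0)]  # (T* [degC], heat flow [kW])
t_supply = 300.0   # shifted oil supply temperature (assumed)
q_total = 750.0    # minimum hot utility of the example [kW]

# Each vertex (t, h) demands CP >= (q_total - h) / (t_supply - t).
cp_min = max((q_total - h) / (t_supply - t) for t, h in gcc_above_pinch)
t_return = t_supply - q_total / cp_min
print(round(cp_min, 3), round(t_return, 1))  # 4.839 145.0
```

In this case the binding vertex is the Pinch itself (return temperature 145°C), illustrating the process Pinch limitation of Figure 4.22(a); with a pocket, a kink vertex can bind instead.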
The placement for a nonisothermal utility (e.g., hot oil) may be
constrained by two problem features: the process Pinch point and a
kink in the GCC at the top end of a heat recovery pocket; see
Figure 4.22 for an example.
When fuel is burned in a furnace or a boiler, the resulting flue
gas becomes available to heat up the corresponding cold-stream
medium (for steam generation or direct process duty). Transferring
heat to the process causes the flue gas temperature to drop as it moves
from the furnace to the stack. The stack temperature has to be above
a specified value: the minimum allowed stack temperature, which is
determined by limitations due to corrosion. If flue gas is used directly
for heating, then the Pinch point, if it is higher, may become more
limiting than the minimum allowed stack temperature. If the
analyzed process features both high-temperature and moderate-
temperature utility heating demands, then flue gas heating may not
be appropriate for satisfying all those demands. If steam is cheaper,

FIGURE 4.22 Constraints for placing hot oil utilities, ΔTmin = 20°C: (a) process Pinch limitation (Tsupply = 300°C, minimizing the CP; Pinch at 120°C); (b) heat recovery pocket limitation (TReturn,min = 150°C; the Pinch does not need to be limiting).

then combining it with flue gas reduces the latter's CP and the
corresponding stack heat losses.
Another option for utility placement is to use part of the cooling
demand of a process for generating steam. This is illustrated in
Figure 4.23, in which steam generation is placed below the Pinch.
The GCC can reveal where utility substitution may improve
energy efficiency; see Figure 4.24. The main idea is to exploit heat
recovery pockets that span two or more utility temperature levels.
The technical feasibility of this approach is determined by both the
temperature span and the heat duty within the pocket, which should
be large enough to make utility substitution worthwhile when
weighed against the required capital costs.
Utility cooling below ambient temperatures may be required, a
need that is usually met by refrigeration. Refrigerants absorb heat by
evaporation, and pure refrigerants evaporate at a constant
temperature. Therefore, refrigerants are represented, on the plot of
T (or T*) versus ΔH, by horizontal bars, similarly to the steam levels.
On the GCC, refrigeration levels are placed similarly to steam levels;
see Figure 4.25.
When the level of a placed utility is between the temperatures of
a heat recovery pocket, the Utility Pinch cannot be located by using

FIGURE 4.23 Generating steam below the Pinch: feedwater preheat, evaporation, and superheat are placed under the GCC, with the point of closest approach not necessarily at the boiling point; cooling water takes the remaining duty.


FIGURE 4.24 Exploiting a GCC pocket for utility substitution: LP steam use inside the pocket frees capacity for HP steam generation.






FIGURE 4.25 Placing refrigeration levels for pure refrigerants.

the GCC. In this case, the Balanced Composite Curves (BCCs) are
used. Figure 4.26 shows how the data about the placed utilities can be
transferred from the GCC to the BCCs, enabling the correct location
of the Utility Pinch associated with LP steam.
The BCCs create a combined view in which all heat sources and
sinks (including utilities) are in balance and all Pinches are shown.
FIGURE 4.26 Locating the LP-steam Utility Pinch: (a) Grand Composite Curve; (b) Balanced Composite Curves showing the HP and LP steam levels, the LP-steam Pinch, and the Process Pinch.

BCCs are a useful additional tool for evaluating heat recovery,
obtaining targets for specific utilities, and planning HEN design.

4.4 Extended Pinch Technology

4.4.1 Heat Transfer Area, Capital Cost, and
Total Cost Targeting
In addition to maximizing heat recovery, it is also possible to estimate
the required capital cost. The expressions for obtaining these
estimates are derived from the relationship between heat transfer
area and the efficiency of a heat exchanger. Methods for targeting
capital cost and total cost were initially developed by Townsend and
Linnhoff (1984) and further elaborated by others (e.g., Ahmad,
Linnhoff, and Smith 1990; Colberg and Morari, 1990; Linnhoff and
Ahmad, 1990; Zhu et al., 1995).
The HEN capital cost depends on the heat transfer area, the
number of the heat exchangers, the number of shell-and-tube passes
in each heat exchanger, construction materials, equipment type, and
operating pressures. The heat transfer area is the most significant
factor, and assuming one-pass shell and tube exchangers it is
possible to estimate the overall minimum required heat transfer
area; this value helps establish the lower bound on the networks
capital cost. Estimating the minimum heat transfer area is based on
the concept of an enthalpy interval. As shown in Figure 4.27, an
enthalpy interval is a slice constrained by two vertical lines with fixed
values on the ΔH axis. This interval is characterized by its ΔH
difference, the corresponding temperatures of the CCs at the interval
boundaries, its process stream population, and the film heat transfer
coefficients of those streams.

FIGURE 4.27 Enthalpy intervals (within each interval, Amin = (1/ΔTLM) Σstreams qstream/hstream).
The minimum heat transfer area target can be obtained by
estimating it within each enthalpy interval of the BCCs and then
summing up the values over all intervals (Linnhoff and Ahmad, 1990):

AHEN,min = Σi=1..EI (1/ΔTLM,i) Σs=1..NS qs,i/hs,i

Here EI and NS denote the number of enthalpy intervals and the
number of streams; i denotes the ith enthalpy interval; s, the sth stream;
ΔTLM,i, the log-mean temperature difference in interval i (from the
CC segments); qs,i, the enthalpy change of the sth stream in interval i;
and hs,i, the heat transfer coefficient of the sth stream. The area targets can be
supplemented by targets for number of shells (Ahmad and Smith,
1989) and for the number of heat exchanger units, thus providing a
basis for estimating the HEN capital cost and the total cost. This
approach is known as supertargeting (Ahmad, Linnhoff, and Smith,
1989). With supertargeting it is also possible to optimize the value of
ΔTmin prior to designing the HEN. Proposed improvements to the
capital cost targeting procedure of Townsend and Linnhoff (1984)
mainly involve: (1) obtaining more accurate surface area targets for
HENs with nonuniform heat transfer coefficients (Colberg and
Morari, 1990; Jegede and Polley, 1992; Zhu et al., 1995; Serna-
González, Jiménez-Gutiérrez, and Ponce-Ortega, 2007); (2) accounting
for construction materials, pressure rating, and different heat
exchanger types (Hall, Ahmad, and Smith, 1990); or (3) accounting
for safety factors, including prohibitive distance (Santos and
Zemp, 2000). Further information can be found in the paper by Taal
and colleagues (2003), which summarizes the common methods
used to estimate the cost of heat exchange equipment and also
provides sources of projections for energy prices.
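The area target formula above can be sketched directly in code. The two functions are generic; the single-interval data at the end are invented solely to exercise the formula and are not taken from the book:

```python
import math

def lmtd(dt1, dt2):
    """Log-mean of the terminal temperature differences of an interval."""
    return dt1 if dt1 == dt2 else (dt1 - dt2) / math.log(dt1 / dt2)

def area_target(intervals):
    """A_HEN,min = sum over intervals of (1/dT_LM,i) * sum_s q_s,i / h_s,i.

    Each interval: (dt at one end, dt at other end, [(q_s [kW], h_s [kW/m2K]), ...]).
    """
    total = 0.0
    for dt1, dt2, loads in intervals:
        total += sum(q / h for q, h in loads) / lmtd(dt1, dt2)
    return total

# Hypothetical example: one interval with equal 60 K driving forces at both
# ends, a hot and a cold stream each exchanging 100 kW with h = 0.5 kW/m2K.
intervals = [(60.0, 60.0, [(100.0, 0.5), (100.0, 0.5)])]
print(round(area_target(intervals), 2))  # 6.67 m2
```

The per-interval vertical heat transfer assumed here is what makes the result a lower bound rather than an achievable network area for nonuniform coefficients.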

4.4.2 Heat Integration of Energy-Intensive Processes

Heat Engines
Particularly important processes are the heat engines: steam and gas
turbines. They operate by drawing heat from a higher-temperature
source, converting part of it to mechanical power; then (after some
energy loss) they reject the remaining heat at a lower temperature
(see Figure 4.28). For targeting purposes the energy losses are usually
neglected.
Integrating a heat engine across the Pinch, which is equivalent
to a Cross-Pinch process-to-process heat transfer, results in a
simultaneous increase of hot and cold utility, which usually leads to
excessive capital investment for the utility exchangers. If a heat
engine is integrated across the Pinch, then the hot utility requirement
is increased by Q and the cold by Q − W (in the notation of Figure 4.28).
Heat engines should be integrated in one of two ways.

1. Above the Pinch (Figure 4.29a): This increases the hot utility
for the main process by W, but this extra heat is converted
into shaftwork.
2. Below the Pinch (Figure 4.29b): This offers a double benefit. It
saves on a cold utility, and the process heat below the Pinch
supplies Q to the heat engine (instead of rejecting it to a
cooling utility).

Different heat engines differ in their placement. On the one hand,

steam turbines may be placed either below or above the Pinch because
they draw and exhaust steam. Figure 4.30 shows steam turbine
integration above the Pinch, which has the benefit of cogenerating




FIGURE 4.28 Heat engine configuration.


FIGURE 4.29 Appropriate placement of heat engines: (a) integration above the Pinch; (b) integration below the Pinch. In both cases the marginal conversion of heat to work is effectively 100%.

FIGURE 4.30 Integrating a steam turbine above the Pinch: HP steam from the boiler expands through the turbine, and the exhaust LP steam (QLP) supplies process heating above the Pinch.

extra power. In contrast, gas turbines (which use fuel as input) are
typically used only as a utility heat source for the processes and can
be placed only above the Pinch.

Heat Pumps
Heat pumps present another opportunity for improving the energy
performance of an industrial process. Their operation is the reverse
of heat engines. That is, heat pumps take heat from a lower-
temperature source, upgrade it by applying mechanical power, and
then deliver the combined flow to a higher-temperature heat sink
(Figure 4.31).
An important characteristic of heat pumps is their coefficient of
performance (COP). This metric for device efficiency is defined as the
ratio between the heat delivered to the heat sink and the consumed
shaftwork (mechanical power):

Qsink = Qsource + W    (4.6)

COP = Qsink / W = (Qsource + W) / W    (4.7)

FIGURE 4.31 Heat-pump configuration: a source load Q = 100 kW is upgraded with W = 25 kW of power to deliver Q + W = 125 kW to the sink.
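In code, the two relations are a one-liner each; the numbers are those of Figure 4.31, and the function name is an illustrative assumption:

```python
# Heat-pump energy balance and coefficient of performance.
def heat_pump(q_source, work):
    q_sink = q_source + work   # energy balance: delivered = absorbed + work
    cop = q_sink / work        # delivered heat per unit of consumed work
    return q_sink, cop

q_sink, cop = heat_pump(q_source=100.0, work=25.0)  # kW, as in Figure 4.31
print(q_sink, cop)  # 125.0 5.0
```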

The COP is a nonlinear function of the temperature difference

between the heat sink and the heat source (Laue, 2006); this difference
is also referred to as temperature lift. Figure 4.32(a) shows the
appropriate integration of a heat pump across the Pinch, with the
heat source located below the Pinch and the heat sink above it.
The GCC facilitates sizing of the heat pump by evaluating the possible
temperatures of the heat source and heat sink, and their loads; see
Figure 4.32(b). Integrating entirely above the Pinch results in direct
conversion of mechanical power to heat. This is a waste of resources
because most of the power is generated at the expense of two to three
times that amount of fuel energy. The second alternative, placing the
heat pump entirely below the Pinch, results in the power flow
consumed by the heat pump being added to the cooling utility
demand below the Pinch.
The procedure for sizing heat pumps to be placed across a
(process or utility) Pinch is illustrated in Figure 4.33. First, tem-
peratures are chosen for the heat source and the heat sink. Then the
horizontal projections spanning from the temperature axis to the
GCC provide the maximum values for the heat source and sink loads.
Recall that the GCC shows shifted temperatures. Because real
temperatures are used when calculating the heat-pump temperature
lift, the GCC values must be modified by subtracting or adding
ΔTmin/2 (see Section 4.3.3). The COP value can be derived from the
calculated ΔTpump and can then be used to calculate the necessary
power input.
FIGURE 4.32 Heat pump placement against a heat recovery problem: (a) appropriate placement across the Pinch, reducing the hot utility to QH,min − (QHP + W); (b) load and temperature lift on the GCC.

FIGURE 4.33 Procedure for heat-pump sizing.

As a specific example, assume that the GCC in Figure 4.34 reflects
an industrial process with ΔTmin = 20°C and that a heat pump is
available, described as follows:

COP = 100.18 · ΔTpump^−0.874    (4.8)

Focusing on the Pinch nose (a sharp nose provides a better
integration option) allows choosing a shifted temperature for the
heat source, in this example T*source = 85°C; see Figure 4.35. Using this
value allows extracting a maximum of 6.9 MW from the GCC below
the Pinch. Setting T*sink = 100°C results in an upper bound of 2.634
MW for the sink load. Transforming to real temperatures and taking
the difference yields a temperature lift of ΔTpump = 35°C. By Eq. (4.8),
the COP is thus equal to 4.4799. Given the upper bounds on the sink

FIGURE 4.34 Heat-pump sizing example: initial data of the GCC (ΔTmin = 20°C), given as pairs of ΔH [MW] at T* [°C]: 21.90 at 440; 29.40 at 410; 23.82 at 131; 18.00 at 118; 1.80 at 115; 0.00 at 94; 4.30 at 91; 11.50 at 79; 15.00 at 30.

FIGURE 4.35 Heat-pump sizing example: Attempt 1.

and source heat loads, the smaller one is chosen as a basis. Here the
sink bound is smaller, so the sink is sized to its upper
bound: Qsink = Qsink,max = 2.634 MW. From this, the required pump
power consumption is computed to be 0.588 MW. As a result, the
actual source load for the heat pump is 2.046 MW. Comparing this
value with the upper bound of 6.9 MW, it is evident that the source
heat availability is considerably underutilized.
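Attempt 1 can be checked numerically. Note that the COP correlation's exponent is reconstructed from the worked numbers in the text and should be treated as approximate:

```python
# Heat-pump sizing, Attempt 1: sink-limited sizing across the Pinch.
dt_pump = 35.0                       # real temperature lift [degC]
cop = 100.18 * dt_pump ** -0.874     # Eq. (4.8); exponent fitted to the text
q_sink = 2.634                       # sink sized to its upper bound [MW]
work = q_sink / cop                  # required power input [MW]
q_source = q_sink - work             # heat actually drawn below the Pinch [MW]
print(round(cop, 2), round(work, 3), round(q_source, 3))  # 4.48 0.588 2.046
```

Comparing q_source (about 2.05 MW) with the 6.9 MW source bound reproduces the conclusion that the source side is underutilized.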
A different selection of source and sink temperatures is needed if
the source availability is to be better utilized. As a second attempt,
the sink temperature is increased from 100C to 110C. The maximum
source heat remains 6.9 MW, but the maximum sink capacity
increases from 2.634 to 7.024 MW. This results in the desired

temperature lift of ΔTpump = 45°C; the COP is now 3.5964, W = 2.657
MW, and the sink load Qsink = 9.557 MW; see Figure 4.36.
However, the heat sink is oversized in this case (Figure 4.36), so
the heat source temperature has to be shifted upward. An increase of
2.28°C (see Figure 4.37) yields the following values: T*source = 87.28°C;
ΔTpump = 42.72°C; a maximum source load (also taken as the selected
source load) of 5.152 MW; a maximum sink load equal to the selected
sink load of Qsink = 7.016 MW; COP = 3.7637; and W = 1.864 MW. Better
results can be obtained by optimizing the two temperatures
simultaneously while using the overall utility cost as a criterion.


FIGURE 4.36 Heat-pump sizing example: Attempt 2 (the sink is oversized).

FIGURE 4.37 Heat-pump sizing example: Attempt 3.

Once utility levels are chosen, the heat pump can also be placed
across a Utility Pinch (Figure 4.38). A special case of integration
involves the placement of refrigeration levels (Figure 4.39).
Refrigeration facilities are heat pumps whose main value is that their
cold end absorbs heat. Utilizing their hot ends for process heating
can save considerable amounts of hot utility, especially when
relatively low-temperature heating is needed.

Distillation Processes and Other Separations

The simplest distillation column includes one reboiler and one
condenser, and these components account for most of the column's





FIGURE 4.38 Heat-pump placement across the Utility Pinch.

FIGURE 4.39 Refrigeration systems.


energy demands. For purposes of Heat Integration, the column is

represented generally by a rectangle: the top side denotes the reboiler
as a cold stream, and the bottom side denotes the condenser as a hot
stream; see Figure 4.40.
There are three options for integrating distillation columns:
across the Pinch, as shown in Figure 4.41(a); and entirely below or
entirely above the Pinch, as shown in Figure 4.41(b). Integrating
across the Pinch only increases overall energy needs, so this option
is not useful. The other two options result in net benefits by
eliminating the need to use an external utility for supplying the
distillation reboiler (above the Pinch) or the condenser (below the
Pinch). The GCC is used to identify the appropriate column integration
options. Figure 4.42 illustrates two examples of appropriately placed
distillation columns.


FIGURE 4.40 Distillation column: T-H representation.

FIGURE 4.41 Distillation column integration options: (a) across the Pinch; (b) entirely below or entirely above the Pinch.


FIGURE 4.42 Appropriate placement of distillation columns against the GCC.

When the operating conditions of a column result in placing it

across the Pinch, there are several degrees of freedom that can be
utilized to facilitate appropriate placement. It may be possible to
change the operating pressure, which could shift the column with
respect to the temperature scale until it fits above or below the Pinch.
Varying the reflux ratio results in simultaneous changes of the column
temperature span and the duties of the reboiler and condenser.
Increasing the reflux ratio (the ratio between the reflux flowrate
returned to the column and the distillate product flowrate) yields a
smaller temperature span and larger duties, whereas reducing the
ratio has the opposite effect. It is also possible to split the column into
two parts, introducing a double-effect distillation arrangement. In
this approach, one of the effects is placed below and the other above
the Pinch; this prevents internal thermal integration of the column.
Additional options are available, such as interreboiling and
intercondensing. When using available degrees of freedom, it is
important to keep in mind that the energy-capital trade-offs of the
column and the main process are combined and thus become more
complicated than the individual trade-offs. Another important issue
is controllability of the integrated designs: unnecessary complications
should be avoided, and disturbance propagation paths should be
discontinued. It is usually enough to integrate the reboiler or the
condenser. If inappropriate column placement cannot be avoided,
then condenser vapor recompression with a heat pump can be used
to heat up the reboiler.
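The placement test itself is a simple temperature comparison. The sketch below is an illustrative reduction of the rules above (function and argument names are assumptions, not the book's notation): a column fits entirely above the Pinch if even its condenser is hotter than the Pinch, and entirely below it if even its reboiler is colder.

```python
def column_placement(t_reboiler, t_condenser, t_pinch):
    """Classify a distillation column's integration option against the Pinch.
    Temperatures are shifted (T*) values; the reboiler is the hotter end."""
    if t_condenser >= t_pinch:
        return "above"   # whole column above the Pinch: condenser heat serves the process sink
    if t_reboiler <= t_pinch:
        return "below"   # whole column below the Pinch: reboiler runs on process waste heat
    return "across"      # inappropriate placement: no net benefit

print(column_placement(150, 110, 100))  # above
print(column_placement(120, 90, 100))   # across -> shift pressure, change reflux, or split
```

A result of "across" signals that one of the degrees of freedom discussed above (pressure, reflux ratio, double-effect splitting) should be exercised.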
Evaporators constitute another class of thermal separators.
Because they, too, feature a reboiler and a condenser, their operation
is similar to that of distillation columns and so the same integration
principles can be applied to them. Absorber-stripper loops and

dryers are similarly integrated. This subject has been extensively

studied by Kemp (2007, sections 6.3 to 6.5).

4.4.3 Process Modifications

The basic Pinch Analysis calculations assume that the core process
layers in the onion diagram (Figure 4.2) remain fixed. However, it is
possible, and in some cases beneficial, to alter certain properties of
the process. Properties that can be exploited as degrees of freedom
include: (1) the pressure, temperature, or conversion rate in reactors;
(2) the pressure, reflux ratio, or pump-around flow rate in distillation
columns; and (3) the pressure of feed streams in evaporators.
Such modifications will also alter the heat capacity flow rates and
temperatures of the related process streams for Heat Integration; this
will cause further changes in the shapes of the CCs and the GCC,
thereby modifying the utility targets. The CCs are a valuable tool for
suggesting beneficial process modifications (Linnhoff et al., 1982).
Figure 4.43 illustrates this application of the CCs in terms of the
plus-minus principle. The main idea is to alter the slope of the CCs in the
proper direction in order to reduce the amount of utilities needed. This
can be achieved by changing CP (e.g., by mass flow variation). According
to Smith (2005), such decreases in utility requirements can be brought
about by (1) increases in the total hot stream duty above the Pinch,
(2) decreases in the total cold stream duty above the Pinch,
(3) decreases in the total hot stream duty below the Pinch, and/or
(4) increases in the total cold stream duty below the Pinch.
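The effect of such modifications on the targets can be checked with the Problem Table Algorithm used for targeting in Section 4.3. The sketch below recomputes the utility targets for the stream data of Table 4.2 (ΔTmin = 10°C) before and after one plus-minus change; the stream-tuple format and function name are illustrative assumptions.

```python
def pinch_targets(hot, cold, dtmin):
    """Problem Table Algorithm: minimum hot/cold utility targets.
    Streams are (T_supply, T_target, CP), temperatures in °C, CP in kW/°C."""
    half = dtmin / 2.0
    # Shift hot streams down and cold streams up by dtmin/2
    segs = [(ts - half, tt - half, cp, +1) for ts, tt, cp in hot]
    segs += [(ts + half, tt + half, cp, -1) for ts, tt, cp in cold]
    temps = sorted({t for ts, tt, _, _ in segs for t in (ts, tt)}, reverse=True)
    heat, cascade = 0.0, []
    for t_hi, t_lo in zip(temps, temps[1:]):
        net_cp = sum(sign * cp for ts, tt, cp, sign in segs
                     if min(ts, tt) <= t_lo and max(ts, tt) >= t_hi)
        heat += net_cp * (t_hi - t_lo)   # cascade the interval surplus/deficit
        cascade.append(heat)
    q_hot = max(0.0, -min(cascade))      # just enough hot utility at the top
    q_cold = q_hot + cascade[-1]
    return q_hot, q_cold

hot = [(250, 40, 15), (200, 80, 25)]
cold = [(20, 180, 20), (140, 230, 30)]
print(pinch_targets(hot, cold, 10))       # (750.0, 1000.0)

# Plus-minus: decrease a cold duty above the Pinch (stream 3 target 230 -> 220 °C)
cold_mod = [(20, 180, 20), (140, 220, 30)]
print(pinch_targets(hot, cold_mod, 10))   # (450.0, 1000.0)
```

Trimming 300 kW of cold-stream duty above the Pinch cuts the hot utility target from 750 to 450 kW while leaving the cold utility target unchanged, exactly as rule (2) above predicts.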
Another guide to modifying processes is the principle of keep hot
streams hot (KHSH) and keep cold streams cold (KCSC). As illustrated in
Figure 4.44, increasing the temperature differences by process

FIGURE 4.43 The plus-minus principle.



FIGURE 4.44 (a) Keep hot streams hot; (b) keep cold streams cold.

modification allows for more overlap of the curves and results in

improved heat recovery. In particular, energy targets improve if the
heating and cooling demands can be shifted across the Pinch. The
principle suggests (1) shifting hot streams (cooling demands) from
below to above the Pinch and/or (2) shifting cold streams (heating
demands) from above to below the Pinch. See Chapter 12 (and
Figure 12.7) for more details.

4.5 HEN Synthesis

Most industrial-scale methods synthesize heat recovery networks
under the assumption of a steady state.

4.5.1 The Pinch Design Method

The Pinch Design Method (Linnhoff and Hindmarsh, 1983) became
popular owing to its simplicity and efficient management of
complexity. The method has evolved into a complete suite of tools for
heat recovery and design techniques for energy efficiency, including
guidelines for changing and integrating a number of energy-intensive
processes.

HEN Representation
The representation of HENs by a general process flowsheet, as in
Figure 4.45, is not always convenient. The reason is that this
representation makes it difficult to answer a number of important
questions: Where is the Pinch? What is the degree of heat recovery?
How much cooling and heating from utilities is needed?
The so-called conventional HEN flowsheet (Figure 4.46) offers a
small improvement. It shows only heat transfer operations and is
based on a simple convention: cold streams run horizontally and hot

FIGURE 4.45 Using a general process flowsheet to represent a HEN (reactors, feeds, and products with stream temperatures).

FIGURE 4.46 Conventional HEN flowsheet.
streams vertically. Although the Pinch location can be marked for

simple cases, it is still difficult to see. This representation makes it
difficult to express the proper sequencing of heat exchangers and to
represent clearly the network temperatures. Furthermore, changing
the positions of some matches often results in complicated stream paths.
The grid diagram, as shown in Figure 4.47, provides a convenient
and efficient representation of HENs by eliminating the problems
just described. The grid diagram has several advantages: the
representation of streams and heat exchangers is clearer, it is more
convenient for representing temperatures, and the Pinch location
(and its implications) are clearly visible; see Figure 4.48. As in the
conventional HEN flowsheet, only heat transfer operations are shown
in the grid diagram. Temperature increases from left to right in the
grid, which is intuitive and in line with CC diagrams, making
the (re)sequencing of heat exchangers straightforward.

The Design Procedure

The procedure for designing a HEN follows several simple steps:

1. Specification of the heat recovery problem.

FIGURE 4.47 The grid diagram for HENs: (a) simplified view; (b) showing the utility streams explicitly (H = hot utility heater; C = cold utility cooler).


2. Identification of the heat recovery targets and the heat

recovery Pinch, as explained in Section 4.3.
3. Synthesis.
4. Evolution of the HEN topology.

The first two steps were discussed in previous sections. The synthesis
step begins by dividing the problem at the Pinch and then positioning
the process streams as shown in Figure 4.49.
Engineering practice suggests starting the network design
from the Pinch (the most restricted part of the design, owing to
temperature differences approaching ΔTmin) and then placing
heat exchanger matches while moving away from the Pinch
(Figure 4.50). When placing matches, several rules have to be followed
in order to obtain a network that minimizes utility use: (1) no
exchanger may have a temperature difference smaller than ΔTmin;

FIGURE 4.48 The grid diagram and implications of the Pinch (no cross-Pinch heat transfer versus a cross-Pinch match).

FIGURE 4.49 Dividing at the Pinch for the streams in Table 4.2 (QHmin = 750 kW; QCmin = 1000 kW).


FIGURE 4.50 Pinch design principle.

(2) no process-to-process heat transfer may occur across the Pinch;

and (3) no inappropriate use of utilities should occur.
At the Pinch, the enthalpy balance restrictions entail that certain
matches must be made if the design is to achieve minimum utility
usage without violating the ΔTmin constraint; these are referred to as
essential matches. Above the Pinch, the hot streams should be cooled
only by transferring heat to cold process streams, not to utility
cooling. Therefore, all hot streams above the Pinch have to be matched
up with cold streams. This means that all hot streams entering the
Pinch must be given priority when matches are made above the
Pinch. Conversely, cold streams entering the Pinch are given priority
when matches are made below the Pinch.
Now recall the example from Table 4.2. Figure 4.49 shows the
scaled grid diagram, indicating the hot and cold Pinch temperatures.
The part above the Pinch requires essential matches for streams 2
and 4, since they are entering the Pinch. Consider stream 4. One
possibility is to match it against stream 1, as shown in Figure 4.51.
Stream 4 is the hot stream, and its CP is greater than the CP for cold
stream 1. As shown in the figure, at the Pinch the temperature
difference between the two streams is exactly equal to ΔTmin. Moving
away from the Pinch results in temperature convergence because the
slope of the hot stream line is less steep owing to its larger CP. Since
ΔTmin is the lower bound on network temperature differences, the
proposed heat exchanger match is infeasible and thus is rejected.
Another possibility for handling the cooling demand of stream 4
is to implement a match with stream 3, as shown in Figure 4.52. The
CP of stream 3 is larger than the CP of stream 4, resulting in divergent

FIGURE 4.51 An infeasible heat exchanger match above the Pinch (the temperature difference becomes smaller than ΔTmin).

FIGURE 4.52 A feasible heat exchanger match above the Pinch.

temperature profiles in the direction away from the Pinch. Thus, the
rule above the Pinch may be expressed as follows:

CPhot stream ≤ CPcold stream (4.9)

The logic below the Pinch is symmetric. This part of the design is
a net heat source, which means that the heating requirements of the
cold streams have to be satisfied by matching up with hot streams.
The same type of reasoning as before yields the following requirement:
the CP value of a cold stream must not be greater than the CP value
of a hot stream if a feasible essential match is to result. Generalizing
Eq. (4.9) shows that the CP of the stream entering the Pinch must be
less than or equal to the CP of the stream leaving the Pinch:

CPentering pinch ≤ CPleaving pinch (4.10)

The Pinch Design Method incorporates a special tool for handling

this stage: CP tables (Linnhoff and Hindmarsh, 1983). Here the
streams are represented by their CP values, which are sorted in
descending order. This facilitates the identification of promising
combinations of streams for candidate essential matches. Sizing the
matches follows the so-called tick-off heuristic, which stipulates that
the heat exchange match should be as large as possible so that at least
one of the involved streams will be completely satisfied and then
ticked off from the design list; see Figure 4.53.
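The CP-rule screening and tick-off sizing are easy to mechanize. The sketch below replays the above-Pinch data of Figure 4.53; function names are illustrative assumptions, not the book's notation.

```python
def cp_rule_ok(cp_entering, cp_leaving):
    # Eq. (4.10): at the Pinch, the CP of the entering stream must not
    # exceed the CP of the stream it is matched with on the other side.
    return cp_entering <= cp_leaving

def tick_off(duty_a, duty_b):
    # Tick-off heuristic: size the match to fully satisfy one stream.
    return min(duty_a, duty_b)

# Above the Pinch: hot stream 4 (CP 25) against cold stream 3 (CP 30)
assert cp_rule_ok(25, 30)       # feasible essential match (Figure 4.52)
assert not cp_rule_ok(25, 20)   # matching stream 4 with stream 1 fails (Figure 4.51)

q = tick_off(1250, 2700)        # stream 4 needs 1250 kW; stream 3 can absorb 2700 kW
t_x = 140 + q / 30              # stream 3 outlet: 140 + 1250/30 ≈ 181.7 °C
```

The match is sized to 1250 kW, ticking off hot stream 4 and leaving cold stream 3 at the intermediate temperature TX used in Figures 4.53 and 4.54.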

Completing the Design

The HEN design above the Pinch is illustrated in Figure 4.54. The
design below the Pinch follows the same basic rules, with the small
difference that here it is the cold streams that define the essential matches.
Figure 4.55 details the design below the Pinch for the considered
example. First the match between streams 4 and 1 is placed and sized
to the duty required by stream 4. The other match, between streams 2
and 1, formally violates the Pinch rule for placing essential matches.
However, since stream 1 already has another match at the Pinch, the
current match (between 2 and 1) is not strictly termed essential. Up

FIGURE 4.53 The tick-off heuristic (TX = 140 + 1250/30 = 181.7°C).


FIGURE 4.54 The HEN design above the Pinch.

FIGURE 4.55 The HEN design below the Pinch.

to the required duty of 650 kW, this match does not violate the ΔTmin
constraint, which is the relevant one. The completed HEN topology
is shown in Figure 4.56.
It is not always possible to follow the basic design rules described
previously, so in some cases it is necessary to split the streams so that
heat exchange matches can be appropriately placed. Splitting may be
required in the following situations:

1. Above the Pinch when the number of hot streams is greater

than the number of cold streams (NH > NC); see Figure 4.57.
2. Below the Pinch when the number of cold streams is greater
than the number of hot streams (NC > NH); see Figure 4.58.
3. When the CP values do not suggest any feasible essential
match; see Figure 4.59.

FIGURE 4.56 Completed HEN design (QHmin = 750 kW; QCmin = 1000 kW).

FIGURE 4.57 Splitting above the Pinch for NH > NC (more hot streams than cold streams: split a cold stream).


FIGURE 4.58 Splitting below the Pinch for NC > NH: (a) more cold streams than hot streams; (b) split a hot stream.

The loads of the matches involving stream branches are again

determined using the tick-off heuristic. Because each stream splitter
presents an additional degree of freedom, it is necessary to decide
how to divide the overall stream CP between the branches. One
possibility is illustrated in Figure 4.60. The suggested split ratio is 4:3.
This approach would completely satisfy the heating needs of the cold
stream branches.
The arrangement in Figure 4.60 is actually trivial and can be
improved upon. Unless the stream combinations impose some severe
constraints, there is a large number of possible split ratios. This fact
can be exploited to optimize the network. In many cases it is possible
to save an extra heat exchanger unit by ticking off two streams with a
FIGURE 4.59 Splitting to enable feasible CP values for essential matches (the CP rules are not satisfied: split the cold stream).

FIGURE 4.60 Splitting and trivial tick-off (Q1 = 4 × (90 - 15) = 300 kW; Q2 = 3 × (90 - 15) = 225 kW).


single match, as illustrated in Figure 4.61. This satisfies both hot

stream 2 and the corresponding cold branch of stream 3, thereby
eliminating the second cooler.
The complete algorithm for splitting streams above the Pinch is
given in Figure 4.62. The procedure for splitting below the Pinch is
symmetrical, with the cold and hot streams switching their roles.
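The splitting procedure of Figure 4.62 reduces to a feasibility test on the pinch-entering and pinch-leaving CP values, followed by a choice of split ratio. The sketch below replays the data of Figures 4.59 and 4.61; the greedy sorted pairing and all names are illustrative assumptions, not the book's exact algorithm.

```python
def matches_feasible(cp_entering, cp_leaving):
    """True if every pinch-entering stream can get its own partner leaving
    the Pinch with an equal or larger CP (Eq. 4.10), with enough partners."""
    if len(cp_entering) > len(cp_leaving):
        return False   # stream-count rule violated: split a leaving stream first
    e = sorted(cp_entering, reverse=True)
    l = sorted(cp_leaving, reverse=True)
    return all(ce <= cl for ce, cl in zip(e, l))

# Figure 4.59 (below the Pinch): cold stream CP 7 enters; hot CPs 5 and 4 leave
assert not matches_feasible([7], [5, 4])

# Figure 4.61: split the CP-7 stream so one branch exactly ticks off hot stream 2
q2 = 4 * (100 - 40)           # Step 1: duty of hot stream 2 = 240 kW
cp_branch2 = q2 / (90 - 15)   # Step 2: branch CP = 240/75 = 3.2 kW/°C
cp_branch1 = 7 - cp_branch2   # remaining branch: 3.8 kW/°C
q1 = cp_branch1 * (90 - 15)   # Step 3: 285 kW
assert matches_feasible([cp_branch1, cp_branch2], [5, 4])
```

Choosing the split ratio from the tick-off duty, rather than trivially, is what eliminates the second cooler in Figure 4.61.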

Network Evolution
Any HEN obtained using the design guidelines described previously
is optimal with respect to its energy requirements, but it is usually
far from the total cost optimum. Observing the Pinch division

FIGURE 4.61 Splitting and advanced tick-off (Step 1: Q2 = 4 × (100 - 40) = 240 kW; Step 2: CPC2 = 240/(90 - 15) = 3.2 kW/°C; Step 3: Q1 = 3.8 × (90 - 15) = 285 kW; T1 = 100 - 285/5 = 43°C).

FIGURE 4.62 Splitting procedure above the Pinch.

typically introduces loops into the final topology and leads to a larger
number of heat exchanger units. The final step in HEN design is
evolution of the topology: identifying heat load loops and open heat
load paths; then using them to optimize the network in terms of heat
loads, heat transfer area, and topology. During this phase, formerly
rigorous requirements (for example, that all temperature differences
exceed ΔTmin and that cross-Pinch heat transfers be excluded) are
usually relaxed. The resulting optimization formulations are typically
nonlinear and involve structural decisions, so they are MINLP
problems. Different approximations and simplifying assumptions
can be introduced to obtain linear and/or continuous formulations.
The design evolution step can even be performed manually by
breaking the loops and reducing the number of heat exchangers.
Eliminating heat exchangers from the topology is done at the expense
of shifting heat loads (from the eliminated heat recovery exchangers)
to utility exchangers: heaters and coolers. Topology evolution
terminates when the resulting energy cost increase exceeds the
projected savings in capital costs, which corresponds to a total cost optimum.
Network evolution is performed by shifting loads within the
network with the aim of eliminating excess heat exchangers
and/or reducing the effective heat transfer area. To shift loads, it is
necessary to exploit the degrees of freedom provided by loops and
utility paths. In this context, a loop is a circular closed path connecting
two or more heat exchangers, and a utility path connects a hot with a
cold utility or connects two utilities of the same type. Figure 4.63
shows a HEN loop and a utility path. A network may contain many
such loops and paths.
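Load shifting itself is simple bookkeeping: a shift u is added and subtracted alternately around a loop or along a utility path, so every stream's overall enthalpy balance is preserved. A minimal sketch with hypothetical duties in kW (the numbers are not from Figure 4.63):

```python
def shift_load(duties, signs, u):
    """Shift a load u around a loop or along a utility path.
    signs alternate +1/-1 so each stream's overall duty is unchanged."""
    return [q + s * u for q, s in zip(duties, signs)]

# Utility path heater -> recovery exchanger -> cooler (hypothetical duties):
# shifting 100 kW onto the utilities reduces the recovery match by 100 kW.
print(shift_load([750, 800, 1000], [+1, -1, +1], 100))   # [850, 700, 1100]

# Breaking a loop: shift the full load of the smaller exchanger so it drops out.
print(shift_load([500, 300], [+1, -1], 300))             # [800, 0]
```

After each shift, the temperature differences of the affected exchangers must be rechecked, since evolution deliberately trades energy for capital.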

4.5.2 Superstructure Approach

As presented so far, the Pinch Design Method is based on a sequential
strategy for the conceptual design of HENs. It first develops an

FIGURE 4.63 Loop and path in a Heat Exchanger Network.


understanding of the thermodynamic limitations imposed by the set

of process streams, and then it exploits this knowledge to design a
highly energy-efficient HEN. However, another approach has also
been developed: the superstructure approach to HEN synthesis,
which relies on developing a reducible structure (the superstructure)
of the network under consideration. An example of the spaghetti-
type HEN superstructure fragments typically generated by such
methods (Yee et al., 1990) is shown in Figure 4.64.
Typically, this kind of superstructure is developed in stages (Yee
et al., 1990) or blocks (Zhu et al., 1995), each of which is a group of
several consecutive enthalpy intervals (discussed previously). Within
each block, each hot process stream is split into a number of branches
corresponding to the number of the cold streams presented in the
block, and all cold streams are split similarly. This is followed by
matching each hot branch with each cold branch. Once developed,
the superstructure is subjected to optimization. The set of decision
variables includes the existence of the different stream split branches
and heat exchangers, the heat duties of the exchangers, and the split
fractions or flow rates of the split streams. The objective function
involves mainly the total annualized cost of the network, although
the function may be augmented by some penalty terms for dealing
with difficult constraints. Because the optimization procedure makes
structural as well as operating decisions about the network being
designed, it is called a structure-parameter optimization.² Depending

FIGURE 4.64 Spaghetti superstructure fragment (one enthalpy interval).

² This should not be confused with the parameter entities from Mathematical Programming.
on what assumptions are adopted, it is possible to obtain both MILP
and MINLP formulations. Linear formulations are usually derived
by assuming isothermal mixing of the split branches and then using
piecewise linearization on the heat exchanger capital cost functions.
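The scale of such superstructures is easy to appreciate from the index set alone. The sketch below builds the candidate-match set of a stagewise (Yee-type) superstructure; the function name is an assumption, and the count of structural alternatives is a rough upper bound that ignores all feasibility constraints.

```python
from itertools import product

def stagewise_matches(n_hot, n_cold, n_stages):
    """Index set (i, j, k): hot stream i can meet cold stream j in stage k."""
    return [(i, j, k) for i, j, k in
            product(range(n_hot), range(n_cold), range(n_stages))]

matches = stagewise_matches(2, 2, 2)
print(len(matches))        # 8 candidate exchangers for a tiny 2x2 problem
print(2 ** len(matches))   # 256 structural alternatives before any pruning
```

Each triple becomes a binary existence variable plus continuous duty and split-fraction variables in the MILP/MINLP formulation, which is why problem size grows so quickly.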
With the superstructure approach it is possible to include other
heat exchange optionsfor example, direct heat transfer units (i.e.,
mixing) and different heat exchanger types (e.g., double-pipe, plate-
fin, and shell-and-tube). Soršak and Kravanja (2004) presented a
method incorporating different heat exchanger types into a
superstructure block for each potential heat exchange match. Some
other interesting works in this area are by Daichendt and Grossmann
(1997), Zamora and Grossmann (1998), Björk and Westerlund (2002),
and Frausto-Hernández et al. (2003).

4.5.3 A Hybrid Approach

It is clear that the superstructure methodology offers some
advantages in the synthesis of process systems and, in particular, of
HENs. Among these advantages are: (1) the capacity to evaluate a
large number of structural and operating alternatives simultaneously;
(2) the possibility of automating (to a high degree) the synthesis
procedure; and (3) the ability to deal efficiently with many additional
issues, such as different heat exchanger types and additional
constraints (e.g., forbidden matches).
However, these advantages also give rise to certain weaknesses.
First, the superstructure approaches, in general, cannot eliminate the
inherent nonlinearity of the problem. Hence they resort to
linearization and simplifying assumptions, such as allowing only
isothermal mixing of split streams. Second, the transparency and
visualization of the synthesis procedure are almost completely lost,
excluding the engineer from the process. Third, the final network is
merely given as an answer to the initial problem, and it is difficult to
assess how good a solution it represents or whether a better solution
is possible. Fourth, the difficulties of computation and interpreting
the result grow dramatically with problem size; this is a consequence
of the large number of discrete alternatives to be evaluated. Finally,
the resulting networks often contain subnetworks that exhibit a
spaghetti structure: a cluster of parallel branches on several hot and
cold streams with multiple exchangers between them. Because of
how superstructures are constructed, these subnetworks often
cannot be eliminated by the solvers.
All these considerations highlight the fundamental trade-off
between applying techniques that are based on thermodynamic
insights, such as the Pinch Design Method, and relying on a
superstructure approach. It would therefore be useful to find a
combination of both approaches and, if one exists, to see whether it
offers any advantages. One such middle way is the class of hybrid
synthesis methods described next. A method of this class first

applies Pinch Analysis to obtain a picture of the thermodynamic

limitations of the problem; but then, instead of continuing on to
direct synthesis, it builds a reduced superstructure. At this point
the method follows the route of the classical superstructure
approaches, including structure-parameter optimization and
topology simplifications. The cycle of optimization and simplification
is usually repeated several times before the final optimal network
is obtained. All resulting networks feature a high degree of heat
recovery, though rarely the maximum possible. A key component
of this technique is avoiding the addition of unnecessary features
to the superstructure, and this is an area where Pinch Analysis can
prove helpful. A good example of a hybrid method for HEN
synthesis is the block decomposition method (Zhu, 1997).

4.5.4 Key Features of the Resulting Networks

The networks obtained by the different synthesis methods have
distinct features, which influence their total cost and their properties
of operation and control. Because the Pinch Design Method
incorporates the tick-off heuristic rule (Figure 4.53), the networks
synthesized by this method tend to have simple topologies with
few stream splits and feature a minimum number of units (Linnhoff
et al., 1982). Both the tick-off rule and the Pinch principle dictate
that utility exchangers be placed last, so they are usually located
immediately before the target temperatures of the streams.
However, the tick-off rule may also result in many process streams
not having utility exchangers assigned to them, which may reduce
control efficiency. The Pinch Design Method may reduce network
flexibility because it relies on the Pinch decomposition of the
problem (Figure 4.50) and so, to a large degree, fixes the network structure.
Both the pure superstructure approach and the hybrid approach
tend to produce more complex topologies. Their distinctive feature is
the greater number of heat exchangers and stream splits, a result of
how the initial superstructure is built. Spaghetti-type subnetworks
also present a significant challenge to control.

4.6 Total Site Energy Integration

The concept of the Total Site was introduced by Dhole and Linnhoff
(1993b). Figure 4.65 shows a typical industrial Total Site. Refinery and
petrochemical processes usually operate as parts of large sites or
factories. These sites have several processes serviced by a centralized
utility system involved in steam and power generation. The two
major components of Total Site integration are closely related: heat
recovery (through the steam or utility system) and power cogeneration.


FIGURE 4.65 Schematic of an industrial Total Site (central utility system with HP, MP, and LP steam mains serving Plants A, B, and C).

4.6.1 Total Site Data Extraction

The heating and cooling requirements of the individual processes
are represented by their respective GCCs (see Section 4.3.5). The
GCC represents the process-utility interface for a single process.
These individual GCCs can be used to identify the potential heat
recovery via steam mains. When a site houses several production
processes, the GCC of each process may indicate certain steam levels
that are suitable for the given process. This suggests that trade-offs
need to be made among energy demands of the various processes
on a Total Site, since each process usually needs utility heating or
cooling at levels different from the other processes.

4.6.2 Total Site Profiles

It is possible to set utility targets for sites involving several processes
(Dhole and Linnhoff, 1993b; Linnhoff and Dhole, 1993; see also
extensions by Klemeš et al., 1997). The procedure is based on
thermal profiles for the entire site that are called, naturally enough,
Total Site Profiles (TSPs). These profiles are constructed from the
GCCs of the individual processes on the site. The first step (see
Figure 4.66) is to extract the net heating and cooling demands. Two
options are possible: restricting the Heat Integration to each process
(so that heat recovery pockets are not considered for Total Site
analysis) or allowing extended integration across processes by
including the GCC segments forming the pockets. When the scope
of site integration is extended, however, some design possibilities

FIGURE 4.66 Construction of the Total Site Profiles when the heat recovery
pockets are excluded from site integration (1. extract segments, removing the
pockets and shifting the temperatures; 2. rotate source segments; 3. combine
segments; 4. align profiles).

become impractical; for instance, the analysis may try to integrate

streams that are distant, or there may be control and start-up
problems. Note that the parts of each GCC that are directly satisfied
by local utilities (e.g., furnaces within the processes) are also
excluded from the analysis; the remaining curve parts are those
representing the net heat source and sink demands to be satisfied
by the central utility system. As shown in the figure, subsequent
steps include the rotation of heat source segments (for purely
graphical reasons), the thermal combination of stream segments
(much as in the construction of process CCs), and alignment of the
resulting Total Site source profile and Total Site sink profile.
As shown in Figure 4.66, the source and sink elements extracted
from the GCCs are shifted by ΔTmin/2: the temperatures of the heat
source segments are reduced while those for the sinks are increased.
This operation ensures that all temperatures in the picture remain in
the scale of the true utility temperatures so that, if a utility profile
touches the process-derived profiles, then there will be just enough
temperature driving force to effect the heat transfer. The composite
of the heat source elements is the Site Source Profile, and that of the
sinks is the Site Sink Profile.
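The temperature shifting described above is a one-line transformation per segment. A sketch with illustrative segment tuples (T_start, T_end, heat load in MW); the representation and function name are assumptions:

```python
def shift_tsp_segments(segments, dtmin, kind):
    """Shift GCC-derived segments into the Total Site temperature scale:
    heat sources move down by dtmin/2, heat sinks move up by dtmin/2."""
    d = dtmin / 2.0
    s = -d if kind == "source" else +d
    return [(t1 + s, t2 + s, h) for t1, t2, h in segments]

# A source segment cooling from 150 to 100 °C with 5 MW, ΔTmin = 20 °C:
print(shift_tsp_segments([(150, 100, 5)], 20, "source"))  # [(140.0, 90.0, 5)]
# A sink segment heated from 60 to 110 °C with 8 MW:
print(shift_tsp_segments([(60, 110, 8)], 20, "sink"))     # [(70.0, 120.0, 8)]
```

After this shift, a utility line drawn at its true saturation temperature can touch a profile with exactly ΔTmin of driving force available in the unshifted scale.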
Site Source Profiles and Site Sink Profiles derive primarily from
the process GCCs. Other steam requirements (mostly for process
use not directly related to heating) are usually not represented in
the GCC; examples include steam for ejectors and reactors as well
as unaccountable steam usage. These additional requirements have
to be considered when analyzing or designing the site's utility
system. Such steam demands are considered to be sink elements
and are added to the Site Sink Profile without any temperature shift.

4.6.3 Heat Recovery via the Steam System

The maximum possible heat recovery through a utility system can be
targeted by using the Site Sink and Source Profiles in combination
with the steam header saturation temperatures. Site CCs for utility
generation and usage are constructed that account for feasible heat
transfer from the Site Source Profile to the site source CC and from
the site sink CC to the Site Sink Profile. The site CCs are analogous to
the individual process CCs. The source CC is built starting from the
highest feasible steam level. The steam generation at each level is
maximized before the next lower levels are analyzed. This ensures
maximum utilization of the heat sources' temperature potential. The
remainder of the Site Source Profile (i.e., the part that does not overlap
the newly built source CC) is served by cooling water (CW). Building
the sink CC follows a symmetrical procedure, but starting from the
lowest possible steam level. The use of this level is maximized before
moving up to the next higher temperature level, and so forth until
steam with the highest possible level is used, including boiler-generated
VHP steam.
Figure 4.67 illustrates the procedure for building the Total Site
CCs; here, the TSPs from Figure 4.66 were reused. There are several
steam pressure levels, which are represented by their corresponding
saturation temperatures. The main distribution levels are high-pressure
(HP) steam at a saturation temperature of 200°C, medium-pressure
(MP) steam at 170°C, and low-pressure (LP) steam at 115°C. In addition,
very-high-pressure (VHP) steam at 250°C is generated by the steam
boilers, and CW is also available.

FIGURE 4.67 Constructing Total Site Composite Curves.
The Site Source Profile in Figure 4.67 can generate at most 2 MW
of HP steam, another 2 MW of MP steam, and 15 MW of LP steam.
The rest of the heat sources will need to be served by the CW, which
completes the Site Source CC. The Site Sink CC indicates needs for
1 MW of LP steam, 6.5 MW of MP steam, 2.5 MW of HP steam, and a
remaining 5 MW demand to be satisfied by VHP steam. The two site
CCs can be overlapped in the same way as were the process CCs,
thereby illustrating the Total Site heat recovery possible through the
steam system. Figure 4.68 shows the corresponding site CCs
overlapped. The amount of heat recovery for the Total Site is indicated
by the amount of overlap between the CCs. Heat recovery is
maximized when the two Site Composite Curves touch and cannot
be shifted further. The area where the curves touch, which is usually
confined between two steam levels (here, the MP and LP levels), is
the Total Site Pinch. The steam mains located at the Site Pinch feature
opposite net steam loads; in other words, the steam main above the
Site Pinch is a net steam user while the one below the Site Pinch is a
net steam generator. Just as with the Process Pinch, the Site Pinch
divides an overall heat recovery problem into a net heat source and a
net heat sink.
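The level-by-level targeting described above can be sketched as a greedy procedure. The source profile and steam levels below are hypothetical; the rule is the one stated in the text: steam at a given saturation temperature can only be raised from profile heat above that temperature, and each level is maximized before the next lower one is considered.

```python
# Greedy steam-generation targeting against a Site Source Profile (hypothetical data).

def heat_above(profile, t):
    """Heat content of the source profile above temperature t (linear segments)."""
    q = 0.0
    for t_hi, t_lo, duty in profile:
        if t <= t_lo:
            q += duty                                  # whole segment lies above t
        elif t < t_hi:
            q += duty * (t_hi - t) / (t_hi - t_lo)     # partial overlap
    return q

def steam_targets(profile, levels):
    """levels: {name: saturation T [degC]}. Returns MW of steam raised per level,
    working from the highest saturation temperature downward."""
    targets, assigned = {}, 0.0
    for name, ts in sorted(levels.items(), key=lambda kv: -kv[1]):
        gen = max(heat_above(profile, ts) - assigned, 0.0)
        targets[name] = gen
        assigned += gen
    return targets

profile = [(230.0, 180.0, 4.0), (180.0, 100.0, 16.0)]   # (T_hi, T_lo, MW), hypothetical
levels = {"HP": 200.0, "MP": 170.0, "LP": 115.0}
print(steam_targets(profile, levels))
```

Any profile heat left below the lowest steam level would go to CW; the sink-side CC would be built with the symmetrical bottom-up rule.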


FIGURE 4.68 Targeting heat recovery.


4.6.4 Power Cogeneration

The factors that control the economics of utility systems are fuels and
their properties, the ratio of fuel prices to power prices, the efficiency of
the utility system, and the amount of power to be imported/exported.
Depending on these factors and the Total Site power demand, the site
may be a net power importer or exporter or be in power balance.
Most industrial processes require steam at different pressure
levels up to about 30 bar. Central utility boilers usually generate steam
at higher pressure (40–100 bar). Back-pressure steam turbines are used
to expand the steam from higher- to lower-level steam headers, thus
generating power while delivering steam to processes. Another
method of cogeneration employs a gas turbine, which is itself a
power-generating device. A gas turbine produces large amounts
of waste heat along with the power; the ratio of heat to power is
about 1.5 to 2. This is high-temperature (450–600°C) waste heat capable
of generating even VHP steam. The heat from a gas turbine's exhaust
stream can be utilized by heat recovery steam generators with or
without supplementary firing. The generated steam is expanded
through steam turbines to produce additional power.
The work of Dhole and Linnhoff (1993b) has been further
developed by Raissi (1994) and Klemeš et al. (1997). The latter paper
describes the development of a tool called the Site Utility GCC
(SUGCC). The area enclosed by this curve is proportional to the
power cogeneration potential of the site steam system; Klemeš et al.
(1997) also defined a simple proportionality coefficient, whose value
is usually evaluated for each industrial site separately. This
cogeneration targeting model is referred to as the T-H model
because it is based on heat flows through the steam system.
Using SUGCCs allowed Klemeš and colleagues (1997) to set
thermodynamic targets for cogeneration of power along with targets
for site-scope heat recovery that would minimize the cost of utilities.
Satisfying the goal of maximum heat recovery leads to a minimum
boiler VHP steam requirement, which in turn can be achieved by
maximizing steam recovery. Here the power generation by steam
turbines is also minimal, which has the effect of maximizing
imported power. This scenario can be represented by the Site CCs
that are shifted to a position of maximum overlap (i.e., pinched). This
target represents the thermodynamic limitation on system efficiency,
but this is not a specification that must be achieved. The case of
minimizing the cost of utilities is handled by exploring the trade-off
between steam recovery and power cogeneration by steam turbines.
If design guidelines are thus based on minimizing cost, then the
resulting network design is usually different from that produced
when aiming to minimize fuel consumption.
Mavromatis and Kokossis (1998) proposed a simple model of
back-pressure steam turbine performance. In this model, the
performance of a steam turbine is related to its size (in terms of
maximum shaft power) and to part-load performance; the shaft
power is modeled as a function of the steam mass flow known
as the Willans line. This model was extended to condensing steam
turbines by Shang (2000). All these works follow the same model
structure and employ the same equations; however, they use different
values for the turbine regression coefficients.
The intercept of the Willans line was treated by Mavromatis and
Kokossis (1998) and by Shang (2000) as identical to the turbine energy
losses, and a fixed loss rate was also assumed. Varbanov, Doyle, and Smith
(2004) introduced improvements to those models by (1) recognizing
that the Willans line intercept has no direct physical meaning and is
simply the intercept of a linearization, and (2) accounting for both
inlet and outlet pressures of the steam turbines. These improved
steam turbine models have been incorporated into methodologies for
simulating and optimizing steam networks; they have also been
used to target heat and power cogeneration by assuming a single
large steam turbine for each expansion zone between two consecutive
steam headers.
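The Willans-line shape these models share can be sketched as a simple linear relation between shaft power and steam flow. The slope and intercept values below are hypothetical; in the published models they are regressed from turbine size and, in the improved versions, from the inlet and outlet steam conditions.

```python
# Minimal Willans-line sketch: shaft power as a linear function of steam mass flow,
# W = n * m - W_int. Slope n and intercept W_int are illustrative values only,
# not regression coefficients from any of the cited models.

def willans_power(m, n, w_int):
    """Shaft power [MW] for steam flow m [t/h]; clipped at zero below the no-load flow."""
    return max(n * m - w_int, 0.0)

n = 0.18      # slope [MW per t/h], hypothetical
w_int = 2.0   # intercept [MW], hypothetical (no direct physical meaning)

for m in (0.0, 20.0, 60.0, 100.0):
    print(f"m = {m:5.1f} t/h -> W = {willans_power(m, n, w_int):5.2f} MW")
```

A cogeneration target in the spirit of the text would apply one such relation to the total flow through each expansion zone between consecutive steam headers.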

4.6.5 Advanced Total Site Optimization and Analysis

A model for optimizing the utility system serves as a tool for reducing
site operating costs related to energy and for analyzing the
thermodynamic limitations of energy conversions. An advanced
approach to these concepts, known as "top-level analysis," allows
identifying which site processes to target for Heat Integration
improvement (Varbanov, Doyle, and Smith, 2004). Consider the utility
system shown in Figure 4.69 (Smith and Varbanov, 2005), whose
operating properties have been optimized for the given steam and
power demands.
Suppose it were possible to reduce the HP steam demand, for
example, by improving the energy efficiency within the processes
that use HP steam. What, then, would such a saving in steam actually
be worth? Reducing HP steam demand means that less steam needs
to be expanded from the VHP level, which could lead in turn to less
power cogeneration and increased import of power. As a support
tool for deciding how best to utilize the potential steam excess and
estimate the value of potential steam savings, Varbanov, Doyle, and
Smith (2004) introduced the concept of marginal steam price. This
characteristic captures the change in a utility system's energy cost
per unit change in steam demand, and it is specific to a given
combination of steam header and operating conditions. By
optimizing the utility system at gradual successive reductions of
potential steam demand on the headers, it is possible to obtain a
curve of the marginal steam price versus the savings that could be
obtained. The marginal price curve for the utility system in
[Figure 4.69 flowsheet: Boiler 1, Boiler 2, and a gas turbine (GT),
fired with coal, fuel oil, and natural gas, supply a VHP header
(101 bar abs.); steam turbines T4 to T7 expand steam through the HP
(41 bar abs.), MP (15 bar abs.), and LP (3 bar abs.) headers down to a
condenser at 0.073 bar abs., with an LP vent and CW. All flow rates
are in t/h; lower and upper bounds are shown in the format [min; max].]

FIGURE 4.69 Optimized utility system.

FIGURE 4.70 Marginal steam prices for the utility system shown in
Figure 4.69 (marginal price [$/t] versus steam savings [t/h] for HP
and MP steam).

Figure 4.69 is shown in Figure 4.70. The plot indicates that the greatest
potential for improvement lies in the processes using HP steam,
followed by the MP users; the potential for LP steam savings is evidently
quite modest.
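The successive-reoptimization idea behind the marginal price curve can be sketched with a finite difference. The cost function below is a deliberately simplified stand-in with hypothetical coefficients, not the optimization model of Varbanov, Doyle, and Smith (2004); it only illustrates how a curve like Figure 4.70 is produced.

```python
# Finite-difference sketch of a marginal steam price curve (toy cost model).
# A real study would re-optimize the full utility-system model at each reduced
# demand; operating_cost() here is a hypothetical stand-in.

def operating_cost(hp_demand):
    """Total energy cost [$/h] versus HP steam demand [t/h] (toy model:
    fuel cost grows with demand; a cogenerated-power credit offsets part of it)."""
    fuel = 8.0 * hp_demand + 0.02 * hp_demand ** 2      # $/h
    power_credit = 1.5 * hp_demand                      # $/h of avoided power import
    return fuel - power_credit

def marginal_prices(base_demand, savings_steps, dm=1.0):
    """Marginal price [$/t] at each cumulative saving: cost change per extra tonne."""
    out = []
    for s in savings_steps:
        d = base_demand - s
        out.append((s, (operating_cost(d + dm) - operating_cost(d)) / dm))
    return out

for saving, price in marginal_prices(90.0, [0, 20, 40, 60]):
    print(f"saving {saving:4.1f} t/h -> marginal price {price:.2f} $/t")
```

As in Figure 4.70, the marginal price falls as cumulative savings grow, because each further tonne saved displaces progressively cheaper steam.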
Mass Integration

The two main branches of Process Integration are energy
integration and mass integration. Mass integration is a
systematic methodology that provides a fundamental
understanding of the global flow of mass within the process and
then employs this understanding to identify performance targets
and to optimize the allocation, separation, and generation of streams
and species. In the context of wastewater minimization, a mass
integration problem involves transferring mass (contaminant load)
from rich process streams to lean process streams in order to achieve
their target outlet concentrations while simultaneously minimizing
waste generation and the consumption of utilities, including
freshwater and external mass separating agents (Rakovi, 2006).

5.1 Water Integration

Water is widely used in various industries as an important raw
material. It is also frequently used in the heating and cooling utility
systems (e.g., steam production, cooling water) and as a mass
separating agent for various mass transfer operations (e.g., washing,
extraction). Strict requirements for product quality and associated
safety issues in manufacturing contribute to large amounts of
high-quality water being consumed by the industry. Stringent
environmental regulations coupled with a growing human population
that seeks improved quality of life have led to increased demand for
quality water. These developments have increased the need for
improved water management and wastewater minimization.
Adopting techniques to minimize water usage can effectively reduce
both the demand for freshwater and the amount of effluents generated
by the industry. In addition to this environmental benefit, efficient
water management reduces the costs for acquiring freshwater and
treating effluents.
A number of different methodologies have been applied to
minimizing water use and effluents. These include:

- Minimizing water consumption through efficient management and control of process operations
- Optimizing the material and energy balances of processes by applying advanced optimization strategies that aim to reduce waste
- Integrating optimization and production planning techniques in conjunction with real-time plant measurements to control for product quality and minimize losses
- Increasing the use of enhanced intelligent support to operators by applying knowledge-based decision-making procedures to select options that best protect the environment
- Employing Process Integration techniques that are based on Pinch Analysis

Each processing industry has its own unique and specific features.
In all cases, however, it is advisable to progress from the simplest
measures, such as good housekeeping based on efficient management,
control, and maintenance, to more advanced
methodologies. Some processes are continuous and run seven days a
week for the whole year; others are intermittent and/or highly
dependent on availability of the feedstock. Typical of such campaign
production are plants that process sugar, fruit juice, and cereal. In
contrast, breweries operate on a nearly continuous basis, although
the processing is performed in batches. All these factors influence
investments in processing plants and the technologies adopted,
including those that involve water.

5.2 Minimizing Water Use and Maximizing Water Reuse

5.2.1 Legislation
Water use and wastewater discharge are both subject to national and
international standards.
For the United States, the most significant water-related federal
legislation includes: (1) the National Pollutant Discharge Elimination
System (NPDES) permit program (1972); (2) the Clean Water Act
(Federal Water Pollution Control Amendments of 1972), as amended
by the Clean Water Act of 1977; (3) the Safe Drinking Water Act of
1974; (4) the Toxic Substances Control Act of 1976; and (5) the Water
Quality Act of 1987. Individual states also legislate regarding water
for example, the comprehensive water legislation passed in California
and signed by the governor in 2009.
Legislation in member countries of the European Union (EU)
follows the directives of the European Commission (EC), which is
the executive body of the EU. The most relevant of these directives
are published on the official EU web site (EUROPA, 2009). Selected
topics and titles include:

- The new European water policy: river basin management
- Water Framework Directive (2000/60/EC)
- Strategies to prevent chemical pollution of surface water under the Water Framework Directive
- Priority substances under Article 16 of the Water Framework Directive
- A European action program on flood risk management
- Discharges of Dangerous Substances Directive (76/464/EEC)
- Water pollution stemming from urban wastewater and certain industrial sectors; Urban Waste Water Treatment Directive (91/271/EEC)
- Water pollution caused by nitrates from agricultural sources; Nitrates Directive (91/676/EEC)
- Bathing water quality of rivers, lakes, and coastal waters; Bathing Water Quality Directive (76/160/EEC) and its proposed revision
- Drinking water quality; Drinking Water Directive (98/83/EC)

Most of these items directly (or indirectly) concern the water used
and wastewater discharged by processing industries.

5.2.2 Best Available Techniques

The Integrated Pollution Prevention and Control (IPPC) Directive
(96/61/EC) introduced a framework within which EU member states
are required to issue operating permits for industrial installations
performing certain activities. These permits must prescribe
conditions that are based on best available techniques (BATs). Best
available techniques are those with the best overall environmental
performance that can be introduced at a reasonable cost, and their
purpose is to ensure a high level of protection for the environment as
a whole. A key aim of the IPPC Directive is to stimulate an intensive
exchange of information on BAT between the European member
states and affected industries. The European IPPC Bureau (eippcb. organizes this exchange of information and produces BAT
reference documents (BREFs), which member states must take into
account when establishing permit conditions. The bureau carries out
its work through technical working groups (TWGs) consisting of
nominated experts from industry, EU member states, countries in
the European Free Trade Association, and nongovernmental
organizations concerned with the environment. Because the
European IPPC Bureau is located in Seville, Spain, activities carried
out within the framework of the IPPC Directive are often referred to
as the Seville process.
Several BAT-oriented studies have been made in food processing
industries. A good example comes from the Flemish Centre for Best
Available Techniques (BAT-CENTRE, 2009). This document contains
an overview of available information on the fruit and vegetable

processing industry. Using BAT as guidance, the study proposes:

- To Flemish authorities: Permit conditions, and techniques for which investment support may be offered because they are less detrimental to the environment
- To Flemish companies: Guidelines for implementing the concept of BAT

As described in the study, the fruit and vegetable processing
industry comprises the sectors of frozen fruits and vegetables, canned
fruits and vegetables, processed potatoes, peeled potatoes, and fruit
juices. The most important environmental problems are the use of
large volumes of ground water and the production of wastewater
polluted with organic carbon, nitrogen, and phosphorus. Information
on BAT candidates was obtained mostly from expertise present in
Belgium and neighboring countries. More than a hundred different
techniques were selected and examined in terms of technical and
economical feasibility. Best available techniques in wastewater
treatment incorporate, for example, primary and aerobic wastewater
treatment for small potato-peeling enterprises as well as primary,
anaerobic, and aerobic wastewater treatment for large-scale
processing sites. The BAT concept was the basis for concluding that
Flemish wastewater discharge limits on surface water were
technologically and economically feasible, although new limits
(25–50 mg/L) on total phosphorus discharge were suggested. Annual
wastewater treatment costs for an average enterprise were estimated
to be €2.5–3.5 million. For small potato-peeling companies,
wastewater discharge into the sewer system was found to be the most
practical option. Water-saving measures and the reuse of water may
reduce groundwater consumption by as much as 25–30 percent. A good
source for BAT practices is ENVIROWISE, a U.K. government
program managed by Momenta (a division of AEA Technology Plc)
and TTI (a division of Serco Ltd.), which offers practical
environmental advice for business. Their web site (ENVIROWISE,
2009) provides a wide range of information including news and best
practice examples.

5.2.3 Water Footprint

Hoekstra (2008) defined the water footprint (WFP) as an indicator of
direct and indirect water use, which is measured in terms of water
volumes consumed, evaporated, and/or polluted. The WFP includes
consumptive use of virtual green, blue, and grey water. The virtual
green water content of a product is the volume of rainwater that
evaporated during the production process. For the food industry,
this volume is consumed mainly by agricultural products and
Mass Integration 109
represents the total rainwater evaporation from the fields during a
crop's growing period. The virtual blue water content of a product is
the volume of surface water or groundwater that evaporates as a
result of production; examples include the evaporation of irrigation
water from fields, irrigation canals, and storage reservoirs. The virtual grey
water content of a product is the volume of water required to dilute
pollutants in order to meet water quality standards for reuse or
discharge to the environment. A water footprint can be calculated for
any product or activity that has a well-defined group of producers
and consumers. The water footprint is a geographically and
temporally explicit indicator: it reflects not only volumes of water
consumption and pollution but also the type of water use as well as
where and when the water was used.
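The grey-water volume follows from a dilution balance: the pollutant load divided by the assimilation capacity of the receiving water. The sketch below uses this standard formulation; the function name and the numbers are illustrative, not taken from the text.

```python
# Grey water footprint from the dilution requirement: the volume of water needed
# to dilute a pollutant load L down to the ambient quality standard,
# WF_grey = L / (c_max - c_nat). All values below are illustrative only.

def grey_water_footprint(load_kg, c_max_mg_l, c_nat_mg_l):
    """Pollutant load [kg] diluted from the natural background concentration
    to the quality standard. Returns water volume in m3 (1 mg/L == 1 g/m3)."""
    if c_max_mg_l <= c_nat_mg_l:
        raise ValueError("standard must exceed natural background concentration")
    return (load_kg * 1000.0) / (c_max_mg_l - c_nat_mg_l)  # g / (g/m3) = m3

# Example: 50 kg of nitrogen discharged, 10 mg/L standard, 1 mg/L background
volume_m3 = grey_water_footprint(50.0, 10.0, 1.0)
print(volume_m3)
```

Summing the green, blue, and grey components over every step of a supply chain gives the product's total water footprint described above.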
The idea of water life-cycle assessment has gained more interest
since the concept of a water footprint was introduced (Hoekstra and
Chapagain, 2007; Hoekstra, 2008). In food supply chains, the actual
water content of the final product is usually negligible when
compared with the virtual water content, which is the total fresh
water used during the various steps of supply and production. Aside
from the water that appears as an ingredient in prepared foods, most
water use in the food industry consists of the virtual water described
in the previous paragraph (Casani, Rouhany, and Knøchel, 2005).
The most common water-using operations are as follows:

- Heating: Boilers, heat exchangers, etc.
- Process water: Cooling towers
- Potable uses: Offices, canteens, etc.
- Washing: Equipment, bottles, floors, vehicles, etc.
- Rinsing: Equipment, bottles, food materials, final products
- Transport medium

The food industry consumes a large amount of water: its
consumption was estimated to be 347.2 Mm3 in Canada (Dupont and
Renzetti, 1998) and 455 Mm3 in Germany (Fähnrich, Mavrov, and
Chmiel, 1998). Other studies (Hoekstra and Chapagain, 2007; Water
Footprint Network, 2009) have reported figures for use of virtual water
in the production of some common food products; see Table 5.1.
Just as in the case of heat recovery, for water recovery it is best to
start with simple measures based on efficient management (e.g., good
housekeeping) before moving on to more advanced methodologies
(e.g., Process Integration techniques). Industrial operations are not
always run in continuous mode, since they depend to a great extent
on the availability of feedstock and the need to control quality. As
mentioned before, sugar, fruit juices, and cereal are typically
processed intermittently, and breweries operate continuously. These
factors will affect the choice of technologies to be adopted, including
those that involve water and wastewater.

Product | Virtual water (L)
1 glass of beer (250 mL) | 75
1 glass of milk (200 mL) | 200
1 cup of coffee (125 mL) | 140
1 cup of tea (250 mL) | 35
1 slice of bread (30 g) | 40
1 slice of bread (30 g) with cheese (10 g) | 90
1 potato (100 g) | 25
1 apple (100 g) | 70
1 glass of wine (125 mL) | 120
1 glass of apple juice (200 mL) | 190
1 glass of orange juice (200 mL) | 170
1 bag of potato crisps (200 g) | 185
1 egg (40 g) | 135
1 hamburger (150 g) | 2400
1 tomato (70 g) | 13
1 orange (100 g) | 50

TABLE 5.1 Virtual Water Consumed while Processing Selected Food Industry Products (after Hoekstra and Chapagain, 2007)

A comprehensive survey of water and wastewater applications
for the processing industry can be found in the books of Smith (2005)
and Klemeš, Smith, and Kim (2008). Most of these techniques for
water minimization fall into one of two groups:

1. Process changes: This category groups fundamental changes in unit operations that consume freshwater. Examples include increasing the number of stages in an extraction process to reduce its consumption of water, changing from wet cooling towers to air coolers, improving energy efficiency to reduce steam demand, increasing the condensate return from steam systems, and improved housekeeping. Good housekeeping practices include analyzing and measuring water use and wastage, reducing water wastage, regular cleaning operations, and equipment maintenance.

2. Reuse, recycling, and regeneration: These options enable the reuse of wastewater between water-consuming operations. Of course, the presence of pollutants in wastewater streams must be considered so that subsequent water-using processes will not be adversely affected. This aspect is discussed in more detail in Section 5.2.4.

Several particular methodologies for water minimization are listed in the chapter by Klemeš and Perry (2007b):

- Water Pinch Analysis techniques
- Mathematical optimization techniques
- Efficient management and control of process operations
- Integrating optimization and production planning techniques in conjunction with real-time plant measurements to control for product quality and minimize losses

5.2.4 Minimizing Water Usage and Wastewater

Overview of the Measures
The task of minimizing water usage and wastewater discharge has
received considerable attention during the last few years as water has
become more costly and an environmentally strategic concern. Smith
(2005) summed up the measures applied to minimizing water usage
and wastewater as follows:

1. Process changes: These include all the measures described under item 1 of the listing in Section 5.2.3. Water quality can also be improved by reducing the use of certain processing components, such as hazardous cleaning agents, chemicals, and additives. Additional process changes may be driven through inspection or through process optimization via developed technologies, for instance, Process Integration (Pinch Technology). Reduced consumption and increased efficiency may be achieved either by upgrading equipment or by adopting new technologies.

2. Reuse: This is a viable strategy when wastewater from a given operation is used directly in other operations, provided that the pollutants in the reused water do not disturb the processes in the downstream operations. Methods for maximizing water reuse are detailed in Section 5.3, along with a discussion on the use of recycled water in food processing.

3. Regeneration reuse: This is the process of purifying wastewater from one operation and then reusing it in another operation or process.

4. Regeneration recycling: Here the contaminants in the wastewater are only partly eliminated before the water is returned for use in the same process.
The paper by Blomquist and Brown (2004) offers a useful review
of wastewater minimization. The authors examined a large number
of preassessment and assessment techniques for respectively
identifying waste minimization focus areas (opportunities) and
options (solutions) during a waste minimization audit. Blomquist
and Brown critically reviewed these techniques and assessed their
relative merits. The preassessment techniques were analyzed in
terms of their ease and speed of implementation; the assessment
techniques were evaluated in terms of their usefulness and
applicability.

Wastewater Treatment
Methodologies for wastewater handling can be subdivided into
different stages of treatment, as follows:

- Pretreatment: Mechanical separation of coarse particles (e.g., sticks, plastics).
- Primary treatment: Removal of suspended solids by physical or physical-chemical treatment. This process may consist of natural sedimentation or may be assisted via adding coagulants and/or flocculants or via centrifugation. Primary treatment also includes neutralization, stripping (e.g., elimination of ammonia, NH3), and the removal of oils and grease by flotation.
- Secondary treatment: The removal of colloids and similar matter from the wastewater. This treatment, which may include chemical and biological processes, minimizes the wastewater's organic load. Processes commonly used include activated sludge treatment and anaerobic digestion, both of which lead to the critical removal of phosphate, ammonia, and oxygen-depleting contaminants.
- Tertiary treatment: This stage comprises physical and chemical processes that eliminate such pollutants as phosphate, ammonia, minerals, heavy metals, and organic compounds. The processes are viewed as a polishing phase and are usually more expensive than conventional techniques. The necessity of applying this type of treatment is largely dictated by two factors: (i) meeting discharge conditions established by environmental quality standards (EQS), which may be stricter than BAT requirements; subject contaminants include ammonia, so-called List I and List II (BAT-CENTRE, 2009) substances, and suspended solids; and (ii) recycling the wastewater for further use in the factory as either process water or washing water.
Tertiary treatment is especially important in environmentally
sensitive areas where the effluent must have low concentrations and
loads of nitrogen and phosphorus.

5.3 Introduction to Water Pinch Analysis

Pinch Analysis was first developed for Heat Exchanger Network
synthesis and subsequently extended to yield other energy integration
applications (El-Halwagi, 1997; Klemeš et al., 1997; Smith, 2005). The
analogous characteristics of heat and mass transfer allowed the
application of Pinch Analysis to the synthesis of mass exchange
networks and a series of other mass integration problems (El-Halwagi,
1997). Water Pinch Analysis emerged as a special case of mass
integration following the seminal work of Wang and Smith (1994).
However, that paper's targeting technique was limited to the fixed
load problem, where water-using processes are modeled as mass
transfer operations. Later work on Water Pinch Analysis has focused
mainly on the fixed flow-rate problem, where flow-rate requirements
are viewed as the important constraints for water-using processes
(Dhole et al., 1996; Hallale, 2002; El-Halwagi, Gabriel, and Harell, 2003;
Manan, Foo, and Tan, 2004; Prakash and Shenoy, 2005).
In the context of Water Pinch Analysis, reuse means that the
effluent from one unit is used in another unit and does not reenter
the unit where it was previously used; in contrast, recycling means
that the effluent will reenter the unit where it was previously used,
usually after some degree of purification. In addition, one may also use a
regeneration unit (e.g., filter, stripper) to partially purify the water
stream prior to reuse or recycling (Wang and Smith, 1994).
A typical Pinch Analysis study proceeds in two stages. The first is
targeting, whereby minimum freshwater and wastewater flow rates
are set; this is followed by network design to achieve the targeted flow
rates. It is worth emphasizing that the targeting step is the primary
focus in Water Pinch Analysis. The target is needed in order to
determine how well a reuse or recycle system can actually perform in
terms of thermodynamic constraints. Establishing targets in advance
of design provides a clear picture of the mass exchange limitations of
the design problem, indicating the smallest achievable freshwater
intake and wastewater discharge. Once the targets are established, a
water network can be designed using any network design tools.
Wang and Smith (1994) described a methodology for determining
the amount of water required by a set of operations when water is
reused. They showed that significant water savings can be achieved
compared with the case when only freshwater is used. The authors
employ a simple example that makes use of the limiting Composite
Curve (CC) and incorporates four water-using operations. The
problem data is presented in Table 5.2.
Operation number | Contaminant mass flow [kg/h] | Cin [ppm] | Cout [ppm] | FL [t/h]
1 | 2 | 0 | 100 | 20
2 | 5 | 50 | 100 | 100
3 | 30 | 50 | 800 | 40
4 | 4 | 400 | 800 | 10

TABLE 5.2 Problem Data for Wang and Smith's (1994) Water-Using Operations

The table lists the maximum inlet and outlet concentrations of a
single contaminant for four operations. The last column gives the
limiting water flow rate, which is the flow rate required by the
operation if the contaminant mass is taken up by the water between
the inlet and outlet concentrations. Note, however, that for an
operation whose inlet concentration is greater than zero, using
uncontaminated freshwater enables a lower flow rate than the
limiting water flow rate for that operation. A straightforward analysis
of the problem data, assuming that each operation uses freshwater,
reveals that the total (uncontaminated) freshwater required by the
operations is 112.5 t/h, with the four operations requiring 20, 50, 37.5,
and 5 t/h.
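The 112.5 t/h baseline follows directly from Table 5.2: with uncontaminated (0 ppm) freshwater, each operation needs only enough water to absorb its contaminant load by the time the water reaches the maximum outlet concentration. A short calculation sketch:

```python
# Limiting data from Table 5.2: operation -> (mass load [kg/h], Cin [ppm], Cout [ppm])
operations = {1: (2, 0, 100), 2: (5, 50, 100), 3: (30, 50, 800), 4: (4, 400, 800)}

# With 0-ppm freshwater the water must absorb the full mass load by Cout:
# F [t/h] = m [kg/h] / Cout [ppm] * 1000
fresh = {op: m / c_out * 1000 for op, (m, _c_in, c_out) in operations.items()}

print(fresh)                 # {1: 20.0, 2: 50.0, 3: 37.5, 4: 5.0}
print(sum(fresh.values()))   # 112.5 t/h in total, as quoted above
</imports>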
However, if water reuse is allowed, then an analysis that makes
use of the limiting CC produces a target for the minimum water flow
rate of 90 t/h. The limiting CC of the four water-using operations is
plotted in Figure 5.1. The water supply line, which satisfies the water-using
operations represented by the limiting CC, has its origin at zero
concentration and lies below the curve. The slope of the line is such
that it touches the CC at one point, which is termed the Water Pinch.
Other water supply lines with the same origin could be drawn, but
these would not touch the CC and thus would indicate flow rates
greater than the (preferred) minimum. If the water supply line were
drawn with a steeper slope to indicate a smaller flow rate, then the line
would actually cross the limiting CC and so could be part of an
infeasible design.
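The 90 t/h target can be reproduced numerically without drawing the diagram. The sketch below follows the targeting logic (not Wang and Smith's exact algorithm): sum the limiting flow rates of the operations active in each concentration interval to build the limiting Composite Curve, then find the steepest water supply line through the origin that stays below it; the binding vertex is the Pinch.

```python
# Limiting data from Table 5.2: (Cin [ppm], Cout [ppm], limiting flow [t/h])
ops = [(0, 100, 20), (50, 100, 100), (50, 800, 40), (400, 800, 10)]

levels = sorted({c for cin, cout, _ in ops for c in (cin, cout)})  # [0, 50, 100, 400, 800]

# Cumulative contaminant mass [kg/h] along the limiting Composite Curve
cum_mass, mass = [], 0.0
for lo, hi in zip(levels, levels[1:]):
    flow = sum(f for cin, cout, f in ops if cin <= lo and hi <= cout)  # ops spanning interval
    mass += flow * (hi - lo) / 1000        # t/h * ppm -> kg/h
    cum_mass.append(mass)                  # [1, 9, 21, 41] at 50/100/400/800 ppm

# A supply line of flow F from the origin is feasible only if F >= 1000*m/C
# at every vertex of the composite; the largest ratio fixes the target.
targets = [m * 1000 / c for m, c in zip(cum_mass, levels[1:])]
f_min = max(targets)
pinch = levels[1:][targets.index(f_min)]
print(f_min, pinch)   # 90.0 t/h, Pinch at 100 ppm
```

The vertex masses 1, 9, 21, and 41 kg/h match the axis of Figure 5.1.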
Wang and Smith (1994) provided a methodology for calculating
the minimum flow rate of water (including reuse) required to remove
contaminants from water-using operations. In addition, this paper
provided a methodology for designing a water reuse system. Figure 5.2
displays the final system design for the water operations described
in Table 5.2. The figure shows that, of the original targeted freshwater
amount of 90 t/h, 20 t/h is fed to operation 1 and 50 t/h is fed to
operation 2. The remaining 20 t/h is fed to operation 3 along with
20 t/h from operation 1. Of the original 50 t/h fed to operation 2,
5.7 t/h is fed to operation 4 and the remaining 44.3 t/h goes directly
to wastewater. The authors acknowledge that this design could be
evolved further to produce alternative networks.

FIGURE 5.1 Limiting Composite Curve and the Water Pinch.

FIGURE 5.2 Water treatment system designed using Water Pinch methodology.
To set the flow-rate targets for water reuse or recycling, various
graphical and tabulated targeting techniques may be employed. In
addition to the limiting CCs (Wang and Smith, 1994) mentioned
previously, the following methods have been used: the water surplus
diagram (Hallale, 2002), the material recovery Pinch diagram
(El-Halwagi, Gabriel, and Harell, 2003; Prakash and Shenoy, 2005),

the cascade analysis technique (Manan, Foo, and Tan, 2004), and the
source CC (Bandyopadhyay, Ghanekar, and Pillai, 2006). Once the
flow-rate targets have been identified, numerous techniques can be
used to design a water network that achieves those targets. The works
just cited were developed for continuous processes, but there have
been several reported efforts to apply Water Pinch Analysis to batch
processes; these include the works of Wang and Smith (1995), Liu,
Yuan, and Luo (2007), Foo et al. (2006), and Majozi, Brouckaert, and
Buckley (2006).

5.4 Flow-Rate Targeting with the Material Recovery Pinch Diagram
This section illustrates the targeting technique of the Material Recovery
Pinch Diagram (MRPD) (El-Halwagi, Gabriel, and Harell, 2003;
Prakash and Shenoy, 2005). Constructing an MRPD requires knowledge
of the material flow rates and loads of each process sink and
source. Given this information, one may construct an MRPD as follows:
1. Arrange the individual water sources (SRi) and demands (SKj)
into two lists, in ascending order of concentration level (C).
2. For each source and demand, calculate the load given by the
product of its flow rate and concentration level (i.e., F × C).
3. Plot the cumulative sources and demands, on a diagram of
load versus cumulative flow rate, in ascending order of their
concentration levels to form the sink and source CCs. In
order to render the problem feasible, the cumulative water
source CC must lie below the cumulative water demand CC,
ensuring that the water purity requirements are satisfied.
4. For pure fresh resources (zero concentration of impurities),
the sink and source CCs are separated horizontally until
they barely touch, with the source composite lying below
and to the right of the sink composite, as shown in
Figure 5.3(a).
5. For impure fresh resources, the source CC is shifted along an
impure fresh locus until it lies below and to the right of the
sink CC; this is shown in Figure 5.3(b).
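Steps 1 through 3 amount to a sort followed by cumulative sums. A minimal sketch of that construction (the flows and concentrations below are illustrative only, not from any case study in this chapter):

```python
def composite(streams):
    """Cumulative (flow, load) vertices of a CC; streams = [(flow, conc_ppm), ...]."""
    pts, flow, load = [(0.0, 0.0)], 0.0, 0.0
    for f, c in sorted(streams, key=lambda s: s[1]):   # step 1: ascending concentration
        flow += f
        load += f * c                                  # step 2: load = F x C  (t/h * ppm = g/h)
        pts.append((flow, load))                       # step 3: cumulative plot points
    return pts

sinks   = [(50, 20), (100, 50), (80, 100)]    # hypothetical demands (t/h, ppm)
sources = [(50, 50), (100, 100), (70, 150)]   # hypothetical sources (t/h, ppm)

print(composite(sinks))    # [(0.0, 0.0), (50.0, 1000.0), (150.0, 6000.0), (230.0, 14000.0)]
print(composite(sources))  # [(0.0, 0.0), (50.0, 2500.0), (150.0, 12500.0), (220.0, 23000.0)]
```

Steps 4 and 5 then shift the source composite horizontally (or along the impure-fresh locus) until it just touches the sink composite from below; the shift at that point is the minimum fresh feed.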

The overlap area of the sink and source CCs represents the
maximum recovery among all sink and source streams within the
network. The point where the two composites touch is called
the Material Recovery Pinch, which is the bottleneck for maximum
recovery. The segment where the sink CC extends to the left of the
source CC represents the minimum feed needed for fresh resources
(to be purchased); the region where the source CC extends to the
right of the sink CC represents the minimum waste discharge from
the network (for final treatment before release into the environment).

FIGURE 5.3 MRPD for (a) pure fresh resource and (b) impure fresh resource.

Both the minimum fresh resources needed and the minimum
waste generated by the network are network resource targets, and
they are determined before the recovery network is designed. In the
next section, the MRPD is used to establish flow-rate targets for a
case study in the production of fruit juice.

5.5 MRPD Applied to Fruit Juice Case Study

Table 5.3 shows the limiting water data for a case study involving
the production of fruit juice (Almató, Espuña, and Puigjaner, 1999;
Li and Chang, 2006). The water amounts are expressed in cubic
meters (m3) per each batch. There is a start time (tS ) and an end time
(tT ) for each water sink and source. Prior to water recovery, freshwater

SKj   FSKj   Cj      tjS    tjT      SRi   FSRi   Ci      tiS    tiT
      [m3]   [ppm]   [h]    [h]            [m3]   [ppm]   [h]    [h]
SK1   20     0       0.5    2.5      SR1   20     5       2.5    4.5
SK2   20     6       5.0    7.0      SR2   20     14      7.0    9.0
SK3   20     15      9.5    11.5     SR3   20     20      11.5   13.5
SK4   16     5       17.0   19.0     SR4   8      25      17.0   19.0
SK5   20     7       6.0    8.0      SR5   16     10      10.5   14.5

TABLE 5.3  Fruit Juice Production: Limiting Water Data


and wastewater amounts are calculated to be 96 m3 and 84 m3,

respectively (these values are given by the sum of the individual
water flows).
For this case study, three assumptions are made in order to
simplify the analysis:

1. The batch process is operated repeatedly on a yearly basis.

Therefore, the process behaves as if it is operated in
continuous mode. It has been shown elsewhere (Foo, Manan,
and Tan, 2005) that a repeated batch operation achieves the
same flow targets as for equivalent continuous operation.
2. An unlimited water storage tank is always available. Thus, water
can be stored for later use.
3. Water recovery is always carried out between two consecutive
batches. In other words, water sources in an earlier batch will
be sent to water storage tanks before being reused or recycled
to the water sinks of the next batch operation. A similar
assumption was made in the paper by Shoaib and colleagues

Given these assumptions, any established targeting technique for

continuous processes can be used to identify rigorous water targets
for the case study. For purposes of this example, the MRPD is used.
The MRPD is illustrated in Figure 5.4(a), and the network that
achieves the MRPD-derived water targets is shown in Figure 5.4(b).
As shown in the figure, the minimum freshwater demand (FFW) is
determined to be 35 m3; the wastewater flow (FWW), 23 m3. When
compared with the total water flow prior to water recovery, this
represents a significant reduction of 63.5 and 72.6 percent for
freshwater and wastewater, respectively.
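These targets can be cross-checked algebraically with a cascade calculation in the spirit of the cascade analysis technique cited in Section 5.3 (Manan, Foo, and Tan, 2004); the sketch below is illustrative, not the authors' published algorithm. Net (source minus sink) flows are cascaded down the concentration levels, and the freshwater target is the smallest 0-ppm feed that leaves no cumulative impurity-load deficit:

```python
# Limiting water data from Table 5.3 (batch times dropped per assumptions 1-3)
sinks   = [(20, 0), (20, 6), (20, 15), (16, 5), (20, 7)]    # (flow [m3], C [ppm])
sources = [(20, 5), (20, 14), (20, 20), (8, 25), (16, 10)]

levels = sorted({c for _, c in sinks + sources})
net = {c: 0.0 for c in levels}
for f, c in sources: net[c] += f
for f, c in sinks:   net[c] -= f

# Cascade the net flow down the levels with no freshwater, track the impurity
# load deficit, and back out the 0-ppm freshwater needed to cancel each deficit.
flow, load, fw_req = 0.0, 0.0, [0.0]
for lo, hi in zip(levels, levels[1:]):
    flow += net[lo]
    load += flow * (hi - lo)        # m3 * ppm = g (for water at ~1 t/m3)
    fw_req.append(-load / hi)       # freshwater enters at 0 ppm
fw = max(fw_req)                    # binding level is the Pinch (20 ppm here)
ww = fw + sum(f for f, _ in sources) - sum(f for f, _ in sinks)
print(fw, ww)                       # 35.0 m3 freshwater, 23.0 m3 wastewater
```

The result reproduces both the 35 m3 freshwater and the 23 m3 wastewater targets read off the MRPD.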

5.6 Water Minimization via Mathematical Optimization

5.6.1 Introduction to Mathematical Optimization
Besides Water Pinch Analysis, water minimization problems have
also been solved using mathematical optimization techniques.
Various mathematical optimization approaches have been developed
to complement Water Pinch Analysis in dealing with more complex
problems: for example, multicontaminant systems (Takama et al.,
1980; Alva-Argáez, Kokossis, and Smith, 1998; Huang et al., 1999);
complex operational constraints, which include limiting the number
of pipeline connections (Hul et al., 2007); and forbidden/compulsory
matches between water-using processes (Bagajewicz and Savelski,
2001; Kim and Smith, 2004; Li and Chang, 2006). New research results
and case studies have recently been published by authors from South
Africa (Gouws and Majozi, 2008a; Gouws and Majozi, 2008b), Asia
(Chen, Chang, and Lee, 2008; Ng et al., 2008; Chen et al., 2009), and
Europe (Tokos and Novak Pintarič, 2009).

FIGURE 5.4 Fruit juice production: (a) MRPD; (b) network design (all water amounts in m3).

5.6.2 Illustrative Example: A Brewery Plant

This section discusses the case study of a brewery plant (Tokos and
Novak Pintarič, 2009) as a means to illustrate how mathematical
optimization is used to solve water minimization problems. In the
brewery studied, the ratio (by volume) of process water consumed to
product beer sold was 6.04 : 1; this translated into 653,300 m3/y of
water consumed. In terms of the ratio set by BREF (2006), this
freshwater consumption exceeded the upper limit by 144,900 m3. In
light of these figures, the company undertook to improve its process
by retrofitting modifications to its existing water network so that the
plant's usage of freshwater would be minimized.
Production at the brewery plant involves a mixture of water-
using batch and semicontinuous processes. Water-using operations
in the packaging area are operated mainly in batch mode, with the
exception of rinsers for nonreturnable glass bottles and cans.
Wastewater streams from semicontinuous processes can be reused in
batch processes with a lower purity requirement. Hence, the basic
formulation first proposed is designed to enable the efficient
integration of semicontinuous and batch water-using processes.
The continuous wastewater streams are treated as limited
freshwater sources, and the unused wastewater is discharged. In the
next step, the model is extended by including options for installing
intermediate storage tanks for the collection of unused wastewater
streams for reuse over subsequent time intervals. This particular
design modification is motivated by differences in the operating
schedules of the filling lines. The superstructure representation for
water reuse and regeneration reuse (as defined in Section 5.2.4) is
depicted in Figure 5.5.
Opportunities for regeneration reuse were analyzed in the
brewhouse and in the cellar (see Figure 5.6), since these processes
were characterized by a high concentration of contaminants. Here
the basic model is extended by installing a local (on-site) wastewater
treatment unit that can operate in either batch or continuous mode,
thereby enabling water regeneration reuse and recycling. The
scheduling of batch wastewater treatment units is performed
simultaneously so that the treatment schedule will coincide with the
fixed schedule of the batch process. The design includes the option to
install storage tanks before and after treatment; this enables
wastewater and/or purified water to be stored until required by the
treatment schedule.
As reported in Tokos and Novak Pintarič (2009), the integration
of the water network in the packaging area made it possible for
wastewater from the can rinser to be reused in the pasteurization
processes. In this way, freshwater consumption could be reduced by
23 percent and the common costs of freshwater and wastewater
treatment by 22 percent. These improvements do not require the
addition of any storage tanks. The net present value of the proposed
water network reconstruction is positive (at a 15 percent discount
rate), and the payback period is 0.29 years (about 15 weeks).

FIGURE 5.5 Superstructure for water reuse and regeneration reuse in a
brewery plant (Tokos and Novak Pintarič, 2009).

FIGURE 5.6 Water reuse opportunities in a brewery plant.
In the brewhouse and the cellar, the continuous water treatment
unit (nanofiltration) was selected for wastewater purification in the
optimum water network. Purification allows the water from batch
material pouring to be reused in the clean-in-place (CIP) system, and
wastewater from filtration could be reused directly for pouring the
batch material. All told, freshwater consumption could be reduced

by 28 percent and the joint cost of freshwater and wastewater

treatment by 27.9 percent. Investment costs for the modifications,
which require a membrane area of 83 m2, amount to 117,205. The net
present value of the optimal water network is positive at a 15 percent
discount rate, and the payback period is 1.3 years. The price of
freshwater has a significant impact on the optimal water network.
Increasing freshwater prices result in the identification of additional
opportunities for reuse and regeneration reuse. Examples include
water reuse between the rinser for nonreturnable bottles and the
bottle washer, water reuse between the pasteurizer and the bottle
washer, and regeneration reuse between wort boiling and the CIP
system.

Complete implementation of the proposed design could allow
the brewery to reduce its current freshwater demand by about
25 percent and to reduce its costs for freshwater and for wastewater
treatment by about 27 percent. Furthermore, the brewery's ratio of
water consumed to beer sold would decrease to 4.53 : 1 (from 6.04 : 1),
which is important for cleaner production and sustainable
development within the company.
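The profitability screening quoted above (a positive net present value at a 15 percent discount rate, with a simple payback of roughly 1.3 years for the brewhouse/cellar retrofit) uses the standard formulas. A sketch with illustrative figures: the annual saving below is back-calculated from the stated payback rather than taken from the paper, and a 10-year horizon is assumed.

```python
def npv(investment, annual_saving, rate, years):
    """Net present value: upfront investment, then a level annual saving."""
    return -investment + sum(annual_saving / (1 + rate) ** t
                             for t in range(1, years + 1))

investment = 117_205                 # membrane retrofit cost quoted in the text
annual_saving = investment / 1.3     # implied by the stated 1.3-year simple payback

payback = investment / annual_saving
print(round(payback, 2))                             # 1.3
print(npv(investment, annual_saving, 0.15, 10) > 0)  # True: NPV is positive
```

Any retrofit option in the water network can be ranked the same way once its investment cost and annual utility saving are known.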

5.7 Summary
Water is used in most process industries for a wide range of
applications. Today, industrial processes and systems that use water
are subject to increasingly stringent environmental regulations
concerning the discharge of effluents. Moreover, the demand for
fresh water continues to increase.
The pace of these trends has increased the need for improved
water management and wastewater minimization. Adopting water
minimization techniques can effectively reduce overall freshwater
demand in water-using processes and also reduce the amount of
effluent generated. These reductions bring reductions also in costs
incurred to acquire freshwater and treat effluents.
The field has been developing rapidly, and every year brings a
number of new and more efficient approaches. This chapter has
reviewed and demonstrated, through selected case studies, current
methodologies that have been applied to minimize water use and
wastewater in the processing industry.
Further Applications of
Process Integration

Process Integration, also called Pinch Technology, was initially
developed for energy and specifically for Heat Integration.
Details of the origin and development of Heat Integration were
given in Chapters 2 and 4. Its further development resulted in a
methodology for integrating mass transfer and water integration in
particular; this technology was described in Chapter 5. This chapter
focuses on additional applications and especially
recent developments that have expanded the generic Process
Integration ideas in various other directions. Given the rapid
development of this methodology, it is not possible to cover all recent
achievements. Nonetheless, this chapter explores several interesting
directions that have considerable potential for future development.

6.1 Design and Management of Hydrogen Networks

The evolution of Pinch Technology has allowed mass integration to
be extended to hydrogen management systems. In one of the earliest
works in this field, Alves (1999) proposed a Pinch approach to
targeting the minimum hydrogen utility. This method was based on
an analogy with process heat recovery. Just as the distribution of
energy resources in a plant can be analyzed and designed using
Pinch Technology, so can the distribution of hydrogen resources be
handled in refineries, which typically have several potential sources
(each capable of producing a different amount of hydrogen) and
several hydrogen sinks (with varying requirements). However, the
designer has more flexibility in determining the hydrogen loads of
individual units by varying the throughput of units and operating
many processes over a range of conditions. As a result, there is
considerable potential for optimizing refinery performance.
In a typical hydrogen-consuming process, a liquid hydrocarbon feed stream is mixed with hydrogen-rich
gas, heated, and then fed to a reactor. Part of the hydrogen is
consumed by reaction with the feed. Light hydrocarbon compounds
(methane, ethane, and propane), hydrogen sulfide (H2S), and


ammonia are usually formed as part of the reaction products. The

effluent from the reactor is then cooled down and sent to a high-
pressure flash separator. The gas released in the separator is often
treated in an amine scrubber that removes H2S. Part of the gas is
vented from the process through a high-pressure purge to prevent
any buildup of hydrocarbons in the recycle. The remaining hydrogen-
rich gas is recompressed and then returned to the reactor with a
fresh hydrogen makeup stream. The liquid stream removed from the
bottom of the high-pressure separator contains some dissolved
hydrogen, light hydrocarbon gases, and H2S; this hydrogen is lost from
the hydrogen system. The liquid stream is sent to a low-pressure
separator, from which off-gases are taken and typically sent to a flare
or to the fuel gas system.
A two-dimensional plot of total gas flow rate versus purity
represents the mass balance of each sink and source in the hydrogen
network. A plot that combines the profiles for hydrogen demand
(dashed line) and hydrogen supply (solid line) yields the hydrogen
Composite Curve (CC) (Figure 6.1). The sink and source profiles start
at zero flow rate and proceed to higher flow rates with decreasing
purity. The circled plus signs in the figure indicate a surplus,
where sources provide more hydrogen than is required by sinks.
Where the sources do not provide enough hydrogen to the sinks, a
circled minus sign appears on CCs to indicate a deficit of supply.
The area beneath the entire Sink Curve is the flow rate of pure
hydrogen that the system should provide to all the sinks. The area
beneath the Source Curve is the total amount of pure hydrogen
available from the sources.
For the hydrogen network to be feasible, there should be no
hydrogen deficit anywhere in the network; otherwise, the sources
will not be able to provide enough hydrogen to the sinks. The
hydrogen utility can be reduced by horizontally moving the curve
toward the vertical (purity) axis until the vertical segment between
the purities of the sink and the source touches the vertical axis,

thereby forming the Hydrogen Pinch. Separating the hydrogen
source and sink parts then determines the target value for the
hydrogen utility minimum flow rate.

FIGURE 6.1 Composite Curves and hydrogen surplus diagram (after Alves, 1999).
The procedure for calculating the supply target requires varying
the flow rate of gas supplied to the system until a Hydrogen Pinch
is found. The sources from hydrogen-consuming processes or
from processes generating hydrogen as a secondary product
(dehydrogenation plants) have flow rates that are determined by
normal process operation; these rates are assumed to be fixed for the
purposes of designing a hydrogen network. However, process
hydrogen sources with variable flow rates can be regarded as
imports from external suppliers and from processes (i.e., steam
reformers or partial oxidation units) that generate hydrogen as a
main product. Those sources are hydrogen utilities.
One approach to minimizing hydrogen utility consumption is to
increase the purity of one or more sources. A hydrogen purification
system introduces an additional sink (feedstock for purification) and
two sources (purified stream and residue stream), resulting in new
targets. By employing Hydrogen Pinch Analysis, an engineer can
make the best use of hydrogen resources in order to meet new
demands and improve profitability.

6.2 Oxygen Pinch Analysis

Another extension of Process Integration is Oxygen Pinch Analysis
(Zhelev and Ntlhakana, 1999). The idea is to analyze the problem so
that targets are derived prior to designing a system for minimizing
oxygen consumption of the micro-organisms used for waste
degradation. The next step is to design a flowsheet that achieves the
targets. In most cases, oxygen is supplied through agitation.
Aeration requires energy, so an analysis based on the Oxygen Pinch
eventually leads back to the original application of energy
conservation. Using the chemical oxygen demand or COD (Monod,
1949) as the baseline range for organic contaminants allows one to
set quantitative targets (for oxygen solubility, residence time, and
oxidation energy load) as well as additional qualitative targets
namely, the growth rate that is a direct indicator of the age and
health of micro-organisms (Zhelev and Bhaw, 2000). Analyzing the
information in Figure 6.2 and then matching the oxygen supply line
to the CC (so they touch at the Pinch point) yields targeting information
on the growth rate of micro-organisms, oxygen solubility, residence
time, and oxidation energy load.
In the Oxygen Pinch approach, the method recurs to energy but
also incorporates extra information concerning environmental
issues. An important contribution of this method is its ability to
target, in parallel with the concentration of oxygen and the total
energy required, a quality characteristic: the micro-organisms'
health as assessed by their rate of reproduction.

FIGURE 6.2 Oxygen Pinch method (after Zhelev, 2007). The dissolved-oxygen
supply line (slope = μ/S, where μ is the specific growth rate and S the
saturation concentration) is matched against the process Composite Curve at
the Pinch; the slope relates to the growth rate, oxygen solubility, residence
time, and oxidation energy load.

6.3 Combined Analyses, I: Energy-Water, Oxygen-Water, and Pinch-Emergy
6.3.1 Simultaneous Minimization of Energy and Water Use
Water savings can be achieved through the strategic implementation
of water reuse between water-using operations. Further minimization
of freshwater usage is possible by regenerating water, which is then
recycled. The design methodology developed by Smith and colleagues
(Wang and Smith, 1994; Kuo and Smith, 1997; Gunaratnam et al.,
2005) has proven to be effective in process industries because it
provides a systematic means of establishing realistic minimum water
requirements for a site as well as conceptual design guidelines for
de-bottlenecking water systems.
In some process industry sectors (e.g., the food industry), water
use is closely linked to energy systems. The diagram shown in
Figure 6.3 explains the basic concept and the importance of
considering water and energy systems concurrently. In Figure 6.3(a),
freshwater is supplied to two water-using operations and then is
discharged in a parallel arrangement. The necessary heating or
cooling (usually through a heat exchanger) is provided according to
process requirements. This conventional practice can be significantly
improved by implementing a design for simultaneous water reuse
and heat recovery, as shown in Figure 6.3(b). Water reuse between
operations reduces water consumption, and the proposed heat
recovery between streams will reduce the need for utilities (e.g.,
steam, cooling water).
When the problem is considered jointly, finding the best energy
recovery options and water reuse schemes is an extremely complex
task because there are strong design interactions between systems
for water and energy.

FIGURE 6.3 Simultaneous energy and water minimization: (a) conventional,
non-integrated practice; (b) improved design with water reuse and heat
recovery (after Savulescu, Kim, and Smith, 2005a).

Both the Water Pinch and the Energy Pinch
concepts have been accommodated in separate design frameworks.
However, the methodological procedure is changed when the
interactions between water reuse and energy recovery must be
considered; see Savulescu, Kim, and Smith (2005a, 2005b). Further
interesting applications have been published (Leewongtanawit and
Kim, 2009; Manan, Tea, and Alwi, 2009).
The energy-water methodology of Savulescu and Kim (2008)
follows a two-step approach: targeting and design. During the
targeting phase, theoretical minimum requirements for freshwater
and thermal utilities (hot and cold) are obtained via graphical
manipulation of streams data (i.e., water flow rate, contaminant
levels, and temperature). The purpose of the design phase is to create
a water and heat recovery network that can achieve the established
target. A useful design tool is the two-dimensional grid diagram
(Figure 6.4), which exploits the network arrangement of water
streams subject to energy recovery constraints (Savulescu, 1999;
Leewongtanawit, 2005). An industrial case study conducted recently
(Leewongtanawit and Kim, 2008) showed an 18 percent reduction in
annualized cost resulting from the integrated approach (when
compared with operations when only water minimization is
performed).

FIGURE 6.4 Two-dimensional diagram of a water quality network, with
freshwater, process water, and wastewater mains and the process water
users placed along a water-quality axis (after Savulescu and Kim, 2008).

This case study clearly demonstrated the necessity of a
holistic approach to designing water and energy systems, since there
are significant benefits to simultaneous integration.

6.3.2 Oxygen-Water Pinch Analysis

The link between Water Pinch Analysis and Oxygen Pinch Analysis
is the use of COD as the concentration variable in Water Pinch
Analysis. Here two configurations of wastewater treatment are
investigated, centralized and distributed. The analyzed system
includes the centralized biological treatment unit and several
satellite factories (sites) in the surrounding area that send some
portion of their wastes to the centralized treatment unit. The
variables of interest when analyzing wastewater treatment are the
quantity of wastewater treated and the quantity of oxygen required.
The quantity of treated wastewater gives an indication of how
much freshwater is used and thus of the water management level.
The quantity of required oxygen gives an indication not only of the
wastewater quality but also of the energy required by the wastewater
treatment process. Both the quantity and the quality of the
wastewater treated are related to the cost of the treatment process.
The configuration of wastewater treatment serves as an aid in
establishing whether or not wastewater treatment costs that are
based on quality and quantity differ significantly from costs based
on quantity only.
The effluent wastewater conditions obtained via the Water Pinch
method are used to plot concentration versus COD flow rate on one
set of axes. The oxygen required is then calculated using Zhelev's
(1998) method of the limiting oxygen supply line, which is constructed
as shown in Figure 6.5. First, the CC must be constructed. This is
done by plotting all the site streams on the same set of axes. These
streams are then summed within each concentration interval, and the
resulting curve is the limiting CC. The limiting oxygen supply line is
then constructed as a line drawn between the origin and the Pinch
Point. The inverse of the gradient of the limiting oxygen supply line
is the flow rate of oxygen required.

FIGURE 6.5 Construction of the oxygen limiting supply line (after Zhelev and
Bhaw, 2000): concentration [mg COD/l] versus mass flow rate [kg COD/d]
curves for Sites 1-3 in panels (a)-(c), and the resulting Composite Curve with
the minimum oxygen supply line in panel (d).

6.3.3 Emergy-Pinch Analysis

The concept of emergy (embodied energy) was first developed by
Odum in the late 1980s (Odum, 1996). Along with other definitions
referring to life cycle, it may be defined in terms of solar transformity
(Brown and Ulgiati, 2004). Solar emergy is the solar energy directly
or indirectly necessary to obtain a flux of energy in a process. The
unit of emergy is the solar emergy Joule (seJ), an extensive quantity,
which denotes the available energy of a certain type (heat, electrical,
etc.) that undergoes transformations. Transformity, an intensive
quantity, is defined as the emergy input per unit of exergy (available
energy) output: seJ/J.
The first step in the practical process of emergy analysis is
collecting information for the calculation of solar transformities
ST [seJ/unit] of the chain of activities involved in making a resource
available to the process. This is the most difficult part of the
methodology because transformity databasesalthough rapidly
growing and continuously updated by researchersare not
comprehensive. The second step is the calculation of solar emergy
SE [seJ/y] followed by calculation of the solar emergy investment
SEI [seJ/g]:

SE [seJ/y] = ST [seJ/unit] × Amount [units/y] (6.1)

SEI [seJ/g] = SE [seJ/y]/Amount [g/y] (6.2)
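Equations (6.1) and (6.2) chain a multiplication and a division; a minimal numeric illustration of the unit bookkeeping (all values below are hypothetical, chosen only for the arithmetic, not taken from a transformity database):

```python
ST = 6.6e4               # solar transformity [seJ/J]; hypothetical value
amount_energy = 5.0e12   # resource use [J/y]; hypothetical
amount_product = 2.0e9   # product output [g/y]; hypothetical

SE = ST * amount_energy       # Eq. (6.1): 3.3e17 seJ/y
SEI = SE / amount_product     # Eq. (6.2): 1.65e8 seJ/g
print(SE, SEI)
```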

The combined Pinch-emergy analysis is used in the preliminary,
conceptual design stage. The Emergy Composite Curve (ECC) is
analogous to the Pinch CCs. In the ECC, solar transformity is plotted
against solar emergy; the CC is then matched up with the total
emergy investment (TEI) supply line, which is restricted by the ECC
at the Pinch point. Analyses based on the ECC benefit from using
both emergy and Pinch features.
Each stream in the ECC carries three pieces of information:

1. Transformity: the past emergy investment, or history, of the stream
2. The market potential of the stream in terms of usability: the
heat (temperature) potential of a thermal stream or the
concentration limits of a water stream
3. The stream's future in terms of further usability (regenerative
potential)
In the case of Heat Pinch Analysis, the hot and cold streams will have
different signs for this component of final emergy investment. The
sign on the required emergy investment to heat the cold streams will
be the opposite of the one for available emergy. Hence, at this level of
analysis, it is possible to relax certain constraints (e.g., ΔTmin) that can
lead to minimization of the usage of expensive hot utilities.

FIGURE 6.6 Composite Curve in ST-SE coordinates, with the total emergy
investment supply line touching it at the Pinch (after Zhelev and Ridolfi).
With Emergy-Pinch analysis, as with classical Pinch Analysis,
the processes overlap on the vertical axis (temperature range,
concentration range, or, as here, the transformity range). The emergy
loads (investments) for the different processes are characterized by
relative values, which allows their graphical representation to be
freely shifted left and right in the ST/SE plot; see Figure 6.6.
The TEI is targeted by drawing the line touching the CC and
then calculating its slope. The greater the slope of the TEI line, the
smaller the rate of TEI. This minimizes the supply of combined
resources and their corresponding costs while lifting the emergy
supply line to its maximum. This limit is represented by the point
where the supply line and the CC meetthat is, the Pinch point. The
slope and the Pinch point of the emergy supply line can be used to
help compare alternative design or operational options. Transformity
is viewed as a quality parameter; when plotted against emergy
investment, it allows targeting of TEI and determination of the
maximum total transformity needed to run a given process.

6.4 Combined Analysis, II: Budget-Income-Time, Materials Reuse-Recycling, Supply Chains, and CO2 Emissions Targeting

6.4.1 Budget-Income-Time Pinch Analysis
There are substantial benefits to be derived from applying the process
design concept to financial management. The timing, extent, and
allocation of Process Integration for minimizing the financial risk is
the primary goal of such investigations, which also account for
possible uncertainties in model parameters. The concept of combined
resources management can lead to more realistic design solutions
while helping decision makers account for financial investments.
Because time runs in only one direction, the direction of individual
vectors and both CCs is to the right; this is illustrated in Figure 6.7
and Figure 6.8.
Zhelev (2005a, 2005b, 2005c) reports on two aspects of this broad
area: using Pinch principles to choose alternative designs, and
amalgamating financial considerations with the management of
energy and water. Several different stages can be identified in the
processes of investment, design, commissioning, and operation. By
applying traditional targeting procedures to the management of
financial resources, the following data can be obtained prior to
design: maximum investment level, minimum payback period, and
maximum benefit.
As shown in the upper part of Figure 6.9, this targeting is
analogous to other Pinch applications, such as the Water Pinch. First
a CC is constructed, after which a capital (investment) supply line
is drawn against the CC. The steeper the investment supply line,
the shorter the payback period. The steepest slope is constrained by
the CC, which meets the supply line at the Pinch point. Lifting the
FIGURE 6.7 Project budget and income versus time (after Zhelev, 2007).

FIGURE 6.8 Composite Curves for project budget and income (after Zhelev).
capital supply line up to the maximum allows one to target both the
investment level and the expected annual benefit (Figure 6.9).
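The targeting logic can be sketched numerically: build the two cumulative curves and find the first period at which cumulative income catches up with cumulative spending. All cash-flow figures below are hypothetical.

```python
from itertools import accumulate

spend = [50, 30, 20, 0, 0, 0, 0, 0]       # budget outlay per period [$k]
income = [0, 0, 10, 25, 25, 25, 25, 25]   # income per period [$k]

cum_spend = list(accumulate(spend))
cum_income = list(accumulate(income))

# Payback: first period where cumulative income catches up with spending
payback = next(t + 1 for t, (s, i) in enumerate(zip(cum_spend, cum_income))
               if i >= s)
print(payback)  # period 7
```

A steeper income curve (a larger slope of the capital supply line in Figure 6.9) would bring the crossing, and hence the payback period, earlier; the curve constraining how steep it can be is the Pinch.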

6.4.2 Materials Reuse-Recycle and Property Pinch Analysis

The composition of a substance is only one of several chemical and
physical properties that are essential in a chemical process. Other
common properties include acidity and alkalinity (as measured by
pH), density, viscosity, reflectivity, turbidity, color, and solubility.
The process network synthesis associated with these chemical
properties cannot be addressed by conventional mass integration
techniques, so another generic approach has been developed to deal
with this problem (Shelley and El-Halwagi, 2000; El-Halwagi et al.,
2004). For systems that are characterized by one key property,
Kazantzi and El-Halwagi (2005) introduced a Pinch-based graphical
targeting technique that establishes rigorous targets for minimum
usage of fresh materials, maximum recycling, and minimum waste
discharge.
Foo and colleagues (2006) focused on developing an algebraic
technique to solve the problem of identifying rigorous targets for
property-based recycling and reuse of materials. A key element of
these techniques is the concept of material surplus, which generalizes
the analogous concept developed for tasks of synthesizing hydrogen
and water networks (Alves and Towler, 2002; Hallale, 2002). Foo et al.
(2006) developed an algebraic approach called property cascade analysis
(PCA) to identify various performance targets for a maximum
resource recovery (MRR) network. This paper also introduced
network design techniques for the synthesis of an MRR network as

FIGURE 6.9 Targeting and project management (after Zhelev, 2007).

well as a systematic procedure for identifying optimum process
modification strategies. The problem of designing a property-based
material reuse network is formulated as follows (see Figure 6.10):
A process is described as having a number NSK of process sinks
(units) and a number NSR of process sources (e.g., process and/or
waste streams) that can be considered for possible reuse and/or to
replace the use of fresh material. The aim is to design a network of
interconnections among the property sinks and sources such that the
overall flow rates of fresh resource and waste discharge are
minimized without depriving the sinks of adequate quality resources.
Each sink j requires a feed with flow rate Fj as well as an inlet property
pj^in that satisfies the following constraints:

pj^min ≤ pj^in ≤ pj^max    for j = 1, 2, . . . , NSK    (6.3)

where pj^min and pj^max are the specified lower and upper bounds on
admissible properties of streams to unit j. Likewise, each source i has
a given flow rate Fi and a given property pi. Also available for service
is a fresh (external) resource, with property pF, that can be purchased
to supplement the use of process sources in sinks. Each process
source may be intercepted via design and/or operating changes in
order to modify the flow rate and property of what each sink accepts
and discharges.
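For the simplest instance, one sink fed by one process source plus fresh material, the minimum fresh target follows from a lever-arm mixing rule. The sketch below assumes the property (or, more generally, its operator) mixes linearly with flow rate; the numbers are invented, not from the cited studies.

```python
def min_fresh(F_sink, p_max, p_source, p_fresh):
    """Smallest fresh flow so the blended inlet property meets p_max."""
    if p_source <= p_max:
        return 0.0  # the process source alone already satisfies the sink
    # Lever-arm rule: x*p_source + (F - x)*p_fresh <= p_max*F, x = source flow
    return F_sink * (p_source - p_max) / (p_source - p_fresh)

fresh = min_fresh(F_sink=10.0, p_max=0.5, p_source=1.0, p_fresh=0.0)
print(fresh)  # 5.0: half the sink demand must come from fresh material
```

With many sources and sinks the same balances become a small linear program, which is essentially what the graphical and cascade (PCA) targeting tools solve by construction.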
The Pinch diagram shown in Figure 6.11 is a convenient tool,
developed by Kazantzi and El-Halwagi (2005), that avoids the
drawbacks of traditional iterative procedures (Alves and Towler,
2002; Hallale, 2002): low visualization insight for targeting and

FIGURE 6.10 Graphical formulation of designing a property-based material
reuse network (after Foo et al., 2006).

FIGURE 6.11 Property-based material reuse Pinch diagram that combines
fresh usages to determine minimum fresh consumption (after Kazantzi and
El-Halwagi, 2005).

network design. Another graphical targeting tool that can be used to
determine minimum resource targets is the Material Surplus
Composite Curve (MSCC). The MSCC was developed by Saw et al.
(2009) based on hydrogen and water surplus diagrams (Alves and
Towler, 2002; Hallale, 2002), but it eliminates the latter's iterative
steps; see Figure 6.12.
The drawbacks of the graphical approach can be resolved by
using an equivalent numerical tool, the PCA. This technique is
discussed in detail by Foo et al. (2006) for a case study on solvent
recycling in metal degreasing (Kazantzi and El-Halwagi, 2005).
Applying the PCA makes it possible to use the targeted fresh solvent
flow rate to construct a balanced material sink and source composite
diagram. Foo et al. (2006) also suggested a technique for using the
Property Pinch Analysis (graphical or algebraic) to synthesize a
property network that achieves previously established resource
targets. In addition, their paper discusses applicability of the PCA
procedure to process modifications.
FIGURE 6.12 Construction of (a) interval flow rate diagram and (b) the MSCC
(after Saw et al., 2009).

6.4.3 Pinch Analysis of Supply Chains

The power of Pinch Analysis, which combines quality (e.g.,
temperature, concentration) with quantity (e.g., heat duty, mass flow),
has been successfully applied to analyze supply chains. In this case,
(reduced) time is the quality and the amount of material (e.g.,
number of units, mass) is the quantity.
The objective of the aggregate planning is to satisfy demand in a
way that maximizes profit. Demand must be anticipated and
forecasted, and production must be planned in advance for that
demand. Aggregate planning is particularly beneficial to plants
whose products encounter significant fluctuations in demand. Such
planning determines the total production level in a plant for a given
time period, rather than the quantity of each stockkeeping unit.
Singhvi and Shenoy (2002) formulated the aggregate planning
problem as follows. Given the demand forecast Dt for each period t in
a planning horizon that extends over T time periods, maximize the
profit over the specified time horizon (t = 1, . . . , T) by determining
the optimum levels of the following decision variables:

Production rate Pt = number of units produced in-house in time period t
Overtime Ot = amount of overtime worked in time period t
Subcontracting Ct = number of units subcontracted (outsourced) in time period t
Workforce Wt = number of workers needed for production in time period t
Machine capacity Mt = number of machines needed for production in time period t
Inventory It = inventory at the end of time period t
Stock out St = number of units stocked out (backlogged) at the end of time period t

Figure 6.13 illustrates how material is accumulated at the end of a
time period t. The material balance can be expressed mathematically as

It−1 − St−1 + Pt + Ct = Dt + It − St    (6.4)

Previous inventory + Total production = Demand + Current inventory    (6.5)

These equations are reflected in the Supply Chain CCs used for Pinch
analysis, as shown in Figure 6.14.
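The idea behind the Supply Chain Composite Curves can be sketched numerically: target the smallest constant production rate whose cumulative line never falls below the cumulative demand curve, with the Pinch at the period where the two touch. A simplified form of Eq. (6.4), without subcontracting or stock-outs, then gives the inventory profile. The demand data below are invented.

```python
from itertools import accumulate

demand = [2000, 4000, 5000, 3000, 1000, 3000]  # units per month
cum_demand = list(accumulate(demand))

# Minimum feasible constant rate: max over t of (cumulative demand / t)
rate = max(cd / (t + 1) for t, cd in enumerate(cum_demand))
pinch_month = max(range(len(cum_demand)),
                  key=lambda t: cum_demand[t] / (t + 1)) + 1

# Simplified Eq. (6.4) with no subcontracting or stock-outs:
# I[t] = I[t-1] + P[t] - D[t]
inv, level = [], 0.0
for d in demand:
    level += rate - d
    inv.append(level)
print(rate, pinch_month)
```

At the Pinch month the inventory drops to zero: cumulative production exactly meets cumulative demand there, which is the supply-chain analogue of the touching Composite Curves in Figure 6.14.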
Singhvi, Madhavan, and Shenoy (2004) extended the initial
methodology to the case of planning for multiple product scenarios.
Singhvi (2002) proposed the following algorithm for minimizing
inventory cost:

1. List all the products in order of increasing production rates
and produce the products in that order.
2. For products that have the same production rate, first produce
the one whose inventory holding cost is lower.
3. For products that have the same production rate and the
same inventory holding cost, first produce the one for which
demand is lower.
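These three rules amount to an ascending lexicographic sort, which can be expressed directly. The product data below are invented.

```python
products = [
    {"name": "A", "rate": 500, "holding_cost": 2.0, "demand": 900},
    {"name": "B", "rate": 300, "holding_cost": 1.5, "demand": 700},
    {"name": "C", "rate": 500, "holding_cost": 1.0, "demand": 800},
    {"name": "D", "rate": 500, "holding_cost": 2.0, "demand": 600},
]
# Rule 1: rate; rule 2 breaks ties by holding cost; rule 3 by demand
order = sorted(products,
               key=lambda p: (p["rate"], p["holding_cost"], p["demand"]))
print([p["name"] for p in order])  # ['B', 'C', 'D', 'A']
```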

FIGURE 6.13 Material balance in aggregate planning (after Singhvi and Shenoy).

FIGURE 6.14 Supply chain Composite Curves (after Singhvi, Madhavan, and
Shenoy, 2004).

6.4.4 Using the Pinch to Target CO2 Emissions

Emission targeting via Pinch analysis was investigated in the 1990s
by Linnhoff and Dhole (1993), Dhole and Linnhoff (1993b), and
Klemeš et al. (1997). The applications, which employ the Total Site
concept, address optimization within industrial facilities, not within
extended sites such as regional or national energy sectors. However, a later
work (Perry, Klemeš, and Bulatov, 2008) included the regional
dimension in a Total Site Analysis of integrating renewable sources
of energy.
Tan and Foo (2007) presented a further application of Pinch
Analysis to energy-sector planning under carbon emission
constraints: the Carbon Emission Pinch Analysis (CEPA). The main
problems addressed by the proposed methodology are (1) identifying
the minimum quantity of zero-emission energy resources needed to
meet the specified energy requirements and emission limits of
different sectors or regions in a system and (2) designing an energy
allocation scheme that meets the specified emission limits while
minimizing use of the energy resources. The sequence of the
proposed Pinch Analysis is as follows (Tan and Foo, 2007):

1. Tabulate the energy source and demand data. The resulting
table must contain the quantity of the energy sources (Si) and
demands (Dj) and their respective emission factors (Cout,i
and Cin,j).
2. Arrange the energy sources and demands in order of
increasing emission factors.
3. Calculate the emission levels (Si × Cout,i) and limits
(Dj × Cin,j), respectively, of the energy sources and demands.
4. Plot the Demand Composite Curve with the energy quantity
(Dj) on the horizontal axis and the emissions limit (Dj × Cin,j)
on the vertical axis. Hence the slope of the CC at any given
point corresponds to the emissions factor (Cin,j).
5. Plot the Source Composite Curve in the same manner as the
Demand Composite Curve, but use instead the quantities
Si and Si × Cout,i. In this curve, the slope at any given point
corresponds to the emissions factor Cout,i.
6. Superimpose the two CCs on the same graph.
7. Shift the Source CC horizontally to the right so that it does
not cross the Demand CC. In its final position, the former should
lie diagonally below and to the right of the latter. The two
curves must touch each other tangentially without crossing;
their point of contact is the Pinch point.
8. Note the distance from the origin of the graph to the leftmost
end of the Source Composite Curve. This distance gives the
minimum amount of zero-carbon energy needed to meet the
system's specified emissions limits.

Finding the Pinch point yields valuable insights to decision
makers; in particular, it identifies the system bottleneck.
The golden rule of Pinch Analysis can then be applied to
the problem: in order to meet all the specified emission limits
for the system, the zero-carbon energy resource is supplied
only to those energy demands below the Pinch point. Any
allocation of this resource above the Pinch point will either
lead to an infeasible solution or require more zero-carbon
energy than the minimum quantity established by Pinch
Analysis.

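A minimal numerical sketch of this targeting procedure is given below. It computes the minimum zero-carbon supply by requiring, for every prefix of demands sorted by rising emission-factor limit, that the prefix can be served within its cumulative emission limit by the zero-carbon resource plus the cleanest sources. The data are illustrative and not taken from Tan and Foo (2007).

```python
def min_zero_carbon(demands, sources):
    """demands: (quantity, emission limit factor);
    sources: (quantity, emission factor). Returns minimum zero-carbon energy."""
    demands = sorted(demands, key=lambda d: d[1])
    sources = sorted(sources, key=lambda s: s[1])

    def energy_within(limit):
        # Max energy deliverable from carbon sources, cleanest first,
        # without exceeding the given cumulative emission limit.
        energy, budget = 0.0, limit
        for q, c in sources:
            take = q if c == 0 else min(q, budget / c)
            energy += take
            budget -= take * c
            if take < q:
                break
        return energy

    R, cum_e, cum_limit = 0.0, 0.0, 0.0
    for q, c in demands:
        cum_e += q
        cum_limit += q * c
        R = max(R, cum_e - energy_within(cum_limit))
    return R

R = min_zero_carbon(demands=[(50, 20), (50, 50)],
                    sources=[(60, 40), (100, 100)])
print(R)  # 29.0 energy units of zero-carbon supply
```

The largest shift over all demand prefixes plays the role of the horizontal shift of the Source Composite Curve in the graphical procedure, and the prefix that sets it marks the Pinch.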
6.4.5 Regional Resource Management

Regional Resource Management Composite Curve
A novel approach to regional resource management has been
developed that tackles simultaneously the two most important issues
with biomass supply chains: transportation and land use. The
biomass supply chain problem is complex because of the distributed
nature of biomass resources and their low energy density,
which necessitates large transportation capacities. Growing biomass
requires considerable land areas, often leading to competition with
food production. To address these problems, a two-level approach to
biomass supply chain synthesis, based on a novel Regional Energy
Clustering (REC) approach, was proposed by Lam et al. (2009). The
first level of regional resources management consists of forming
clusters of zones for biomass management. The second level involves
building the regional energy transfer cascade (RETC) and the
Regional Resources Management Composite Curve (RRMCC).
The clusters of zones formed in the first level of the methodology
are designed to minimize the environmental impact of biomass
energy exchanges among the zones within the overall supply chain
network (Lam et al., 2009). The carbon footprint (CFP) is used as a
criterion for comparing various magnitudes of this impact; cost is
another obvious criterion. The main goal of clustering is to partition
the area of the considered region into smaller subareas (the clusters)
in order to form more coherent entities. Within each cluster, stronger
and more efficient interactions and biofuel exchanges result in
minimizing the environmental impact of the whole region. Figure 6.15
presents an REC algorithm, whose steps are discussed next:

Step 1. Specify energy sources and demands based on available
system data.
Step 2. Optimize biomass exchange flows between the zones. In this
step, linear programming is used to formulate an objective
function that minimizes total CFP within the overall region.

FIGURE 6.15 Flowchart of the algorithm for Regional Energy Clustering.
Step 3. Display the optimal biomass exchange flows. A visual mapping
of interzone biomass exchanges provides critical feedback for
the decision maker. The zone centroids are plotted in two-
dimensional Cartesian coordinates.
Step 4. Form the clusters. Mixed integer linear programming (MILP)
has proven to be a convenient tool for this task.

Regional Energy Surplus-Deficit Curves

The formed clusters should be presented visually to help document
and explain the proposed solution. For this purpose, the use of
Regional Energy Surplus-Deficit Curves (RESDCs) (see Figure 6.16
for an example) is suggested.

Regional Resources Management Composite Curve

The RRMCC can be developed based on results obtained from the
REC algorithm. In this graphical method, the main idea of Grand
Composite Curve has been translated to the problem of regional
resource management. Figure 6.17 illustrates two ways of presenting
the RRMCC, where panels (a) and (b) employ different directions of
The RRMCC combines information about energy surpluses and
deficits as well as land use, allowing one to assess possible trade-offs.
The quantity of the energy demand and supply (cumulative energy
balance [PJ/y]) is shown on the X axis, and the cumulative zone area
[km2] is shown on the Y axis. The RRMCC reveals several options for
tackling the problem of resources management in a region in terms
of managing land use and energy surpluses and deficits. A
demonstration case study on constructing and using the RRMCC is
presented in Chapter 11.
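Assembling the RRMCC coordinates amounts to cumulating zone areas and net energy balances, as the sketch below shows. The zone data are invented.

```python
from itertools import accumulate

# (area [km2], energy supply [PJ/y], energy demand [PJ/y]) per zone,
# in the order the zones are plotted
zones = [(15, 4.0, 1.0), (20, 1.0, 3.0), (25, 3.5, 2.0), (20, 0.5, 2.5)]

balances = [supply - demand for _, supply, demand in zones]  # + surplus, - deficit
cum_area = list(accumulate(area for area, _, _ in zones))
cum_balance = list(accumulate(balances))

print(cum_area, cum_balance)
# The final cumulative balance (0.5 PJ/y here) shows the region as a
# whole has a small energy surplus.
```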

FIGURE 6.16 Regional Energy Surplus-Deficit Curves (after Lam et al., 2009).
FIGURE 6.17 Regional Resources Management Composite Curve.

6.5 Heat-Integrated Power Systems: Decarbonization and Low-Temperature Energy

6.5.1 Decarbonization
Conventional utility systems focus on how to produce and utilize the
steam in a steam distribution network (Varbanov, Doyle, and Smith,
2004). Unlike conventional steam-based utility systems, however,
power-dominated energy systems exhibit different characteristics
because the provision of shaft (driver) power, rather than steam, is of
paramount importance. For such power systems (e.g., in natural gas
liquefaction), a key issue is selection of the most appropriate drivers
to satisfy mechanical shaft demands. The decision factors in driver
selection include the optimal number, type, and size of the drivers,
helper motors or generators, and power plants, subject to a set of
mechanical and electricity demands and relevant economic scenarios.
Zheng, Kim, and Smith (2008a) developed a holistic approach to
account for design interactions in power systems, given that driver
selection entails unique implications for the overall design; these
factors include overall cost, fuel consumption, performance, plant
availability, carbon emissions, and so forth.
Synthesis complexity increases significantly when steam systems
are considered together with power-dominated systems. This case
arises when a process requires a large amount of heat (steam) or
when a steam turbine is preferable (as a direct driver) to a gas turbine
or electric motor. In such cases, additional information is required
about the on-site power supply and the way drivers interact with
generating facilities. Implementation of a CO2 (carbon dioxide)
capture process in the plant requires extra compression duty for the
CO2 separation as well as a considerable amount of steam for solvent
regeneration.
The synthesis of power-dominated energy systems is envisaged
with the aid of superstructure-based mathematical optimization.
The proposed superstructure (Figure 6.18) includes all possible
design options, and the optimization involves (1) the systematic
screening and evaluation of possible flowsheets and (2) assessing
economic trade-offs between capital costs and operating costs. As
usual, the optimization objective is to minimize the overall cost (i.e.,
capital and operating costs) while accounting for the model's
constraints (Zheng, Kim, and Smith, 2008b). This task is typically
formulated as an MILP problem in which piecewise linearization is
used to capture the capital costs.

6.5.2 Low-Temperature Energy

The levels of power required for compression constitute a major
component of energy consumption when cryogenic cooling is applied
to process streams. Thus, the efficient use of such cold energy
contributes to the cost-effectiveness of low-temperature processes.
Heat Integrationin particular, one of its most powerful tools, the
Grand Composite Curvehas a long history of application to saving
energy in cryogenic plants (Linnhoff et al., 1982; Linnhoff and Dhole,
1992; Smith, 2005); see Chapter 4 for details.
Pure refrigerant systems cannot avoid some degree of
thermodynamic inefficiency: their heat exchanger(s) exhibit large
temperature differences, which push the system away from
thermodynamic reversibility. However, if mixed refrigerants are
used then the refrigeration cycle's structure is simplified,
considerably reducing the duty requirements for compression.
The advantage of mixed refrigerants is that they
FIGURE 6.18 Superstructure for energy system used in a low-temperature
process (after Zheng, Kim, and Smith, 2008a).
facilitate a closer match between hot process and cold refrigeration
streams. Del Nogal et al. (2008) described a methodology for mixed
refrigerant system design based on a superstructure arrangement.
The problem is highly nonlinear and features many local optima, which
behave as unwanted traps for traditional deterministic optimization
methods. This paper therefore suggests that a genetic algorithm (GA)
be used to solve the optimization problem; see Figure 6.19.
The interactions between the GA and the simulator result in a set
of the best solutions found over a discretized solution space. The
preliminary solutions so obtained then serve as starting points for
standard nonlinear programming (NLP) optimization techniques to
fine-tune the results and, finally, report the optimal solution. One of
the important aspects of this model is that it ensures the feasibility of
heat recovery in every exchanger. The design produced during
optimization is then simulated, and cold and hot Composite Curves
are produced. Finally, the CCs are rigorously checked against the
stipulated ΔTmin.
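The hybrid strategy can be illustrated on a toy problem: a small genetic algorithm explores a multimodal objective, and a crude local search stands in for the NLP fine-tuning stage. The objective function below is an invented stand-in, not the refrigeration-system model of Del Nogal et al. (2008).

```python
import random
from math import sin

random.seed(7)

def objective(x):
    # Toy multimodal objective on [0, 10] with several local minima
    return (x - 6.0) ** 2 + 8.0 * sin(3.0 * x)

def genetic_search(lo, hi, pop_size=30, generations=60):
    pop = [random.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=objective)
        parents = pop[: pop_size // 2]   # truncation selection keeps the best half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = 0.5 * (a + b) + random.gauss(0.0, 0.3)  # blend + mutation
            children.append(min(max(child, lo), hi))
        pop = parents + children
    return min(pop, key=objective)

def local_polish(x, lo, hi, step=0.1):
    # Crude coordinate search standing in for the NLP fine-tuning stage
    while step > 1e-9:
        moved = False
        for cand in (x - step, x + step):
            cand = min(max(cand, lo), hi)
            if objective(cand) < objective(x):
                x, moved = cand, True
        if not moved:
            step *= 0.5
    return x

x0 = genetic_search(0.0, 10.0)      # GA supplies a promising starting point
x_best = local_polish(x0, 0.0, 10.0)  # local refinement of the GA result
print(x_best, objective(x_best))
```

The division of labor mirrors the text: the population-based search avoids being trapped in a single local optimum, while the polishing stage only ever improves on the point it is handed.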

6.6 Integrating Reliability, Availability, and Maintainability into Process Design

6.6.1 Integration
Current practice often views reliability as a mere afterthought to the
design process. As a result, systems go through repeated design and
redesign in search of greater process reliability, availability, and
maintainability (RAM). An alternative approach, as described by
Yin and colleagues (2009), is illustrated in Figure 6.20. This new

FIGURE 6.19 Integrated design for low-temperature energy systems (after
Del Nogal et al., 2008).

FIGURE 6.20 New process design methodology with integrated RAM stage.

methodology incorporates the simultaneous consideration of flexible
process design and reliability, and optimal solutions are obtained by
Process Integration.
To integrate RAM into process conceptual design, the optimal
preventive maintenance (PM) interval must be embedded into the
superstructure. The superstructure accommodates a number of
different designs, each with a different operating mode. Each so-
called available design is linked to its associated financial penalty
through its asset usage during the design's life cycle, which is
calculated as the ratio of real throughput to ideal throughput. This
ratio is equal to the sum of the availability across all operating modes.
Thus, Yin et al. (2009) defined the superstructure system availability
(SSA) as

SSAj = Real throughput / Ideal throughput = Σi Ai xi    (6.6)

Here Ai is the system availability in operation mode i for the jth
design, and xi is mode i's ratio of actual to maximum capacity for
the jth design.
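Eq. (6.6) can be evaluated directly; the operating-mode data below are hypothetical.

```python
# Superstructure system availability as the availability-weighted sum of
# capacity ratios over the operating modes of one candidate design.
modes = [
    {"availability": 0.95, "capacity_ratio": 1.0},  # normal operation
    {"availability": 0.03, "capacity_ratio": 0.5},  # degraded mode
]
ssa = sum(m["availability"] * m["capacity_ratio"] for m in modes)
print(round(ssa, 3))  # 0.965: real throughput is 96.5% of the ideal
```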

6.6.2 Optimization
Within the optimization framework for integrating RAM into process
synthesis, the mathematical model aims to minimize the life-cycle
cost. In its general form (Yin and Smith, 2008), the optimization
problem can be summarized as follows:

Minimize: the objective function (expected cost)

Subject to: process model constraints, preventive maintenance
constraints, process system availability constraints

The objective function is usually formulated as

Annual cost = Annualized capital cost + Annualized operational cost
+ Annual lost production penalty + Other costs    (6.7)

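A hedged sketch of evaluating Eq. (6.7) is shown below. Annualizing the capital cost with a capital-recovery (annuity) factor is a common convention rather than the specific formulation of Yin and Smith (2008), and all cost figures are invented.

```python
def annuity_factor(i, n):
    """Capital recovery factor for interest rate i over n years."""
    return i * (1 + i) ** n / ((1 + i) ** n - 1)

capital = 1_000_000.0        # installed capital cost [$]
operating = 120_000.0        # annual operational cost [$/y]
lost_production = 40_000.0   # expected annual lost-production penalty [$/y]
other = 10_000.0             # other annual costs [$/y]

annual_cost = (capital * annuity_factor(0.10, 10)
               + operating + lost_production + other)
print(annual_cost)
```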
6.7 Pressure Drop and Heat Transfer Enhancement in Process Integration
Various factors, including flow rate, composition, temperature, and
phase, can affect the heat capacity Cp. Another factor that should be
taken into account is pressure. Polley, Panjeh Shahi, and Jegede (1990)
extended the Heat Exchanger Network (HEN) targeting procedure
by considering pressure drop. They used the following relationship
between the pressure drop ΔP, the heat transfer coefficient h, and the
heat transfer area A:

ΔP = K A h^m    (6.8)

where K is a pressure-drop relationship constant and m reflects the
heat exchanger's tube-side and shell-side specific coefficients. The
allowable pressure drop (rather than the heat transfer coefficient) is
specified for each stream. Then the heat transfer coefficients are
calculated iteratively to minimize the total area. Thus, when
approaching area targets the design is modified based on the fixed
pressure drops rather than fixed film coefficients.
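Given an allowable pressure drop, Eq. (6.8) can be inverted for the film coefficient implied by a given area. K and m below are placeholder values for illustration, not design correlations.

```python
def h_from_pressure_drop(dP, K, A, m):
    """Film coefficient consistent with an allowable pressure drop dP,
    from inverting dP = K * A * h**m."""
    return (dP / (K * A)) ** (1.0 / m)

dP_allow = 35_000.0            # allowable stream pressure drop [Pa]
K, A, m = 2.0e-9, 400.0, 3.5   # placeholder constant, area [m2], exponent

h = h_from_pressure_drop(dP_allow, K, A, m)
# Round trip: the recovered h reproduces the allowable pressure drop
print(abs(K * A * h ** m - dP_allow) < 1e-6)
```

In the targeting iteration described above, this inversion replaces the usual fixed-film-coefficient assumption: the allowable ΔP is held, and h (hence area) adjusts.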
Ciric and Floudas (1989) suggested a Mathematical Programming-based,
two-stage approach to HEN retrofits that includes a match
selection stage and an optimization stage. The match selection
stage uses an MILP transshipment model to select process stream
matches and match assignments. The optimization stage uses an NLP
formulation to optimize the match order and flow configuration of
the network.
Nie and Zhu (1999) developed a strategy for considering pressure
drop in HEN retrofits. They assumed that any additional area would
involve only a few heat exchange units in order to minimize the
piping and civil engineering work. The optimization procedure
consists of two stages. The first stage involves selecting a small
number of units that require additional area; the second stage
considers series or parallel shell arrangements for those units. The
topology change options are initially established by applying the
Network Pinch method (Asante and Zhu, 1997). Then a two-stage
optimization procedure is used to determine area distribution and
shell arrangement under pressure-drop constraints. Area distribution
and shell arrangement are the design properties that have the greatest
effect on pressure drop.
Vclavek, Novotn, and Dedkov (2003) analyzed in more detail
the circumstances under which pressure plays a significant role in
Heat Integration. The authors formulated some heuristic heat recovery
rules for combinations of process streams (tracks), not merely
individual streams.
Aspelund, Berstad, and Gundersen (2007) described a new
methodology, called extended Pinch Analysis and Design (ExPAnD),
to account for pressure drops in process synthesis that extends the
traditional Pinch Analysis to incorporate exergy calculations. The
authors focus on the thermo-mechanical exergy, which is the sum of
pressure- and temperature-based exergy. Compared with traditional
Pinch Analysis, the problem that Aspelund, Berstad, and Gundersen
(2007) consider (a subambient process) is much more complex; there
are many alternatives for the manipulation and integration of
streams. The authors also provide a number of (general and specific)
heuristics that complement the ExPAnD methodology. In a further
development, Aspelund and Gundersen (2007) used the concept of
an attainable region in proposing a graphical representation of all
possible CCs for a pressurized, subambient cold stream along with
the cooling effect of the stream expanding to its target pressure. The
attainable region is a new tool for process synthesis, extending Pinch
Analysis by explicitly accounting for pressure and including exergy
calculations. The methodology shows great promise for minimizing
total shaft work in subambient processes.
For designing a de-bottlenecking retrofit (as distinguished from
an energy-saving retrofit), Panjeshahi and Tahouni (2008) suggested
a method for optimizing pressure drop. The technique proceeds in
two main stages as follows. (1) Simulation of the existing process
operating at the desired increased throughput: additional utility is
used to maintain required temperatures in the process. (2) Area
efficiency specification for the existing network after the area-energy
plot is used to increase throughput: a new virtual area, a pseudonetwork,
is introduced.
Zhu, Zanfir, and Klemeš (2000) suggested a heat transfer enhancement
procedure for HEN retrofits. The methodology features a
targeting stage and a selection stage. The approach uses the Network
Pinch Analysis to determine whether and where enhancements
should be applied in the conceptual design. One limitation of this
technique is that heat transfer enhancement is used only for taking
the place of additional area.

6.8 Locally Integrated Energy Sectors and Extended Total Sites
Total Site targeting has established a method for analyzing the heat
sources and sinks of multiple processes and how heat can be
transferred from one process to another via a carrying medium,
such as steam (Klemeš et al., 1997). This methodology has also been
used to demonstrate the concept of a locally integrated energy sector
for distributing heat among small-scale industrial plants and
domestic, business, and social premises while integrating renewable
energy sources (Perry, Klemeš, and Bulatov, 2008). A conceptual
overall design for an energy sector that involves both heat and power
is illustrated in Figure 6.21. In this scenario, demands for heating/
cooling and electricity in units (e.g., dwellings, offices, hospitals,
schools) can be met locally by renewable energy sources such as
wind, solar cells, heat pumps, and/or excess heat and power from
the local industry. Locally installed boilers that consume traditional
fossil-based fuels, biomass, or waste can also be used to help meet
these requirements when demand is high or other sources are
unavailable. Heating or cooling and power that is not required by
one unit can be fed to a grid system and then passed to another unit

FIGURE 6.21 Locally integrated energy sector with heat and power.
that is unable to meet its demands locally. The grid system can
distribute power (electricity) and heating in the form of hot water or
steam. In geographic locations where air conditioning is required, a
cooling distribution main could also be provided. If local sources
are unable to provide for the demands of all units in the system,
then district renewable sources can be provided. These would
include larger-scale wind turbines, solar-cell systems, heat pumps,
and combustors fed by waste from the units or by biofuels or fossil
fuels. The sources at this level would include power-generating
equipment such as turbines driven by steam or gas.
Varbanov and Klemeš (2010) presented a further extension of the
Total Sites methodology that covers industrial, residential, service,
business, and agricultural customers; incorporates renewable energy
sources; and accounts for variability on both the supply and demand
sides. The challenge of increasing the share of renewables in the
energy mix can be met by integrating solar, wind, biomass, and
geothermal energy as well as by integrating some types of waste
with the fossil fuels. The availability of renewables and the energy
demands of the considered sites all vary significantly with the time
of day, period of the year, and location. Some of these factors are
unpredictable and can change quickly. Total Site Combined Heat and
Power energy systems are optimized by minimizing heat waste and
carbon footprint while maximizing economic viability. This
methodology incorporates state-of-the-art techniques of Total Site
Integration (Klemeš et al., 1997), batch Heat Integration (Kemp and
Deakin, 1989), HEN sensitivity analysis (Kotjabasakis and Linnhoff,
1986), and time Pinch Analysis (Wang and Smith, 1995); it also applies
the concept of Time Slices (see Figure 6.22) to account for the
variabilities just described.
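The Time Slice idea can be sketched in a few lines: the horizon is split into periods of roughly constant supply and demand, each slice is balanced separately, and surplus solar heat is carried forward in storage. A minimal Python illustration follows; all numbers are hypothetical (not taken from Figure 6.22), and for simplicity the rates are treated as slice totals rather than integrated over the slice durations:

```python
# Each slice: (label, available solar heat, heat demand), hypothetical values.
slices = [
    ("06-17 h", 1000, 600),
    ("17-20 h", 0, 800),
]

storage = 0.0   # heat held over between slices
utility = []    # residual demand per slice, to be met by boilers/utility

for name, solar, demand in slices:
    surplus = solar - demand
    if surplus >= 0:
        storage += surplus          # charge storage with excess solar heat
        utility.append(0.0)
    else:
        drawn = min(storage, -surplus)
        storage -= drawn            # discharge storage first
        utility.append(-surplus - drawn)  # remainder falls to utility
```

Balancing each slice separately, with storage as the only link between them, is what allows the targets to track the daily variability described above.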

6.9 Summary
Every attempt has been made to include in this chapter the most
recent research results, but the field is developing so rapidly that

FIGURE 6.22 Time Slice and Site targets for solar heat capture and storage
(CW = cooling water, HW = hot water, SCC = Site Composite Curve).

hardly a month passes without several new and relevant research
results being published. The most frequent publishers of novel
research results are journals covering energy and cleaner technology:
for example, Chemical Engineering Transactions, Applied Thermal
Engineering, Energy, the Journal of Cleaner Production, Cleaner Technologies
and Environmental Policy, and Resources, Conservation and Recycling.
New developments in Process Integration are often published by
leading chemical engineering journals, including Computers &
Chemical Engineering, Chemical Engineering Science, and the AIChE
Journal. A conference dedicated exclusively to dealing with related
topics is the Conference on Process Integration, Modelling and
Optimisation for Energy Saving and Pollution Reduction (PRES). The
Thirteenth PRES is scheduled for autumn of 2010 in Prague; the
Fourteenth PRES will be held in Italy during 2011.

7.1 Classic Approach: Mathematical Programming

Process systems engineering problems, including process synthesis,
are typically considered as optimization problems. The solution or
solutions of these problems are usually generated by solving the
corresponding mathematical models. However, a review of recent
publications reveals various failures in modeling process synthesis
(Friedler and Fan, 2009). An inappropriate mathematical model may
result in a nonoptimal or even an infeasible solution, or the model
may be unsolvable because of its complexity. A mathematical model
should be a valid representation of the process, taking into account
all its significant features, and still be solvable.
Process optimization problems are formulated as mathematical
models, where variables correspond to decisions (e.g., the flow rate of
a stream, the amount of heat provided by high-pressure steam) and
constraints correspond to the conceptual model of the system (e.g.,
material balance). Optimization (or Mathematical Programming)
aims to find appropriate values for the variables in such a way
that (1) constraints involving these variables are satisfied and (2) a
specific function of these variables (that is, the objective function)
is minimized (or maximized). The constraints define the search
space, while the objective function determines the most favorable
point or points in this space.
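This structure, decision variables, constraints defining a search space, and an objective selecting the best point in it, can be illustrated with a deliberately tiny discrete example. All coefficients below are invented for illustration; a realistic model would be passed to an LP or MILP solver rather than enumerated:

```python
from itertools import product

# Toy model: choose integer flow rates x1, x2 to minimize operating cost.
# The constraints carve the feasible region out of the full grid of points.
feasible = [(x1, x2) for x1, x2 in product(range(11), repeat=2)
            if x1 + 2 * x2 <= 14      # constraint: shared utility capacity
            and x1 + x2 >= 5]         # constraint: minimum total production

def cost(p):
    """Objective function: a linear operating cost."""
    return 3 * p[0] + 2 * p[1]

best = min(feasible, key=cost)        # most favorable point in the space
```

Here the two inequality constraints define the search space and `cost` ranks its points; the optimizer's job is to do this selection without exhaustive enumeration.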
Mathematical models are classified according to the types of
variables (continuous or integer) and constraints (linear or nonlinear).
For example, a mathematical model can be linear in its constraints and
objective function with continuous variables (i.e., linear programming,
LPR). Similarly, a mathematical model is viewed as a nonlinear


programming (NLP) problem if any of the constraints or the objective

function is nonlinear with continuous variables. Models that include
both continuous and integer variables are classified as mixed integer
programming ones; these include mixed-integer linear programming
(MILP) and mixed-integer nonlinear programming (MINLP).
Also, linear optimization (LO) problems are usually referred to as
linear programming or LP. Similarly, NLO, MILO, and MINLO cor-
respond to NLP, MILP, and MINLP.
Linear programming problems appear in a wide range of
applications, including transportation, distribution from sources to
sinks, and management decisions (Klemeš and Vašek, 1973; Klemeš
et al., 1975; Klemeš, 1986; Jeżowski, 1990; Williams, 1999; Jeżowski,
Shethna, and Castillo, 2003; El-Halwagi, 2006). LPR problems are easily
solved by the simplex method (Dantzig, 1968) and its improvements
(see, e.g., Maros, 2003a; Maros, 2003b). In most cases, NLP is difficult
to solve, and certain limitations on the constraints and objective
function may be necessary for such problems to be practically solvable
by specific methods (Seidler, Badach, and Molisz, 1980; Banerjee and
Ierapetritou, 2003; Sieniutycz and Jeżowski, 2009). A general technique
for solving NLP and mixed-integer programming problems is the
branch-and-bound framework (Land and Doig, 1960), where the
original complex problem is solved via systematic generation and
solution of a set of simpler subproblems.
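A minimal sketch of the branch-and-bound idea, applied to a 0-1 knapsack problem with invented data: each node of the search tree fixes one more binary decision, and a subproblem is discarded (pruned) when it is infeasible or when its optimistic bound cannot beat the best solution found so far. The bound here is the LP relaxation, valid because the items are pre-sorted by value-to-weight ratio:

```python
# Hypothetical problem data, sorted by value/weight ratio (6, 5, 4).
values  = [60, 100, 120]
weights = [10, 20, 30]
CAP = 50

def bound(i, w, v):
    """Optimistic value: fill the remaining capacity fractionally (LP relaxation)."""
    for j in range(i, len(values)):
        take = min(weights[j], CAP - w)
        v += values[j] * take / weights[j]
        w += take
    return v

best = 0

def branch(i=0, w=0, v=0):
    """Explore the subproblem in which items 0..i-1 are already fixed."""
    global best
    if w > CAP:
        return                      # infeasible subproblem: prune
    if i == len(values):
        best = max(best, v)         # leaf: a complete assignment
        return
    if bound(i, w, v) <= best:
        return                      # bound cannot beat incumbent: prune
    branch(i + 1, w + weights[i], v + values[i])   # include item i
    branch(i + 1, w, v)                            # exclude item i

branch()
```

The same generate-bound-prune pattern carries over to MILP solvers, where each subproblem is itself an LP rather than a direct evaluation.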
Process synthesis is a creative activity. In fact, it is one of the
earliest actions taken by the process designer when creating the
structure, network, or flowsheet of a process to satisfy the given
requirements in terms of constraints and specifications while attaining
the prescribed objectives.
The relationships among the mathematical model, the process
being modeled, and the solver being deployed are usually complicated,
which makes it difficult to establish the most effective and valid
model. There is only limited discussion of generating mathematical
models in the literature, and the topic is treated in only a few
publications (see, e.g., Grossmann, 1990; Kovacs et al., 2000)
concerning specific areas.
In general, a process synthesis problem is defined by specifying
the available raw materials, candidate operating units, and desired
products. Each of these is given by an individual mathematical
model. The models cannot, by themselves, directly constitute the
Mathematical Programming model for the synthesis problem.
Construction of the mathematical model from these model elements
is not self-evident and carries a risk of failure. The major steps of process
synthesis are illustrated in Figure 7.1.
The main emphasis in this chapter is on an integrated framework
for model generation and solution: the P-graph framework.
Another class of methods for process synthesis is based on
heuristic rules. Implementing heuristic methods is relatively

FIGURE 7.1 Major steps of process synthesis: cost data and constraints for
the operating units, together with prices and constraints for the products
and raw materials, feed the generation of the Mathematical Programming
model; its solution yields the optimal network.
straightforward, and only moderate computational effort is required.
Yet by their nature, heuristics are effective only at the local level.
This is because human experiences are almost always localized:
they are gained from an often limited number of encounters with (or
observations of) specific instances. For this reason, solutions that
are globally optimal are seldom obtainable via heuristic methods
alone (Feng and Fan, 1996).

7.2 Structural Process Optimization: P-Graphs

There are four good reasons to employ graph-theoretic methods:
(1) the unambiguous representation of decision alternatives, (2) the
algorithmic generation of a mathematical model, (3) the reduced
complexity of the solution procedure, and (4) the derivation of multiple
alternative solutions. The P-graph or process graph framework, as
applied by Friedler and Fan (Friedler et al., 1992a; Friedler et al., 1992b;
Friedler, Varga, and Fan, 1995) to process synthesis, involves novel
structural representations of complex processes coupled with
combinatorial algorithms for generating the superstructure, the
mathematical model, and the model's optimal solution.
The P-graph framework is robust, and its algorithms have been
validated as mathematically rigorous in that they are based on a set
of axioms (Friedler et al., 1992b). These axioms express the necessary

structural properties for process networks to be feasible. The
algorithms guarantee the resultant mathematical model's validity,
reduce the search space, and generate the optimal solution.

7.2.1 Process Representation via P-Graphs

In a P-graph, one class of nodes is assigned to operating units or
activities and the other is assigned to their inputs and outputs. Raw
materials, resources (precursors), and preconditions (activating
entities) are inputs to the operating units; products, effects (resulting
entities), and targets are outputs from the operating units. Table 7.1
shows the P-graph representation of process structure elements.
In a process network, functional units that perform operations
(e.g., mixing, reacting, separating) are termed operating units. These
operating units, which correspond to the blocks in a process
flowsheet, alter the physical and/or chemical states of materials being
processed or transported. Such transformations are carried out by
one or more unit operations, and the overall process converts raw
materials into the desired product(s). A process may also generate
by-products, which are either to be recovered for further use or to be
treated as waste.
In process network synthesis, a material is uniquely defined by
its components and their concentrations, in other words by its
composition, which is identified by a symbol used to mark the
material. Associated with any operating unit are two classes of
materials (or material streams): input materials and output materials.
For example, operating unit O2 in Figure 7.2 consumes raw materials
E and F while producing intermediate material C and by-product B.
Note that a material may consist of more than one component.
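One convenient computer representation of a P-graph is a bipartite mapping from each operating-unit node to its input and output material sets. The sketch below follows the example just given, O2 consuming E and F while producing C and B, and O1 requiring C and D; O3's input and O1's output are filled in with assumed connections for illustration:

```python
# Bipartite P-graph: each operating unit maps to its material sets.
# O2's connections follow the text; the rest are illustrative assumptions.
p_graph = {
    "O1": {"inputs": {"C", "D"}, "outputs": {"A"}},
    "O2": {"inputs": {"E", "F"}, "outputs": {"C", "B"}},
    "O3": {"inputs": {"F"}, "outputs": {"D"}},
}
raw_materials = {"E", "F"}
products = {"A"}

def producers(material):
    """All operating units capable of supplying the given material."""
    return {u for u, io in p_graph.items() if material in io["outputs"]}
```

Queries such as `producers("C")` directly expose the structural alternatives discussed below: one producer means no choice, several producers mean a combinatorial decision.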
The P-graph provides not only a formal description of the process
but also an unambiguous representation of the possibilities for
structural decisions. If an operating unit requires multiple inputs,

TABLE 7.1 P-Graph Symbols That Represent Process Elements (raw material
or precursor; final product or final target; intermediate material or entity;
operating unit).


FIGURE 7.2 P-graphs representing the process structure of three operating
units.
each provided by a single operating unit, then structural alternatives
cannot be defined. In contrast, if multiple operating units are capable
of providing a particular input, then any combination of these units
may eventually be used. In Figure 7.2(a), for example, materials C and
D are necessary inputs to operating unit O1. Material C can only be
produced by operating unit O2, and material D can only be produced
by operating unit O3. For unit O1 to operate it is necessary that units
O2 and O3 both be included in the process structure. In Figure 7.2(b),
however, material C can be produced by unit O2, unit O3, or both. In
addition to unambiguous structural representation, the P-graph
framework also provides a set of rigorous and effective algorithms
for the synthesis and optimization of process networks.

7.2.2 The P-Graph's Significance for Structural Optimization

The extreme complexity of process network synthesis is due mainly
to the problem's combinatorial nature. This complexity grows
exponentially with the number n of candidate operating units,
because the optimal network must be found among 2^n possible
combinations of the units (i.e., alternative networks) unless some
possibilities can be eliminated (e.g., by heuristics) in advance. The
factor 2^n is derived by simple induction. First observe that a single
additional decision (regarding the inclusion or exclusion of an
operating unit) doubles the number of potential design alternatives:
2^n × 2 = 2^(n+1). This means that a designer contemplating a system with
a total of 35 operating units is faced with more than 34 billion
(2^35 ≈ 3.436 × 10^10) alternative arrangements!
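The 2^n count is easy to verify computationally: a brute-force subset count (feasible only for small n) can be checked against the closed form, which is all that remains practical at n = 35:

```python
from itertools import combinations

def count_networks(n):
    """Count every subset of n candidate operating units by enumeration."""
    return sum(1 for k in range(n + 1) for _ in combinations(range(n), k))

# For small n the enumeration matches the closed form 2**n exactly;
# for n = 35 only the closed form is practical.
assert count_networks(5) == 2 ** 5
print(2 ** 35)  # more than 34 billion alternative networks
```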
Reducing such large numbers of alternatives requires robust
decision-making tools that are mathematically rigorous (preferably
axiomatic) and effectively implementable on computers. These ends

have been met largely by employing the well-established mathematics
of graph theory, which can be regarded as a branch of combinatorics.
Thus was developed the graph-theoretic, algorithmic method
described in this section. The method is based on using P-graphs to
extract the universal combinatorial features (properties) inherent in
feasible processes. Such properties can be expressed mathematically
as a set of axioms that characterize the combinatorial feasibility of
processing networks.
A given process network is said to be combinatorially feasible (or to
be a solution structure) if it satisfies the following five structural
axioms:

(S1) Every final product and target is represented in the structure.

(S2) An entity represented in the structure has no input if and
only if it represents a raw material or precursor.
(S3) Every operating unit represented in the structure is
defined in the problem.
(S4) Any operating unit represented in the structure has at
least one path leading to a final product or a final target.
(S5) An entity belongs to the structure if and only if it is either
an input entity to or an output entity from at least one
operating unit already represented in the structure.

Figure 7.3 illustrates the extreme reduction in the search space
that results from this approach. The universe of all possible networks
is reduced to a much smaller space containing only those networks
that satisfy the axioms, in other words the combinatorially feasible
(CF) networks. Clearly this reduction will drastically reduce the
required computational effort. Search-space reductions by a factor of
nearly a billion have been reported in some of the real-life process
synthesis tasks performed to date using this axiomatic approach.
Note that each feasible network, including the optimal network, is an
element of the set of combinatorially feasible networks.
Figure 7.4 depicts two process structures that are not combina-
torially feasible. The P-graph in Figure 7.4(a) shows a process structure
in which material F is consumed as an input. Yet because material F is
not a raw material and was never produced, the structure is not
combinatorially feasible according to Axiom (S2). In the P-graph of
Figure 7.4(b), operating unit O3 produces only by-product B. Here O3
does not output any final product or material that is later used to yield
a final product, so the process structure violates Axiom (S4). In short,
the structural properties expressed by Axioms (S1)-(S5) are necessary
conditions for process structures to be feasible. This means that
reducing the search space to combinatorially feasible structures does
not result in the loss of any practically feasible or optimal processes.
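Two of the axioms lend themselves to direct mechanical checks on the bipartite unit representation. The sketch below, with invented structures shaped like those of Figure 7.4, tests Axiom (S2) by set arithmetic and Axiom (S4) by tracing backward from the final products:

```python
def s2_violations(units, raw_materials):
    """Entities consumed but neither produced nor raw: violate Axiom (S2)."""
    produced = {m for io in units.values() for m in io["outputs"]}
    consumed = {m for io in units.values() for m in io["inputs"]}
    return consumed - produced - raw_materials

def s4_violations(units, products):
    """Units with no path to a final product: violate Axiom (S4)."""
    useful, frontier = set(), set(products)
    while frontier:                 # walk backward from the products
        hits = {u for u, io in units.items()
                if io["outputs"] & frontier and u not in useful}
        useful |= hits
        frontier = {m for u in hits for m in units[u]["inputs"]}
    return set(units) - useful

# Structure like Figure 7.4(a): F is consumed but neither raw nor produced.
net_a = {"O1": {"inputs": {"C", "D"}, "outputs": {"A"}},
         "O2": {"inputs": {"E"}, "outputs": {"C"}},
         "O3": {"inputs": {"F"}, "outputs": {"D"}}}
# Structure like Figure 7.4(b): O3 yields only the by-product B.
net_b = {"O1": {"inputs": {"C"}, "outputs": {"A"}},
         "O2": {"inputs": {"E"}, "outputs": {"C"}},
         "O3": {"inputs": {"E"}, "outputs": {"B"}}}
```

Here `s2_violations(net_a, {"E"})` flags material F, and `s4_violations(net_b, {"A"})` flags unit O3, matching the two violations described for Figure 7.4.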

FIGURE 7.3 Reduction in the search space effected by combinatorial axioms
(F = feasible networks, CF = combinatorially feasible networks).

FIGURE 7.4 P-graphs representing process structures that violate
(a) Axiom (S2) or (b) Axiom (S4).

7.2.3 The P-Graph's Mathematical Engine: MSG, SSG, and ABB
When combined with the structural axioms, P-graph representation
makes it possible to implement effective algorithms for structural
analysis, synthesis, and optimization of process structures. The
maximal structure generation (MSG) algorithm (Friedler et al., 1992a)
generates a superstructure that can be rigorously proved to
incorporate each combinatorially feasible process structure. Then the
solution structures generation (SSG) algorithm (Friedler, Varga, and Fan,

1995) is used to enumerate all the combinatorially feasible process
structures that satisfy Axioms (S1)-(S5), or the accelerated
branch-and-bound (ABB) algorithm (Friedler et al., 1996) is used to
generate the optimal process structure together with a ranked, finite
list of near-optimal structures.
Figure 7.5 illustrates the connections among the three
algorithms. Algorithm MSG generates the maximal structure, and
it can be followed either by algorithm SSG to generate all
combinatorially feasible process structures or by algorithm ABB to
generate the optimal and near-optimal processes. Algorithms MSG
and SSG require as input the list of candidate operating units, each
defined by the set of its input materials (preconditions) and
products (effects). Algorithm ABB requires, in addition to these
data, quantitative information (e.g., prices of raw materials, costs
and capacity constraints of operating units) relevant to assessing
network optimality.
These algorithms and their description are available online.
The methodology has been demonstrated via
typical engineering decision problems (see Chapter 8 for details). A
considerable advantage of the P-graph framework is its potential for
solving large industrial problems, as indicated by its application in
solving various process systems engineering problems (see, e.g., Liu
et al., 2004; Halasz, Povoden, and Narodoslawsky, 2005; Liu et al.,
2006; Fan et al., 2008; Varbanov and Friedler, 2008).
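As a naive stand-in for algorithm SSG, one can simply enumerate every subset of the candidate units and keep those that pass simplified feasibility checks; the real SSG avoids exactly this exponential enumeration. A toy problem (hypothetical units, checking only conditions in the spirit of Axioms (S1) and (S2)) in which product A can be made via two alternative producers of intermediate C:

```python
from itertools import combinations

# Hypothetical candidate operating units: O2 and O3 both produce C.
units = {
    "O1": {"inputs": {"C"}, "outputs": {"A"}},
    "O2": {"inputs": {"E"}, "outputs": {"C"}},
    "O3": {"inputs": {"F"}, "outputs": {"C"}},
}
raw, products = {"E", "F"}, {"A"}

def feasible(subset):
    """Simplified checks: the product is made and every input is covered."""
    sel = {u: units[u] for u in subset}
    produced = {m for io in sel.values() for m in io["outputs"]}
    consumed = {m for io in sel.values() for m in io["inputs"]}
    return (products <= produced                 # product A is produced
            and not consumed - produced - raw)   # no uncovered inputs

structures = [set(c) for k in range(1, len(units) + 1)
              for c in combinations(units, k) if feasible(set(c))]
```

The three surviving subsets, {O1, O2}, {O1, O3}, and {O1, O2, O3}, are the alternative networks a designer would then evaluate; the search-space reduction of the real algorithms comes from never generating the infeasible subsets at all.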

FIGURE 7.5 Inputs to and outputs from the three P-graph algorithms: the
process synthesis problem is the input to algorithm MSG; the resulting
maximal structure is processed either by algorithm SSG, yielding the
combinatorially feasible structures for further evaluation, or by algorithm
ABB, yielding the optimal process and the n-best processes.

7.3 Scheduling of Batch Processes: S-Graphs

7.3.1 Scheduling Frameworks: Suitability and Limitations
Scheduling problems arise in many areas, including chemical
engineering, supply chain management, operating systems design,
and train timetable design. The terminology and details vary among
the different types of applications, but the core of scheduling remains
the same: assigning tasks to resources at time intervals that satisfy
predefined conditions. The goal is usually to find a feasible schedule
that performs better, in terms of a particular objective, than any other.
One of the first contributions to scheduling of chemical processes
by Mathematical Programming was the paper by Kondili, Pantelides,
and Sargent (1993). These authors developed the state task network
(STN) representation in order to formulate production scheduling
in multipurpose plants as a MILP problem. Pantelides (1993) also
developed the resource task network (RTN) representation, which
employs a uniform treatment for all available resources. Originally,
the time representation in the formulations was discrete; in other
words, only a certain number of predefined times on the time horizon
can be considered in the schedule. Zhang and Sargent (1996) extended
the formulation to accommodate continuous-time representation,
where the time points may vary, although their number still has to be
predefined.
Ierapetritou and Floudas (1998a) employed the concept of unit-specific
event points. Majozi and Zhu (2001) reformulated the problem to reduce
the number of variables in certain applications. Cerdá, Henning,
and Grossmann (1997) and Méndez and Cerdá (2003) developed
precedence-based models suitable for cases where sequence-dependent
changeovers must be considered.
The rest of this section addresses some issues associated with
conventional approaches to scheduling. These issues motivated the
development of a novel, graph-theoretic approach: the S-graph
framework (Sanmartí, Friedler, and Puigjaner, 1998; Sanmartí et al.,
2002), which is introduced in Section 7.3.2.
In most MILP formulations, the time horizon is divided into time
intervals by so-called time points, which denote the possible starting
and ending times of tasks. The number of time points is one of the
model's parameters, so it must be specified prior to optimization.
Even though the quality of the solution depends strongly on this
parameter, the minimum number required for the optimal solution
is not known in advance. Therefore, an iterative approach is applied
to determining the number of time points. First the model is solved
using a small number of time points, after which each subsequent
iteration increases that number by 1 until the same objective value is
obtained for, say, two consecutive steps. However, it is not certain
that this initial convergence necessarily yields the optimal solution.
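The iterative search over the number of time points can be sketched as follows. The objective values per time-point count are hypothetical, shaped like the Castro et al. case: the value stalls, satisfying the usual stopping test, and then improves again with more time points:

```python
# Hypothetical objective values (a maximization) per number of time points;
# the dictionary lookup stands in for solving the MILP at that size.
objective = {8: 90.0, 9: 100.0, 10: 100.0, 11: 110.0, 12: 110.0}

def iterate_time_points(start=8, last=12):
    """Increase the time-point count until two consecutive runs agree."""
    prev = None
    for n in range(start, last + 1):
        val = objective[n]
        if val == prev:
            return n, val       # stop: two consecutive equal objectives
        prev = val
    return last, objective[last]

n_stop, val_stop = iterate_time_points()
# The loop stops at 10 time points with objective 100, missing the better
# value 110 attainable with 11 points: the convergence test is no guarantee.
```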

Castro, Barbosa-Póvoa, and Matos (2001) published a case study in
which the objective function value increased after this convergence,
as illustrated in Figure 7.6. A serious shortcoming of this approach is
that it may generate a suboptimal solution.
Another modeling issue may arise when one attempts to solve an
MILP model of scheduling with no intermediate storage: according
to Hegyháti and colleagues (2009), the optimal solution that is
generated may be infeasible. In particular, the Gantt chart of a
schedule, which was reported in two independent journal articles
(Kim et al., 2000; Méndez et al., 2003) as the optimal solution for a
case study, is not a feasible solution; see Figure 7.7.
The figure reveals that, at 30 hours into production, three units
(U2, U3, and Storage) attempt to exchange materials. However, this
is infeasible because there is no intermediate storage and so the
optimal schedule cannot be implemented in practice. The true
optimal solution of the problem is obtained (Hegyháti et al., 2009)
by using the S-graph framework (Holczinger et al., 2002); see
Figure 7.8.

FIGURE 7.6 Increase in value of the objective function after initial convergence
for a maximum throughput problem.

FIGURE 7.7 Infeasible solution generated by an MILP approach as optimal
(Kim et al., 2000).

FIGURE 7.8 Optimal solution generated by the S-graph algorithm (Hegyháti
et al., 2009).

7.3.2 S-Graph Framework for Scheduling

The problems discussed in Section 7.3.1 motivated the development
of an alternative methodology, the S-graph or schedule graph
framework (Sanmartí, Friedler, and Puigjaner, 1998; Sanmartí et al.,
2002), which has been successfully applied to the minimization of
time required to complete all tasks (the makespan; see Sanmartí
et al., 2002, and Romero et al., 2004) and also to problems of
maximizing throughput (Majozi and Friedler, 2006). Basics of the
S-graph framework are explained in this chapter; Chapter 9
describes a demonstration program for this framework that is
available online. Once all processing tasks
have been represented in the recipe, the S-graph can be used to
generate an optimal schedule.
A recipe defines the order of tasks in the process, the material
transfers among them, and the set of plausible equipment units for
each task. The recipe is represented as an S-graph by assigning a
node to each task (task node) and one node to each product (product
node). An arc is established between nodes of the consecutive tasks
defined by the recipe, and there is an arc also from each product-
generating task node to the corresponding product node. The weight
of an arc is given as the processing time of the task that corresponds
to the arcs initial node assuming a single equipment unit is available
for the task. If more than one equipment unit can perform this task,
then the arcs weight is given as the shortest processing time of all the
feasible units. In the graph representing the recipe, the set of plausible
units capable of performing the given task is shown in the task node;
see Figure 7.9.
Suppose that two batches of product A and one batch of product
B are to be produced, where product A is produced in two consecutive
steps. Task 1 can be performed by equipment unit E1 and task 2 by
either E2 or E3. Product B is produced in three consecutive steps that
can be performed by any of the elements in sets {E1, E3}, {E1}, and {E1,
E2}, respectively. The recipe is shown in Figure 7.9.
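The recipe of Figure 7.9 can be stored directly as data: each task node carries its set of plausible equipment units, and each recipe arc is weighted with the processing time of the task at its tail. The processing times below are read from the garbled figure and should be treated as illustrative:

```python
# Tasks 1-2 produce the first batch of A, tasks 3-4 the second,
# and tasks 5-7 the single batch of B (per the recipe in the text).
plausible = {1: {"E1"}, 2: {"E2", "E3"}, 3: {"E1"}, 4: {"E2", "E3"},
             5: {"E1", "E3"}, 6: {"E1"}, 7: {"E1", "E2"}}
proc_time = {1: 6, 2: 9, 3: 6, 4: 9, 5: 14, 6: 16, 7: 8}  # illustrative

recipe_arcs = [(1, 2), (2, "A1"), (3, 4), (4, "A2"),
               (5, 6), (6, 7), (7, "B")]
# Each arc inherits the processing time of the task at its initial node.
weights = {(u, v): proc_time[u] for u, v in recipe_arcs}
```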

The equipment-task assignments and the order of the tasks to be
performed by each equipment unit define the solution of a
scheduling problem. The schedule is given by an S-graph
containing arcs additional to those of the S-graph representing the
recipe. Moreover, a single equipment unit is assigned to each task.
As an example, Figure 7.10 depicts an S-graph representing a recipe
and Figure 7.11 a corresponding solution.

FIGURE 7.9 S-graph representing the recipe for two batches of product A and
one batch of product B.

FIGURE 7.10 Example recipe.

FIGURE 7.11 Solution for the recipe under NIS policy (Schedule #1).
The algorithm generating the optimal schedule depends on the
storage policy to be considered: no intermediate storage (NIS), finite
intermediate storage (FIS), or unlimited intermediate storage (UIS).
In this chapter we assume a NIS policy, so an equipment unit
becomes available only after finishing a task and transferring its
intermediate product to the subsequent task in the recipe. On an
S-graph representing a schedule under NIS policy, an arc leads from
the node subsequent in the recipe to the node of the task to be
performed next by the same equipment unit. For example, equipment
E1 first performs task 6, then moves to task 1 and finally to task 7,
which is represented by arcs drawn from node 11 (subsequent to
task 6) to node 1 and from node 2 (subsequent to node 1) to node 7 in
Figure 7.11.
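Under this representation, the makespan of a schedule is the longest path through the weighted S-graph, which a topological-order pass computes in linear time. A sketch with a small hypothetical schedule graph (not the one in Figure 7.11): recipe arcs carry processing times, and equipment-precedence arcs carry zero weight:

```python
from collections import defaultdict

def makespan(arcs):
    """Longest path in a weighted DAG; arcs is a list of (u, v, weight)."""
    succ, indeg, nodes = defaultdict(list), defaultdict(int), set()
    for u, v, w in arcs:
        succ[u].append((v, w))
        indeg[v] += 1
        nodes |= {u, v}
    dist = {n: 0 for n in nodes}
    queue = [n for n in nodes if indeg[n] == 0]
    while queue:                        # Kahn's algorithm, relaxing forward
        u = queue.pop()
        for v, w in succ[u]:
            dist[v] = max(dist[v], dist[u] + w)
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    return max(dist.values())

# Recipe arcs with processing times; (6, 1, 0) is a zero-weight schedule arc
# forcing the shared equipment unit to finish task 6 before starting task 1.
arcs = [(1, 2, 6), (2, "A", 9), (5, 6, 14), (6, "B", 16), (6, 1, 0)]
```

Here the critical path runs through tasks 5 and 6 to product B, so `makespan(arcs)` returns 30; a cycle introduced by the schedule arcs would signal an infeasible schedule.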
The advantage of the S-graph framework over conventional
Mathematical Programming lies in its ability to exploit the problem's
structure to effect a drastic reduction in computational intensity
without requiring unknown information, such as the number of time
points. Further information is available online.
Process Integration and Optimization

8.1 The Role of Optimization in Process Synthesis

Process Integration (PI), as defined in Chapter 2, is a family of
optimization methodologies for reducing resource or emissions
intensity of the analyzed processes and Total Sites. As such, it is
tightly related to optimization. In fact, PI and optimization
complement each other by their functionality. First, PI sets out the
strategy for designing and/or operating industrial processes. This
gives engineers some direction regarding how processes can be
designed or changed, answering the questions of "where can we go?"
and "what is to be done?" in order to achieve the business goals at
hand. In addition, PI provides quantitative targets for designers and
engineers; it does this by exploiting the physical (in the case of Heat
Integration, thermodynamic) background to answer the question
"how much is it possible to improve or achieve?" The targets in most
cases are upper bounds on the process performance or lower bounds
on the extent of resource use or emissions. In many cases the targets
are practically achievable, as in the case of designing Heat Exchanger
Networks (HENs) or water networks. The most obvious example is
heat recovery targets established by the Pinch point when analyzing
HEN problems. Since they are based on the Second Law of
Thermodynamics, it is proven that better heat recovery cannot be
achieved by any feasible system. If a realistic value for ΔTmin is
specified for their evaluation, the targets will also be practically
achievable. Of course, the additional factor of the capital costs for
implementing the heat exchangers generally tends to shift the
economic optimum of the designed HEN away from the maximum
recovery network.


The issues tackled by PI are essentially complex optimization
problems. As a result, optimization is used by PI to answer the
question of "how should the task be performed?" The general goals
and specific targets are usually achieved by employing optimization
tools at various stages. For instance, process performance targets are
typically evaluated by employing a numerical technique that involves
a cascade of some sort (a heat cascade in the case of heat recovery
targeting). One way to implement such cascades is by using the
transshipment optimization formulations, where the external utility
use, resource intake, or emissions rate is set as the objective function
to be minimized. Once the PI goals are established, engineers strive
to achieve the best possible performance. In the case of grassroots
design or network synthesis, the criterion is minimization of the total
annualized cost; in the case of a retrofit, the main criterion may be
minimizing the investments necessary to achieve a certain
performance improvement or minimizing the payback period for a
given investment. For operational improvements, the criteria include
minimizing operating costs or maximizing marginal financial or
performance gains. In all cases, a certain system modelincluding
the appropriate objective functionis formulated. The model is then
subjected to optimization toward the end of achieving (or maximally
approaching) the PI targets. Another function of targets is to partition
complex optimization problems into sets of simpler problems that are
easier to solve. This approach exemplifies the problem decomposition
principle, applied for decades in the world of software development, where it is known as the "divide and conquer" strategy.
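The cascade idea can be made concrete with a short problem-table sketch. The code below computes minimum hot- and cold-utility targets from shifted temperature intervals; the four streams are illustrative textbook-style data, not taken from this chapter.

```python
# A minimal heat-cascade (problem table) sketch for utility targeting.
# Stream data are illustrative, not from this chapter.

def heat_cascade(streams, dt_min):
    """streams: list of (supply_T, target_T, CP) in degC and kW/degC.
    Returns (min_hot_utility, min_cold_utility) in kW."""
    shifted = []
    for ts, tt, cp in streams:
        if ts > tt:                      # hot stream: shift down by dt_min/2
            shifted.append((ts - dt_min / 2, tt - dt_min / 2, cp, "hot"))
        else:                            # cold stream: shift up by dt_min/2
            shifted.append((ts + dt_min / 2, tt + dt_min / 2, cp, "cold"))

    # Interval boundaries: all shifted supply/target temperatures, descending
    bounds = sorted({t for s in shifted for t in s[:2]}, reverse=True)

    cascade, heat = [0.0], 0.0
    for hi, lo in zip(bounds, bounds[1:]):
        net = 0.0
        for t1, t2, cp, kind in shifted:
            top, bot = max(t1, t2), min(t1, t2)
            if bot <= lo and hi <= top:  # stream spans this whole interval
                net += cp * (hi - lo) * (1 if kind == "hot" else -1)
        heat += net
        cascade.append(heat)

    hot_util = max(0.0, -min(cascade))   # lift the cascade to feasibility
    cold_util = cascade[-1] + hot_util
    return hot_util, cold_util

streams = [(150, 60, 2.0), (90, 60, 8.0),    # two hot streams
           (20, 125, 2.5), (25, 100, 3.0)]   # two cold streams
qh, qc = heat_cascade(streams, dt_min=20)
# -> qh = 107.5 kW, qc = 40.0 kW
```

Running the cascade without the final lift would leave negative interval heat flows; the minimum hot utility is exactly the amount needed to make every cascaded flow nonnegative.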

8.2 Optimization Tools for Efficient Implementation of PI

For optimizing process models, a wide variety of linear programming
(LPR), nonlinear programming (NLP), and mixed-integer programming (MIP) methods can be used, depending on the nature of the
problem being solved. Some of these methods were described in
Chapters 3 and 7. Special tools and software (see Chapter 9)
incorporating optimization methods have been developed to exploit
PI possibilities when performing process synthesis, accounting for
the interactions between process operating conditions and the
networks for resource recovery (energy and water). There are four
main groups of optimization tools applied for PI. First, Pinch Analysis (Linnhoff et al., 1982) enabled industrial engineers to
obtain better results with the simple Pinch Design Method than
with Mathematical Programming methods in applications to
industrial Heat Integration; see Chapter 4. Second, the graph-
theoretic method is based on process graphs (P-graphs), which were
originally developed for Process Network Synthesis (PNS) (Friedler
et al., 1992b; Friedler, Fan, and Imreh, 1998); see Chapter 7. Third,
Papoulias and Grossmann (1983) introduced linear constraints in
their transshipment model for Heat Integration within a mixed-
integer linear programming (MILP) formulation for structural
process optimization. This work was subsequently developed extensively (Duran and Grossmann, 1986; Floudas and Grossmann,
1987a; Floudas and Grossmann, 1987b). Fourth, stochastic optimization
has become popular in recent years, applying genetic optimization
(Shopova and Vaklieva-Bancheva, 2006) and especially simulated
annealing (Kirkpatrick, Gelatt, and Vecchi, 1983; Faber, Jockenhövel,
and Tsatsaronis, 2005; Hul et al., 2007; Tan, 2007).
Optimization methods can be classified according to the
characteristics of the objective function, the decision variables, and
the problem constraints (Guinand, 2001). A simplified classification
scheme for optimization methods is illustrated in Figure 8.1.

8.3 Optimal Process Synthesis

A process network uses a given set of operating units to create desired
products from specific raw materials. The objective of PNS is to
identify the most favorable (optimal) network for accomplishing the
given tasks. The P-graph methodology is a graph-theoretical approach
to solve PNS problems.

8.3.1 Reaction Network Synthesis

Every reaction is a material transformation, which corresponds to an
operating unit when mapped on a P-graph. Similarly, the maximal

[Figure: classification tree. By constraints and objectives: Linear Programming (LPR); by decision variables: Integer Programming (IP); combined classes: Integer Linear Programming (ILP), Mixed-Integer Linear Programming (MILP), and Mixed-Integer Nonlinear Programming (MINLP); further variants incorporate probability functions or the time domain.]

FIGURE 8.1 Classification of optimization methods (after Guinand, 2001).

reaction network can be generated using the Maximal Structure Generation (MSG) algorithm. From that, all feasible reaction networks
can be generated as solution structures. Historically, it has been
extremely difficult to construct an exact maximal reaction network.
The problem has become solvable only since the arrival of the
combinatorial approach based on P-graphs. Reaction network
synthesis is completely combinatorial in nature because all chemical
species participating in the reactions are defined discretely.
The application of PNS to Reaction Network Synthesis (RNS) is
illustrated by a chemical process for the manufacturing of vinyl
chloride (C2H3Cl). Because of its structural simplicity, the balanced
synthesis of vinyl chloride (C2H3Cl) leads to only a single complex
route containing one loop. It is a trivial example from the standpoint
of constructing the maximal reaction network and the feasible
reaction networks, but it will serve to illustrate the RNS methodology.
The balanced process aims to produce vinyl chloride (C2H3Cl) and
water (H2O), where the former is the desired product and the latter is
the by-product, from the three starting reactants: ethylene (C2H4),
chlorine (Cl2), and oxygen (O2). This process involves the following
three unit reactions: R1, R2, and R3.

R1: 2C2H4 + 2Cl2 = 2C2H4Cl2

R2: 2C2H4 + 4HCl + O2 = 2C2H4Cl2 + 2H2O
R3: 4C2H4Cl2 = 4C2H3Cl + 4HCl

The unit reactions yield the following overall reaction:

4C2H4 + 2Cl2 + O2 = 4C2H3Cl + 2H2O

Note that C2H3Cl is the final target and that C2H4, Cl2, and O2 are the
starting reactants. From the perspective of materials (M), we have:
M = {C2H4, Cl2, O2, C2H3Cl, H2O, C2H4Cl2, HCl}

where C2H4, Cl2, and O2 are the raw materials (starting reactants); C2H3Cl is the product and H2O the by-product (the final products); and C2H4Cl2 and HCl are the intermediate materials.
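The species bookkeeping behind this classification is easy to verify mechanically. The short check below confirms that R1, R2, and R3 are each element-balanced and that their sum cancels the intermediates, leaving exactly the overall reaction.

```python
# Element-balance check for reactions R1-R3 and the overall reaction.
from collections import Counter

# Element composition of each species
SPECIES = {
    "C2H4":    {"C": 2, "H": 4},
    "Cl2":     {"Cl": 2},
    "O2":      {"O": 2},
    "HCl":     {"H": 1, "Cl": 1},
    "C2H4Cl2": {"C": 2, "H": 4, "Cl": 2},
    "C2H3Cl":  {"C": 2, "H": 3, "Cl": 1},
    "H2O":     {"H": 2, "O": 1},
}

def element_balance(stoich):
    """stoich: {species: coefficient}, negative for reactants.
    Returns the net element counts; empty for a balanced reaction."""
    net = Counter()
    for sp, nu in stoich.items():
        for el, n in SPECIES[sp].items():
            net[el] += nu * n
    return {el: n for el, n in net.items() if n != 0}

R1 = {"C2H4": -2, "Cl2": -2, "C2H4Cl2": 2}
R2 = {"C2H4": -2, "HCl": -4, "O2": -1, "C2H4Cl2": 2, "H2O": 2}
R3 = {"C2H4Cl2": -4, "C2H3Cl": 4, "HCl": 4}

assert all(not element_balance(r) for r in (R1, R2, R3))

# Summing R1 + R2 + R3 cancels the intermediates C2H4Cl2 and HCl
overall = Counter()
for r in (R1, R2, R3):
    overall.update(r)
overall = {sp: nu for sp, nu in overall.items() if nu != 0}
# -> {'C2H4': -4, 'Cl2': -2, 'O2': -1, 'C2H3Cl': 4, 'H2O': 2}
```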

One or more of the feasible paths and valid vertices in the input
structure may disappear if some of the invalid vertices are eliminated.
Thus, the final maximal structure is composed (or reconstructed)
from the remaining skeleton of the input structure after the
elimination. This is accomplished step by step, linking alternately
the vertices of the M-type (for materials) to the vertices of the O-type
(for operating units) and vice versa. At each step, the vertices linked
are assessed in view of the appropriate axioms (see Section 7.2.2):
vertices of the M-type must satisfy axioms (S1), (S2), and (S5); and
vertices of the O-type must satisfy axioms (S3) and (S4). The execution is initiated from the structure's shallowest layer, that is, the final (desired-product) end. The stepwise procedure for the composition is
illustrated in Figure 8.2.

[Figure: Steps 1-5 of the stepwise composition, starting from C2H3Cl and HCl via R3 and working back through R1 and R2 to Cl2, C2H4, and O2.]

FIGURE 8.2 Steps for the composition of the maximal structure representing
the maximal reaction network for the manufacture of vinyl chloride.

The maximal reaction network serves as the input to algorithms SSG and Accelerated Branch-and-Bound (ABB). Algorithm SSG
generates all combinatorially feasible reaction networks, and algorithm
ABB identifies a set of the most favorable reaction networks.
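The published SSG and ABB algorithms are described in Chapter 7 and the cited papers. As a rough illustration of the branch-and-bound idea behind ABB, the sketch below searches over subsets of operating units with a simple cost bound; the unit data are hypothetical, not from the vinyl chloride example.

```python
# Illustrative branch-and-bound over operating-unit subsets (not the
# published ABB algorithm): pick a minimum-cost set of units that
# produces all required products. Unit data are hypothetical.

def branch_and_bound(units, required):
    """units: {name: (cost, set_of_products)}; required: set of products."""
    names = sorted(units)
    best = {"cost": float("inf"), "subset": None}

    def recurse(i, chosen, cost, produced):
        if cost >= best["cost"]:          # bound: cost can only grow
            return
        if required <= produced:          # all required products covered
            best["cost"], best["subset"] = cost, chosen
            return
        if i == len(names):
            return
        c, prods = units[names[i]]
        recurse(i + 1, chosen + [names[i]], cost + c, produced | prods)
        recurse(i + 1, chosen, cost, produced)    # branch: exclude unit i

    recurse(0, [], 0.0, set())
    return best["cost"], best["subset"]

units = {"U1": (5.0, {"A"}), "U2": (4.0, {"B"}),
         "U3": (7.0, {"A", "B"}), "U4": (3.0, {"B"})}
cost, subset = branch_and_bound(units, {"A", "B"})
# -> cost 7.0 with subset ['U3'] (cheaper than U1 + U4 at 8.0)
```

The bound prunes any partial selection already at least as expensive as the best complete network found so far; the real ABB accelerates this with combinatorial information from the maximal structure.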

8.3.2 Optimal Synthesis of Heterogeneous Flowsheets

Synthesis of Optimal Workflow Structures
Workflow technology has become a general tool for a wide range of
business and management applications (Tick, Kovács, and Friedler, 2006), which is mainly due to its ability to increase the efficiency of business and management systems. The mathematical foundation of
this technology has been developed mainly based on Petri net theory
(Kiepuszewski, ter Hofstede, and van der Aalst, 2003; van der Aalst
and ter Hofstede, 2005). Even though this mathematical methodology
provides a basis for determining the optimal operation of workflows,
it cannot be used to derive an optimal workflow structure.
The structural component of a workflow synthesis problem can
be identified by the sets of products, resources, and (plausible)
activities on materials. The cost of a workflow process that generates
a particular quantity of product is given as the sum of (1) the cost of
the raw materials and (2) the cost related to the activities appearing
in the synthesized workflow process. The cost of an activity is the
sum of its running cost and the investments assigned to the period of
time examined. Both the running cost and investment cost depend
on the size of the activity, that is, its output volume. The common
objective for synthesizing workflow processes is to minimize the
total cost under the assumption of unlimited intermediate storage
capacities for any activity.

Example 8.1: Workflow Synthesis (after Tick, Kovács, and Friedler, 2006)
As an example, a set of activities is given by its inputs and outputs in Table 8.1
and represented by P-graph in Figure 8.3.
The P-graph contains the interconnections among the activities. Each feasible
activity network corresponds to a subgraph of the P-graph in Figure 8.3.
A product document, represented by A and B, can be generated by an appropriate network of the activities, provided that the problem has at least one feasible solution. It is important to note that a product can usually be generated by
different types and numbers of activities. When determining the optimal
network for a workflow, all possible networks of each product must be taken
into account.

Activities Input Output

1 C A, F
2 D B
3 E, F C
4 F, G C
5 G, H D
6 H B
7 J F
8 K G
9 K G
10 L H

TABLE 8.1 Plausible Activities for a Workflow Synthesis Problem

FIGURE 8.3 P-graph, where A, . . . , L are the materials and 1, . . . , 10 are the activities.

The number of feasible networks is usually large, so a systematic procedure is needed to determine the optimal network. For this purpose, the P-graph
framework is proposed. Algorithm MSG verified that the P-graph in Figure 8.3
corresponds to the maximal structure, and algorithm ABB provided a ranked set of alternative structures containing the strict optimum and other, mathematically suboptimal networks. One of the 50 solution networks for this
problem is given in Figure 8.4.
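The reduction step at the heart of MSG can be sketched directly on the activity set of Table 8.1. This is a simplified fragment of the algorithm (raw materials are inferred as the materials no activity produces); its outcome is consistent with Figure 8.3 being the maximal structure.

```python
# MSG-style reduction sketch on the activity set of Table 8.1:
# iteratively discard activities whose inputs cannot be supplied by raw
# materials or by surviving activities, then check what is producible.

ACTIVITIES = {                       # activity: (inputs, outputs)
    1: ({"C"}, {"A", "F"}),  2: ({"D"}, {"B"}),
    3: ({"E", "F"}, {"C"}),  4: ({"F", "G"}, {"C"}),
    5: ({"G", "H"}, {"D"}),  6: ({"H"}, {"B"}),
    7: ({"J"}, {"F"}),       8: ({"K"}, {"G"}),
    9: ({"K"}, {"G"}),       10: ({"L"}, {"H"}),
}
PRODUCTS = {"A", "B"}

# Raw materials: everything that no activity produces
produced = set().union(*(outs for _, outs in ACTIVITIES.values()))
raw = set().union(*(ins for ins, _ in ACTIVITIES.values())) - produced

alive = dict(ACTIVITIES)
while True:
    available = raw | set().union(*(o for _, o in alive.values()))
    dead = [a for a, (ins, _) in alive.items() if not ins <= available]
    if not dead:
        break
    for a in dead:
        del alive[a]

producible = set().union(*(o for _, o in alive.values()))
# All ten activities survive and both products remain producible.
```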

8.3.3 Synthesis of Green Biorefineries

P-graph is also employed to examine the contribution that process
synthesis can offer to the development of production based on
renewable resources (Halasz, Povoden, and Narodoslawsky, 2005).
These resources face formidable competition from fossil raw materials.
Because their production is usually decentralized, viable processes
can result only if the structure of the complete value chain is optimized.
In contrast to conventional chemical processes, those using renewable
resources have to account for logistic operations affecting the value chain's structure. Hence, using process synthesis in this application calls for the highest-quality synthesis method. The P-graph
method, as described in Chapter 7, has proven to be extremely efficient
and flexible, so it is well suited for this purpose.

Example 8.2: Green Biorefinery Synthesis (Kromus, 2002)

The components of this example are summarized by the flowsheet shown in
Figure 8.5. The goal of the considered system is the (decentralized) production


[Figure: solution network comprising activities 1, 2, 4, 5, 7, 9, and 10.]


FIGURE 8.4 A solution network for the workflow synthesis problem.

of silage on farms. By starting from silage, two crucial steps are combined:
storage (thus enabling the downstream processes to operate continuously) and
conversion of the carbohydrates in green biomass to lactic acid. In addition,
silage production transforms many proteins into amino acids or peptides
(Povoden, 2002; Koschuh et al., 2004).

The synthesis method requires a comprehensive list of raw materials, intermediates, and possible products. Note that transport
is treated like a processing step: it uses trucks (or tractors), together
with the raw materials (or partially processed juice or press cake) and
available time, in order to derive a realistic logistics pattern.
Consequently, there has to be a plant-specific intermediate material
flow that leaves this process step. The steps listed in Table 8.2 reflect
the necessary logistical handling (i.e., the local and central
converters for various materials) involved in the process.
Once the cost function (including investment and operating
costs) is defined, the synthesis yields the optimal solution to process
silage as a part of the maximal structure; see Figure 8.6 (black lines
show the optimal solution, while the other options are grayed).
One major advantage of process synthesis is that it allows the
designer to apply sensitivity analysis not only to the process itself but
also to the entire value chain. In this example, sensitivity analysis
reveals a remarkable stability of the central biorefinery structure

[Figure: double pressing of silage (35,511 t/y feed matter, 10,000 t/y dry matter, 1,000 t/y lactic acid) into presscake (13,825 t/y feed matter) and silage juice (26,976 t/y feed matter), with downstream concentration and drying to lactic acid 80% D/L, amino acids, and fibres; heating and drying energy demands are indicated.]

FIGURE 8.5 Flowsheet of a green biorefinery: Base-case mass flows and concentrations (DM = Dry Matter; FM = Feed Matter).

despite variations in several factors that affect the optimal solution for silage fractionation and biogas plants within the value chain,
including prices of key products such as electricity and press cake.
Further details on the method and the case study are provided by Halasz, Povoden, and Narodoslawsky (2005).

8.3.4 Azeotropic Distillation Systems

Azeotropic distillation is common in chemical and allied industries.
Many of the existing distillation processes were designed and
developed through extensive trial-and-error efforts. The thermodynamic pinches or boundaries (e.g., azeotropes), distillation boundaries, and boundaries of liquid-liquid equilibrium envelopes
are of critical importance for azeotropic distillation. Moreover, the
compositions of the feed and product streams must be specified in
order to define the synthesis problem for an azeotropic distillation

Operating unit                        Acronym
Mobile press                          MP
Central press                         CP
Local fibers production               LF
Central fibers production             CF
Green biorefinery                     GBR
Local biogas                          LBG
Central biogas                        CBG
Local converter, silage               LCS
Central converter, silage             CCS
Local converter, cake                 LCC
Central converter, cake               CCC
Local converter, juice                LCJ
Central converter, juice              CCJ
Local converter, rest of fibers       LCRF
Central converter, rest of fibers     CCRF
Central converter, rest of juice      CCRJ
Local transport, silage               LTrS
Central transport, silage             CTrS
Local transport, cake                 LTrC
Central transport, cake               CTrC
Local transport, juice                LTrJ
Central transport, juice              CTrJ
Central transport, rest of fibers     CTrRF

TABLE 8.2 List of Process Steps Incorporated into the Synthesis of the Base Case

system. Such information can be represented by a residue curve map (RCM). The RCM of an ethanol-water-toluene system is shown in
Figure 8.7. The points E, W, and T represent the pure components
ethanol (product), water (by-product), and the entrainer, respectively,
while points F and H denote the feed and the ternary azeotrope,
respectively. The whole RCM is partitioned into materials
corresponding to these points; then the lines L1, . . . , L13 demarcate
the areas A1, . . . , A6.
A set of operating units for this process can be represented in a
P-graph; see Figure 8.8. The maximal structure and each solution
structure can be generated by algorithms MSG and SSG, respectively.
The Mathematical Programming model of a process network includes
constraints on operating units (i.e., mathematical models of those
units) as well as constraints on materials (e.g., mass balance
constraints). Mathematical models of mixers, separators, and decanters
are linear, including their mass balances, because they are based on
component flow rates. Thus, the Mathematical Programming model
of process networks involving mixers, separators, and decanters
gives rise to an LPR problem. Each solution structure satisfying







FIGURE 8.6 Maximal and optimal structure for base-case synthesis.

FIGURE 8.7 Residue curve map (RCM) of the ethanol-water-toluene system (A = Area; L = Line).

FIGURE 8.8 P-graph representing the structure of a distillation system.


combinatorial constraints has been generated by algorithm SSG and evaluated by linear programming (Feng et al., 2003).

8.4 Optimal Synthesis of Energy Systems

8.4.1 Simple Heat Integration
A methodology for combining PNS and HENs for integrated
synthesis has been presented by Nagy et al. (2001). It employs the
hP-graph, a special graph that incorporates both operating units and
heat exchangers. Figure 8.9 shows an example flowsheet and its
hP-graph representation.
FIGURE 8.9 Flowsheet (left) and hP-graph (right) representations of a heat-exchanging system.

In an hP-graph, the node for a heater is indicated by a bar with a solid lower half and that for a cooler by a bar with a solid upper half. If the content of an operating unit is heated or cooled by latent heat, then its node is extended to the left by an appropriate heat exchanger. For example, the flowsheet in Figure 8.9 shows that
operating unit 3 is heated with hot latent heat. Suppose that part of a
material stream fed to an operating unit requires temperature
modulation but that the remaining part, which feeds another
operating unit, does not. In this case, the former is diverted through
a heat exchanger and the latter is directed to another operating unit
(as with the two streams of M6 in the figure).
The ABB algorithm determines whether any of the operating
units should be included in the optimal structure. Applications of
the PNS algorithms extended to the synthesis of combined process
and HENs amply demonstrate the efficacy of the P-graph approach.

8.4.2 Optimal Retrofit Design

A novel holistic method based on the P-graph approach has been
proposed by Halasz, Povoden, and Narodoslawsky (2002) and Liu
et al. (2006) for process retrofitting. Unlike conventional approaches,
the proposed methodology resynthesizes the entire process by
considering the enhanced performance of the operating units. As
such, this approach can account for all possible outcomes, including
the networks inevitable restructuring. The method employs a
P-graph implementation originally devised for synthesizing
grassroots processes. With the combinatorial feasibility of most
operating units largely predetermined, the approach detects any

necessary retrofit changes in the network structure. For example, the retrofit of a conventional downstream process for the biochemical
production of butanol is accomplished by incorporating newly
identified adsorbing units, whose characteristics are summarized in
Table 8.3. Note that each of the processing units 24 and 25 is represented by two operating units in the P-graph, that is, 24-1, 24-2 and 25-1, 25-2, respectively.
The P-graph of the problem's maximal structure is presented in
Figure 8.10. It includes a gas stripper, an extractor, 27 simple
distillation columns, two azeotropic distillation units, a centrifuge,
and four adsorption columns.
The objective is to minimize the operating cost (in terms of its present value) for all operating units in the process flowsheet.
The cost for the three operating units is estimated heuristically. The
optimal flowsheet is illustrated in Figure 8.11.
This approach to retrofitting accounts for the fact that any change implemented in downstream processing systems, especially at an upper segment, will tend to propagate throughout the system as a function of each system's unique sequence of structural features. The
approach is also applicable to many other chemical processes that
share the same structural features. With this methodology, a set of
optimal and near-optimal retrofit flowsheets can be generated and
ranked in terms of their costs. One (or more) of the options in this set
may be seen as infeasible or unsustainable when the retrofitted
system is further assessed by taking into account additional criteria
and constraints, including stability and controllability as well as
environmental, societal, and regulatory constraints.

Operating units and subunits              Cost [10^3 $]            Annual cost [10^3 $/y]
No.  Subunit  Equipment    Type           Capital   Annualized     Operating   Total
23   -        C1           Centrifuge     9,240     3,080          1,168       4,248
24   24-1     B1           Adsorption     25,107    8,369          871         9,248
24   24-2     B2           Adsorption
25   25-1     B3           Adsorption     3,806     1,269          132         1,401
25   25-2     B4           Adsorption

TABLE 8.3 Characteristics, Including Costs of Operating Units, Identified by the P-graph Method (after Liu et al., 2006)

FIGURE 8.10 P-graph representation of the maximal structure for producing butanol, ethanol, and acetone with the inclusion of adsorption.

[Figure: optimal flowsheet with gas stripper G1, distillation columns D21 and D22, and adsorption columns B1, B3, and B4; legend: Acetone A kg/h, Butanol B kg/h, Ethanol E kg/h, Water W kg/h.]

FIGURE 8.11 Optimal flowsheet for the example considering adsorption.

8.5 Optimal Scheduling for Increased Throughput, Profit, and Security

8.5.1 Maximizing Throughput and Revenue
Majozi and Friedler (2006) developed an effective search algorithm
for determining the globally optimal throughput, revenue, or profit
over a predefined time horizon in multipurpose batch plants on the
basis of the S-graph framework. To demonstrate its performance, a
case study from a real-life multipurpose batch facility is presented.


Example 8.3: Optimal Scheduling

The case study is taken from a multinational pharmaceuticals facility that
produces lotions, shampoos, conditioners, and various creams. The problem
features a non-intermediate storage (NIS) policy, and the processes involve
mixing and packaging. Mixing occurs in four mixing vessels (V1, V2, V3, and
V4), and packaging occurs in three packing lines (P1, P2, and P3). Because the
stirrers in mixing vessels are of different designs, mixing times vary according
to the vessel used. Table 8.4 shows the duration of mixing for each product in each vessel; the vessels have a capacity of about 3 t each. The table also lists
the economic contribution made to the company revenue or profit by selling
a unit of each product; shampoos have the highest economic contribution.
The packing duration for each product is 12 h, regardless of which packing
line is employed. The objective in this case study is to maximize the overall
economic result for a 24-h period. The S-graph for the recipe for the products
manufactured in this facility is given in Figure 8.12, where the sets of candidate
equipment units for performing tasks 1, . . . , 15 are defined by sets U1 = {V1, V2,
V4}, U2 = {P1, P2, P3}, U3 = {V1, V2, V3}, U4 = {V3}, U5 = {V2, V3}, and U6 = {V1,
V2, V4}.
The global optimal solution corresponds to two batches of Cream 2 and
one batch of Shampoo, which yields revenue of 9.5 cost units. The schedule
corresponding to the global optimum is shown in Figure 8.13.
Two advantages that this approach has over its Mathematical Programming
counterparts are: (1) it guarantees global optimality, and (2) no manipulation of the time horizon is required; in particular, it is unnecessary to presuppose time points that discretize the time horizon into equal (or unequal) intervals. For this reason, the technique qualifies as a true continuous-time approach.
8.5.2 Heat-Integrated Production Schedules

Many algorithmic and heuristic methods have been developed for
solving Heat Integration problems in continuous processes: Pinch
Technology (Linnhoff et al., 1982), superstructure-based mixed

Product       Economic contribution    Production time in mixing vessel [h]
              [cost unit/batch]        V1     V2     V3     V4
Cream 1 2 10 5 N/A 5
Cream 2 3 12 10 7 N/A
Conditioner 1 N/A N/A 12 N/A
Shampoo 3.5 N/A 8 13 N/A
Lotion 1.5 10 6 N/A 9

TABLE 8.4 Scheduling Data for the Case Study of Example 8.3

FIGURE 8.12 S-graph representing the recipe for the case study of Example 8.3.

[Figure: Gantt chart over a 24-h horizon; mixing tasks on vessels V1-V3 and packing tasks on lines P1-P3, with time ticks at 7, 8, 12, and 19 h.]

FIGURE 8.13 Globally optimal schedule for the case study of Example 8.3.

integer programming (Douglas, 1988; Biegler, Grossmann, and Westerberg, 1997), and integration with PNS (Nagy et al., 2001). In
contrast, scheduling and Heat Integration of batch processes are
both complex optimization problems and quite different in nature.
The problems of scheduling and Heat Integration could be solved
sequentially (in either order). Yet because the solution of one problem
will affect the other, the result of this simplistic, sequential approach
is usually poor. A better solution may result from an integrated
consideration of scheduling and Heat Integration. Since few methods existed for solving this integrated problem, the effective design and operation of integrated batch systems required the development of a new method (Adonyi et al., 2003). The goal was to
operate simultaneously those tasks that involve potential heat
exchange without compromising the quality of the scheduling.


Example 8.4: Heat-Integrated Production Schedules

The recipes of products A, B, and C are shown in Table 8.5, and the corresponding
S-graph of the recipe is given in Figure 8.14. The parameters of the heat streams
are listed in Table 8.6. The available hot and cold utilities are at 200°C and 10°C. The heat transfer coefficient between a hot and a cold stream is 1500 W/(m²·°C), and the area of a heat exchanger unit is 3 m².

Product A Product B Product C

Task Eq. Time [h] Eq. Time [h] Eq. Time [h]
1 E1 5 E1 5 E1 6
E2 5 E2 5 E2 6
2 E3 4 E5 4 E5 3
E4 4 E6 4 E6 3
3 E5 4 E7 4 E7 4
E6 4 E8 4 E8 4
4 E1 5
E2 5

TABLE 8.5 Recipes for the Products in Example 8.4

FIGURE 8.14 S-graph of the recipe for the products in Example 8.4.

Name   Type   Initial temp. [°C]   Final temp. [°C]   Heat [MJ]   Product/Task
c1 Cold 40 120 400 A/2
h1 Hot 140 50 200 B/3
c2 Cold 80 130 100 B/4
h2 Hot 150 40 300 C/2

TABLE 8.6 Parameter Values for the Heating and Cooling Requirements in
Example 8.4
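The 3100 MJ no-integration utility figure quoted for this example follows directly from Table 8.6 and the batch counts in the recipe S-graph (four batches of A, three of B, two of C):

```python
# Utility demand for Example 8.4 when every heating and cooling duty
# is met by utilities alone: per-batch duties (Table 8.6) times the
# number of batches of each product in the recipe S-graph.

DUTIES = {          # product: total hot + cold duty per batch [MJ]
    "A": 400,       # c1: 400 MJ heating
    "B": 200 + 100, # h1: 200 MJ cooling, c2: 100 MJ heating
    "C": 300,       # h2: 300 MJ cooling
}
BATCHES = {"A": 4, "B": 3, "C": 2}   # batches in Figure 8.14

total_utility = sum(DUTIES[p] * n for p, n in BATCHES.items())
# -> 3100 MJ, matching the no-integration figure quoted in the text
```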

[Figure: Gantt chart on equipment units E1-E8 over a 36-h horizon, showing the batches of products A, B, and C.]

FIGURE 8.15 Gantt chart of the optimal solution for Example 8.4.

If the heating and cooling duties are satisfied by utilities, then the minimal
makespan is 33.1 h with 3100 MJ utility. Extending the upper bound for the
makespan to 36 h reduces the required utility to 1100 MJ. Figure 8.15 displays
the Gantt chart of the optimal solution.

8.6 Minimizing Emissions and Effluents

The task of designing a complete energy system involves significant combinatorial complexity, for which integer programming procedures are not efficient. The P-graph framework and its associated algorithms
are capable of efficiently handling exactly the type of complexity that
is inherent to network optimization, and they appear to be some of
the best tools for solving this task. The P-graph approach can readily
evaluate technologies in their early stages of development, such as
fuel-cell combined cycles (FCCCs) based on molten carbonate and
solid oxide fuel cells (Varbanov and Friedler, 2008).

Example 8.5: FCCC Systems for Reducing Carbon Footprint

This example presents a procedure for evaluating energy conversion systems
involving FCCC subsystems that use biomass and/or fossil fuels; see Figure 8.16.
The procedure provides a tool for evaluating trends in CO2 emission levels
and the economics of such systems. The significant combinatorial complexity
involved is efficiently handled by P-graph algorithms. Promising system
components are evaluated by a methodology for synthesizing cost-optimal
FCCC configurations that account for the carbon footprint of the various
technology and fuel options using the P-graph framework.
The efficiency of an FCCC system varies with the fuel-cell (FC) operating
temperature, the type of bottoming cycle, and the degree of cycle integration
(see Varbanov et al., 2007). High-temperature fuel cells can be combined with
gas turbines, steam turbines, or both; however, combining all three yields only
marginal improvements. The main reason is that energy in the FC exhaust can
only be shared by the bottoming cycles, and typically this potential for energy
generation is most fully utilized by a steam or gas turbine alone. Hence the
involvement of more than one bottoming cycle cannot substantially increase
overall efficiency, although it can present capital cost trade-offs. Figure 8.17
shows an FCCC system represented by a conventional block-style diagram and
a P-graph fragment.
The synthesis of a processing network, such as the energy conversion system
considered here, requires that the designer choose the best solution from a

[Figure: biomass and fossil fuels feed biofuel processing (gasification or digestion) and energy conversion (FCCCs, boilers), delivering power and heat.]

FIGURE 8.16 FCCC system boundary and processing steps.



FIGURE 8.17 Flowsheet (block-style) and P-graph representations of an FCCC system (F: fuel; FCCC: fuel cell combined cycle unit; Q: heat; W: power).

number of options. This optimization task may have several different objectives.
The most obvious are maximizing the system profit (minimizing its cost) and
minimizing the amount of CO2 emissions. Although it is mathematically
possible to define a multiobjective criterion to be optimized, using profitability
alone seems most coherent with the logic of a market economy because profit
drives the behavior of companies. Therefore, in this discussion the system profit
is used as the objective (to be maximized); CO2 emissions are then used as an
additional criterion during the analysis stage.
The materials and streams for the system considered in this example are
listed in Table 8.7. The waste products are assigned negative prices, denoting
that they generate costs for the system rather than revenue. Other performance

Stream  Type                   P-graph           Description                                Price
BM      Biomass                Raw material      Agricultural residues                      Varies
BG      Clean biofuel          Intermediate      Biogas suitable for utilization as a fuel
BR      Waste / By-product     Product / Output  Biomass residues (solid remainder          -10 /t
                                                 from the biomass after digestion)
CO2     Waste, greenhouse gas  Product / Output  CO2 emissions                              Varies
FRT     Useful by-product      Product / Output  Fertilizer obtained as a by-product        50 /t
                                                 from the anaerobic digester
NG      Fossil fuel            Raw material      Natural gas                                36.8 /MWh
PR      Waste / By-product     Product / Output  Particulates left from cleaning the        -10 /t
                                                 synthesis gas
Q40     Steam                  Intermediate      Steam at P = 40 bar(a)
Q5      Steam                  Product / Output  Steam at P = 5 bar(a) to satisfy user      30 /MWh
                                                 demands
RSG     Intermediate           Intermediate      Raw synthesis gas
SG      Clean biofuel          Intermediate      Clean synthesis gas suitable for use
                                                 as a fuel
W       Power product          Product / Output  Electrical power to satisfy user demands   100 /MWh

TABLE 8.7 Materials and Streams for Example 8.5

186 Chapter Eight

and economic data specifications, which provide the basis for appropriate
economic evaluation of the designs, are given in Varbanov and Friedler (2008).
The P-graph tools can be used to generate different solutions based on the
objectives of interest (operating cost, CO2 emissions) and market conditions.
The results show that systems of this type that employ renewable fuels are
economically viable for a wide range of economic conditions; this finding is due
mainly to the high energy efficiency of the FC-based systems. Figure 8.18 shows
the P-graph for the base case in which operating cost is minimized.

8.7 Availability and Reliability

The significance of waste management systems has increased in
recent years because of the growing problems of waste management
chains affecting not only the environment, but also the daily lives of
millions of people. Several promising approaches have appeared,
including the RAMS (reliability, availability, maintenance, and
safety) software for modeling waste management systems. This
approach was analyzed and evaluated thoroughly by Sikos and
Kleme (2009a, 2009b, 2010).
In today's technological world, most waste management plants
depend on the continuous operation of a wide array of complex
machinery and equipment to sustain development, safety, human
health, and economic welfare. Plant operators expect a wide variety

FIGURE 8.18 P-graph solution for the energy system of Example 8.5.
of appliances to function without unexpected problems or major
breakdowns. If these equipment units fail, the consequences can be
catastrophic: contamination, smog, acid rain, injury, loss of life,
production cutbacks, amassed garbage heaps, energy losses, and so
on. Catastrophic failures would also entail substantial added costs.
For these reasons, solid waste management is a matter of serious
concern, which in some cases (waste collection in Naples, Italy) has
even led to a change in government. The models that have been
developed to manage waste-producing processes are of two types:
optimization models deal with specific aspects of waste-related
problems; in contrast, integrated waste management models focus on
sustainability. The latter type can be subdivided into three main
subcategories: models based on costbenefit analysis, models based
on life-cycle inventory, and multicriteria models (Morrissey and
Browne, 2004).
However, there is an element of uncertainty or risk associated
with most environmental decisions. Multicriteria techniques can
be extended to consider reliability issues along the entire waste
management chain and need not be limited to comparing the environmental impacts of different waste treatment methods. As the
complexity of unit arrangements increases, risk assessment becomes
more complicated. Risk is a measure of the plant's ability to carry out its specific operating mission reliably. The expected return on related investments is a function of the plant equipment's capacity, which is defined in terms of reliability, availability, durability, and maintainability.
Reliability engineering in waste management addresses all aspects of the waste life cycle, from its collection and treatment processes through the energy generation lifetime, including
maintenance support and availability. The concepts of reliability,
maintainability, and availability can be quantified with the aid of
reliability engineering principles and life data analysis (Kececioglu,
2002). A significant fraction of any system's operating cost is due to
unplanned system stoppages for unscheduled repair of components
or the entire system. One method of mitigating the cost (and impact)
of such failures is to improve the system's reliability and availability.
Of course, improvements in reliability that are made by the supplier
early in the equipment's life cycle may well result in additional
development cost being passed on to the customer in the form of
higher equipment acquisition cost. However, this cost increase can be
more than offset by the operational cost reduction associated with
improved reliability and increased uptime, which also improve
productivity. Note that, in the context of waste management,
reliability, availability, and maintenance have specialized meanings.
Reliability is the probability that a system will perform satisfactorily
for at least a given period of time t when used under stated conditions
(Kuo and Zuo, 2003).

Availability is the probability of the successful operation of a system in a determined period of time. It can be calculated as the ratio between an equipment's lifetime and the total time between failures (de Castro and Cavalca, 2006):

A = Lifetime / Total time = Lifetime / (Lifetime + Repair time) = MTBF / (MTBF + MTTR)      (8.1)

where MTBF is the Mean Time Between Failures (the inverse of the
failure rate) and MTTR is the Mean Time To Repair (the inverse of the
repair rate).
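Equation (8.1) is easy to sanity-check numerically. The short sketch below uses hypothetical MTBF and MTTR values (it is an illustration, not a tool from the text):

```python
def availability(mtbf: float, mttr: float) -> float:
    """Steady-state availability, Eq. (8.1): A = MTBF / (MTBF + MTTR)."""
    return mtbf / (mtbf + mttr)

# Hypothetical unit: fails on average every 500 h and takes 20 h to repair.
print(availability(500.0, 20.0))  # ~0.962, i.e., about 96.2% uptime
```

With MTTR = 0 the expression reduces to 1, as expected for a unit repaired instantly.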
In addition, there are three frequently used terms defined
by Ireson, Coombs, and Moss (1996) and elsewhere: inherent,
achieved, and operational availability. The expression for inherent
availability is

Ai = MTBF / (MTBF + MTTR)      (8.2)

for achieved availability,

Aa = MTBM / (MTBM + M)      (8.3)

and for operational availability,

Ao = MTBM / (MTBM + MDT)      (8.4)

where MTBM is the mean time between maintenance actions, M is the mean active maintenance time, and MDT is the mean downtime.
Further specifications also exist, for example those given by
Hosford (1960):

Pointwise availability is the probability that a system will be
able to operate within tolerances at a given instant of time.

Interval availability is the expected fraction of a given interval
of time that a system will be able to operate within tolerances.

A special type of interval availability, called limiting interval
availability, was defined by Barlow and Proschan (1996); it is
the expected fraction of time in the long run that a system
operates satisfactorily.

There are even more specific definitions. For example, the availability
of a redundant system represented by a series of parallel subsystems is
formulated by de Castro and Cavalca (2006) as

As = ∏(i = 1..n) [1 - (1 - Ai)^yi]      (8.5)

where Ai is the availability of the components of subsystem i, yi is
the number of redundant components in subsystem i, and n is the
number of subsystems. Comparing
downtimes is another, intuitive way to express availability.
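Equation (8.5) is straightforward to evaluate. The sketch below, with made-up subsystem data, multiplies the availabilities of the parallel subsystems connected in series:

```python
def series_parallel_availability(subsystems):
    """Eq. (8.5): As = product over i of [1 - (1 - Ai)**yi], where
    subsystem i has yi identical redundant components of availability Ai."""
    As = 1.0
    for Ai, yi in subsystems:
        As *= 1.0 - (1.0 - Ai) ** yi
    return As

# Hypothetical chain: subsystem 1 (Ai = 0.90, duplicated, yi = 2) in
# series with subsystem 2 (Ai = 0.95, a single component, yi = 1).
print(series_parallel_availability([(0.90, 2), (0.95, 1)]))  # ~0.9405
```

Duplicating the first subsystem lifts its availability from 0.90 to 0.99, so the series availability is 0.99 * 0.95, about 0.94.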
Maintenance covers those activities undertaken after a system is
in the field in order to keep it operational or restore it to operational
condition after a failure has occurred (Ireson, Coombs, and Moss,
1996). There are several classifications of maintenance, the most
important of which are listed as follows:

Breakdown maintenance: An item of the system is repaired each
time it breaks down (Mechefske and Wang, 2003).

Condition-based maintenance (CBM): The critical components
are monitored for deterioration, and maintenance is carried
out just before a failure occurs (Mechefske and Wang, 2003).

Preventive (scheduled) maintenance: The plant is stopped at
intervals, often annually, and is partly stripped and inspected
for faults (Mechefske and Wang, 2003).

Reliability-centered maintenance (RCM): A procedure to identify
the preventive maintenance requirements of complex systems
(Cheng et al., 2008).

Maintainability is the measure of an item's ability to be retained in
(or restored to) a specified condition when maintenance is performed
by personnel having specified skill levels, using prescribed
procedures and resources, at each prescribed level of maintenance
and repair (Ireson, Coombs, and Moss, 1996). De Castro and Cavalca
(2006) defined it as the ability to renew a system or component in a
determined period of time, enabling it to continue performing its
designed functions.
For further information on reliability, availability, and maintenance of waste management systems, see Sikos and Klemeš (2010a).
Another difficulty with large systems is that troubleshooting
usually requires several problems to be solved, often simultaneously.
Data collection is also difficult because of the variety of input data;
the characteristics (e.g., type) of waste; production that changes
seasonally, weekly, and also randomly or unpredictably (due, e.g., to
weather changes, price changes that lead to different consumption
priorities, or unpredictable natural disasters such as volcanic eruptions);
and changes related to special venues, where mass gatherings such as
a football match or rock concert can cause substantial changes. The
associated dangers and hazards mean that
waste materials have to be appropriately cared for. A wide variety of
possible failure causes have to be identified. There are many other
issues to consider, too. Waste management systems are quite complex,
containing both serial and parallel subsystems. They include several
equipment units, each with its own degree of availability and
reliability. Scheduled outages need to be differentiated from the
downtimes caused by unexpected faults. Maintenance actions,
including scheduled outages, should be updated as needed to cope
with changes that occur within the system. The recommendations of
reliability engineers inform the designers task of effectively modeling
and optimizing all the specified objectives.
Many things need to be improved in waste management. These
include policy focus and popular opinion, which should both pay
more attention to the importance of this process and its effects.
Landfill space is decreasing while solid waste is increasing, so public
and private landfills compete for municipal clients to ensure the
capital required to extend landfill life while coping with new permits.
The situation varies in different countries and geographic regions, so
investigation is needed to devise appropriate solutions for specific cases.

8.8 Summary
This chapter presented several examples of combined PI and
optimization. The main focus was on exploiting the advantages of
graph-theoretic (P-graph and/or S-graph) frameworks. These
methods are well tested and have demonstrated their efficiency
across many applications.
Software Tools

9.1 Overview of Available Tools

Process Integration, modeling, and optimization problems in
chemical engineering are complex in terms of scale and relationships.
Solving these problems requires the application of information
technology and computer software, which provide fast and, as far as
possible, accurate solutions via a user-friendly interface. Software
tools have been widely used for process simulation, integration, and
optimization, and this has helped process industry companies
achieve their operational goals. A large number of efficient tools are
available, each with its particular advantages. The aim of this
chapter is to describe these tools based on comprehensive experience
with them and their application to Process Integration, modeling,
and optimization.
The online encyclopedia Wikipedia (2009) presents an extensive
list of available software tools for the simulation of material and energy
balances of chemical processing plants. Examples include: (1) ASCEND;
(2) Aspen HYSYS by Aspen Technology; (3) ASSETT and D-SPICE
by Kongsberg Process Simulation; (4) CHEMCAD; (5) COCO simulator;
(6) Design II by Winsim; (7) EcosimPro; (8) Environment for Modeling,
Simulation and Optimization (EMSO); (9) Dymola; (10) GIBBSim;
(11) gPROMS by PSE Ltd; (12) OLGA by SPT Group (Scandpower);
(13) Omega by Yokogawa; (14) OpenModelica; (15) Petro-SIM; (16) ProMax;
(17) SimSci-Esscor DYNSIM & PRO/II by Invensys; (18) SysCAD;
(19) UniSim Design & Shadow Plant by Honeywell; and (20) VMGSim.
However, this selection cannot be fully comprehensive, and it is limited
to process simulation software. A wider field should be covered, and in
more detail; doing so is the aim of this chapter.

9.2 Graph-Based Process Optimization Tools

9.2.1 Process Network Synthesis Solutions
Process network synthesis (PNS) Solutions is a software package
designed to solve problems in PNS. Process synthesis involves

192 Chapter Nine

finding the optimal structure of a process system, and this includes
determining optimal types, configurations, and capacities of the
units that perform various operations within the system. The details
of the process graph (P-graph) approach were presented in Chapters 7
and 8. Process network synthesis is sometimes called flowsheeting
or flowsheet optimization because it involves the creation of a
flowsheet for the industrial process under consideration.
In order to solve a PNS problem, the designer must examine all
feasible structures and select the best among them. A structure's
optimality can be assessed in terms of cost, profit, efficiency, and so
on. When designing an optimal process network, both structural
information (which processing units are connected, and how) and
sizing information (how much is produced from a given material)
are needed.
The questions addressed by PNS Solutions are as follows:
(1) How are the building blocks of a process network best represented?
(2) What are the possible solution structures of the problem? (3) What
is the maximal structure (which includes all solution structures)?
(4) What is the optimal structure?
The maximal structure comprises all the combinatorially feasible
structures capable of yielding the specified products from the
specified raw materials. Certainly, the optimal network or structure
is among these feasible structures. The maximal structure generation
(MSG) algorithm produces a P-graph (see Figure 9.1) in which each

FIGURE 9.1 Starting state of the MSG algorithm (PNS Solutions).

Software Tools 193
material and operating unit appears exactly once. In the composition
phase, the nodes are linked stepwise, layer by layer, starting from the
shallowest end (i.e., the final-product end) of the remaining input
structure. The algorithm proceeds by assessing whether any of the
linked nodes violates any of the axioms described in Chapter 7.
The structure generation is performed transparently, and the
maximal structure that results is the input for the Solution Structures
Generation (SSG) algorithm. If a material has to be produced, then
the SSG algorithm generates all possible ways for its production. For
example, if M1 can be produced by O2 or by O3 then the possibilities
include production by O2 alone or by O3 alone or by using both.
Once an operating unit is included, its input materials must be
produced as well, and so forth. Materials are selected in a specific
order. The parent-child relation between steps ensures that the
materials are selected by the process according to this order.
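The enumeration logic just described can be sketched in a few lines of Python. This is only a toy version of the SSG idea, not the PNS Solutions implementation: it assumes each operating unit produces exactly one material, and the unit O4 and materials A and B are hypothetical additions to the M1/O2/O3 example from the text:

```python
from itertools import combinations

# Toy problem: M1 can be made by O2 (consuming A) or O3 (consuming B);
# A is made by O4; B is a raw material. O4, A, and B are hypothetical.
producers = {"M1": ["O2", "O3"], "A": ["O4"]}
inputs = {"O2": ["A"], "O3": ["B"], "O4": []}
raw = {"B"}

def solution_structures(pending, chosen=frozenset()):
    """Yield sets of operating units that can produce all pending materials."""
    # A material is covered if it is raw or at least one producer is chosen.
    pending = [m for m in pending
               if m not in raw and not any(u in chosen for u in producers[m])]
    if not pending:
        yield chosen
        return
    m, rest = pending[0], pending[1:]
    for r in range(1, len(producers[m]) + 1):         # every nonempty subset
        for subset in combinations(producers[m], r):  # of m's producers
            needed = rest + [x for u in subset for x in inputs[u]]
            yield from solution_structures(needed, chosen | set(subset))

structures = {frozenset(s) for s in solution_structures(["M1"])}
print(sorted(sorted(s) for s in structures))
# [['O2', 'O3', 'O4'], ['O2', 'O4'], ['O3']]
```

Larger problems can yield the same structure from different recursion paths, which is why the results are collected into a set of frozensets.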
The SSG algorithm yields all the combinatorially feasible solution
structures of a given problem. Unfortunately, the number of feasible
structures at this stage is often too large to be enumerated explicitly.
Therefore, the Accelerated Branch-and-Bound (ABB) algorithm is
used to determine the optimal structure without generating all the
possible solutions. Input to this algorithm includes not only the
structural relationships between materials and operating units but
also such additional information as the costs of each raw material,
the fixed and proportional costs of operating units, and the constraints
(if any) on the quantity of materials and the capacity of operating units.

9.2.2 S-Graph Studio

S-Graph Studio is a software package that enables the user to design
batch processes and to optimize them via various optimization
methods (S-Graph, 2009). The program also allows scheduling
problems to be defined using graphical tools. It has a modular
architecture, so different solvers can be used with the program.
S-Graph Studio uses the industry's standard file format: BatchML (as
defined by the ISA-88 standard of the World Batch Forum), which is
used to exchange information between industry sites and plants as
well as for other purposes. The Excel file format can also be used for
both input and output. The software includes a solver that utilizes
the S-graph methodology developed at the University of Pannonia
(S-Graph, 2009).
One of the main goals of batch process optimization is to
minimize the makespan, that is, to find the shortest time in which a
process can be completed using available resources. S-Graph Studio
can be used to define batch processes in terms of the tasks to be
performed, the available equipment units, and task completion times
(as a function of the equipment units used). This information is
necessary and sufficient for minimizing the makespan of a process.
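As a drastically simplified illustration of the makespan objective (real S-Graph scheduling also handles task sequencing and intermediate-storage policies), a brute-force search over unit assignments for a hypothetical single-stage problem can be sketched as:

```python
from itertools import product

def min_makespan(durations, n_units):
    """Brute-force single-stage scheduler: durations[j][u] is the processing
    time of task j on equipment unit u. Every assignment of tasks to units
    is enumerated and the one with the smallest makespan is returned."""
    best = (float("inf"), None)
    for assign in product(range(n_units), repeat=len(durations)):
        load = [0.0] * n_units                 # total busy time per unit
        for task, unit in enumerate(assign):
            load[unit] += durations[task][unit]
        best = min(best, (max(load), assign))  # makespan = busiest unit
    return best

# Three hypothetical tasks, two units (times in hours):
durations = [[3, 4], [2, 5], [4, 2]]
print(min_makespan(durations, 2))  # makespan 5: tasks 0, 1 on unit 0, task 2 on unit 1
```

Exhaustive enumeration like this explodes combinatorially, which is exactly why dedicated solvers such as the S-graph framework are needed for realistic problems.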

The results are presented graphically (by Gantt chart or by schedule
graph) but can be exported to various formats for further use. Issues
addressed by S-Graph Studio include how to represent a scheduling
problem and how to generate the optimal schedule structure in the
cases of unlimited and no intermediate storage. The schedule
associated with the selected solution is displayed as a Gantt chart
(Figure 9.2) or a schedule graph, both of which clearly show the unit
task assignments and their associated timing.

FIGURE 9.2 The Gantt chart and schedule graph of a solution (S-Graph Studio).

9.3 Heat Integration Tools

9.3.1 SPRINT
SPRINT is the software package used to design energy systems for
individual processes on a site (SPRINT, 2009). This software tool
provides energy targets and optimizes the choice of utilities for an
individual process. It also performs Heat Exchanger Network (HEN)
design automatically for the utilities that are selected. Both new
design and retrofit can be carried out automatically, though the
designer maintains control over network complexity. For example,
several retrofit modifications can be generated automatically and
then presented to the designer one at a time, so that the number of
modifications is minimized, and the final decision on each
modification is left to the designer. SPRINT can also be used for
HEN operational optimization tasks. The SPRINT and the STAR
programs (see Section 9.3.3) are linked by common data structures,
which facilitate their interaction (e.g., the same files can be used, and
no manual data transfer is required between the programs).
SPRINT is used in both academic and industrial settings for the
following applications: (1) optimizing the choice and load of utilities
for individual processes; (2) automatic design of new HENs;
(3) automatic retrofit design of HENs while using a minimum number
of modifications; (4) automatic design for multiple utilities (new
design and retrofit); (5) interactive network design; (6) simulation of
networks via simple models; (7) targeting minimum energy
consumption; (8) network optimization; and (9) assessing network
operability. User interaction with the network structure is through
the graphical editor shown in Figure 9.3.

9.3.2 HEAT-int
HEAT-int (2009) is a product of Process Integration Ltd. This program
is used to improve the energy performance of individual processes
on a site. It is the next-generation development of SPRINT software
(by a related team of developers) to a commercial standard, and it
offers more user-friendly interface features. The areas in which
HEAT-int is applied are similar to those listed previously for SPRINT.

9.3.3 STAR
STAR is a software package for the design of site utility and
cogeneration systems (STAR, 2009); see Figure 9.4 for an example
user interface. This program analyzes all the interactions between
site processes and the steam system, steam turbines, gas turbines
(with auxiliary firing options), the boiler house, local fired heaters,
and cooling systems. The analysis is used to reduce energy cost and

FIGURE 9.3 SPRINT software interface (SPRINT, 2009).

FIGURE 9.4 STAR's graphical user interface (STAR, 2009).

to plan infrastructure investment when operational changes are
anticipated, changes that may include the replacement of energy
equipment. STAR can also be used to investigate flue gas emissions,
which should often be reduced to meet tighter environmental
regulations. The STAR package incorporates several tools, as
described next.
Utility system optimization: A given utility system configuration
incorporates important degrees of freedom for optimization.
Multiple boilers with different efficiencies and different fuels in
addition to multiple back-pressure steam turbines, condensing
turbines, gas turbine heat recovery steam generators, and letdown
valves provide optional heat flow paths that can all be exploited for
significant cost reduction. STAR has a utility system optimization
facility that allows existing utility systems to be optimized. It can
also be used to plan infrastructure de-bottlenecking and investment.
Top-level analysis: When studying an existing site, it is important
to understand how its infrastructure influences the degrees of
freedom to make changes as well as the economic consequences of
those changes. These considerations are addressed by STAR's top-level
analysis, whose results ensure that the designer does not waste
time and money pursuing changes that are not viable (structurally or
economically) in the overall site context.
Process energy targets: Even though the primary function of STAR
is the analysis of utility systems, it includes tools for setting energy
targets and selecting utilities for individual processes. Using these
tools allows the picture of the Total Site to be built up from the
individual processes within STAR.
Total Sites: STAR can produce profiles that represent the heating
and cooling requirements of the Total Site. This allows targets to be
set for fuel consumption in the boilers, cogeneration potential, and
energy costs. The Site Profiles can be based either on the full heat
recovery data or, more simply, on data for the utility exchangers
Boiler systems and steam turbine systems: Using STAR allows the
designer to establish optimal targets for the amount of steam
generated by boilers and gas turbines (with auxiliary firing options).
A gas turbine model enables the study of different gas turbine
arrangements. Steam turbines are a part of most utility systems,
serving to generate power or as allocated drivers for process
machines. STAR software incorporates the design of steam turbine
networks and the analysis of their operability.
Emissions: By relating process energy requirements to the supply
of utilities, it is possible to target the amount of fuel required for the
utility system. Such targets can be combined with information on the
fuel and type of combustion device to generate targets for emissions
of CO2, SOx, NOx, and particulates. It is then possible to explore the
various options for reducing these emissions.

9.3.4 SITE-int
Like HEAT-int (see Section 9.3.2), SITE-int (2009) is a product of
Process Integration Ltd. SITE-int is a state-of-the-art software package
for the design, optimization, and integration of site utility systems in
process industries. Its main features include methods to: (1) model
and optimize site utility systems; (2) minimize operating costs for
existing systems without modification; (3) target cogeneration
potential; (4) optimize site steam pressures and loads; (5) minimize
site energy costs through system modifications; (6) determine the
true benefit from saving energy in the individual processes; (7) reduce
greenhouse gas emissions from the site; and (8) create partial-load
models of utility system components from plant operating data using
regression functions and data reconciliation functions.

9.3.5 WORK
WORK is the software package used for the design of low-temperature
(subambient) processes (WORK, 2009). Low-temperature processes
require heat rejection to refrigeration systems. As a result, the
operating costs for such processes are usually dominated by the cost
of power to run the refrigeration system. Complex refrigeration
systems, including cascade and mixed refrigerant systems, can be
analyzed using WORK. For mixed refrigerants, WORK can be used
to optimize refrigerant composition. The software enables the user
to: (1) understand complex refrigeration systems; (2) target minimum
shaft work for low-temperature cooling duties; (3) optimize the
number and temperatures of refrigeration levels; (4) target minimum
shaft work for cascade refrigeration systems; (5) target minimum
shaft work for mixed refrigerant systems; and (6) determine the
optimum composition for mixed refrigeration systems. Three of
these features are discussed in more detail next.
Targeting low-temperature systems: WORK can target minimum
shaft work for simple and complex refrigeration cycles. Targets are
based on rigorous thermodynamic calculations that are highly
accurate even when compared with the results of rigorous simulation.
When multiple refrigeration levels are used, trade-offs arise between
temperature levels and shaft loads. Adjusting each temperature
level affects not only its own shaftwork requirement but also that of
the other levels. Therefore, all levels of refrigeration must be
optimized simultaneously. This task is facilitated by WORK's
highly accurate shaftwork predictions.
Simulating refrigeration systems: WORK enables the simulation of
simple and complex refrigeration systems (see Figure 9.5), which may
involve multiple heat levels and multiple compressors. The refrigerant

FIGURE 9.5 Refrigeration composition options and ideal composition profiles (WORK user interface).

heat loads and temperature levels can be optimized relative to the
background process in order to minimize overall shaftwork.
Optimizing mixed-refrigerant systems: WORK can optimize the
composition of mixed refrigerants to minimize shaftwork
requirements. This goal is achieved by matching the composition of
the mixed refrigerant to the cooling profile (see Figure 9.5). The
software outputs a visual representation of the shaftwork losses in
refrigeration cycles, including both mechanical and thermal losses.

9.3.6 HEXTRAN
HEXTRAN is a steady-state simulator that provides a view of heat
transfer systems (IPS, 2009a). It is used to design new systems,
monitor current systems, optimize existing operations, and solve (or
prevent) heat transfer problems. The program simulates integrated
processes and allows engineers to monitor the performance of
individual exchangers or an entire heat transfer network. It also
offers superior postprocessing displays, plotting Grand Composite
Curves as well as the results from network targeting and zone
analysis. HEXTRAN provides new efficiencies in all types of design
and operational analysis work, such as individual exchanger and
network designs, Pinch Analysis, exchanger zone analysis, split
flows, area payout, and optimal cleaning cycles.
HEXTRAN analyzes factors that can make the difference between
profits and losses. These factors include: (1) improved process heat
transfer, product yield, and quality; (2) increased energy efficiency
and significantly reduced operating costs; (3) increased plant flexibility
and throughput; (4) optimized cleaning schedule for exchangers;
(5) optimal antifouling selection and usage; and (6) improved process
designs and revamps. The HEXTRAN simulator for process heat
transfer offers features that facilitate straightforward evaluations of
complex design, operational, and retrofit situations; in particular, it:
(1) enables the design of both simple and complex heat transfer
systems that result in cost-effective, flexible processes; (2) allows the
designer to retrofit existing equipment and revamp HENs to yield
optimum performance; and (3) identifies cleaning incentives and
predicts future performance.

9.3.7 SuperTarget
SuperTarget is mainly used to improve Heat Integration in new
design and retrofit projects by reducing operating costs and optimally
targeting capital investment (Linnhoff March, 2009). SuperTarget is
also a tool for day-to-day application by novice or occasional users,
and it makes Pinch Analysis a routine part of process design. The
software features an intuitive user interface that makes the technology
accessible to users at all levels of expertise, and advanced tools are
available for expert applications. Many of the most time-consuming
tasks traditionally associated with Pinch Analysis have been partially
or fully automated.
SuperTarget takes data directly from most popular process
simulation programs through interfaces to Aspen Plus, HYSYS, and
PRO/II. Its automatic data extraction system converts raw process
data into Pinch data, although the user has the option of overriding
the extraction defaults. SuperTarget consists of three program
modules: (1) Process is the core program, which is used to optimize
energy use within a single process unit; (2) Column performs a
thermal analysis of the heat distribution in distillation columns; and
(3) Site is used to establish heat and power targets across a Total Site.

9.3.8 Spreadsheet-Based Tools

Pinch Analysis provides a comprehensive and systematic approach to
maximizing the plant energy efficiency and minimizing the use of
utilities. The Pinch technique is amenable to use with commercial
spreadsheets, which display a grid of rows and columns made up of
multiple cells, each containing alphanumeric text or numeric values.
Kemp (2007) developed an Excel spreadsheet for Pinch Analysis that
incorporates targeting calculations and plots (see Figure 9.6). The main
components of this spreadsheet are: (1) input of stream data; (2) calculation
of Composite Curves (CCs), the problem table, energy targets, and
the Pinch temperature; (3) plots of the CCs and the Grand Composite Curve (GCC); (4) plots of the
stream population over the temperature range of the problem and the
basic grid diagram; and (5) tables and graphs of the variation in energy
and Pinch temperature over a range of ΔTmin values.
Neither area targeting nor cost targeting is included in the
spreadsheet, because doing so would add considerable complexity.
Suitable data on heat exchanger coefficients are often lacking, and
most plots of cost against ΔTmin can look fairly flat, although this is
not always the case when an appropriate cost scale is set. However,
the topology can still be identified from the graphs of utility use and
Pinch temperature against ΔTmin (Kemp, 2007).
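The targeting part of such a spreadsheet rests on the Problem Table Algorithm, which fits in a short script. The sketch below is an independent illustration rather than Kemp's implementation; the stream data are the standard four-stream textbook example with ΔTmin = 10°C:

```python
def problem_table(hot, cold, dt_min):
    """Problem Table Algorithm. Streams are (T_supply, T_target, CP).
    Returns (Q_hot_min, Q_cold_min, pinch_shifted_T); assumes the problem
    is pinched (otherwise the reported pinch is not meaningful)."""
    # Shift hot streams down and cold streams up by dt_min / 2.
    spans = [(ts - dt_min / 2, tt - dt_min / 2, cp) for ts, tt, cp in hot] \
          + [(ts + dt_min / 2, tt + dt_min / 2, -cp) for ts, tt, cp in cold]
    temps = sorted({t for ts, tt, _ in spans for t in (ts, tt)}, reverse=True)
    cascade = [0.0]                       # heat cascaded down the intervals
    for hi, lo in zip(temps, temps[1:]):
        net_cp = sum(cp for ts, tt, cp in spans
                     if min(ts, tt) <= lo and max(ts, tt) >= hi)
        cascade.append(cascade[-1] + net_cp * (hi - lo))
    q_hot = max(0.0, -min(cascade))       # make every cascade flow feasible
    feasible = [q_hot + q for q in cascade]
    pinch = temps[feasible.index(min(feasible))]  # zero heat flow at the pinch
    return q_hot, feasible[-1], pinch

hot = [(170, 60, 3.0), (150, 30, 1.5)]
cold = [(20, 135, 2.0), (80, 140, 4.0)]
print(problem_table(hot, cold, 10.0))  # (20.0, 60.0, 85.0)
```

With these data the script reports minimum hot and cold utility duties of 20 and 60 (in CP units times °C) and a shifted pinch temperature of 85°C, matching the well-known result for this stream set.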

FIGURE 9.6 Spreadsheet user interface for Pinch Analysis (Kemp Pinch
Spreadsheet, 2006).

9.4 Mass Integration Software: WATER

WATER is a software package for the design of water systems in
process industries (WATER, 2009). Water is used for a wide variety
of operations: mass transfer, washing, steam systems, cooling
systems, and so forth. WATER targets and designs for
minimum water consumption by identifying opportunities for
water regeneration, reuse, and recycling. The cost of effluent
treatment systems is minimized through design methods that lead
to distributed systems. Networks involving water use and effluent
treatment are designed automatically, with the designer in full
control of network complexity. This software tool is capable of
handling multiple contaminants. The six principal issues addressed
by WATER are described next.
Water minimization: WATER minimizes freshwater use by
identifying reuse opportunities. The program works using data on
the water quality constraints of each operation of the process. In
addition to constraints for multiple contaminants, constraints on
maximum and minimum flow rate and on water losses and gains
can be specified, as well as forbidden matches.
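To illustrate the kind of targeting calculation such tools perform, a single-contaminant limiting-composite sketch is given below, in the spirit of classic water-pinch analysis; it is not the WATER algorithm itself, which handles multiple contaminants, and the operation data are a hypothetical literature-style example:

```python
def min_freshwater(ops):
    """Single-contaminant freshwater target. Each operation is
    (contaminant_load_kg_per_h, c_in_ppm, c_out_ppm), with freshwater
    assumed at 0 ppm. Returns (min_flow_t_per_h, pinch_ppm)."""
    bounds = sorted({c for _, cin, cout in ops for c in (cin, cout)})
    cum_load, best_flow, pinch = 0.0, 0.0, None
    for lo, hi in zip(bounds, bounds[1:]):
        # Mass picked up in this concentration interval by the operations
        # whose limiting profiles span it.
        cum_load += sum(load * (hi - lo) / (cout - cin)
                        for load, cin, cout in ops
                        if cin <= lo and cout >= hi)
        flow = 1000.0 * cum_load / hi   # kg/h over ppm (g/t) gives t/h
        if flow > best_flow:
            best_flow, pinch = flow, hi
    return best_flow, pinch

ops = [(2, 0, 100), (5, 50, 100), (30, 50, 800), (4, 400, 800)]
print(min_freshwater(ops))  # (90.0, 100)
```

Here the target is 90 t/h of freshwater, with the water pinch at 100 ppm: the level at which the cumulative contaminant load demands the largest water flow.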
Multiple sources of freshwater: It is often the case that there are a
number of different sources of freshwater available, each featuring
different qualities and different costs. WATER is able to optimize the
use of multiple sources of freshwater.
202 Chapter Nine

Automatic design of water reuse networks: All constraints relating to
maximum and minimum flow rates, forbidden matches, and water
losses or gains in individual operations can be accounted for. The
designer also has control over network complexity.
Regeneration of water: Once water reuse has been maximized,
further reduction in water consumption can only result from
regenerating wastewater. WATER enables a network designer to
examine and compare the effects of regeneration reuse and
regeneration recycling.
Automatic design of water reuse and effluent treatment networks:
WATER can automatically generate not only water reuse networks
but also effluent treatment networks at a minimum cost. The design
engineer maintains control over network complexity and over the
relevant constraints (e.g., forbidden matches, flow-rate ranges, water
losses and gains during treatment operations).
Pipe work and sewer costs in network design: In addition to the capital
cost associated with regeneration and treatment processes, WATER
also incorporates the cost of connecting operations, which involves
running new pipes and sewers. These factors are included with the
freshwater cost and other capital costs when assessing trade-offs
between cost and overall performance. It is important to consider
pipe work and sewer costs because they have a profound effect on
network structure and complexity.

9.5 Flowsheeting Simulation Packages

9.5.1 ASPEN
Aspen Technology, Inc., provides integrated software applications
for a variety of industries that manufacture products using chemical
processes, including oil and gas, petroleum, chemicals, and
pharmaceuticals.
processes. Aspen ONE (AspenTech, 2009d) is an application suite
that enables process manufacturers to implement best practices for
optimizing their engineering, manufacturing, and supply chain
operations. This software package addresses inefficiencies
throughout the plant, resulting in significant cost savings. Aspen
Plus (AspenTech, 2009c) is a core element of the Aspen ONE process
engineering suite. It is a process modeling tool used in conceptual
design, optimization, and performance monitoring for the chemical,
polymer, specialty chemical, metals and minerals, and coal power
industries. Aspen Plus includes a large database of pure component
and phase equilibrium data for conventional chemicals, electrolytes,
solids, and polymers. This information is updated regularly using
data from the U.S. National Institute of Standards and Technology.
Aspen Plus is well integrated with Aspen's software for cost analysis
(AspenTech, 2009a) and heat exchanger design (AspenTech, 2009b).
These software applications enable rigorous sizing and rating of
key equipment, such as heat exchangers and distillation columns,
within the simulation environment.
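A small illustration of the kind of pure-component correlation stored in such databases is the Antoine equation for vapor pressure. The coefficients below are NIST-format values for water, quoted here for illustration; a real simulator retrieves them from its databank.

```python
# Antoine equation: log10(P [bar]) = A - B / (T [K] + C).
# Illustrative NIST-format coefficients for water, valid roughly
# over 256-373 K.
A, B, C = 4.6543, 1435.264, -64.848

def vapor_pressure_bar(t_kelvin):
    return 10.0 ** (A - B / (t_kelvin + C))

p = vapor_pressure_bar(373.15)
print(p)  # close to 1 bar: water boils near 373 K
```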

9.5.2 HYSYS and UniSim Design

The HYSYS software was initially created by Hyprotech for
simulating both steady-state and dynamic processes. It includes tools
that can be applied to: (1) estimating physical properties, including
liquid-vapor phase equilibrium; (2) establishing heat and mass
balances; (3) designing and optimizing oil and gas processes; and
(4) evaluating and selecting process equipment. HYSYS technology
was acquired and modified by Aspen (see Section 9.5.1) and later by
Honeywell, where it is known as UniSim Design (Honeywell, 2010).
Aspen HYSYS and UniSim Design are similar in terms of application
and the working interface. Both include: (1) a library of physical
properties of many chemical substances; (2) a set of subroutines for
estimating the behavior of several types of plant equipment (heat
exchangers, reactors, etc.); and (3) a graphical user interface for
inputting case specifications and displaying results.
Once the designer describes the process in terms of equipment
units interconnected by process streams, the program solves all the
equations for mass, energy, and equilibrium while taking into
consideration the units' specified design parameters. The program
is built upon proven technologies that for two decades have supplied
process simulation tools to the oil and gas industry. Another
advantage of Aspen HYSYS and UniSim Design is their interactive
and flexible process modeling, which allows engineers to design,
monitor, and troubleshoot as well as to make operational
improvements and perform asset management. Employing these
features leads to decision making that enhances the productivity,
reliability, and profitability of a processing plant's life cycle
(Ebenezer, 2005). The HYSYS fluid package module requires
information on the characteristics of unit components and the
physical properties of relevant streams. Also, accurate simulation of
processes requires that appropriate thermodynamic models be
selected as a framework for the simulation. Note that a process that
is otherwise fully optimized in terms of equipment selection,
configuration, and operation is of no use whatsoever if the process
simulation was based on an incomplete or inaccurate fluid package
or on an inappropriate thermodynamic model.
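The role of the thermodynamic model can be made concrete with the Peng-Robinson equation of state, one of the property packages offered in such simulators. The sketch below computes the vapor compressibility factor of methane; the critical constants are standard-table values quoted for illustration.

```python
import math

# Peng-Robinson EOS: solve the cubic in Z by Newton's method,
# starting from the ideal-gas value Z = 1 (vapor branch).
R = 83.14  # gas constant [bar*cm3/(mol*K)]

def pr_z_vapor(t, p, tc, pc, omega):
    kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega ** 2
    alpha = (1.0 + kappa * (1.0 - math.sqrt(t / tc))) ** 2
    a = 0.45724 * R ** 2 * tc ** 2 / pc * alpha
    b = 0.07780 * R * tc / pc
    A = a * p / (R * t) ** 2
    B = b * p / (R * t)

    # Cubic: Z^3 - (1-B) Z^2 + (A - 3B^2 - 2B) Z - (AB - B^2 - B^3) = 0
    def f(z):
        return (z ** 3 - (1 - B) * z ** 2
                + (A - 3 * B ** 2 - 2 * B) * z
                - (A * B - B ** 2 - B ** 3))

    def df(z):
        return 3 * z ** 2 - 2 * (1 - B) * z + (A - 3 * B ** 2 - 2 * B)

    z = 1.0
    for _ in range(50):
        step = f(z) / df(z)
        z -= step
        if abs(step) < 1e-12:
            break
    return z

# Methane at 300 K and 10 bar (Tc = 190.56 K, Pc = 45.99 bar, omega = 0.011)
z = pr_z_vapor(300.0, 10.0, 190.56, 45.99, 0.011)
print(round(z, 4))  # slightly below 1: a nearly ideal gas
```

Swapping this routine for a different equation of state, or feeding it the wrong critical constants, changes every downstream mass and energy balance, which is why fluid-package selection deserves the attention the text gives it.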
HYSYS requires minimal input data from the user; the most
important input parameters are the stream's temperature, pressure,
and flow rate (Ebenezer, 2005). The software includes an assortment
of utilities that can be attached to process streams and unit operations.
These tools interact with the process to provide valuable additional
information. For example, the flowsheet used within the HYSYS
simulation environment can be manipulated by the designer to
estimate desired output.

9.5.3 gPROMS
An advanced modeling environment for process industries is offered
by gPROMS (PSE, 2009), which provides advanced custom modeling
capabilities within a flowsheeting environment combined with an
object-oriented modeling language. The process modeling, process
simulation, and optimization capabilities of gPROMS are used to
generate accurate process behavior predictions and information for
decision support in product and process innovation, design, and
operation. Because gPROMS is a flowsheeting environment, the user
can optimize complex units within the context of an entire process.
The software employs synchronized graphical and text views, which
makes it easy to develop, maintain, and assure the quality of the
models, and archive them. It is capable of assessing all phases of the
process life cycle, from laboratory experimentation to process design
and detailed engineering to online operations.
For modeling problems and deriving solutions, the environment
provided by gPROMS ModelBuilder is employed. ModelBuilder is a
flexible environment in which engineering experts can perform custom
modeling, process engineers can generate graphical flowsheets, and
process operators can run execution-only routines. Figure 9.7 shows
an example user interface. In summary, gPROMS is an equation-
oriented modeling system used for building, validating, and executing
first-principles models within a flowsheeting framework.
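What "equation-oriented" means in practice can be sketched with a toy model: all model equations are assembled and solved simultaneously by Newton's method, rather than unit by unit. The adiabatic CSTR below, with invented rate and balance parameters, stands in for a flowsheet-scale system.

```python
import math

# Equation-oriented solution: the full residual vector is driven to
# zero by Newton's method with an analytic Jacobian.
# Illustrative model: adiabatic CSTR with a first-order reaction.
# Unknowns: conversion X and reactor temperature T [K].
K0, E_OVER_R = 5.0e3, 6000.0        # pre-factor [1/s], E/R [K] (invented)
TAU, T0, DT_AD = 10.0, 300.0, 50.0  # residence time, feed T, adiabatic rise

def residuals(x, t):
    k = K0 * math.exp(-E_OVER_R / t)
    f1 = x - k * TAU * (1.0 - x)    # component mass balance
    f2 = t - T0 - DT_AD * x         # energy balance
    return f1, f2

def solve_newton(x=0.0, t=300.0):
    for _ in range(100):
        k = K0 * math.exp(-E_OVER_R / t)
        f1, f2 = residuals(x, t)
        # Analytic Jacobian of (f1, f2) with respect to (x, t)
        j11 = 1.0 + k * TAU
        j12 = -TAU * (1.0 - x) * k * E_OVER_R / t ** 2
        j21, j22 = -DT_AD, 1.0
        det = j11 * j22 - j12 * j21
        dx = (f1 * j22 - f2 * j12) / det   # Cramer's rule for the step
        dt = (j11 * f2 - j21 * f1) / det
        x, t = x - dx, t - dt
        if abs(dx) < 1e-12 and abs(dt) < 1e-10:
            break
    return x, t

x, t = solve_newton()
print(x, t)  # low-conversion steady state, just above the feed temperature
```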

FIGURE 9.7 Flowsheeting environment (ModelBuilder) with dual graphical and
text views (PSE, 2009).

9.5.4 CHEMCAD
The CHEMCAD software tool for simulating chemical processes
includes libraries of chemical components, thermodynamic methods,
and unit operations (Chemstations, 2009). Its purpose is to facilitate
the simulation of steady-state continuous chemical processes from
laboratory-scale experiments to full-scale operations; see Figure 9.8
for the user interface. This software package has recently been
upgraded to allow for the dynamic analysis of flowsheets. It offers
operability assessment, proportional integral derivative (PID) loop
tuning, and operator training as well as online process control and
soft-sensor functionality. Models for nonstandard unit operations
can simulate the behavior of a process under varying feed rates,
product rates, temperatures, pressures, and compositions.
The program contains the 1500-component Design Institute for
Physical Property (DIPPR) database as well as a separate, user-
defined database. Components are selected by ID number, formula,
synonym, or class. The program also includes routines for predicting
the properties of components not included in the database. Crude oil
feeds may be characterized by American Society for Testing and
Materials (ASTM) or true boiling point (TBP) curves and then
represented in the simulation by a series of pseudocomponents (cuts)
with different boiling points.
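The pseudocomponent idea can be sketched as follows: the TBP curve is divided into equal volume cuts, and each cut is represented by its mid-volume boiling point (the curve data below are invented for illustration).

```python
# Characterize a crude feed by slicing its true-boiling-point (TBP)
# curve into pseudocomponent "cuts". Each cut is represented by the
# temperature at the midpoint of its volume range (linear interpolation).

# (cumulative volume fraction distilled, TBP [degC]) - illustrative curve
tbp_curve = [(0.0, 40.0), (0.2, 120.0), (0.4, 200.0),
             (0.6, 290.0), (0.8, 380.0), (1.0, 480.0)]

def interpolate(curve, v):
    for (v1, t1), (v2, t2) in zip(curve, curve[1:]):
        if v1 <= v <= v2:
            return t1 + (t2 - t1) * (v - v1) / (v2 - v1)
    raise ValueError("volume fraction outside curve")

def pseudocomponents(curve, n_cuts):
    width = 1.0 / n_cuts
    return [interpolate(curve, (i + 0.5) * width) for i in range(n_cuts)]

print([round(t, 1) for t in pseudocomponents(tbp_curve, 5)])
# [80.0, 160.0, 245.0, 335.0, 430.0]
```

Each boiling point then serves as the defining property of one pseudocomponent, whose remaining properties a simulator estimates from correlations.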
The flowsheet representing the plant layout is input graphically,
and data describing each feed stream and unit operation is entered
via pull-down menu options. Automatic error checking is used to
preclude overspecification or missing data entries. The interactive
interface also permits individual unit operations to be run separately
from the complete flowsheet for the purpose of quick "what if"
analyses (Chempute Software, 2001).

FIGURE 9.8 CHEMCAD software interface.
Results output by the program include a full heat and material
balance; thermodynamic and physical properties of all streams;
component flow rates; stream temperatures, pressures, and vapor
fractions; and process equipment parameters. The user may view the
results graphically or obtain a printout of either summary or detailed
results. CHEMCAD includes additional modules; some of them are
(1) CHEMCAD-THERM for the design and rating of shell-and-tube
heat exchangers (including air coolers), (2) CHEMCAD-BATCH for
simulation of batch distillation processes, and (3) CHEMCAD-REACS
for the dynamic simulation of stirred tank reactors. The software
package also includes a subroutine, called CONVERT, which
translates the process flow diagram generated by the program into a
series of drawing exchange format (DXF) files for incorporation into
AutoCAD software routines.

9.5.5 PRO/II
The PRO/II computer simulation system is used by process engineers
in the chemical, petroleum, natural gas, solids processing, and
polymer industries (IPS, 2009b). It includes a large chemical component
library and multiple thermodynamic property prediction methods as
well as advanced and flexible techniques for evaluating unit
operations. The software tool can perform mass and energy balance
calculations for modeling steady-state processes. Expert systems,
extensive input processing, and error checking are included to help
inexperienced users. Typically, PRO/II simulation is applied to the
following tasks: (1) process design; (2) evaluating alternative plant
configurations; (3) modernizing and revamping existing plants;
(4) assessing, documenting, and complying with environmental
regulations; (5) troubleshooting and de-bottlenecking plant processes;
and (6) monitoring and improving plant yields and profitability.

9.6 General-Purpose Optimization Packages

9.6.1 GAMS
The General Algebraic Modeling System (GAMS, 2009) is a high-level
modeling system for Mathematical Programming and optimization.
GAMS is designed for modeling linear, nonlinear, and mixed integer
optimization problems. The system is well suited to complex, large-
scale modeling applications, and it allows the user to build and
archive large models that can later be adapted to new situations. A
particular advantage of GAMS is its ability to handle large, complex,
and/or unique designs that may require many revisions before an
accurate model is established.
The system models problems in a natural and highly compact
way. The package includes an integrated development environment
and a group of integrated solvers. GAMS was the first algebraic
modeling language, and it is formally similar to several common
programming languages. Models are described in algebraic
statements that are easy for humans and machines to read. The
system is capable of handling models of many different types, so
switching between model types can be done with a minimum of
effort. For instance, the same data, variables, and equations can be
reused for a linear and a nonlinear model by simply converting a
small number of parameters to variables. GAMS also includes a
variety of solvers for different classes of models.

9.6.2 MIPSYN
MIPSYN (short for Mixed Integer Process SYNthesizer) is a
user-friendly computer package for the integrated synthesis of new
plants and for the innovative reconstruction of existing plants at
different levels of complexity. The tasks to which it can be applied
range from simple nonlinear programming (NLP) solutions for
plant optimization problems to the mixed integer nonlinear
programming (MINLP) optimization of heat-integrated, flexible
plants. MIPSYN is the successor to the PROSYN synthesizer
(Kravanja and Grossmann, 1990; Kravanja and Grossmann, 1994).
As such, it is based on the most advanced modeling and optimization
techniques: those rooted in disjunctive MINLP. The MIPSYN
software can simultaneously address both discrete optimization
(e.g., selection of process units, their operating status, their
connectivity, and ranges of operation) and continuous optimization
(of temperatures, flows, pressures, etc.). The package integrates the
following methods and related components: (1) GAMS with a variety
of different NLP and MILP solvers; (2) different versions of the outer
approximation (OA) algorithm, including the modified OA/ER
algorithm (ER denotes equality relaxation) and a new logic-based
OA/ER algorithm, which are supervised by MIPSYN
command files; (3) a simple simulator that serves as an initializer to
provide NLP subproblems with feasible (or nearly feasible) starting
points; (4) a library of models pertaining to process units,
interconnected nodes, and the simultaneous integration of heat and
mass; (5) a database of the physical properties of the most common
chemical components; and (6) a hybrid modeling environment, with
a link to external FORTRAN routines, for solving the implicit part of
synthesis models.
Execution of the NLP and MILP steps in OA algorithms is
performed through the use of GAMS saving and restart capabilities,
which enable the user to execute MIPSYN in automated or interactive
modes of operation. The synthesizer features many important
capabilities: initialization of NLP subproblems; calling different NLP
and MILP solvers in a sequence with different option files (text files
containing specifications of solver options to be applied); the efficient
modeling of different formulations and strategies (e.g., multilevel
MINLP); the capacity to solve feasibility problems whose objective
functions are augmented by penalties; multiobjective optimization;
integer-infeasible path optimization; multiperiod optimization; and
flexible synthesis for cases where the true parameters are uncertain.
Some of these applications were described in Kravanja (2009).
MIPSYN can be comprehended and used at different levels of
problem abstraction because it includes: (1) an MINLP solver for
problems of a general nature; (2) a process synthesizer for generating
process flowsheets; and (3) a synthesizer shell for accommodating
applications from different engineering domains.
A number of case studies have been performed using MIPSYN.
In these studies, the synthesis was applied to all basic process
systems and subsystems. Examples include: (1) heat-integrated
reactor networks in overall process schemes; (2) heat-integrated and
flexible separator networks; (3) Heat Exchanger Networks (HENs), including
retrofits and networks that use more than one exchanger type;
(4) mass exchanger networks; (5) heat-integrated overall process
schemes based on a sustainable, multiobjective approach; and
(6) flexible and heat-integrated flowsheets, together with their
HENs, for cases involving as many as 30 uncertain parameters.
Note that the MIPSYN synthesizer shell also enables applications
in the area of mechanics (Kravanja, Kravanja, and Bedenik, 1998a;
Kravanja, Kravanja, and Bedenik, 1998b; Kravanja, Šilih, and
Kravanja, 2005). These mechanical applications range from simple
NLP optimizations to complex, multilevel MINLP syntheses of
structures in which topology, material use, and dimensions are
optimized simultaneously.

9.6.3 LINDO
LINDO is a tool for solving linear, integer, and quadratic program-
ming problems (Lindo Systems, 2009). It provides an interactive
modeling environment that facilitates the simulation and solution of
optimization problems. LINDO has the speed and capacity to solve
large-scale linear and integer models. The dynamic link library
(DLL) version of LINDO allows users to seamlessly integrate the
LINDO solver into Microsoft Windows applications that are written
in Visual Basic, C/C++, or any language that supports DLL calls.
Workstation users can exploit the linkable object libraries to hook
the solver engine to applications written in FORTRAN or C. The
latest LINDO version (ODC, 2009) offers a number of enhancements,
including: (1) significantly expanded nonlinear capabilities; (2) global
optimization tools; (3) improved performance on linear and integer
problems; and (4) enhanced interfaces to other systems, such as
MATLAB and Java.
LINDO API was the first full-featured solver with a callable
library to offer general nonlinear and integer nonlinear capabilities.
This feature allows developers to incorporate a single, general-
purpose solver into their custom applications. The softwares linear
and integer capabilities provide the user with a comprehensive set
of routines for formulating, solving, and modifying nonlinear
models (although a separate, nonlinear license must be purchased to
access these nonlinear capabilities). The global solver combines
techniques for range bounding (e.g., interval analysis, convex
analysis) and range reduction (e.g., linear programming, constraint
propagation) within a branch-and-bound framework to find proven
global solutions to nonconvex nonlinear programs or mixed-integer
nonlinear programs.
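The essentials of such a global solver can be sketched in one dimension: interval arithmetic gives a guaranteed lower bound over a box, and branch-and-bound prunes boxes that cannot beat the incumbent. The polynomial and its (deliberately crude) interval bound below are invented for illustration.

```python
# Deterministic global minimization by branch-and-bound with
# interval bounding - the basic scheme behind global NLP solvers.
# Nonconvex objective: f(x) = x**4 - 3*x**3 + 2 on [-2, 3].

def f(x):
    return x ** 4 - 3 * x ** 3 + 2

def lower_bound(a, b):
    # Crude interval bound: bound x**4 and -3*x**3 term by term.
    x4_lo = 0.0 if a <= 0.0 <= b else min(a ** 4, b ** 4)
    x3_hi = b ** 3                     # x**3 is increasing
    return x4_lo - 3.0 * x3_hi + 2.0   # valid lower bound on f over [a, b]

def global_min(a, b, tol=1e-6):
    best = min(f(a), f(b))             # incumbent upper bound
    boxes = [(a, b)]
    while boxes:
        a, b = boxes.pop()
        if lower_bound(a, b) > best - tol:
            continue                   # prune: box cannot beat the incumbent
        mid = 0.5 * (a + b)
        best = min(best, f(mid))       # midpoint sample improves incumbent
        if b - a > tol:
            boxes += [(a, mid), (mid, b)]
    return best

print(round(global_min(-2.0, 3.0), 4))  # -6.543, the proven global minimum
```

A production solver replaces the hand-written bound with automatic interval or convex relaxations and adds range-reduction steps, but the prune-or-split logic is the same.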

9.6.4 Frontline Systems

Solvers (optimizers) are software tools that help users find the best
way to allocate scarce resources. Frontline Systems (Frontline Systems,
2009) offers products that cover several problem types and that
allow model definition via an Excel spreadsheet or via a program
written in any of several common programming languages and
environments (e.g., Visual Basic, C/C++/C#, VB.NET, Java, MATLAB).
Models designed using these solver products include decision
variables for quantities of resources as well as calculated results
(constraints) that are subject to limits (e.g., budget, capacity, and/or
time constraints).
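A miniature of such a model, with invented numbers, shows the structure: integer decision variables, calculated resource constraints, and an objective, solved here by brute-force enumeration rather than by a commercial solver.

```python
from itertools import product

# Tiny resource-allocation model: decision variables are integer
# production quantities; constraints are machine-hour and labor limits.
PROFIT = {"A": 25.0, "B": 40.0}        # profit per unit (illustrative)
HOURS = {"A": 2.0, "B": 3.0}           # machine-hours per unit
LABOR = {"A": 1.0, "B": 2.0}           # labor-hours per unit
MAX_HOURS, MAX_LABOR = 60.0, 35.0

def best_plan():
    best, plan = 0.0, (0, 0)
    for qa, qb in product(range(31), range(21)):
        if HOURS["A"] * qa + HOURS["B"] * qb > MAX_HOURS:
            continue                   # violates the machine-hour limit
        if LABOR["A"] * qa + LABOR["B"] * qb > MAX_LABOR:
            continue                   # violates the labor limit
        p = PROFIT["A"] * qa + PROFIT["B"] * qb
        if p > best:
            best, plan = p, (qa, qb)
    return best, plan

print(best_plan())  # (775.0, (15, 10))
```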

9.6.5 ILOG ODM

The ILOG system is geared toward optimizing decisions and thereby
finding the best solutions to complex planning and scheduling
problems. It provides an application environment that supports the
flexible exploration of all the trade-offs and sensitivities. Thus, the
ILOG optimization decision manager (ODM) makes optimization
easier (ILOG, 2006); see Figure 9.9. A unique aspect of ODM-based
applications is that they provide all the features that business people
need to take full advantage of optimization technology. Applications
built using ILOG ODM allow users to create, visualize, and compare
planning or scheduling scenarios, to adjust any of the model's inputs
or goals, and to comprehend the relevant binding constraints, trade-
offs, sensitivities, and business options.
With ODM-based applications, overconstrained problems are
automatically relaxed during runtime by ILOG CPLEX, which is
programmed to relax the least important (and fewest possible)
constraints. This ensures that a solution will always be found, and
solutions are presented along with clear information about any
relaxed preferences or constraints. The optimized solution, which
includes a recommended plan or schedule and the attendant
metrics, can easily be further explored, allowing users to
understand the optimization model's dynamics and perhaps
identify better solution scenarios.

FIGURE 9.9 User interface for an ILOG ODM-based application (ILOG, 2006).

9.7 Mathematical Modeling Suites

9.7.1 MATLAB
MATLAB (short for matrix laboratory) is an interpreted language
for numerical computation (MathWorks, 2009). It allows users to
perform numerical calculations and visualize the results without
complicated, time-consuming programming.
MATLAB allows users to solve problems accurately, to produce
graphics easily, and to generate code efficiently. It also enables
matrix manipulation, the plotting of functions and data,
implementation of algorithms, creation of user interfaces, and
interfacing with programs written in other languages. For technical
problem solving, MATLAB has many advantages over conventional
computer languages, as described next (see also Chapman, 2009).
Ease of use: Programs can be easily written and modified under
the built-in integrated development environment, and they can be
debugged using the MATLAB debugger.
Platform independence: MATLAB is supported on many different
computer platforms, which provides a large measure of platform
independence. The language is supported on Windows, Linux,
several versions of UNIX, and the Macintosh. This means that
programs written in MATLAB can migrate to different platforms.
Predefined functions: MATLAB comes complete with an extensive
library of well-tested, predefined functions that generate solutions
to many basic technical tasks. The arithmetic mean, standard
deviation, median, and hundreds of other mathematical functions
are built into the MATLAB language, which makes the user's job
much easier.
Device-independent plotting: Unlike most other computer languages,
MATLAB has many built-in commands for imaging and plotting.
The images and plots can be displayed on any graphical output device
supported by the computer that is hosting MATLAB.
Graphical user interface: MATLAB includes tools with which a
programmer can interactively construct a graphical user interface for
any program. Given this capability, programmers can design
sophisticated data-analysis programs that can be operated by
relatively inexperienced users.
The built-in functions of MATLAB allow users to perform basic
minimization and maximization routines. However, compiling and
executing a proper optimization program may require the use of
add-on packages. One popular add-on is TOMLAB (TOMLAB, 2010),
a powerful optimization platform and modeling language for
solving applied optimization problems in MATLAB. The TOMLAB
environment includes a wide range of features, tools, and services
for optimization analyses.
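What such minimization routines do internally can be sketched with a golden-section search, the classic bounded one-dimensional scheme (written here in Python rather than MATLAB, for consistency with the other examples).

```python
import math

# Golden-section search: bounded 1-D minimization of a unimodal
# function, the kind of routine behind bounded minimizers.
INV_PHI = (math.sqrt(5.0) - 1.0) / 2.0   # 1/phi, about 0.618

def golden_min(f, a, b, tol=1e-8):
    c = b - INV_PHI * (b - a)
    d = a + INV_PHI * (b - a)
    while b - a > tol:
        if f(c) < f(d):
            b, d = d, c                  # minimum lies in [a, d]
            c = b - INV_PHI * (b - a)
        else:
            a, c = c, d                  # minimum lies in [c, b]
            d = a + INV_PHI * (b - a)
    return 0.5 * (a + b)

x_star = golden_min(lambda x: (x - 2.0) ** 2, 0.0, 5.0)
print(round(x_star, 6))  # 2.0
```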

9.7.2 Alternatives to MATLAB

There are two free alternatives to MATLAB software: SCILAB
(Scilab, 2009) and OCTAVE (Octave, 2009). Both provide number-
crunching power similar to MATLAB's but at an advantageous
cost/performance ratio (since they are free). In essence, SCILAB and
OCTAVE are interpreted, matrix-based programming languages.
They have strong similarities to MATLAB: (1) the use of matrices as
a fundamental data type; (2) built-in support for complex numbers;
(3) powerful built-in math functions and extensive function libraries;
and (4) extensibility in the form of user-defined functions and macros.

9.8 Other Tools

9.8.1 Modelica
Modelica is an object-oriented, declarative, multidomain language
for the component-oriented modeling of complex systems, that is,
systems containing mechanical, electrical, electronic, hydraulic,
thermal, control, power, and/or process-oriented subcomponents
(Modelica, 2009a; Modelica, 2009b). The free Modelica language is
developed by the nonprofit Modelica Association. In this language,
models are described by classes, which may contain differential,
algebraic, and discrete equations alongside properties and algorithms.
The language can be used for hardware-in-the-loop simulations
and for embedded control systems (Modelica, 2009a). Modelica
supports high-level modeling by composing complex models from
detailed component models. Models of standard components are
typically available in model libraries. A model can be defined by
using a graphical model editor offered by the various language
implementations (Modelica, 2010) to draw a composition diagram:
positioning icons that represent the model components, drawing
connections between the components; and providing parameter
values in dialogue boxes (Modelica, 2009b). Constructs for including
graphical annotations in Modelica render the icons and composition
diagrams portable between different platforms. Typical composition
diagrams from various domains are shown in Figure 9.10.
In addition to the basic language elements mentioned previously,
Modelica also supports arrays (via a MATLAB-like syntax). Array
elements may consist of basic data types (e.g., real, integer, boolean,
string) or, more generally, of component models. This flexibility
allows for convenient descriptions of complex models containing
repetitive elements.
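The connection semantics that make this composition work can be imitated outside Modelica: at every node the effort variable (e.g., electric potential) is shared, and the flow variables (e.g., currents) sum to zero. The Python sketch below applies these rules to a resistive network; it is an analogy, not Modelica code.

```python
# Modelica-style connection semantics in miniature: shared efforts
# (node potentials) and sum-to-zero flows (currents) at every node.
# Assembling the flow balances is classic nodal analysis.

def solve_network(resistors, sources, ground="gnd"):
    """resistors: list of (node_a, node_b, ohms);
    sources: dict fixing the potential of some nodes;
    returns a dict of the remaining node potentials."""
    nodes = sorted({n for a, b, _ in resistors for n in (a, b)}
                   - set(sources) - {ground})
    idx = {n: i for i, n in enumerate(nodes)}
    n = len(nodes)
    A = [[0.0] * n for _ in range(n)]
    rhs = [0.0] * n
    for a, b, r in resistors:
        g = 1.0 / r
        for this, other in ((a, b), (b, a)):
            if this in idx:                    # flow balance at 'this'
                A[idx[this]][idx[this]] += g
                if other in idx:
                    A[idx[this]][idx[other]] -= g
                else:                          # neighbor has a fixed potential
                    v = 0.0 if other == ground else sources[other]
                    rhs[idx[this]] += g * v
    # Gaussian elimination (the system is small and diagonally dominant)
    for i in range(n):
        for k in range(i + 1, n):
            m = A[k][i] / A[i][i]
            for j in range(i, n):
                A[k][j] -= m * A[i][j]
            rhs[k] -= m * rhs[i]
    x = [0.0] * n
    for i in reversed(range(n)):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (rhs[i] - s) / A[i][i]
    return {node: x[idx[node]] for node in nodes}

# Voltage divider: 10 V source, 1 kOhm then 3 kOhm in series to ground.
v = solve_network([("src", "mid", 1000.0), ("mid", "gnd", 3000.0)],
                  {"src": 10.0})
print(round(v["mid"], 3))  # 7.5
```

In Modelica the same balance equations are generated automatically from the drawn connections, for any mixture of electrical, thermal, hydraulic, or mechanical components.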

FIGURE 9.10 Composition diagrams produced using Modelica (2009b); the
panels illustrate 3D mechanics, state machines, control systems, hydraulics,
power trains, power systems, and electrical systems.

9.8.2 Emerging Trends

Lead times for the development of new energy technologies, from
initial idea to commercial application, can run into years. Reducing
this lead time is a primary objective of the European Commission's
Directorate-General for Transport and Energy (DGTREN), which has
funded two related projects, EMINENT (Early Market Introduction
of New Energy Technologies) and EMINENT2 (Klemeš et al., 2005b).
The principal features of these projects are a software tool and an
integrated database of new technologies and sectoral energy supplies
and demands. The software tool is for analyzing the potential
impact of new and underdeveloped energy technologies emerging in
different sectors from different countries. This tool has also been
used in case studies that illustrate the new technologies.
The aim of the EMINENT software is to assess the market
potential of early-stage technologies (ESTs) in various energy supply
chains by evaluating their performance in terms of: (1) CO2 emissions,
(2) costs of energy supply, (3) use of primary fossil energy, and
(4) effects on different subsectors of society. Technology developers
and financial supporters are frequently not aware of all the potential
applications or the relative market attractiveness of such technology
across different countries and sectors of society. Thus, the EMINENT
project provides insight into the market potential that can accelerate
the development of technologies; this benefits research and development
efforts by targeting them more effectively.
The EMINENT tool that evaluates ESTs makes use of two databases:
(1) national energy infrastructures, which contain information on the
number of consumers per sector, type of demand, typical quality of
the energy required, and consumption and installed capacity per end
user; and (2) ESTs and technologies that are already commercial,
including key information on new energy technologies currently
under development and on proven energy technologies now available
and in use. The availability, price, and geographical conditions of
primary energy resources differ significantly worldwide, so the impact
of ESTs can be evaluated only within the context of a particular
(national) energy supply system (Klemeš et al., 2007b). The EMINENT
package consists of an integrated resource manager, a demand
manager, and an EST manager as well as databases on resources and
demand. The methodology, as shown in Figure 9.11, can be briefly
summarized as follows:

Resource manager selects, enters, and modifies data on country

resources (electrical, fuel-based, geothermal, hydro, ocean tidal,
wave, and wind energy).
Demand manager describes energy demands per subsector in a
given country; it selects data for the technology assessment, enters
new data, and modifies old data.
Technology manager stores key data on existing technologies and ESTs.
User input: (1) The sectoral energy demands whose potential supply
by EST is being evaluated; (2) any other peripheral technologies
needed to establish full energy supply chains; and (3) resources
the EST would require to satisfy the full energy supply chain.
Output: (1) aggregate numbers; (2) application potential of ESTs per
(sub)sector; (3) annual cost of energy delivery per consumer and
per (sub)sector; and (4) annual CO2 emissions.
Performance indicators: (1) efficiency of the energy supply chain;
(2) usage of primary fossil energy; (3) CO2 emissions per MWh;
and (4) costs of delivered energy [€/MWh].

FIGURE 9.11 Methodology employed in the EMINENT toolbox (after Klemeš
et al., 2005b): an EST is evaluated within a national energy supply chain that
links primary resources (solar, hydro, biomass, wind, coal, gas/oil) through
treatment, transportation and storage, conversion, and distribution to the
delivery of electricity, heat, and fuel.
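These indicators can be illustrated with a small chain calculation; every efficiency, emission factor, and cost below is invented for the example.

```python
# Energy-supply-chain indicators of the EMINENT type: chain efficiency,
# primary fossil energy use, CO2 per delivered MWh, and delivery cost.

# (step name, efficiency) - illustrative coal-to-electricity chain
chain = [("coal treatment", 0.95), ("power plant", 0.40), ("grid", 0.94)]
FUEL_CO2 = 0.34      # t CO2 per MWh of primary fuel (illustrative)
FUEL_COST = 12.0     # EUR per MWh of primary fuel (illustrative)
OTHER_COST = 30.0    # EUR per delivered MWh, capital and O&M (illustrative)

def chain_indicators(chain):
    eff = 1.0
    for _, step_eff in chain:
        eff *= step_eff                         # chain efficiency multiplies
    primary_per_mwh = 1.0 / eff                 # MWh fuel per MWh delivered
    co2_per_mwh = FUEL_CO2 * primary_per_mwh    # t CO2 per delivered MWh
    cost_per_mwh = FUEL_COST * primary_per_mwh + OTHER_COST
    return eff, primary_per_mwh, co2_per_mwh, cost_per_mwh

eff, primary, co2, cost = chain_indicators(chain)
print(round(eff, 3), round(co2, 3), round(cost, 2))  # 0.357 0.952 63.59
```

An EST is assessed by replacing one or more steps in such a chain and comparing the resulting indicators against those of the incumbent technology.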

Most of the ESTs analyzed were not yet able to achieve the cost
levels of existing technologies. Some of the ESTs, for instance
molten carbonate fuel cells (MCFCs), could become competitive
with relatively small efforts aimed at cost reduction. Various
promising future trends identified with the help of EMINENT have
been supported by policy makers. Some near-term prospects are:
(1) diverse energy systems that encompass supplies, management,
and control of demand; (2) market-based grids with large power
stations, including wind farms of different types; (3) local distributed
generation, including biomass, waste, and wind; (4) micro generation,
including Combined Heat and Power (CHP), fuel cells, and
photovoltaic technology; (5) new homes built with nearly zero carbon
emissions; (6) natural gas; (7) coal-fired generation combined with
carbon capture and storage; and (8) mixed fuels (e.g., coal mixed
with biomass, natural gas mixed with hydrogen). Longer-term
prospects include nuclear power stations, fuel cells, hydrogen
obtained from nonelectricity sources (e.g., biomass, low-carbon
biofuels), and nuclear fusion.

9.8.3 Balancing and Flowsheeting Simulation for Energy-Saving Analysis
Tools for balancing, reconciliation, and flowsheeting simulation are
frequently used for energy-saving analysis and have become an
essential item in the process engineer's toolbox. These tools help
designers develop complete mass and energy models based on actual
measurements and/or design values and mathematical models. As a
result, these simulation tools play an important role in the technical
and economic decision making related to the planning and design
stages of processes under development and to the operation of
existing equipment. Several computer-based systems have been
developed over the years in order to assist process engineers with
energy and mass balance calculations. However, ongoing development
costs have resulted in a limited number remaining on the market,
and these have been secured only by a substantial volume of sales.
An early overview of the field was presented by Klemeš (1977).
The technology used for balancing and for data validation and
reconciliation consists of a set of procedures incorporated into a
software tool. Process data reconciliation has become the main
method for monitoring and optimizing industrial processes as well as
for performing component diagnosis, condition-based maintenance,
and online calibration of instrumentation. According to Heyen and
Kalitventzeff (2007), reconciliation has three main goals: (1) detect
and correct deviations and errors of measurement data so that all
balance constraints are satisfied; (2) exploit the structure of and
knowledge about the process system by using measured data to
estimate unmeasured data whenever possible, in particular as
regards key performance indicators (KPIs); and (3) determine the
postprocessing accuracy of measured and unmeasured data,
including KPIs.
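Goal (1), adjusting measurements so that the balances close exactly, is classically posed as a variance-weighted least-squares problem. The sketch below is our own minimal illustration of that idea (the function name and the three-stream example are assumptions, not taken from VALI or any other commercial tool):

```python
# A minimal sketch of weighted least-squares data reconciliation with one
# linear balance constraint a.x = 0. Here the constraint is a simple mass
# balance: f1 - f2 - f3 = 0. Stream values below are purely illustrative.

def reconcile(measured, a, variance):
    """Return adjusted values minimizing the variance-weighted sum of
    squared corrections, subject to sum(a[i] * x[i]) == 0."""
    residual = sum(ai * xi for ai, xi in zip(a, measured))
    denom = sum(ai * ai * vi for ai, vi in zip(a, variance))
    return [xi - vi * ai * residual / denom
            for xi, ai, vi in zip(measured, a, variance)]

# Measured flows [kg/s]: one feed, two products (equal measurement variances)
x_hat = reconcile([100.3, 60.2, 40.8], [1.0, -1.0, -1.0], [1.0, 1.0, 1.0])
closure = x_hat[0] - x_hat[1] - x_hat[2]   # balance closes after adjustment
```

For a plant-scale model the same formula generalizes to a matrix of constraints, and unmeasured variables are estimated alongside the corrections, which is how the KPIs mentioned above become computable.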
A comprehensive system called DEBIL, which included balancing, flowsheeting simulation, and optimization, was created several decades ago (see Klemeš, Luťcha, and Vašek, 1979) and has
since been further developed by Belsim into the balancing and
reconciliation tool VALI (VALI III User Guide, 2003). This tool has been
applied to various energy-efficiency tasks, as reported with respect
to nuclear power plants (Langenstein, Jansky, and Laipple, 2004) and
regenerative heat exchangers (Minet et al., 2001).
216 Chapter Nine

9.8.4 Integrating Renewable Energy into Other Energy Systems
There are many different computer tools now available that can be
used to analyze the integration of renewable energy. Connolly and
colleagues (2010) have reviewed these tools in considerable detail.
Their study analyzed 37 computer tools designed for this purpose, as
summarized (in alphabetical order) in Table 9.1.

Tool (availability) | Applications | Reference
AEOLIUS (commercial) | Power plant dispatch simulation tool | Universität Karlsruhe (2009)
BALMOREL (free download) | Open-source electricity and district heating tool | Balmorel (2009)
BCHP Screening Tool (free download) | Assesses CHP in buildings | Oak Ridge National Laboratory (2009a)
COMPOSE (free download) | Single-project techno-economic assessments | Aalborg University (2009a)
E4cast (commercial) | Tool for energy projection, production, and trade | ABARE (2009)
EMCAS (commercial) | Creates techno-economic models of the electricity market | Argonne National Laboratory (2009a)
EMINENT (registration required) | Early-stage technologies assessment | EMINENT2 (2009)
EMPS (commercial) | Electricity systems with thermal/hydro generators | SINTEF (2009b)
EnergyPLAN (free download) | User-friendly analysis of national energy systems | Aalborg University (2009b)
energyPRO (commercial) | Single-project techno-economic assessments | EMD International A/S (2009)
ENPEP-BALANCE (free download) | Market-based energy-system tool | Argonne National Laboratory (2009b)
GTMax (commercial) | Simulates electricity generation and flows | Argonne National Laboratory (2009c)
H2RES (internal use only) | Energy-balancing models for island energy systems | Instituto Superior Técnico (2009)
HOMER (free download) | Techno-economic optimization for stand-alone systems | HOMER (2009)

TABLE 9.1 Software Tools for Analyzing the Integration of Renewable Energy into
Other Energy Systems (after Connolly et al., 2010)

Tool (availability) | Applications | Reference
HYDROGEMS (free for TRNSYS users) | Renewable and H2 stand-alone systems | Institute for Energy Technology (2009)
IKARUS (earlier versions are free) | Bottom-up cost optimization tool for national systems | Forschungszentrum Jülich Institute for Energy Research (2009)
INFORSE (free to nongovernmental organizations) | Energy-balancing models for national energy systems | INFORSE-Europe (2009)
Invert (free download) | Simulates promotion schemes for renewable energy | Vienna University of Technology (2009)
LEAP (free for developing countries and for students) | User-friendly analysis for national energy systems | Stockholm Environment Institute (2009)
MARKAL/TIMES (commercial) | Energy-economic tools for national energy systems | International Energy Agency (2009)
MESAP PlaNet (commercial) | Linear network models of national energy systems | Schlenzig (2009)
MESSAGE (free except for the simulators) | Medium- and long-term assessment of national or global energy systems | International Institute for Applied Systems Analysis (2009)
MiniCAM (free download) | Simulates long-term, large-scale global changes | MiniCAM (2009)
NEMS (free except for the simulators) | Simulates the U.S. energy market | Energy Information Administration (2009)
ORCED (free download) | Simulates regional electricity dispatch | Oak Ridge National Laboratory (2009b)
PERSEUS (sold to large European utilities) | Family of energy and material flow tools | Universität Karlsruhe (2009)
PRIMES (projects are undertaken for a fee) | Market equilibrium tool for energy supply and demand | National Technical University of Athens (2009)
ProdRisk (commercial) | Optimizes operation of hydro power | SINTEF (2009a)
RAMSES (projects are undertaken for a fee) | Simulates the electricity and district heating sector | Danish Energy Agency (2009)
RETScreen (free download) | Renewable analysis for electricity and/or heat in systems of any size | National Resources Canada (2009)

TABLE 9.1 Software Tools for Analyzing the Integration of Renewable Energy into
Other Energy Systems (Continued)

Tool (availability) | Applications | Reference
SimREN (projects are undertaken for a fee) | Bottom-up supply and demand for national energy systems | Institute for Sustainable Solutions and Innovations (2009)
SIVAEL (free download) | Electricity and district heating sector tool | (2009)
STREAM (free download) | Overview of national energy systems to create scenarios | Ea Energy Analyses (2009)
TRNSYS16 (commercial) | Modular structured models for community energy systems | TRNSYS (2009)
UniSyD3.0 (contact) | National energy systems scenario tool | Unitec New Zealand (2009)
WASP (free to IAEA member states) | Identifies the least-cost expansion of power plants | IAEA (2009)
WILMAR Planning Tool (commercial) | Increasing the use of wind in national energy systems | Risø National Laboratory (2009)

TABLE 9.1 Software Tools for Analyzing the Integration of Renewable Energy into
Other Energy Systems (Continued)
Examples and
Case Studies

This chapter provides a selection of problems that have been
collected by the authors over twenty years of teaching and
consulting. The problems include step-by-step solutions,
which provide guidance for mastering the methodology. Space
limitations make it impossible to cover all aspects or to provide fully
comprehensive solutions, for which an entire book would be required.
The representative problems selected for presentation here are
divided into five groups: Heat Pinch Technology, Total Sites,
integrated placement of processing units, utility placement, and
Water Pinch Technology.

10.1 Heat Pinch Technology

The first group of problems involves heat-related Process Integration, known also as Heat Pinch Technology (and, in Australia and South Africa, as Thermal Pinch Technology).

10.1.1 Heat Pinch Technology: First Problem

Problem 1: Task Assignment
A chemical process is served by a Heat Exchanger Network (HEN),
as displayed in Figure 10.1. The figure shows the supply and the
target temperatures at the start and end of each stream. For this
problem, the minimum allowed temperature difference is ΔTmin = 10°C.
The following tasks should be performed:

(a) Completion of the network description. (1) Convert the network

representation to a grid diagram. (2) Calculate the missing
temperatures and heat loads for the recovery heat exchangers,
heaters, and coolers.
(b) Identification of the Network Pinch. (1) Find and write down a
path that connects a heater and a cooler through a recovery
heat exchanger. Derive expressions for the heat loads and

220 Chapter Ten

FIGURE 10.1 Existing Heat Exchanger Network (Problem 1). [Network diagram: hot streams H1-H4 (CP = 100, 160, 50, and 190 kW/°C) and cold streams C1-C5 (CP = 100, 70, 175, 60, and 200 kW/°C) with their supply and target temperatures, recovery exchangers, heaters (H), and coolers (C).]

temperatures that will vary with shifting amounts of heat

X [kW] through the path in order to increase the heat
recovery. (2) Shift the maximum amount of load through the
path while accounting for the minimum allowed temperature
difference. Write down the maximum shifted load, the
pinching exchanger, and the stream temperatures for the
exchanger. Using the Pinch method yields the Maximum
Energy Recovery (MER) targets listed in Table 10.1 and the
Grand Composite Curve (GCC) shown in Figure 10.2.
(c) Identification of the scope for improvement. (1) Calculate the
scope for improvement in heat recovery in terms of the
network's total heating requirement. (2) Find the heat
exchangers implementing cross-Pinch heat transfer and
write them down.

Problem 1: Solutions
Answer to (a)(1) and (a)(2). The HEN is represented as a grid diagram in
Figure 10.3, which also shows the missing HEN parameters (i.e., temperatures
and loads).
Answer to (b)(1). A path between a cold and a hot utility is called a utility path.
Heat duty can be shifted along a utility path, which provides a degree of freedom
in the HEN retrofit. Figure 10.4 shows a utility path connecting a heater and a

Interval | T* [°C] | Enthalpy [kW]
1 | 332 | 17,780
2 | 305 | 20,480
3 | 215 | 2,480
4 | 175 | 2,880
5 | 169 | 2,580
6 | 155 | 900
7 | 145 | 0 (Pinch)
8 | 130 | 1,650
9 | 105 | 25
10 | 85 | 725
11 | 65 | 4,925
12 | 55 | 7,625
13 | 40 | 10,925
14 | 35 | 11,425

TABLE 10.1 Calculated MER Heat Cascade for Problem 1

FIGURE 10.2 Grand Composite Curve for Problem 1 (ΔTmin = 10°C; T* [°C] versus ΔH [MW]).


FIGURE 10.3 Calculated grid diagram for Problem 1. [Grid diagram with the recovery exchangers E1, E2, E4, E5, and E7, heaters H8 (5300 kW) and H9 (22,400 kW), the coolers, and the calculated intermediate temperatures and heat loads.]

FIGURE 10.4 The load-shift path identified for Problem 1. [Grid diagram highlighting the utility path from heater H8 through recovery exchanger E1 to cooler C3, with the affected temperatures TH11, TH12, TC11, TC2,in, and TC2,out marked.]

cooler through a heat recovery exchanger. The temperatures, which will vary
with the amount of heat shifted, are also indicated in the figure.
The expressions for the variation in the affected heat loads and temperatures in response to a shifted load of X kW are given in Eqs. (10.1) to (10.8) (notation as in Figure 10.4):

Cooler C3 load: QC3 = 5970 - X (10.1)

Temperature of stream C1: TC11 = 247 + X/100 (10.2)

Heater H8 load: QH8 = 5300 - X (10.3)

Exchanger E1 heat load: QE1 = 14,700 + X (10.4)

Temperature 1 of stream H1: TH11 = 190 - X/100 (10.5)

Output temperature of C2: TC2,out = 164 (10.6)

Input temperature of C2: TC2,in = 35 (10.7)

Temperature 2 of stream H1: TH12 = 99.7 - X/100 (10.8)

Answer to (b)(2). The temperature differences should be checked at the affected heat exchangers, which are E1 and E2. As can be seen, the heat capacity flow rate (CP) values of both streams in E1 are equal to 100 kW/°C. Exchanger E2's hot stream has a higher CP value than its cold stream, so the smaller temperature difference will be at the exchanger's hot end. This allows one to calculate the maximum load X to shift by solving a few inequalities. For process exchanger E2, the temperature difference should not be less than ΔTmin, and this determines the maximum heat load that can be shifted:

TH11 - TC2,out ≥ ΔTmin (10.9)

190 - X/100 - 164 ≥ 10 (10.10)

X ≤ 1600 kW (10.11)

For this value of the load shift, exchanger E2 will be pinched at its hot end as follows: TH11 = 174.0°C and TC2,out = 164.0°C. The temperatures at the cold end will be TH12 = 83.7°C and TC2,in = 35.0°C.
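The limiting shift can also be checked numerically from Eqs. (10.5) and (10.9); the short sketch below (function names are our own) recomputes the maximum load and the resulting E2 temperatures:

```python
# Maximum load shift X [kW] before exchanger E2 is pinched at its hot end:
# TH11 = 190 - X/100 must stay at least dT_min above TC2,out (Eqs. 10.9-10.10).

def max_shift(th11_0, tc2_out, dt_min, cp_hot):
    """Largest X [kW] satisfying th11_0 - X/cp_hot - tc2_out >= dt_min."""
    return (th11_0 - tc2_out - dt_min) * cp_hot

X = max_shift(190.0, 164.0, 10.0, 100.0)   # maximum shiftable load [kW]
th11 = 190.0 - X / 100.0                   # E2 hot-end hot-stream temperature
th12 = 99.7 - X / 100.0                    # E2 cold-end hot-stream temperature (Eq. 10.8)
```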
Answer to (c)(1). Total heating requirement for the existing network: QH = QH8 + QH9 = 27,700 kW. Total cooling requirement for the network: QC = QC3 + QC6 + QC10 = 21,345 kW. There is no inappropriate use of utilities. Compared with the targets, both total utility heating and total utility cooling are higher by 9920 kW, which is due to cross-Pinch heat transfer.
Answer to (c)(2). The Process Pinch is at 150°C for hot streams and at 140°C for cold streams. Comparing the network temperatures with the Pinch location yields a list of exchangers that violate the Pinch; see Table 10.2.

Exchanger | Hot side violation | Cold side violation
E1 | No | Yes
E2 | Yes | Yes
E4 | No | No
E5 | Yes | Yes
E7 | Yes | No

TABLE 10.2 Exchangers That Violate the Pinch (Problem 1)

Stream | Type | Tin [°C] | Tout [°C] | CP [kW/°C]
1 | Cold | 38 | 205 | 11
2 | Cold | 66 | 182 | 13
3 | Cold | 93 | 205 | 13
4 | Hot | 249 | 121 | 17
5 | Hot | 305 | 66 | 13

TABLE 10.3 Process Stream Data for Problem 2

10.1.2 Heat Pinch Technology: Second Problem

Problem 2: Task Assignment
The stream data for a process are listed in Table 10.3. The hot utility is steam at 200°C, the cold utility is water at 38°C, and ΔTmin = 24°C.

(a) Plot the Composite Curves (CCs) for this process.

(b) Determine QH,min, QC,min, and the Pinch temperatures.

(c) Assuming that the costs of cooling water and steam are 18.13 and 37.78 $/(kW·y), respectively, plot the minimum annual cost for the utilities as a function of ΔTmin in the range 51-54°C (in 1°C steps).

(d) Design a network that features a minimum number of units and maximum energy recovery for ΔTmin = 24°C.

Problem 2: Solutions
Answer to (a). The Composite Curves for the process stream data are shown in
Figure 10.5.
Answer to (b). The position of the CCs in Figure 10.5 indicates that, for ΔTmin = 24°C, this is a threshold problem: QH,min = 0 kW. At the cold end there is an excess of hot streams, which means that some cold utility is required: QC,min = 482 kW.

FIGURE 10.5 Composite Curves for Problem 2 (ΔTmin = 24°C; QC,min = 482 kW).

FIGURE 10.6 Minimum allowed temperature difference ΔTmin versus annual utility cost.

ΔTmin [°C] | 51 | 52 | 53 | 54
QH [kW] | 0 | 0 | 27 | 57
Cost of heating [$/y] | 0 | 0 | 1,020 | 2,153
QC [kW] | 482 | 482 | 509 | 539
Cost of cooling [$/y] | 8,739 | 8,739 | 9,228 | 9,772
Total utility cost [$/y] | 8,739 | 8,739 | 10,248 | 11,926

TABLE 10.4 Utility Requirements and Cost for Various ΔTmin Values
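The targets behind Table 10.4 can be regenerated with a compact implementation of the Problem Table Algorithm. The sketch below is our own illustration (function and variable names are assumed, not from the text), applied to the Table 10.3 stream data:

```python
def utility_targets(streams, dt_min):
    """Problem Table Algorithm sketch. Streams are (T_supply, T_target, CP)
    tuples, hot if T_supply > T_target. Returns (QH_min, QC_min) in kW."""
    shifted = []
    for ts, tt, cp in streams:
        shift = -dt_min / 2.0 if ts > tt else dt_min / 2.0  # hot down, cold up
        shifted.append((ts + shift, tt + shift, cp))
    bounds = sorted({t for ts, tt, _ in shifted for t in (ts, tt)}, reverse=True)
    cascade, heat = [0.0], 0.0
    for hi, lo in zip(bounds, bounds[1:]):
        # net CP surplus in this shifted interval (hot positive, cold negative)
        net = sum((cp if ts > tt else -cp) for ts, tt, cp in shifted
                  if max(ts, tt) >= hi and min(ts, tt) <= lo)
        heat += net * (hi - lo)
        cascade.append(heat)
    qh_min = max(0.0, -min(cascade))      # make the cascade feasible
    return qh_min, cascade[-1] + qh_min

# Stream data of Table 10.3: (T_in, T_out, CP [kW/°C])
streams = [(38, 205, 11), (66, 182, 13), (93, 205, 13),
           (249, 121, 17), (305, 66, 13)]
qh24, qc24 = utility_targets(streams, 24)   # threshold: no hot utility
qh54, qc54 = utility_targets(streams, 54)
cost54 = 37.78 * qh54 + 18.13 * qc54        # annual utility cost [$/y]
```

At ΔTmin = 24°C the cascade never goes negative, confirming the threshold behavior; at 54°C the most constrained interval requires 57 kW of hot utility, reproducing the last column of Table 10.4.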

Answer to (c). The utility cost plot, which is given in Figure 10.6, is based on the
data in Table 10.4.
Answer to (d). A network that features the minimum number of units and
maximum energy recovery is shown in Figure 10.7.

FIGURE 10.7 Optimal Heat Exchanger Network for Problem 2(d). [Two alternative MER designs, (a) and (b): each uses four recovery exchangers (E1-E4) and a single 482 kW cooler, with stream splits where required.]
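The unit count in Figure 10.7 agrees with the classic targeting relation based on Euler's network theorem: for a single connected system without loops, the minimum number of units is the number of streams (process plus utility) less one. A one-line sketch of ours:

```python
# Minimum-units target (Euler's network relation): for each independent
# subsystem, U_min = (number of streams + number of utilities) - 1.

def min_units(n_process_streams, n_utilities, n_subsystems=1):
    return n_process_streams + n_utilities - n_subsystems

# Problem 2 is a threshold problem: five process streams, cooling water only,
# giving four recovery exchangers plus one cooler.
u_min = min_units(5, 1)
```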

10.2 Total Sites

10.2.1 Total Sites: First Problem
Problem 3: Task Assignment
Suppose a Total Site incorporates two processes, A and B. The stream data for process A (assuming ΔTA,min = 10°C) are given in Table 10.5, and the GCC for this process is shown in Figure 10.8. The stream data for process B (assuming ΔTB,min = 10°C) are shown in Table 10.6.

Stream name | Supply temp. [°C] | Target temp. [°C] | ΔH [kW]
A1 | 50 | 140 | 450
A2 | 100 | 30 | 420
A3 | 100 | 140 | 80

TABLE 10.5 Data for Process A (Problem 3)

FIGURE 10.8 Grand Composite Curve for process A (Problem 3), ΔTmin = 10°C. [Sinks A1 and A2 and sources A1 and A2, with kink points at (59; 105) and (40; 55) in (kW; °C) coordinates.]

Stream name | Supply temp. [°C] | Target temp. [°C] | CP [kW/°C]
B1 | 190 | 120 | 6
B2 | 100 | 240 | 4
B3 | 80 | 60 | 2

TABLE 10.6 Data for Process B (Problem 3)

High-pressure (HP) steam from a boiler is available at 240°C saturation temperature, and cooling water is available at 20-30°C. It is necessary to analyze these two processes (and hence the Total Site) as follows:

(a) Find the Pinch temperatures and minimum utility demands

for processes A and B.
(b) Construct the GCC for process B.

(c) Construct the Total Site Profiles (TSPs).

(d) Assuming that the site manager is willing to accept just one
more steam main in addition to the HP, identify this steam
level on the Site Profiles. What saturation temperature will
yield the most energy savings?
(e) Determine the load target for this steam level.

Problem 3: Solutions
Answer to (a). For process A, with ΔTmin = 10°C, the minimum hot utility is 330 kW and the minimum cold utility is 220 kW (see Table 10.7).
For process B, with ΔTmin = 10°C, the minimum hot utility is 240 kW and the minimum cold utility is 140 kW (see Table 10.8).
Answer to (b). Given the data in Table 10.8, the GCC for process B is drawn as
shown in Figure 10.9.
Answer to (c). Using the GCC plots from Figures 10.8 and 10.9, the heat source
and heat sink segments have been extracted. These segments are listed in
Tables 10.9 and 10.10 for process A and process B, respectively.
The data reported in Tables 10.9 and 10.10 have been combined into
composite heat source and sink profiles. Figures 10.10 and 10.11 illustrate the
composition procedure, which is analogous to the one for constructing the
process CCs. Figure 10.12 shows the resulting TSPs.

Interval | Temperature [°C] | Enthalpy [kW]
1 | 145 | 330
2 | 105 | 50
3 | 95 | 0 (Pinch)
4 | 55 | 40
5 | 25 | 220

TABLE 10.7 Problem Table for Process A (Problem 3(a))

Interval | Temperature [°C] | Enthalpy [kW]
1 | 245 | 240
2 | 185 | 0 (Pinch)
3 | 115 | 140
4 | 105 | 100
5 | 75 | 100
6 | 55 | 140

TABLE 10.8 Problem Table for Process B (Problem 3(a))

FIGURE 10.9 Grand Composite Curve for process B (Problem 3(b)), ΔTmin = 10°C. [Sink B1 and sources B1 and B2.]

Segment | T*start [°C] | T*end [°C] | ΔH [kW] | T**start [°C] | T**end [°C]
Sink A1 | 105 | 145 | 280 | 110 | 150
Sink A2 | 95 | 105 | 50 | 100 | 110
Source A1 | 95 | 55 | 40 | 90 | 50
Source A2 | 55 | 25 | 180 | 50 | 20

TABLE 10.9 Heat Source and Sink Segments from the GCC for Process A
[Problem 3(c)]

Segment | T*start [°C] | T*end [°C] | ΔH [kW] | T**start [°C] | T**end [°C]
Sink B1 | 185 | 245 | 240 | 190 | 250
Source B1 | 185 | 135 | 100 | 180 | 130
Source B2 | 75 | 55 | 40 | 70 | 50

TABLE 10.10 Heat Source and Sink Segments from the GCC for Process B
[Problem 3(c)]
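The shift from T* to T** in Tables 10.9 and 10.10 and the merging into the combined Site sources of Figure 10.10 can be reproduced programmatically. The sketch below is our own illustration (names are assumed) and covers only the source side; sinks are treated symmetrically with the shift applied upward:

```python
def shift_sources(segments, dt_min):
    """Site sources give heat away, so their T* values shift down by a
    further dt_min/2 to T** (sinks would shift up instead)."""
    return [(hi - dt_min / 2.0, lo - dt_min / 2.0, cp)
            for hi, lo, cp in segments]

def composite(segments):
    """Merge segments by summing CP in each temperature interval, as in
    classic composite-curve construction."""
    bounds = sorted({t for hi, lo, _ in segments for t in (hi, lo)},
                    reverse=True)
    profile = []
    for hi, lo in zip(bounds, bounds[1:]):
        cp = sum(c for shi, slo, c in segments if shi >= hi and slo <= lo)
        profile.append((hi, lo, cp, cp * (hi - lo)))   # (T_hi, T_lo, CP, dH)
    return profile

# Source segments (T*_high, T*_low, CP [kW/°C]) read off the two GCCs
sources = [(95, 55, 1), (55, 25, 6),       # process A (Table 10.9)
           (185, 135, 2), (75, 55, 2)]     # process B (Table 10.10)
site_sources = composite(shift_sources(sources, 10))
cp_profile = [seg[2] for seg in site_sources]
dh_profile = [seg[3] for seg in site_sources]
```

Running this reproduces the interval CP values and duties of Figure 10.10.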

Answers to (d) and (e). By analyzing the TSPs (Figure 10.12), one can see that there is an opportunity to match heat source and heat sink requirements in the steam temperature interval between 100°C and 180°C. The proposed steam saturation temperature lies in the interval from 117.1°C to 130°C. Within this interval, the maximum heat recovery through the steam system (equal to 100 kW) is achieved. This results from matching the values for steam generation and steam use (here both are equal to 100 kW).

FIGURE 10.10 Combined Site heat sources for Problem 3(c). [Composite of the source segments against the Site temperature TB: interval CP values of 2, 0, 1, 3, and 6 kW/°C with ΔH of 100, 0, 20, 60, and 180 kW, respectively.]

FIGURE 10.11 Combined Site heat sinks for Problem 3(c). [Composite of the sink segments: interval CP values of 4, 0, 7, and 5 kW/°C with ΔH of 240, 0, 280, and 50 kW, respectively.]


FIGURE 10.12 Total Site Profiles for Problem 3(c). [The profiles show the temperature interval between T = 117.1°C and T = 130°C with 100 kW of heat recovery via the steam system.]


10.2.2 Total Sites: Second Problem

Problem 4: Task Assignment
Two processes (A and B) are operating on a site. The stream parameters for process A and process B are given in Tables 10.11 and 10.12, respectively. Medium-pressure (MP) steam from a boiler is available at 220°C saturation temperature, and ΔTmin = 10°C for both processes. The cooling water supply temperature is 18°C, and the return temperature is at most 35°C.

(a) Use the Problem Table Algorithm (PTA) to identify the

minimum heating/cooling duty for each process, and plot
the GCCs.
(b) Create the TSPs including the GCC pockets.
(c) Select the saturation temperature for a second (low-pressure)
steam level, and construct the Total Site composites, so as to
maximize heat recovery via the steam system.

Problem 4: Solutions
Answer to (a). Pinch Analysis has been applied to the two processes just described, and the targets thus identified are given in Table 10.13. The GCCs for process A and process B are shown in Figures 10.13 and 10.14, respectively.
Stream | Tsupply [°C] | Ttarget [°C] | CP [MW/°C] | ΔH [MW]
A1 | 143 | 160 | 0.15 | 2.55
A2 | 75 | 116.01 | 0.15 | 6.16
A3 | 124 | 90 | 0.412 | 14.0
A4 | 90 | 84 | 0.35 | 2.1
A5 | 84 | 37.5 | 0.101 | 4.7

TABLE 10.11 Stream Data for Process A (Problem 4)

Stream | Tsupply [°C] | Ttarget [°C] | CP [MW/°C] | ΔH [MW]
B1 | 158.3 | 159.9 | 1.33 | 2.08
B2 | 70 | 158.3 | 0.05 | 4.42
B3 | 10 | 70 | 0.03 | 1.8
B4 | 180 | 135.6 | 0.075 | 3.33
B5 | 135.6 | 105 | 0.105 | 3.21

TABLE 10.12 Stream Data for Process B (Problem 4)


| QH,min [MW] | QC,min [MW] | Pinch (hot/cold) [°C]
Process A | 2.852 | 14.955 | 124 / 114
Process B | 1.800 | 0 | None (threshold problem)

TABLE 10.13 Heat Recovery Targets for Problem 4(a)

FIGURE 10.13 GCC for process A [Problem 4(a)].

FIGURE 10.14 GCC for process B [Problem 4(a)].

Answer to (b). The TSPs for the combination of the two processes are plotted in
Figure 10.15. The apparent temperature overlap of the profiles indicates a good
opportunity for heat recovery.
Answer to (c). The heat recovery on the site can take place only via the steam system. Any steam raised from process cooling has to be utilized to the greatest extent possible in order to maximize the recovery. Usage may exceed generation; in this case, the difference would be made up by MP steam. The profiles in Figure 10.15 were used to balance steam generation and use by the processes by varying the LP level saturation temperature. This has been achieved for LP generation and use steam loads of 6.9 MW at 109°C saturation temperature, which corresponds to 1.39 bar(a) steam pressure. The Total Site composites are shown in Figure 10.16.

FIGURE 10.15 Total Site Profiles for Problem 4(b).

FIGURE 10.16 Total Site composites for Problem 4(c), showing 6.9 MW recovered via the LP steam level.


10.3 Integrated Placement of Processing Units and Data Extraction
Problem 5: Task Assignment
The process flowsheet for a base-case design is shown in Figure 10.17 with some additional data. The CCs and GCC of the process (with appropriate data) for ΔTmin = 20°C are shown in Figures 10.18 and 10.19, respectively.


FIGURE 10.17 Process flowsheet for the base-case design (Problem 5). [Flowsheet showing the feed streams, recycle FR, the isothermal reactor R1 at 180°C with its heating medium (HM), distillation columns A and B with their condensers (CA, CB) and reboilers (RB), products P1 and P2, stream flow rates m in kg/s, heat duties in kW, and the exchangers served by hot (H) and cold (C) utilities.]

FIGURE 10.18 Composite Curves for the process in Problem 5 (ΔTmin = 20°C; Pinch at Th = 240°C/Tc = 220°C; targets QH,min = 2642.6 kW and QC,min = 1958.6 kW).

FIGURE 10.19 GCC for the process in Problem 5.

(a) Extract the appropriate stream data for performing a Pinch

Analysis. Tabulate the extracted data where appropriate. If a
stream involves phase changes (several segments), then give
the relevant details (e.g., stream F1 has three segments:
liquid phase, vaporization, and vapor phase).
(b) In the base-case design, what is the hot and cold utility
consumption and how much is the process heat recovery?
Compare the utility usage in the base case with the targets
shown on the CCs. Why are they different?
(c) Explain how the heat recovery system can be improved over
the base case without modifying any conditions (e.g., reactor
pressure, column pressure) of the main process.
(d) Refer to the GCC in Figure 10.19.
(1) If a heat pump is proposed, what operating temperature
levels will best fit the heating and cooling needs of the process?
(2) What qualitative suggestions can be made concerning
alternatives for utility saving and appropriate utility placement?
(e) If process modifications are allowed, comment on whether
they could help to improve the process Heat Integration.
Your answer should address the reactors, the distillation
columns, feed vaporization, and soft data.

For this problem, the following assumptions are made. Utilities

generated can be sold, and the unit cost of high-temperature utilities

is greater than that for low-temperature utilities. The temperature of

flue gas for direct heating ranges from 800°C to 160°C. Steam usage
and generation are isothermal utilities (up to four levels between
100°C and 340°C). A heat pump or heat engine can be introduced.
Additional data for the problem:

Condensers and reboilers of distillation columns A and B

involve only phase changes.
In the condenser of distillation column B, total condensation
is assumed (i.e., the outlet is in the liquid phase only).
An endothermic reaction takes place in reactor R1. A heating
medium (HM) is necessary to keep the isothermal condition
in R1.

Some properties of the streams are as follows:

F1: Boiling point = 120°C, ΔHvap = 990 kJ/kg, Cp,vap = 4.5 kJ/(kg·°C)
F2: Boiling point = 125°C, ΔHvap = 1250 kJ/kg, Cp,vap = 7.5 kJ/(kg·°C), Cp,liq = 15 kJ/(kg·°C)
FR: Cp,vap = 5.0 kJ/(kg·°C)
HM: Cp,liq = 25 kJ/(kg·°C)
P1: Boiling point = 270°C, ΔHvap = 1556 kJ/kg
P2: Boiling point = 200°C

Problem 5: Solutions
Answer to (a). The stream data needed to perform a Pinch Analysis are reported in Table 10.14.
Answer to (b). For the base case, QH = 4828 kW, QC = 3366.4 kW, and the process heat recovery is 3530 kW. The targets are QH = 2642.6 kW and QC = 1958.6 kW, with a process heat recovery of 5715.8 kW. The base-case design thus requires much more utility consumption. This is because some heat is transferred across the Pinch and also because there is some unnecessary utility use. For example, some amount of heat is needed in the column B reboiler for heating up P1, and this amount must be taken out again when P1 needs cooling.
Answer to (c).
The cold streams below the Pinch are F1, F2, and FR. These streams can
be heated up by P1 (parts below the Pinch), P2, and by the condensers of
columns A and B. In the base case, stream P1 is used to supply heat to
stream F2, resulting in heat transfer across the Pinch. Instead, to heat up
stream F2 at its higher temperature range, heat exchange with stream P2
can be employed; for the lower-temperature part of F2, the heat from the
condenser of column A can be recovered.
Above the Pinch, P1 can be used to heat up parts of the column A
reboiler or of the heating media of reactor R1.

Stream | Type | Ts [°C] | Tt [°C] | m [kg/s] | Cp [kJ/(kg·°C)] | ΔHvap [kJ/kg] | CP [kW/°C] | ΔH [kW]
F1-liq | Cold | 50 | 120 | 1 | 11 | - | 11 | 770
F1-evap | Cold | 120 | 120 | 1 | - | 990 | - | 990
F1-vap | Cold | 120 | 180 | 1 | 4.5 | - | 4.5 | 270
F2-liq | Cold | 50 | 125 | 0.8 | 15 | - | 12 | 900
F2-evap | Cold | 125 | 125 | 0.8 | - | 1250 | - | 1000
F2-vap | Cold | 125 | 180 | 0.8 | 7.5 | - | 6 | 330
HM | Cold | 220 | 250 | 1.4 | 25 | - | 35 | 1050
CA | Hot | 120 | 120 | - | - | - | - | 900
RA | Cold | 220 | 220 | - | - | - | - | 1000
FR | Cold | 120 | 180 | 0.4 | 5 | - | 2 | 120
CB | Hot | 200 | 200 | - | - | - | - | 980
RB | Cold | 270 | 270 | - | - | - | - | 1928
P1-cond | Hot | 270 | 270 | 0.5 | - | 1556 | - | 778
P1-liq | Hot | 270 | 70 | 0.5 | 37.17 | - | 18.58 | 3716
P2 | Hot | 200 | 70 | 0.8 | 12.5 | - | 10 | 1300

TABLE 10.14 Stream Data for Pinch Analysis [Problem 5(a)].

Answer to (d)(1). Using the GCC in Figure 10.19, two good candidates for integrating a heat pump are as follows:

Across the Process Pinch. Heat rejection from the heat pump to the process at T* = 230°C (real temperature T = 240°C) and heat absorption from the process at T* = 190°C (T = 180°C) would result in a temperature lift ΔTlift = 60°C. The main challenge is that the heat available for absorption by the heat pump is smaller than what can be delivered by the heat pump above the Process Pinch.

Across a potential Utility Pinch. The heat rejection level is T* = 130°C and the heat absorption level is T* = 110°C. The corresponding real heat pump temperatures are a heat rejection level of T = 140°C and a heat absorption level of T = 100°C. This results in a temperature lift ΔTlift = 40°C. At this level, a large amount of heat is available to be absorbed from the process, and an even larger amount can be rejected to the process.
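As a rough screening of the two placements, the ideal (Carnot) heating COPs can be compared. This back-of-envelope sketch is our own addition, and the values are only upper bounds, since real heat pumps achieve a fraction of the Carnot COP:

```python
# Ideal (Carnot) heating COP = T_reject / (T_reject - T_absorb), in kelvin.
# Real machines deliver only a fraction of this thermodynamic bound.

def carnot_cop_heating(t_reject_c, t_absorb_c):
    return (t_reject_c + 273.15) / (t_reject_c - t_absorb_c)

cop_process_pinch = carnot_cop_heating(240.0, 180.0)  # 60°C lift across the Process Pinch
cop_utility_pinch = carnot_cop_heating(140.0, 100.0)  # 40°C lift across the Utility Pinch
```

The smaller 40°C lift gives the higher ideal COP, which supports the qualitative preference stated above for the placement across the Utility Pinch.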

Answer to (d)(2).

From the GCC, the heat pump can be integrated at the levels of 140°C and 100°C with only a small amount of shaft work required. This could save a large part of the cold utility required.

Above the Pinch, a large and quite flat pocket is present in the GCC, providing an opportunity to generate steam at the maximum level of 180°C.

Above the Pinch, steam is preferred over flue gas as a hot utility because the process stream temperatures are not too high.

A heat engine could also be a candidate for the pocket in the GCC below the Pinch. However, since the ΔT is small (about 80°C, i.e., [(190 + 10) - (130 - 10)]), only a small amount of work may be produced.

Answer to (e).

Reactors. The reaction is endothermic; to save utility, its heating-medium heat demand should be placed below the Pinch. It is now placed above the Pinch. If the pressure and temperature of reactor R1 were reduced, then an alternative heating medium with lower temperature levels could be used. As a result of a decreased R1 pressure, the boiling points of feeds F1 and F2 would also decrease. This would result in different targets: the GCC could be shifted more to the left and the Pinch point could change, resulting in more process heat recovery.

Distillation columns. The reboilers of columns A and B currently evaporate the entire flow from the column bases. The appropriate arrangement would be for the reboilers to evaporate only the branches of these flows that are intended for the columns.

Feed vaporization. In the base case, the vaporization of F1 and F2 is placed below the Pinch. However, as mentioned, the current vaporization of F2 results in cross-Pinch heat transfer. Also, the vaporization of F1 employs utility heating below the Pinch, which should be eliminated and replaced by process-to-process heat recovery.

Soft data. The P1 and P2 target temperatures are soft data that can be changed. The temperatures could be increased to reduce the cold utility required, but this would not change the CCs much.

10.4 Utility Placement

10.4.1 Utility Placement: First Problem
Problem 6: Task Assignment
A set of data for a part of a crude distillation process has been extracted, and the stream data are shown in Table 10.15. For this problem, the minimum temperature difference is ΔTmin = 10°C. The available utilities are presented in Table 10.16.
A targeting procedure was performed for the initial data summarized in Tables 10.15 and 10.16. Figure 10.20, which exhibits two Pinch points, shows the balanced CCs for the problem.

(a) Complete the missing entries in Table 10.15.

(b) Identify the Pinches and explain their significance.
(c) Draw the design grid for the problem, including the hot utilities.
Position the streams relative to the location of the Pinches.
(d) Design the MER HEN below the Process Pinch.
(e) Design the MER HEN above the Process Pinch.
(f) Design the MER HEN above the Utility Pinch.

Stream | Name | Ts [°C] | Tt [°C] | ΔH [kW] | CP [kW/°C]
1 | OH_Naphta_1 | 100 | 41 | | 27.5113
2 | OH_Naphta_2 | 132 | 60 | | 23.1850
3 | OH_HKD | 224 | 65 | | 11.2400
4 | LCT | 268 | 30 | 363.12 | 1.5257
5 | Residue | 283 | 45 | 336.25 | 1.4128
6 | Crude | 30 | 146 | 2031.50 | 17.5132
7 | Denaphta_1 | 117 | 204 | 977.88 |
8 | Denaphta_2 | 176 | 305 | 1641.45 | 12.7244

TABLE 10.15 Process Streams for Problem 6

Stream | Ts [°C] | Tt [°C] | Cost
HP steam | 320.1 | 320.0 | 100
MP steam | 260.1 | 260 | 60
Cooling water | 20.0 | 40.0 | 8

TABLE 10.16 Available Utilities for Problem 6

FIGURE 10.20 Balanced Composite Curves and heat recovery targets for Problem 6 (HP steam = 654.2 kW, MP steam = 874.4 kW, cooling water = 2657 kW).

Problem 6: Solutions
Answer to (a). Table 10.17 shows the process streams of Table 10.15 with the missing parameters filled in.
Answer to (b). There are two Pinches: the Process Pinch (located at 132°C/122°C)
and the Utility Pinch (located at 260.1°C/250.1°C; see Figure 10.21). The Utility
Pinch is caused by the placement of the MP steam. More significant is the
Process Pinch, which divides the design area into two different parts: a net
heat sink above the Pinch temperatures and a net heat source below them.
The Utility Pinch further subdivides the area above the Process Pinch. In
the interval between the Process Pinch and the Utility Pinch, the only utility
allowed is the MP steam. Above the Utility Pinch, the only utility available
is the HP steam.
Answer to (c). See Figure 10.22.

No  Name         Ts [°C]  Tt [°C]  ΔH [kW]  CP [kW/°C]

1   OH_Naphta_1  100      41       1623.17  27.5113
2   OH_Naphta_2  132      60       1669.32  23.1850
3   OH_HKD       224      65       1787.16  11.2400
4   LCT          268      30       363.12   1.5257
5   Residue      283      45       336.25   1.4128
6   Crude        30       146      2031.50  17.5132
7   Denaphta_1   117      204      977.88   11.2400
8   Denaphta_2   176      305      1641.45  12.7244

TABLE 10.17 Process Streams for Problem 6(a) with Missing Data Filled In
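Each missing entry follows from the relation ΔH = CP × |Ts − Tt|. A quick numerical check (an illustrative sketch, not code from the book):

```python
def duty(cp, ts, tt):
    """Stream duty [kW] for a constant heat-capacity flow rate CP [kW/degC]."""
    return cp * abs(ts - tt)

# Missing dH values for streams 1-3 (CP given in Table 10.15)
dh1 = duty(27.5113, 100, 41)   # OH_Naphta_1 -> 1623.17 kW
dh2 = duty(23.1850, 132, 60)   # OH_Naphta_2 -> 1669.32 kW
dh3 = duty(11.2400, 224, 65)   # OH_HKD      -> 1787.16 kW

# Missing CP for stream 7 (dH given)
cp7 = 977.88 / abs(204 - 117)  # Denaphta_1  -> 11.2400 kW/degC
```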

[Balanced Composite Curves with both Pinches marked: Utility Pinch at 260.1°C/250.1°C; Process Pinch at 132°C/122°C; HP steam = 654.2 kW; MP steam = 874.4 kW; cooling water = 2657 kW.]

FIGURE 10.21 Pinch identification for Problem 6(b), ΔTmin = 10°C.

[Grid diagram: process streams 1–8 with HP steam (654.16 kW), MP steam (874.37 kW), and cooling water (2656.69 kW), partitioned at the Process Pinch (132°C/122°C) and the Utility Pinch (260.1°C/250.1°C). ΔTmin = 10°C; temperatures in °C, ΔH in kW, CP in kW/°C.]

FIGURE 10.22 Preliminary grid diagram for Problem 6(c).

Answer to (d). See Figure 10.23.

[Grid diagram of the MER design below the Process Pinch: recovery matches on the streams below 132°C/122°C with coolers C1, C3, C5, C6, and C7; duties in kW.]


FIGURE 10.23 HEN design below the Process Pinch [Problem 6(d)].

Answer to (e). See Figure 10.24.

[Grid diagram of the MER design between the Process Pinch (132°C/122°C) and the Utility Pinch (260.1°C/250.1°C), with MP steam heaters 12 and 13; temperatures in °C, ΔH in kW, CP in kW/°C.]

FIGURE 10.24 HEN design above the Process Pinch [Problem 6(e)].

Answer to (f). See Figure 10.25.


[Grid diagram of the MER design above the Utility Pinch, with HP steam heater 16 (654.16 kW); duties in kW.]


FIGURE 10.25 HEN design above the Utility Pinch [Problem 6(f)].

10.4.2 Utility Placement: Second Problem

Problem 7: Task Assignment
A process involves the set of process streams described in Table 10.18.
The utilities available to satisfy its heating and cooling requirements
are given in Table 10.19.
For the HEN to be designed, assume ΔTmin = 30°C.

(a) Identify the minimum heating duty, the minimum cooling duty,
and the Process Pinch location.
(b) Plot the GCC.
(c) Perform an appropriate placement of the utilities against the
GCC so as to achieve minimum total cost of the utilities.
(1) Draft the placement on the GCC.
(2) Calculate the total duty for each of the utilities.
(3) Calculate the total utility cost for the problem.
(d) Consider the following ways to reduce the utility costs
further, and comment on their suitability for application to
the current problem. Wherever appropriate, suggest how an
option can be exploited.
(1) Flue gas heating
(2) Heat pumping
(3) Introduction of a new steam level

Stream  Ts [°C]  Tt [°C]  CP [MW/°C]  ΔH [MW]

H1      175      75       3.5
H2      100      40       2.4
H3      180      130      1.5
H4      195.1    195                  100
C1      50       150      2.0
C2      140      140.1                150
C3      80       140      1.5
C4      20       80       3.0

TABLE 10.18 Process Stream Data for Problem 7

Name           Ts [°C]  Tt [°C]  Cost [$/MWy]

HP steam       300.1    300      65000
MP steam       200.1    200      50000
Cooling water  15.0     20.0     7000
Chilled water  5.0      10.0     30000

TABLE 10.19 Utilities Available for Problem 7

Problem 7: Solutions
Answer to (a). The heat recovery targets and the Pinch location can be obtained
with the help of the PTA. The heat cascade intervals and the corresponding
stream population are shown in Table 10.20, and the computed problem table
is given in Table 10.21.

Interval number  Stream population

1                H4
2                H3, C1
3                H1, H3, C1
4                H1, H3, C1, C2
5                H1, H3, C1, C3
6                H1, C1, C3
7                H1, C1, C4
8                H1, H2, C1, C4
9                H1, H2, C4
10               H2, C4
11               H2

TABLE 10.20 Heat Cascade Intervals and Stream Population for Problem 7(a)
Alternatively, the heat recovery targets can be obtained by plotting the CCs,
as shown in Figure 10.26.
Answer to (b). See Figure 10.27.
Answer to (c)(1) and (c)(2). See Figure 10.28.
Answer to (c)(3).
Total utility cost = 37.5 [MW MP steam] × 50,000 [$/MWy]
+ 62.5 [MW cooling water] × 7,000 [$/MWy]
+ 24 [MW chilled water] × 30,000 [$/MWy]
= 3,032,500 [$/y]
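The cost total can be reproduced with a few lines (duties from Figure 10.28, unit costs from Table 10.19; an illustrative sketch):

```python
# Utility duties [MW] and unit costs [$/MWy] for Problem 7(c)
duties    = {"MP steam": 37.5, "Cooling water": 62.5, "Chilled water": 24.0}
unit_cost = {"MP steam": 50_000, "Cooling water": 7_000, "Chilled water": 30_000}

# Total annual utility bill [$/y]
total_cost = sum(duties[u] * unit_cost[u] for u in duties)  # -> 3_032_500.0
```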

Interval number  Temperature [°C]  Enthalpy [MW]

1                180.1             37.5
2                180               137.5
3                165               137.5
4                160               135.0
5                155.1             149.7
6 (Pinch)        155               0.0
7                115               60.0
8                95                60.0
9                85                45.0
10               65                63.0
11               60                77.5
12               35                62.5
13               25                86.5

TABLE 10.21 Problem Table for Problem 7(a)
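The cascade in Tables 10.20 and 10.21 can be reproduced with a compact Problem Table Algorithm sketch (illustrative code, not from the book; the near-isothermal streams H4 and C2 are entered through their equivalent CPs, ΔH/ΔT):

```python
DT_MIN = 30.0  # degC

# (name, Ts, Tt, CP) with CP in MW/degC; hot streams cool, cold streams heat.
# H4 and C2 are duty-specified: CP = dH/dT = 100/0.1 and 150/0.1 MW/degC.
streams = [
    ("H1", 175.0,  75.0,    3.5),
    ("H2", 100.0,  40.0,    2.4),
    ("H3", 180.0, 130.0,    1.5),
    ("H4", 195.1, 195.0, 1000.0),
    ("C1",  50.0, 150.0,    2.0),
    ("C2", 140.0, 140.1, 1500.0),
    ("C3",  80.0, 140.0,    1.5),
    ("C4",  20.0,  80.0,    3.0),
]

def problem_table(streams, dt_min):
    # 1. Shift: hot streams down, cold streams up, by dt_min/2
    shifted = [(ts - dt_min / 2, tt - dt_min / 2, cp, True) if ts > tt else
               (ts + dt_min / 2, tt + dt_min / 2, cp, False)
               for _, ts, tt, cp in streams]
    bounds = sorted({t for s in shifted for t in s[:2]}, reverse=True)
    # 2. Net heat balance of each shifted-temperature interval, cascaded down
    flows = [0.0]
    for hi, lo in zip(bounds, bounds[1:]):
        net = sum(cp * (hi - lo) * (1 if hot else -1)
                  for t1, t2, cp, hot in shifted
                  if max(t1, t2) >= hi and min(t1, t2) <= lo)
        flows.append(flows[-1] + net)
    # 3. Lift the cascade so the most negative heat flow becomes zero
    qh_min = -min(flows)
    cascade = [f + qh_min for f in flows]
    pinch = bounds[min(range(len(cascade)), key=cascade.__getitem__)]
    return bounds, cascade, qh_min, cascade[-1], pinch

bounds, cascade, qh_min, qc_min, pinch = problem_table(streams, DT_MIN)
# qh_min = 37.5 MW, qc_min = 86.5 MW, Pinch at shifted 155 degC (170/140 degC)
```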

FIGURE 10.26 Composite Curves for Problem 7(a), ΔTmin = 30°C:
QH,min = 37.5 MW; QC,min = 86.5 MW; Pinch at 170°C (hot)/140°C (cold).

FIGURE 10.27 Grand Composite Curve for Problem 7(b), ΔTmin = 30°C:
QH,min = 37.5 MW; QC,min = 86.5 MW.

FIGURE 10.28 Placement of utilities for Problem 7(c): MP steam = 37.5 MW;
cooling water = 62.5 MW; chilled water = 24 MW.

Answer to (d)(1). The flue gas heating option will probably be more expensive
than heating with steam. The temperature levels in the problem do not suggest
exploitation of furnace heat.
Answer to (d)(2). The appropriate placement of a heat pump is across the Process
Pinch, so the pump will transfer heat from below to above the Pinch. In this
problem it is technically impractical to install a heat pump with water or steam
as a working fluid, since this would require operation at a slight vacuum (the
Pinch is at 170°C/140°C). The relatively small duties that could be achieved
with this technique indicate that investment in a heat pump here is likely to be
economically unattractive.
Answer to (d)(3). Introducing a new steam level at a steam temperature of 165°C
is a good option that can be exploited by passing the steam (taken at 200°C)
through a steam turbine. This has the potential of generating additional power
on the site. The steam would be returned to the process at the level of 165°C
(the process heating requirement is at 150°C, which makes this arrangement
feasible). If the site could use additional power, then this option may reduce the
overall utility cost by providing on-site power generation.

10.5 Water Pinch Technology

10.5.1 Water Pinch Technology: First Problem
Problem 8: Task Assignment
A certain process contains several water-processing operations, as
shown in Table 10.22.

(a) Create the limiting CCs by plotting the contaminant
concentration, C [ppm], against the contaminant load, m [kg/h].
(b) Calculate the minimum water flow rate for maximum reuse.

Problem 8: Solutions
Answer to (a). The individual limiting water profiles are graphed in Figure 10.29.
Figure 10.30 shows the result when these water profiles are combined.

Operation  Contaminant flow rate [kg/h]  Cin [ppm]  Cout [ppm]  Limiting flow rate [t/h]

1          2                             0          100         20
2          5                             50         100         100
3          30                            50         800         40
4          4                             400        800         10

TABLE 10.22 Data for the Water-Processing Operations of Problem 8


FIGURE 10.29 Limiting water profiles for Problem 8(a).


Answer to (b). The minimum water flow rate is targeted as follows:

Minimum flow rate = (mass pickup at the Pinch)/(Pinch concentration)
= 9 [kg/h] / 100 [ppm] = 9 [kg/h] / (100 × 10⁻⁶) = 90,000 [kg/h] = 90 [t/h]

See Figure 10.31.
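For a single contaminant, the same target can be computed by checking every concentration level of the limiting composite: the supply line must carry the cumulative mass load picked up below each level. An illustrative sketch using the Table 10.22 data:

```python
ops = [  # (mass load [kg/h], Cin [ppm], Cout [ppm]) from Table 10.22
    (2.0,    0.0, 100.0),
    (5.0,   50.0, 100.0),
    (30.0,  50.0, 800.0),
    (4.0,  400.0, 800.0),
]

def freshwater_target(ops):
    levels = sorted({c for _, cin, cout in ops for c in (cin, cout)})
    best_flow, pinch = 0.0, None
    for c in levels:
        if c == 0.0:
            continue
        # Cumulative contaminant load transferred below concentration c,
        # assuming each operation picks up its load linearly in C
        load = sum(m * (min(cout, c) - min(cin, c)) / (cout - cin)
                   for m, cin, cout in ops)
        flow = load * 1000.0 / c        # [kg/h] / [ppm] -> [t/h]
        if flow > best_flow:
            best_flow, pinch = flow, c
    return best_flow, pinch

flow, pinch = freshwater_target(ops)    # -> 90.0 t/h at the 100-ppm Pinch
```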

FIGURE 10.30 Combined limiting water profiles for Problem 8(a).

[Water supply line at the minimum flow rate of 90 t/h against the limiting Composite Curve; Pinch at C = 100 ppm, m = 9 kg/h.]

FIGURE 10.31 Minimum wastewater targeting for Problem 8(b).


10.5.2 Water Pinch Technology: Second Problem

Problem 9: Task Assignment
Using the stream data from Problem 8, suggest a design of a network
for maximum reuse of water. The limiting CC is shown in
Figure 10.32.
Problem 9: Solution
Cut off the pockets of the limiting CC. The minimum water requirement
can then be defined for each of the resulting regions (Figure 10.33). This
enables one to propose the design strategy shown in Figure 10.34.
Following the design strategy, set up a design grid as shown in Figure 10.35.
Connect streams to water mains and merge operations that cross boundaries;
see Figure 10.36.
Remove intermediate water mains where this is appropriate (Figure 10.37).
Then connect operations directly, as shown in Figure 10.38. Finally, Figure 10.39
illustrates (in conventional flowsheet form) one of the possible resulting designs
of the water system.


FIGURE 10.32 Limiting Composite Curve for Problem 9.


[Targeting diagram: 90 t/h of water supplies the region below the Pinch; 45.7 t/h suffices above it.]

FIGURE 10.33 Water Pinch diagram used to target the minimum water flow
rate in Problem 9.

[Design strategy: 90 t/h of freshwater serves the region up to 100 ppm; 45.7 t/h continues to 800 ppm while 44.3 t/h leaves as 100-ppm wastewater.]

FIGURE 10.34 Design strategy for Problem 9.

[Design grid with three water mains (freshwater, 90 t/h; 100 ppm, 45.7 t/h; 800 ppm, 0 t/h) and the four operations (limiting flow rates 20, 100, 40, and 10 t/h); wastewater leaves at 44.3 t/h (100 ppm) and 45.7 t/h (800 ppm).]

FIGURE 10.35 Design grid for the water system in Problem 9.

[Design grid with the operations connected to the water mains; flows in t/h.]

FIGURE 10.36 Streams are connected with the water mains (Problem 9).

[Design grid after removal of the intermediate water main, with sources connected to sinks; flows in t/h.]

FIGURE 10.37 Removing intermediate water mains and then connecting
sources and sinks (Problem 9).

[Design grid with the operations connected directly: 90 t/h of freshwater in; 44.3 t/h and 45.7 t/h of wastewater out.]

FIGURE 10.38 Connecting operations directly (Problem 9).

[Flowsheet: 90 t/h of freshwater feeds Operations 1 and 2; reuse streams supply Operations 3 and 4; 90 t/h of wastewater leaves the system.]

FIGURE 10.39 Flowsheet representation of a water system design (Problem 9).

Industrial Applications and Case Studies

This chapter provides an overview of selected implementations
of the Process Integration methodology for various industrial
case studies. The presentation is somewhat condensed because
of space limitations; more information is available in the works cited.
11.1 Energy Recovery from an FCC Unit

In this case study, the Heat Exchanger Network (HEN) of a Fluid
Catalytic Cracking (FCC) unit process consisted of a main column
and a gas concentration section (Al Riyami, Klemeš, and Perry, 2001).
The stream data was made up of 23 hot streams and 11 cold streams.
The associated cost and economic data required for the analysis were
specified by the refinery owners. Incremental area efficiency was
used for the targeting stage of the retrofit design. This was carried
out using the Network Pinch method (Asante, 1996; Asante and Zhu,
1997), which consists of a diagnosis stage and an optimization stage.
In the diagnosis stage, a few promising retrofit steps were generated
using the UMIST (now the University of Manchester) software
package SPRINT (2009). This software was also used to optimize the
initial design by trading off capital cost against energy savings. The
design options were then compared and evaluated, followed by
the final retrofit design proposed for final inspection.
The existing ΔTmin of the process was identified as 24°C, and the
hot utility consumption of the process was 46.055 MW; the area
efficiency of the existing design was 0.805. The potential for energy
saving was then derived from the resultant Composite Curves, which
are shown in Figure 11.1. As seen in the figure, the Composite Curves
are relatively wide apart except in the area around the Pinch.
The capital cost was estimated under the assumption that the
retrofit distribution of area would be the same as that for the
existing network. The resulting optimum minimum temperature
approach was found to be about 11.5°C for incremental α and about


[Composite Curves, Temperature [°C] vs. Enthalpy [MW], at the optimum ΔTmin = 11.5°C.]

FIGURE 11.1 Composite Curves of the FCC process with optimum ΔTmin
(after Al Riyami, Klemeš, and Perry, 2001).

17.5°C for constant α. The area efficiency of the existing network
was found to be 0.804. This value indicated that the existing design
was using the area reasonably efficiently. Even so, there was still room
for improvement. Since the constant-α targeting produced a
conservative estimate, an incremental α value of 1.0 was used to set
the retrofit target, which yielded potential for energy savings of
about 12.117 MW. Analysis of the existing design revealed that there
were four process-to-process heat exchangers that transferred heat
across the Process Pinch (from above to below the Pinch). It was
also found that some heaters supplied utility heat to process streams
below the Pinch and that some coolers removed heat from process
streams above the Pinch. These energy violations of the established
Pinch rules generated the scope of the project's possible energy savings.
The retrofit design using the Network Pinch method allowed a
limit to be set on the structure's energy recovery. The next stage
consisted of testing a set of modifications that would result in higher
levels of energy recovery in the process. The increase in energy
recovery would come at the expense of increased heat exchange area.
Therefore, any benefit in energy cost reduction had to be weighed
against the additional capital cost associated with increasing that
area. A number of promising design solutions were generated, which
were then optimized for minimum total cost. The four designs
identified each involved a payback period of less than two years;
subsequent increases in energy prices brought the actual payback period closer
to one year. The final design chosen for the retrofit situation was the
one with the shortest payback period and the least additional area
required; it is shown in Figure 11.2.
In this design option, four new heat exchangers were added and
one existing exchanger (number 1 in Figure 11.2) was removed, since
its duty approached zero. In reality the exchanger equipment item
[Grid diagram of the retrofitted HEN: hot streams H1–H23 and cold streams C2–C11, with exchanger, heater, and cooler duties in kW.]

FIGURE 11.2 Grid diagram of the chosen retrofit option (after Al Riyami,
Klemeš, and Perry, 2001). Exchanger number 1 from the original network is
removed and repiped as newly added area (exchanger number 42);
exchanger number 10 is only repiped.

need not be removed. Instead, it could be used in place of one of the

additional exchangers called for by the new design by repiping either
the hot or cold stream (so in this case, exchanger number 1 becomes
exchanger 42 in Figure 11.2), thus further reducing the additional area

required. All the modifications carried out in the diagnosis stage had
the effect of moving heat from below to above the Network Pinch. The
final retrofit design produced energy savings of 8.955 MW, about
74 percent of the potential of the design. The annual utility cost savings
amounted to $2,388,600, a 27 percent decrease in the utility bill. Since
the modified HEN required an investment of $3,758,420, the payback
period was less than 19 months.
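A quick check of the quoted figures (values from this section; illustrative arithmetic only):

```python
# Reported FCC retrofit results (Al Riyami, Klemes, and Perry, 2001)
savings_mw    = 8.955       # achieved energy saving [MW]
potential_mw  = 12.117      # targeted saving potential [MW]
annual_saving = 2_388_600   # utility-cost reduction [$/y]
investment    = 3_758_420   # retrofit capital cost [$]

fraction_of_target = savings_mw / potential_mw       # -> about 0.74
payback_months     = investment / annual_saving * 12 # -> about 18.9 months
```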
The study demonstrated that a combination of targeting and
Pinch Technology (process Pinch and Network Pinch) can yield
substantial improvements in an existing HEN and thereby reduce the
total network cost. The employed method can recognize bottlenecks
in an existing system, and then generate a series of potential
improvements by searching for modifications capable of shifting heat
from below to above the Network Pinch. It was found that targeting
for maximum energy savings at each potential modification usually
produces a good trade-off between area and energy cost.

11.2 De-bottlenecking a Heat-Integrated Crude-Oil Distillation System
A crude-oil preheating system retrofit problem was studied by
Seikova, Varbanov, and Ivanova (1999). The objective of the study was
to develop a topology modification proposal for the refinery owners
that would feature better energy efficiency than the existing network
under a flexibility requirement: namely, that the distillation plant
be able to work with several alternative feedstocks. The preheating
system of the distillation plant was analyzed, and several topology
modifications were evaluated.
The retrofit aspect of Heat Integration was first considered
systematically by Tjoe and Linnhoff (1986), who proposed a framework
for retrofit assessment and targeting. Later, Asante and Zhu (1997)
developed the Network Pinch methodology. This hybrid approach,
which combines heuristics and Mathematical Programming (MPR),
was implemented through an iterative procedure whose two main
stages are to identify the heat transfer bottleneck of the given HEN
and then to implement modifications for overcoming it.
Another important Process Integration issue involves accounting
for different plant operating conditions stemming from the market
availability of various feedstocks (Saboo, Morari, and Woodcock, 1985).
The case study reported here was based on the HEN retrofit
methodologies just described and combined the energy recovery
improvement with handling of the flexibility requirements that are
common to real-life processing plants.
The crude-oil distillation unit (Figure 11.3) consists of a main
distillation column (with 6 m3/h capacity) for fractionating crude oil
into four products: light distillate, heavy gasoline, diesel fuel, and
atmospheric residue. The preheating HEN recovers heat from the


[Flowsheet: crude oil is preheated against the pumparound and product streams (light distillate, heavy distillate, diesel oil, atmospheric residue) before entering the column.]
FIGURE 11.3 Flowsheet of the crude-oil distillation unit.

pump-around and from some of the products before the furnace. To

bring the final crude oil temperature to the column's required target,
the remaining heat duty is provided by the combustion of
noncondensed light gases and of additional natural gas.
The process features the following characteristics: (1) temperature-
dependent heat capacities; (2) continuous partial phase change of the
crude oil; (3) temperature variations, which are small owing to
specifications of the distillation process; and (4) large variations of
the hot stream flow rates due to changing feedstock composition.
The three utility sources available for HEN operation are light
gases (as a furnace fuel for higher temperature levels), steam at 1 atm
(1.01325 bar), and cooling water at 18–35°C. The data used to estimate
the utility cost are $68.74/kWy for furnace heating, $103.10/kWy for
steam, and $30.00/kWy for cooling water. The following area cost
law was used:

Capital cost [$] = 25,000 + 680 × Area^0.81 [m²]  (11.1)
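Equation (11.1) can be checked against the investment figure quoted later in this section; the extra EC4 area is taken as the feed-1 increase from Table 11.2 (59.41 − 4.95 m², the largest of the three feeds; this attribution is our reading, not stated explicitly in the text):

```python
def capital_cost(area_m2):
    """Installed heat-transfer area cost [$], Eq. (11.1)."""
    return 25_000 + 680 * area_m2 ** 0.81

extra_area = 59.41 - 4.95              # m2, EC4 area increase (Table 11.2)
investment = capital_cost(extra_area)  # -> about 42,330 $

# Payback against the $17,519/y saving reported at the end of the section
payback_years = investment / 17_519    # -> about 2.4 y
```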

The plant processes a range of alternative feedstocks. Three

crude-oil types were selected to represent this range. Feed 1 is a light
crude oil; it contains a significant amount of the lightest fractions,
which require a large amount of cooling in the condensers. However,
the relatively small amount of the heavy fraction means that less
reboiler heating is required. Feed 2 is a medium crude oil and feed 3 is a
heavy crude oil. These two feedstocks are characterized by a relatively
greater amount of the heavy fractions (diesel and atmospheric
residue), which means that more heat is available for recovery in the
higher temperature range. It was assumed that, over a year's worth of
operation, the plant uses equal amounts of the three feedstocks. Each
alternative feedstock is associated with a unique operating point.
258 Chapter Eleven

At all three operating points, the crude oil is preheated within a
large temperature interval: from 20°C to 310°C. The preheating
process consists of three phases. In the first phase, the crude oil is
heated from the starting temperature to its bubble point. The next
phase involves continuous partial evaporation in heat exchangers.
The third and hottest phase is heating to the specified column entry
temperature of 310°C (this heating is performed in the furnace). The
upper pump-around features a large enthalpy change split between
condensation and subcooling segments. None of the other process
streams has phase transitions.
The existing HEN for the preheating train is shown in Figure 11.4.
In order to initialize the retrofit procedure, a Pinch Analysis of the
stream data for ΔTmin = 10°C was carried out. The results of this
analysis are given in Table 11.1.
For the first two feeds, the Pinch is located at the temperature
boundary of the crude oil bubble point. For feed 3, the Pinch is
located at a temperature that is close to the beginning of the phase
change. Note that each operating point offers a different potential
for heat recovery.
After looking at the existing network and thermodynamic
targets, one can suggest several retrofit modifications. First, the hot
utility usage below the Pinch leads to extremely poor heat recovery
as well as to substantially increased utility cooling and thus to using
considerable amounts (30–38 m³/h) of cooling water. Taking a look
at the coolers in the temperature interval of heater H1 (Figure 11.4),
one can see that the sum of the cooler loads at the outlets of streams
H3, H4, and H5 offers enough heat supply to satisfy the heating
demand in the interval from 20C to 60C. This analysis implies that
steam heating can be eliminated through repiping of the coolers as
recovery matches. It can also be seen that the availability of heat
recovery varies in response to the different temperature levels
characteristic of each type of crude-oil feed. This effect is most
significant for the atmospheric residue stream H5, whose heat
capacity flow rate varies from 0.98 to 1.47 kW/°C. To handle this
variation in heat availability temperatures, a cascade of recovery

        Pinch          Minimum hot   Minimum cold  Maximum heat
        location [°C]  utility [kW]  utility [kW]  recovery [kW]

Feed 1  110–120        556.25        560.26        610.64
Feed 2  110–120        481.31        428.24        664.33
Feed 3  86.19–96.19    507.34        395.66        696.45

TABLE 11.1 Pinch Analysis Results for the Three Feedstock Operating Points
[Grid diagrams of the initial HEN at the three operating points (feeds 1–3), showing recovery matches 1–4, the coolers, steam heater H1, and furnace heater H2; duties in kW, CP in kW/°C.]

FIGURE 11.4 Initial HEN for the crude-oil preheating process.

matches is needed. This requirement suggests the repiping of coolers

mentioned previously, which would form three loops for internal
heat transfer.
Another important observation concerns the amount of recovery
from streams H2 to H5. Streams H3 and H4 feature close supply
temperatures. For all three operating points, and especially for that
of feed 1, H3 has a substantially greater heat capacity flow rate than
260 Chapter Eleven

H4, which results in a relatively slower decrease in its temperature.

This fact and the heat exchanger cascade already planned suggest
another topology modification: resequence matches 2 and 3 on
stream C1. Such resequencing would yield a larger driving force in
both matches, although some load would then be shifted to the lower
temperature segment of stream C1.
The network was modified according to the strategy and changes
suggested by the analysis. The result is shown in Figure 11.5. Cooler
C3, on the hot stream with the lowest target temperature, is left at its
old location. This, together with the introduced three loops, will
account for variations in composition of the feedstock. When there is
a larger total heat supply from streams H2 to H5, any heat surplus is
removed from the system through cooler C3.
Comparing the initial and new topologies reveals that a
significant improvement in energy efficiency has been achieved.
The heat recovery fraction is increased from 67.03 to 91.22 percent
for feed 1, from 73.37 to 89.77 percent for feed 2, and from 74.12 to
77.86 percent for feed 3. Although the relative increase for feed 3 is
lower, its net increase in heat recovery is the same as that of
feed 2.
Based on the simulations for the initial and modified networks,
the heat transfer area for each match is determined as recorded in
Table 11.2 (where H1 is 4 × 6.26 m²). The area requirements of the
recovery matches EC4 (new), 3, EC5, and EC6 increase after the
topology change. Note that only the increase of EC4 is significant,
which is due to the heat load shifted from match 2. Another effect
of the modification is that the four eliminated steam heating

          Feed 1              Feed 2              Feed 3
Match     Initial  Retrofit   Initial  Retrofit   Initial  Retrofit

C1        67.69    67.69      61.62    61.62      75.56    75.56
C2        43.42    43.42      30.51    30.51      31.30    31.30
1         10.83    4.74       8.12     3.44       7.39     4.80
C3        3.45     2.00       2.62     1.61       2.42     0.00
2         98.04    38.43      69.49    27.21      57.75    31.57
C4→EC4    4.95     59.41      3.49     39.97      3.02     12.99
3         10.67    11.83      11.85    12.30      12.97    15.64
C5→EC5    7.99     11.43      2.37     11.17      7.47     5.64
4         14.58    14.44      24.98    23.84      24.20    24.42
C6→EC6    1.81     9.68       2.54     12.01      1.89     2.48
H1        21.33               20.98               25.04

TABLE 11.2 Changes in Heat Transfer Area Due to Retrofit, in [m2]

[Grid diagrams of the modified HEN at the three operating points (feeds 1–3), with the repiped coolers now acting as recovery matches EC4, EC5, and EC6; duties in kW.]

FIGURE 11.5 Modified HEN for the crude-oil preheating process.

exchangers can be used to satisfy the increased area needs for

matches 3, EC5, and EC6. Thus the investment, which is estimated
to be $42,330, is only for the additional area of EC4. Overall, the
modified heat recovery system yields a significant reduction in
energy cost. The total sum of savings after the retrofit is estimated
262 Chapter Eleven

to be $17,519 per year, which results in a reasonable payback period

of about 2.4 years (or less if energy prices increase).
This case study of a combined problem in heat recovery and
process flexibility within a distillation unit's preheating system
combined advanced Pinch-based retrofit methodologies with
additional heuristic rules. The result was that the flexibility goals
were better met by the introduction of loops in the network topology.
In short, the new and modified paths enabled a significant improvement
in heat recovery. Another benefit was that the more expensive steam
heating was partially replaced by slightly increased furnace duty in
the case of higher heat demand. This interesting trade-off between
two hot utilities yields an economic benefit owing to the lower cost
of furnace fuel.

11.3 Minimizing Water and Wastewater in a Citrus Juice Plant
This case study describes a water and wastewater minimization
project designed for a citrus plant located in Argentina (Thevendiraraj
et al., 2003); the study proceeded by applying the Water Pinch
technology (Wang and Smith, 1994). Citrus juiceprocessing plants
consume large quantities of freshwater. The principal objective of
this study was to reduce both the freshwater consumed and the
wastewater produced by the plant. The citrus processing plant housed
the following processes: selection and cleaning, juice extraction, juice
treatment, emulsion treatment, and peel treatment.
Water minimization was achieved by maximizing water reuse
and identifying regeneration opportunities. Water-using operations
were represented by the maximum inlet and outlet contaminant
concentrations, which are functions of equipment corrosion,
fouling limitations, minimum mass transfer driving forces, and
limiting water flow rate through an operation. Targeting determined
the minimum freshwater requirement using the Limiting
Composite Curve for the water network design. The graphical
Pinch methods that are based on single contaminants can be
extended to cover multiple contaminants. When dealing with a
number of operations, multiple contaminants, and multiple water
sources, the problem becomes more complex and so algorithms
using the basic Pinch principles have been developed for solving
by MPR-based methods (see Smith, 2005).
The study began with data extraction: each stream had to be
characterized by its contaminant concentration, inlet and outlet
concentration levels, and limiting flow rate through each operation.
The data were provided in a schematic flow diagram of the citrus
plant that incorporated a simplified water distribution network and
the mass balance of the plant's water streams. Eleven freshwater-using
operations were identified. Chemical oxygen demand (COD)
Industrial Applications and Case Studies 263
was chosen as a single pseudocontaminant representing all
contaminants for two reasons: first, COD measures the most
significant contaminant load in the majority of the water streams;
and second, it exhibits significantly high values in them.
The overall mass balance is closed by using the assumption
that evaporation losses from the steam system amount to 1 t/h
(otherwise, there is an inconsistency of 1 t/h of water that must
be accounted for). The extracted data on contaminant concentrations
and water flow rate made it possible to establish the amount of
water gained or lost by each operation in the process. The total
mass load picked up by the freshwater through each operation was
then calculated. The eleven water-using operations, together with
the water flow rates entering and leaving each operation, can be
represented in the form of a simplified water network, as shown in
Figure 11.6.
The freshwater COD concentration level for the plant is 30 ppm.
There was already some water reuse between processes in the
plant, and these existing reuse streams were left unchanged. The
simplified water network presented in Figure 11.6 shows the
freshwater-using operations with the existing reuse streams built
into each identified operation. The current total freshwater consumed
and wastewater generated by this citrus plant were, respectively,
240.3 and 246.1 t/h.
The existing water network provided a base starting point for the
Water Pinch Analysis. The freshwater target was evaluated by using
the Composite Curves. The maximum concentration levels were
based on the constraints and limitations dictated by process
conditions and requirements. These data were entered into the
WATER software tool (2005) together with the identified constraints. The process
restrictions on water type permissible for each operation indicated
that operations 2, 4, 5, and 10 can use only freshwater as input. Hence,
the minimum freshwater required by plant operations was
164.4 t/h.
Operation 1 is a batch process for which the analysis assumed that
freshwater must always be available. This increases the plant's total
minimum freshwater requirement to 164.9 t/h (164.4 t/h plus
0.5 t/h for operation 1). The current total freshwater feed to the plant is 240.3 t/h.
These figures can be used to calculate the maximum theoretical
freshwater reduction (MTFWR) that is achievable:

MTFWR = (240.3 − 164.9)/240.3 × 100 = 31.4 percent        (11.2)
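Equation (11.2) is a one-line calculation; as a quick check of the quoted figure:

```python
def max_theoretical_fw_reduction(current_t_h, minimum_t_h):
    """Maximum theoretical freshwater reduction of Eq. (11.2), in percent."""
    return (current_t_h - minimum_t_h) / current_t_h * 100.0

mtfwr = max_theoretical_fw_reduction(240.3, 164.9)   # ~31.4 percent
```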
The Water Pinch Analysis was then carried out for the existing
water network with maximum concentration levels. The overall
freshwater target was calculated using the maximum reuse analysis.
Figure 11.7 shows the modified water network represented as a
conventional diagram, and the Limiting Composite Curve is plotted

FIGURE 11.6 Existing water network (simplified). [Diagram: the eleven
water-using operations with their inlet and outlet flow rates; total raw
water 240.318 t/h, total wastewater 246.054 t/h; the difference of
5.736 t/h equals the net of gains from (37.391 t/h) and losses to
(31.655 t/h) the process.]

in Figure 11.8. The freshwater target was compared with the current
freshwater consumption in order to assess the plant's overall potential
for minimizing water and wastewater. The redesigned network
yielded a freshwater demand of 169.3 t/h and a wastewater flow rate
of 175.1 t/h. Although this amounts to a substantial reduction in both
water and wastewater, the design includes reuse of certain streams
that will require treatment. As already described, the analysis was
based on COD as the pseudocontaminant in reuse streams. However,
these streams may well contain other contaminants (e.g., solid waste,
FIGURE 11.7 Water network after Pinch Analysis, drawn as a conventional
diagram. [Diagram: the redesigned network, with a freshwater feed of
169.3 t/h, a wastewater flow of 175.05 t/h, and the reuse and recycle
connections between the operations.]

small amounts of chemicals) that require further treatment prior to

being used for other processes. Hence additional design options
must be developed for dealing with specific process requirements,
operating conditions, and suitability standards for water reuse.
The necessary design refinements may be achieved by introducing
further constraints to potential reuse streams and by utilizing the
maximum water reuse analysis to obtain optimal designs that use
even less freshwater while meeting all process operating conditions
and restrictions. The regeneration reuse analysis can also be used to
explore additional design options incorporating the reuse of
regenerated water in some operations. Such analysis is based on
installation of a treatment unit to regenerate wastewater (by gravity
settling, filtration, membranes, activated carbon, biological agents,
etc.). This modification would further reduce plant levels of
freshwater and wastewater, and it was also evaluated with the
WATER software.

FIGURE 11.8 Limiting Composite Curve generated by the WATER software.
[Plot: single-component analysis for reference contaminant no. 1 (COD);
concentration versus cumulative mass load, showing the Limiting CC and
the Water Supply Line.]

Four different design options were generated that considered

both the maximum reuse analysis and regeneration reuse analysis
(Figure 11.9). Design options A and B were based on the maximum
reuse analysis, and both achieve a freshwater consumption of 188 t/h;
this is a reduction of about 22 percent from the actual freshwater
consumption of 240 t/h. Because of process limitations and
restrictions, there was no further scope for water reuse. Yet results
from the diagnostic stage indicated that a 31 percent reduction in
freshwater and wastewater was possible. The additional reduction is
achievable by regenerating wastewater and then reusing it in other
operations. Design options C and D are based on the regeneration
reuse analysis; both result in a total freshwater consumption of
169 t/h, which amounts to a 30 percent reduction; that is, nearly all of
the theoretical maximum predicted by Eq. (11.2).
The reduction in freshwater use was achieved by rerouting water
streams, a modification that requires new pipes. Design option A
requires five new pipes; this is fewer than options B or D, which
require seven pipes, and option C, which requires nine new pipes.
New piping will affect investment cost. The five new pipes identified
for design option A are also required in the other options for the
same function and at similar flow rates. Therefore, this option
requires the least investment and option C the most. Furthermore,
design options C and D each require additional investment cost for a
regeneration unit that is not part of options A and B. These results
are summarized in Figure 11.9.
Design options A and D are the most attractive ones in terms of
maximizing both reuse and regeneration reuse. Option A results in a
smaller reduction in freshwater use but at a lower investment cost
than option D, which results in a larger reduction in freshwater use
but at a much higher investment cost. Wastewater reductions are
proportional to freshwater reductions, with corresponding reductions
in wastewater treatment costs for each of the options. The cost
analysis carried out for design option A indicates attractive financial
returns for this low-investment option, whose payback period is only
0.14 years. The outlet water quality of operation 3 required further
analysis, so a complete cost evaluation of design option D was not
possible. Fully evaluating this option would require additional
detailed studies to identify the regeneration process type required
and its associated costs.
The heat energy of the reuse water streams proposed in the
design options was reviewed to ensure that stream temperatures at
the inlet of operations remained unchanged. Citrus plant managers
reported that nearly all of the process operations occur at ambient
temperatures; the only exception was operation 8, which produces
wastewater at 90°C. (This particular waste stream is highly
contaminated, which imposes some limitations.) All the water reuse
streams proposed by the four design options are at appropriate
temperatures, so they should not have a thermal effect on operations.
The overall hot and cold utility requirements of the plant would not
be affected by the changes proposed in the design options.
With its existing water network, the plant consumes 240.3 t/h of
freshwater and generates 246.1 t/h of wastewater. The proposed
design options offer a 30 percent and a 22 percent reduction in
freshwater consumption and wastewater generation. For a practical
project, the number of modifications is limited. The maximum water

FIGURE 11.9 Summary of the four design options. [Chart: feedwater
reductions of 22 percent (options A and B) and 30 percent (options C
and D) against the theoretical freshwater reduction limit of 31 percent,
with 5, 7, 9, and 7 new pipes required by options A, B, C, and D,
respectively.]


reuse design requires a minimum of five new pipes, and the

regeneration reuse design requires seven new pipes. The reuse
analysis predicted a short payback period, but the regeneration
analysis was not definitive for the reasons described previously. In
sum, a Water Pinch Analysis for this citrus plant demonstrated that
the consumption of freshwater could be reduced by as much as
30 percent with low investment and few changes to the existing network.

11.4 Efficient Energy Use in Other Food and Drink Industries
Many studies have employed Pinch Technology (and its associated
Heat Integration Analysis) in the food-processing industry. This
industry has a far different thermodynamic profile than that of the
refining and petrochemical industries. The food-processing industry
is characterized by process streams of relatively low temperature
(normally 120–140°C), a small number of hot streams, low boiling-
point evaporation of food solutions, considerable deposition of scale
in evaporators and heat recovery systems, and seasonal operation.
However, a number of studies have also found that the application
of Pinch Technology and Heat Integration is hampered by particular
aspects of the food-processing systems. These aspects include direct
steam heating, difficulties in cleaning heat exchanger surfaces, and
high utility temperatures. Despite these drawbacks, the benefits that
can be obtained by applying the Pinch Technology (e.g., optimized
heat recovery systems and reduced energy consumption) far outweigh
the difficulties of performing the studies. There are also other
advantages that can be realized by technological improvements:
reduced deposition of scale due to reduced utility temperatures, self-
regulation of heat processes, and reduced emissions.
A case study of the production of refined sunflower oil (Kleme,
Kimenov, and Nenov, 1998) exemplifies the benefits of process
analysis based on Pinch Technology and Heat Integration. The
process studied operated with a minimal temperature difference of
65°C at the Process Pinch. The external heating required by the
system was provided by two types of hot utilities, Dowtherm steam
and water steam; the required external cooling was provided by two
cold utilities, cooling water and chilled water. The analysis proposed
that heat recovery be increased and that the minimum temperature
difference be reduced to 8–14°C. The increase in heat recovery
(provided by a reduction in the minimum driving force for the
process) entailed an increase in the heat transfer area, but this was
more than offset by reductions in the hot and cold utility requirements.
A further benefit of the analysis was a reduction in the number of
utilities needed: eliminating water steam and cooling water
considerably simplified the overall design.
Industrial Applications and Case Studies 269
A case study of a whisky distillery by Smith and Linnhoff (1988;
see also Caddet, 1994) provides another example of how Pinch
Technology and Heat Integration can reduce energy use and increase
energy efficiency. In this case it was found that steam was being used
below the Process Pinch, resulting in an overall increase in utility
usage. The steam was related to use of a heat pump, so the steam
used below the Process Pinch was eliminated by reducing the size of
that heat pump. Although the steam now had to be used for process
heating above the Process Pinch, the overall energy costs were
reduced owing to the reduction in compressor duty.
Another study involving a whisky distillery was made by Kemp
(2007). The hot utility requirement for the process examined was
8 MW, and the Process Pinch was at 95°C. The main hot utility
requirements were steam for the distillation system and hot air for
the drying system. Kemp showed that the form of the Grand
Composite Curve and the temperature of the Pinch could be
exploited for heat pumping, and he also suggested that the process
would benefit from the introduction of a Combined Heat and Power
(CHP) scheme for improved Process Integration and energy
efficiency. The site's power demand was 12 MW, so there were two
possibilities for providing both the power and the heat demands
from the same utility system. First, a gas turbine that produced
12 MW of power would also supply about 30 MW of high-grade heat
from the exhaust. The second option involved the use of back-
pressure steam turbines, but these would produce nearly 100 MW,
much more than what was required. Another advantage of a gas
turbine was that its exhaust could be used for drying purposes.
Figure 11.10 shows the final configuration of the utility system
matched to the GCC. Most of the necessary heat was provided by the
gas turbine exhaust and the existing thermo compressors. The
existing package boilers were used to provide steam for the thermo
compressors. The efficiency of this part of the system was increased
by using waste heat from below the Pinch to preheat the boiler feed
water. Waste heat boilers driven by the exhaust from the gas turbine
provided additional steam.
Many processes in the food and drink industry make use of
chilling and refrigeration systems. Pinch Technology and Heat
Integration have also been used to increase the efficiency of these
systems. For example, Fritzson and Berntsson (2006) studied a
Swedish slaughtering and meat-processing plant. Their analysis of
the plant's subambient temperature section employed the method
proposed by Linnhoff and Dhole (1992) for low-temperature process
changes that involve shaftwork targeting. Figure 11.11 shows the
plant's Exergy Grand Composite Curve (EGCC). There is a large gap
between the EGCC and the utility curve, which indicates low
efficiency in the use of available shaftwork. Improvements in this
area could result in a 15 percent reduction in energy demand.

FIGURE 11.10 Final configuration of the proposed utility system matched
to the process GCC (after Kemp, 2007). [Plot: shifted temperature [°C]
versus net heat flow [MW], with site steam, driver steam, gas turbine
exhaust, waste heat boiler, boiler feedwater heating, heat pump, and
cooling water placed against the Grand Composite Curve.]

FIGURE 11.11 Exergy Grand Composite Curve for the meat-processing
plant (after Fritzson and Berntsson, 2006). [Plot: Carnot factor ηc versus
heat load Q [kW], showing the EGCC and the utility curve.]

Fritzson and Berntsson (2006) started the energy reduction
process by adjusting the loads on the subambient utilities, first
maximizing the load on the highest-temperature (−10°C) subambient
utility. After this, the load on each lower-temperature utility was
maximized. The reduction process began at the highest level (the
highest temperature), since this utility requirement can be satisfied
at a lower cost than refrigeration at lower temperatures. The modified
system was then modeled and simulated in HYSYS, which showed a
5 percent reduction in the shaftwork required.
However, these results still made for a relatively poor fit between
the EGCC and the utility curve. To reduce the gap further, it was
suggested that the level of the highest-temperature refrigeration
utility be increased from −10°C to −3°C and that the loads then be
readjusted as before; the result is shown in Figure 11.12. This modified
configuration yielded a 10 percent reduction in the shaftwork
requirement. Other temperature changes were suggested to reduce
further the shaftwork requirements, but these changes were found to
be less cost-effective.
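The thermodynamic rationale for raising the refrigeration level can be illustrated with the reversible (Carnot) work limit. The heat load and ambient temperature below are assumed for illustration only; the plant figures reported above (5 and 10 percent shaftwork reductions) come from the HYSYS simulations, not from this idealized sketch.

```python
def ideal_refrigeration_work(q_kw, t_evap_c, t_ambient_c=25.0):
    """Reversible shaftwork [kW] to absorb q_kw at t_evap_c and reject it
    to the ambient, based on the Carnot coefficient of performance."""
    t0, t = t_ambient_c + 273.15, t_evap_c + 273.15
    return q_kw * (t0 / t - 1.0)

w_cold = ideal_refrigeration_work(1000.0, -10.0)   # original utility level
w_warm = ideal_refrigeration_work(1000.0, -3.0)    # raised utility level
saving = 1.0 - w_warm / w_cold   # ideal work drops by roughly a fifth
```

Serving the same duty at a warmer evaporation level always needs less compression work, which is why the warmest feasible refrigeration level is loaded first.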

11.5 Synthesis of Industrial Utility Systems

Varbanov and colleagues (2005) demonstrated the synthesis of a
utility system (CHP network) of an industrial Total Site by applying
a combination of targeting and Mathematical Programming
techniques. Figure 11.13 shows the heating and cooling demands of
the chemical site studied. There are two operating scenarios, winter
and summer, with different prices for power and fuel. The
basic data for the problem are listed in Tables 11.3 and 11.4.
Generic estimates of the corresponding coefficients were specified
for the boiler performance (field-erected boilers are slightly more
efficient than packaged units), and the capital cost estimates were
obtained online (Boiler Cost 2003). These estimates are a function of
boiler capacity and steam pressure. The performance and cost of gas

FIGURE 11.12 The area between the EGCC and the utility curve is further
decreased by changing the temperature of the first refrigeration level
from −10°C to −3°C (after Fritzson and Berntsson, 2006). [Plot: Carnot
factor versus heat load Q [kW].]

FIGURE 11.13 Total Site Profiles and candidate steam pressure levels for
the three steam mains of an industrial chemical site (the Heat Source
Profile for summer provides more heat than that for winter). [Plots for
the winter and summer scenarios: temperature versus net heat flow, with
HP level candidates at saturation temperatures of 270.68°C, 209.15°C,
and 198.89°C and an LP level candidate at 130.51°C.]

                                            Winter    Summer
Fraction of year                            0.55      0.45
Price of cooling water [$/t]                0.0185    0.0212
Site power demand [MW]                      25        31
Maximum export allowed [MW]                 10        10
Price of power (import and export) [$/MWh]  20        30

TABLE 11.3 Operating Scenarios for the Total Site

Working hours per year         8600
Interest rate [%]              8
Life of plant [y]              10
Capital installation factor    4.8
TCW [°C]                       10
TBFW [°C]                      120
PVHP [bar(a)]                  90
TVHP,SH [°C]                   200
TVP [°C]                       50
Natural gas price [$/MWh]      9.369
Distillate oil price [$/MWh]   10.734
Fuel gas price [$/MWh]         4.984
Fuel oil price [$/MWh]         6.226

TABLE 11.4 Configuration Data for the Total Site

turbines (ranging from 11.7 to 85.4 MW) were estimated using data
from Gas Turbine World (2001).
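The case study reports total annualized costs. The annualization formula is not stated in the text; a standard assumption consistent with the interest rate and plant life of Table 11.4 is the capital recovery factor, sketched here for illustration only:

```python
def capital_recovery_factor(interest, years):
    """Fraction of installed capital charged per year over the plant life."""
    growth = (1.0 + interest) ** years
    return interest * growth / (growth - 1.0)

crf = capital_recovery_factor(0.08, 10)   # Table 11.4: 8 percent, 10 years
# roughly 0.149, i.e., each $1 of installed capital costs about $0.149/y
```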
The preliminary Total Site targets indicated a minimum utility
demand of about 80 t/h for the winter scenario, leading to the choice
of a field-erected boiler for stand-alone steam generation. A gas
turbine with a Heat Recovery Steam Generator (HRSG) was also selected.
For this case, three steam mains (headers) were considered: one for
very-high-pressure (VHP) steam generated by the boiler and by the
HRSG, and two more intermediate steam mains. The VHP header
properties were specified by the problem definition and are given in
Table 11.4. Hence, locations for the two other headers still have to be
determined. Figure 11.13 illustrates the partitioning of the candidate
steam levels, whose ranges are highlighted. For the upper steam
main, this graph was used to identify three candidate levels with
respective saturation temperatures of 270.68°C, 209.15°C, and
198.89°C; the only candidate for the lower steam main has a
saturation temperature of 130.51°C. The problem superstructure is
illustrated in Figure 11.14.
Optimizing and reducing the superstructure yields the flowsheet
shown in Figure 11.15. It features two fired steam boilers and one
steam turbine sized to the steam flow for the winter period. This
flowsheet features relatively low on-site power generation and a
significant amount of power import. The main reasons behind this
approach are that power is cheap and capital is expensive; these
factors preclude a better utilization of the steam system's potential
for cogeneration. This design also involves a significant amount of CO2 emissions.

FIGURE 11.14 Superstructure of the industrial utility system. [Diagram:
gas turbines with HRSGs and two fired boilers feeding the VHP main
(90 bar(a), 503.35°C); candidate pressure levels for headers hdr01 and
hdr02; steam turbines tb01, tb02, and tb03; a condensing level at
0.1235 bar(a) with cooling water; deaerator and make-up water; and the
winter and summer process heating and cooling demands.]


Next, potential design measures for reducing greenhouse gas

emissions were evaluated. The sensitivity analysis addressed four
cases, as summarized in Table 11.5. Case 1 is the base case and thus
represents the conditions discussed so far. The other cases gradually
increase the price of power and fuel as well as the penalties on
emissions. The optimal utility system flowsheet for case 3 is shown
in Figure 11.16. Case 4 also replaces fuel oil with a biofuel, which is
assumed to have zero CO2 emissions.
Analysis of these emission reduction options leads to the
following conclusions: (1) Increasing the system efficiency is the
cheapest option for CO2 abatement, but it has a relatively limited
scope. (2) The next economic option for this particular problem is to
close the carbon cycle by using biofuels; in general, however, CO2
capture and sequestration could also be considered.

FIGURE 11.15 Optimal utility system for Case 1 (base case). [Flowsheet:
two fuel-gas-fired boilers and a steam turbine sized to the winter steam
flows; power import of 21.1 MW (winter) and 30.0 MW (summer); total
annualized cost of 13.124 × 10^6 $/y.]

Case   Price of      Price of      Price of      Power price [$/MWh]
       CO2 [$/t]     SOx [$/t]     NOx [$/t]     Winter     Summer
1      0             0             0             20.00      30.00
2      0             0             0             30.00      45.00
3      40            500           1000          40.00      60.00
4      40            500           1000          40.00      60.00

TABLE 11.5 Sensitivity Analysis for Reducing Emissions: Basic

FIGURE 11.16 Optimal utility system for Case 3. [Flowsheet: a gas
turbine (capacity 33.917 MW) with HRSG, a fired boiler, and steam
turbines on the headers; power export of 10.0 MW (winter) and 4.4 MW
(summer); total annualized cost of 23.060 × 10^6 $/y.]

11.6 Heat and Power Integration in Buildings and Building Complexes
Herrera, Islas, and Arriola (2003) studied a hospital complex that
included an institute, a general hospital, a regional laundry center, a
sports center, and some other public buildings. The use of diesel fuel
represented 75 percent of its total energy consumption and 68 percent
of its total energy cost, which was $396,131 in 1999.
In the hospital complex, the heat demand is met by producing
steam in boilers fueled by high-price diesel fuel. There is no heat
recovery between the existing heat sources and heat sinks. The hot
streams were identified as the soiled soapy water from the laundry
and the flow of condensed steam not recovered in the condensation
network. The stream data are presented in Table 11.6.
For this hospital complex, the amount of external heating required
(i.e., the hot utility target) is 388.64 kW, which can be seen on the hot
and cold Composite Curves in Figure 11.17. This plot was employed
to determine what temperature levels of the utilities would satisfy
this requirement. The heating utility target of 388.64 kW translates
to an annual energy requirement of 12.26 TJ/y.
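The quoted target can be reproduced from the stream data of Table 11.6 with the standard Problem Table Algorithm. The sketch below uses the conventional ΔTmin/2 temperature shifting and the CP values of Table 11.6; it is an illustrative reimplementation, not the tool used by Herrera, Islas, and Arriola.

```python
def problem_table(hot, cold, dt_min=20.0):
    """Minimum hot and cold utility targets via the Problem Table Algorithm.

    hot, cold: lists of (T_supply, T_target, CP) in degC and kW/degC.
    Returns (hot_utility_kW, cold_utility_kW).
    """
    # Shift hot streams down and cold streams up by dt_min/2
    shifted = [(ts - dt_min / 2, tt - dt_min / 2, cp, +1) for ts, tt, cp in hot] \
            + [(ts + dt_min / 2, tt + dt_min / 2, cp, -1) for ts, tt, cp in cold]
    bounds = sorted({t for ts, tt, _, _ in shifted for t in (ts, tt)}, reverse=True)
    cascade, cum = [0.0], 0.0
    for hi, lo in zip(bounds, bounds[1:]):
        net_cp = sum(sign * cp for ts, tt, cp, sign in shifted
                     if max(ts, tt) >= hi and min(ts, tt) <= lo)
        cum += net_cp * (hi - lo)       # interval surplus (+) or deficit (-)
        cascade.append(cum)
    hot_utility = max(0.0, -min(cascade))
    return hot_utility, cascade[-1] + hot_utility

# Stream data of Table 11.6 (T_supply, T_target, CP):
hot = [(85, 40, 0.53), (80, 40, 2.41)]
cold = [(25, 55, 0.59), (55, 85, 2.58), (30, 60, 0.24), (25, 60, 2.20),
        (30, 121, 0.14), (25, 28, 50.56), (30, 100, 0.85), (18, 25, 14.40),
        (21, 121, 0.05)]
hu, cu = problem_table(hot, cold, dt_min=20.0)
# hu is about 389 kW, matching the 388.64 kW target within data rounding;
# cu is essentially zero (a near-threshold problem)
```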
The amount of heating provided is actually 625.28 kW, which
represents the heat services that are currently transferred to the
complex. This figure represents a potential for energy savings of
38 percent, which is equivalent to an annual reduction in diesel fuel
of 246,000 liters (worth about $100,000). To reduce heating energy
demands to the targeted value, the Heat Integration Analysis

Stream   Type   Name                     Tsupply [°C]   Ttarget [°C]   ΔH [kW]   CP [kW/°C]
1        Hot    Soapy water              85             40             23.70     0.53
2        Hot    Condensed                80             40             96.32     2.41
3        Cold   Laundry sanitary water   25             55             17.60     0.59
4        Cold   Laundry                  55             85             77.27     2.58
5        Cold   Boiler feed              30             60             7.13      0.24
6        Cold   Sanitary water           25             60             77.12     2.20
7        Cold   Sterilization            30             121            12.50     0.14
8        Cold   Swimming pool            25             28             151.67    50.56
9        Cold   Cooking                  30             100            59.63     0.85
10       Cold   Heating                  18             25             100.82    14.40
11       Cold   Bedpan                   21             121            4.94      0.05

TABLE 11.6 Process Stream Data of Hospital Complex (Herrera, Islas, and
Arriola, 2003)

FIGURE 11.17 Process Composite Curves for the hospital complex
(ΔTmin = 20°C). [Plot: temperature [°C] versus enthalpy flow, showing
the hot and cold Composite Curves.]

indicates that four extra heat exchangers should be added to the
network. Two are needed in the laundry to cover part of its heat
demand, a third in the machinery room to help heat the boiler feed
water, and a fourth in the condensation tank area to heat the
sanitary water. The analysis could be refined
further by considering several other issues, such as fouling, pressure
drop, and variation in the heat demand.

11.7 Optimal Design of a Supply Chain

In the global market, optimal management control of a company is
necessary for survival. Such control involves a series of effective
strategic decisions that are concerned with various aspects of the
supply chain: for example, decisions regarding the products
themselves as well as the location of production facilities and
distribution centers. When the subject is the development or
operation of highly complex business processes (e.g., supply
chains, value chains), computer-aided decisions are preferred.
A conventional approach for systematically evaluating decision
alternatives is Mathematical Programming, although a daily
management problem is usually not resolved via MPR methods.
Even when an MPR model can be constructed for the problem, it is still
difficult to verify whether the mathematical formulation of relevant
decision alternatives is sufficiently accurate and complete to
identify the optimal solution. When the mathematical model is not
generated systematically, the model's chances of embodying the
optimal solution are very low.
This case study involves determining the optimal design of a
supply chain. Here, the purpose of the supply chain is to meet a given
volume demand for commodity C at location L1. Three options are
considered: produce commodity C at location L1; produce commodity
C at location L2 and then transport it to location L1; or some combination
of these. Production requires the availability of part A and part B at
the same location. Part A is available at location L3 and, to a limited
extent, at location L4; it can be transported to location L1 or to
location L2. Part B is available at location L2 and can be transported to
location L1. The list of potential activities is given in Table 11.7.

ID       Activity        Location       Precondition    Effect
PL1      Production      L1             A and B at L1   C at L1
PL2      Production      L2             A and B at L2   C at L2
TAL3L1   Transportation  From L3 to L1  A at L3         A at L1
TAL3L2   Transportation  From L3 to L2  A at L3         A at L2
TAL4L1   Transportation  From L4 to L1  A at L4         A at L1
TAL4L2   Transportation  From L4 to L2  A at L4         A at L2
TBL2L1   Transportation  From L2 to L1  B at L2         B at L1
TCL2L1   Transportation  From L2 to L1  C at L2         C at L1

TABLE 11.7 Potential Activities in the Supply Chain Case Study


Given the list of potential activities defined by their preconditions

(activating entities) and effects (resulting entities), the Maximal
Structure Generation (MSG) algorithm then produces the maximal
structure (Friedler et al., 1993). Next, when applied to the maximal
structure so derived, the Solution Structures Generation (SSG)
algorithm (Friedler et al., 1995) enumerates 15 combinatorially
feasible business process structures for the problem.
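For a problem of this size the SSG result can be cross-checked by exhaustive enumeration of all 2^8 activity subsets. The sketch below encodes Table 11.7 and applies the two defining requirements of a solution structure: the structure must be closed (every precondition is either a raw material or produced within the structure), and every included activity must contribute to the final product. This is an illustrative brute-force check, not the SSG algorithm itself.

```python
from itertools import combinations

# Activities of Table 11.7: id -> (preconditions, effect), entities as (part, location)
ACT = {
    'PL1':    ({('A', 'L1'), ('B', 'L1')}, ('C', 'L1')),
    'PL2':    ({('A', 'L2'), ('B', 'L2')}, ('C', 'L2')),
    'TAL3L1': ({('A', 'L3')}, ('A', 'L1')),
    'TAL3L2': ({('A', 'L3')}, ('A', 'L2')),
    'TAL4L1': ({('A', 'L4')}, ('A', 'L1')),
    'TAL4L2': ({('A', 'L4')}, ('A', 'L2')),
    'TBL2L1': ({('B', 'L2')}, ('B', 'L1')),
    'TCL2L1': ({('C', 'L2')}, ('C', 'L1')),
}
RAW = {('A', 'L3'), ('A', 'L4'), ('B', 'L2')}
GOAL = ('C', 'L1')

def is_solution_structure(s):
    produced = {ACT[a][1] for a in s}
    if GOAL not in produced:
        return False
    # closure: every precondition is raw or produced inside the structure
    if any(p not in RAW and p not in produced for a in s for p in ACT[a][0]):
        return False
    # contribution: walking back from the goal must reach every activity
    needed, reached = {GOAL}, set()
    while True:
        new = {a for a in s if ACT[a][1] in needed} - reached
        if not new:
            return reached == set(s)
        reached |= new
        needed |= {p for a in new for p in ACT[a][0]}

structures = [c for r in range(1, len(ACT) + 1)
              for c in combinations(ACT, r) if is_solution_structure(c)]
# len(structures) = 15, the number reported for the SSG algorithm
```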
To determine the optimal business practice, the following quantitative information is provided in addition to the case study's 15 structural alternatives. The required annual demand volume is 20,000 pieces. Producing one piece of commodity C requires the availability of one piece of part A and one of part B. At most 5000 pieces of part A are available at location L4, for €230 each. An unlimited number of part A can be purchased at location L3 for €250 each, and an unlimited number of part B can be purchased at location L2 for €310 each.
The cost of an activity depends on its volume. To estimate the
increase in the cost of an activity as a function of its volume, costs
are given for handling 1000 and 2000 pieces per year. If a linear cost
function with a fixed charge is adopted to estimate the cost of the
activities with different volumes, then this function is determined
by the fixed charge and the proportionality constant. Table 11.8
summarizes the cost of processing 1000 versus 2000 pieces per
annum as well as the parameters of the fixed-charge linear cost
function for each activity.
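Because the cost model is linear with a fixed charge, the two cost points given for each activity determine its parameters uniquely. A minimal sketch in Python (the function name is illustrative, not taken from any cited software):

```python
def fit_fixed_charge(v1, c1, v2, c2):
    """Fit cost(v) = fixed + slope * v through two (volume, cost) points."""
    slope = (c2 - c1) / (v2 - v1)   # proportionality constant
    fixed = c1 - slope * v1         # fixed charge
    return fixed, slope

# Activity PL1: 8,000 at 1000 pcs/y and 10,000 at 2000 pcs/y
fixed, slope = fit_fixed_charge(1000, 8000, 2000, 10000)
# fixed = 6000.0 and slope = 2.0, matching Table 11.8
```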
The optimal business process, as determined by the Accelerated
Branch-and-Bound (ABB) algorithm (Friedler et al., 1996), corresponds
to the activities given in Table 11.9, whose annual cost totals

          Cost of processing        Cost function parameters
Activity  1000 pcs/y  2000 pcs/y    Fixed charge  Proportionality
ID        [€/y]       [€/y]         [€/y]         constant
PL1       8,000       10,000        6,000         2
PL2       9,000       10,000        8,000         1
TAL3L1    4,000       8,000         0             4
TAL3L2    10,000      20,000        0             10
TAL4L1    12,000      24,000        0             12
TAL4L2    2,000       4,000         0             2
TBL2L1    10,000      20,000        0             10
TCL2L1    14,000      28,000        0             14

TABLE 11.8 Cost of Activities in the Supply Chain Case Study

Industrial Applications and Case Studies 279

                  Volume [pcs/y]
Activity ID       Optimal      2nd best     3rd best
PL1               15,000       20,000       –
PL2               5,000        –            20,000
TAL3L1            15,000       15,000       –
TAL3L2            –            –            15,000
TAL4L1            –            5,000        –
TAL4L2            5,000        –            5,000
TBL2L1            15,000       20,000       –
TCL2L1            5,000        –            20,000
Total cost [€/y]  11,439,000   11,466,000   11,568,000

TABLE 11.9 Activities in the Optimal, Second-Best, and Third-Best Business Processes

€11,439,000. The second-best business process has a total annual cost of €11,466,000, and the third-best business process has a total annual cost of €11,568,000.
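The reported optimum can be cross-checked by summing the part purchases and the fixed-charge activity costs at the optimal volumes; the sketch below hard-codes the data from Tables 11.8 and 11.9 purely as a verification.

```python
# Part purchases in the optimal structure: pieces * unit price
parts_cost = 5_000 * 230 + 15_000 * 250 + 20_000 * 310   # A at L4, A at L3, B at L2

# Activity costs: fixed charge + proportionality constant * annual volume
activities = {                  # id: (fixed, proportional, optimal volume)
    "PL1":    (6_000, 2, 15_000),
    "PL2":    (8_000, 1, 5_000),
    "TAL3L1": (0, 4, 15_000),
    "TAL4L2": (0, 2, 5_000),
    "TBL2L1": (0, 10, 15_000),
    "TCL2L1": (0, 14, 5_000),
}
activity_cost = sum(f + p * v for f, p, v in activities.values())
total = parts_cost + activity_cost   # 11,439,000 per year, as reported
```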

11.8 Scheduling a Large-Scale Paint Production System

Paint production usually consists of three major operations: grinding
and dispersion, mixing and coloring, and discharging and packaging.
Paints and coatings are typically produced in batches. They are made
in stationary and portable equipment units such as high-speed
dispersion mixers, rotary batch mixers, blenders, sand mills, and
tanks. The raw materials are solvents, resins, pigments, and additives
that include inorganic and organic chemicals. Paint manufacturing
does not usually involve chemical reactions between the raw
materials, so the finished product consists of a mixture of the different
raw materials. Several dozen products are produced at the
manufacturing site, so the corresponding scheduling problem is
bound to be highly complex.
The S-graph framework of batch scheduling (see Chapter 7) has
been extended to solve complex paint production problems (Adonyi
et al., 2008). Changeover time is defined for any equipment unit that
requires cleaning. Traditionally, minimizing makespan (total time to
completion) is the criterion used when assigning equipment units to
tasks and scheduling the tasks. Such schedules maximize the
production system's efficiency, but they may lead to unnecessarily high levels of waste generation. Thus, determining which task schedule minimizes cleaning cost will require that the problem's objective function be modified. Now, rather than minimizing
makespan as in the original problem, the reformulation seeks to
minimize the cleaning cost. This change in criterion has only a minor
effect on the solution procedure, so an effective solver for the original
problem is also useful for the reformulated problem.
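The interplay between the two criteria can be illustrated with a small evaluator that, for a candidate assignment of batch sequences to equipment units, returns both the makespan and the total cleaning cost; either value can then serve as the objective. All unit names, times, and costs below are illustrative only, not the industrial data set.

```python
def schedule_metrics(schedule, batch_time, changeover_time, cleaning_cost):
    """schedule maps each unit to its sequence of product batches.
    A cleaning is needed whenever two consecutive batches on the same
    unit are different products. Returns (makespan, total cleaning cost)."""
    finish_times, total_cleaning = [], 0
    for unit, seq in schedule.items():
        t = 0
        for prev, cur in zip([None] + list(seq), seq):
            if prev is not None and prev != cur:
                t += changeover_time[unit]           # cleaning blocks the unit
                total_cleaning += cleaning_cost[unit]
            t += batch_time[unit][cur]
        finish_times.append(t)
    return max(finish_times), total_cleaning

# Toy instance: two units, three products (illustrative numbers)
makespan, cost = schedule_metrics(
    schedule={"E6": ["A", "B", "A"], "E7": ["E", "E"]},
    batch_time={"E6": {"A": 60, "B": 40}, "E7": {"E": 120}},
    changeover_time={"E6": 70, "E7": 70},
    cleaning_cost={"E6": 1000, "E7": 1000},
)
# makespan = 300 min with two cleanings; re-sequencing E6 as ["A", "A", "B"]
# would need only one cleaning, trading cleaning cost against batch order
```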
Twenty-three equipment units, E1 through E23, are available to
generate six products, A through F. The changeover time is 70 minutes
for equipment units E6, E7, E8, and E9 but 100 minutes for equipment
units E1 through E5 and E10 through E20. All other changeover times
are presumed to be zero. The number of batches to be produced is
given in Table 11.10.
Cleaning the equipment units is a costly operation that generates many pollutants. The minimal-makespan schedule contains 11
cleaning operations, which are denoted by the dotted changeover
arcs on its S-graph; see Figure 11.18.
The cleaning cost of the solution with minimal makespan is
$14,000. In contrast, the solution based on minimizing the cost
involves four (rather than 11) cleaning operations and only $3,500 in
cleaning cost; its makespan is 6,910 minutes. If the cleaning cost is instead allowed to reach $5,500, then the corresponding makespan is reduced to 6,700 minutes.

Product A B C D E F
Number of batches 3 5 1 3 9 3

TABLE 11.10 Number of Batches Produced of Each Product

FIGURE 11.18 Schedule graph of the solution that minimizes makespan (after Adonyi et al., 2008).

Typical Pitfalls and How to Avoid Them

Process Integration (PI) has proven to be a powerful optimization
tool for designing processes that are energy-efficient,
environmentally friendly, and sustainable. The methodology
provides clear insight into the design process, but this direct
simplicity is sometimes misunderstood by potential users. Once
proper data are made available, the procedure generates an excellent
lead for the design process. But just as with all optimization tools,
potential pitfalls when using PI include improper formulation of the
problem and incorrect data extraction.
Even when the most efficient and well-developed methodology
is used to solve an optimization problem with high precision, the
results may be suspicious unless we have been solving the right
problem. In other words, has the problem been formulated in a way
that closely reflects the real process under consideration and
(especially) has the correct data been extracted? Negative answers to
these questions explain such published statements as: "Pinch technology [or process integration] did not work for this problem."
When these problems are revisited it usually becomes obvious that
the PI methodology is not at fault; rather, an inexperienced or
overconfident user is typically the cause.
Therefore, this chapter is devoted to various mistakes that a
designer might unwittingly make. The most basic issue concerns
how one starts a PI-based project and how it is run. Kemp (2007)
summarized the key steps listed below, which have been further
developed based on the authors' experience. These steps are specifically related to Heat Integration; however, they apply with only small adjustments to mass, water, and other types of integration as well.

1. Become familiar with the analyzed process. The most efficient way is to liaise closely with the process designer and/or plant manager, especially if the plant is already operating.

282 Chapter Twelve

2. Develop a mass and heat balance. This should be based on the designed process flowsheet data and calculations and/or on measurements taken from the operating plant (if the study is for a retrofit).
3. Select the streams. This is a critical step and, as will be shown
in this chapter, not as straightforward as it may seem.
4. Remove all the existing units related to the PI analysis. For Heat Integration, remove all heat-transferring units; for mass (water) integration, remove all water interconnections (the pipes). This step is also critical: without it, the optimized design would not differ from the initial design. Section 4.2.4 provides an example illustrating this activity.
5. Extract the stream data for the PI analysis. Different data are relevant for each PI analysis type; for Heat Integration, heat loads and temperatures are extracted.
6. Make a qualified initial guess for the ΔTmin value; this value can be adjusted later at various stages of the design.
7. Perform the Pinch analysis: obtain the Pinch temperatures
and the utility targets.
8. Design the initial (heat exchanger) network using the
criterion of maximizing energy recovery.
9. Check for a cross-Pinch transfer and for inappropriate
placement of utilities.
10. Check for proper placement of reactors, separation columns,
heat engines, and heat pumps.
11. Investigate the potential for further modifying the process
in order to minimize energy consumption and reduce
capital costs. Investigate the potential benefits of applying
the plus-minus principle (see Chapter 4) and the KHSH (Keep Hot Streams Hot) and KCSC (Keep Cold Streams Cold) principles (see Figure 4.44).
12. Investigate the potential for integration with other processes
that is, Total Site Analysis.
13. Consider the implications of pressure drop (trade-offs between
heat savings and extra energy for pumping) and the physical
layout (capital cost of heat exchangers and/or piping).
14. Make the preselection of heat exchange equipment and
perform the preliminary costing. Provision should be made
for variations in the future price of energy.
15. Make the first optimization run of the predesign plant or site, and make adjustments to ΔTmin.
16. Based on the optimization, extract adjusted data and return
to step 7. Perform an additional loop (or loops) while
screening and scoping for potential simplifications.
17. Consider real plant constraints; these include safety,
technology limitations, controllability, operability, flexibility,
availability, and maintainability.
18. Pay attention to start-up and shutdown of the process; some
early designs for highly integrated plants had problems in
this area.
19. Run a second optimization for the final tuning accounting
for the information added during steps 16 to 18. If necessary,
return to any appropriate previous step for adjustment.
20. The design is now ready for detailing. However, optimization
is a never-ending procedure, and designs may need to be
modified in response to changes in operating conditions
(e.g., plant capacity) or the economic environment (e.g., tax
policy; prices for energy, materials, and production).

12.1 Data Extraction

As emphasized previously, data extraction is a crucial step. Bodo
Linnhoff presented one of his last plenary lectures (Linnhoff and
Akinradewo, 1998) on the automated interface between simulation
and integration. That lecture was a substantial step toward data extraction for PI software tools; it was fairly comprehensive and suggested the way forward for this important task. The problem received increased interest following the lecture,
and several software packages now offer support for solving it.
Nonetheless, more work in this area is needed to satisfy the
requirements of routine industrial applications.
In their SuperTarget and Pinch Express software packages,
Linnhoff March (1998) included procedures for automatic extraction
of data for Heat Integration. Even so, thermal data, which involve the
stream heating and cooling information and utilities information,
are the most critical data required for Pinch Analysis. There are
several possibilities for extracting the thermal data from a given heat
and material balance. This must be done carefully, as poor data
extraction can easily lead to missed opportunities for improved
process design. In extreme cases, poor data extraction can falsely
present the existing process flowsheet as optimal in terms of energy efficiency. If the data extraction accepts all the features of the existing flowsheet, then there will be no scope for improvement; if it accepts none of them, then Pinch Analysis
may overestimate the potential benefits. Appropriate data extraction
accepts only the critical sections of the plant, which cannot be changed. Data extraction skill develops with increased experience in
the application of Pinch Technology (Linnhoff March, 1998).
Since the release of these packages, the methodology has
developed further and more attempts have been made to extract
data automatically. However, experience and following the proper
rules remain valuable assets. Basic questions to ask include the following:

1. When is a stream a stream?
2. How precise must the data be at each step?
3. How can considerable changes in specific heat capacities be handled?
4. What rules and guidelines must be followed to extract data properly?
5. How can the heat loads, heat capacities, and temperatures of an extracted stream be calculated?
6. How soft are the data in a plant or process flowsheet?
7. How can capital costs and operating costs be estimated?

12.1.1 When Is a Stream a Stream?

To those unfamiliar with PI, identifying a stream seems fairly
straightforward. In fact, many considerations are involved, and
accounting for them properly is key to setting up the problem. First,
we need not consider any stream that neither gains nor provides heat;
no data needs to be extracted from a stream with identical supply
and target temperatures and enthalpies. (Of course, in the absence of
perfect insulation, every stream loses or gains some heat; in many
cases, however, these small amounts of losses and gains can be
neglected.) If we do not extract data from such streams, the problem
is considerably simplified.
There are also streams that, for one reason or another, should not
be included in the PI problem: for example, streams that are remote, and streams that should not be altered for safety, product purity, or operational reasons, or for other (mostly practical) considerations.
Finally, Heat Integration deals with heat flows, which can be carried not only by a pipeline but also by radiation or conduction.
Consider the example depicted in Figure 12.1. This example was
introduced, along with some data extraction rules, by Linnhoff and
colleagues (1982) and was later modified for use in many follow-up
books (e.g., Smith, 1995; Smith, 2005; Kemp, 2007) and in many courses
based on UMIST (later the University of Manchester) teaching
materials. The figure shows part of a flowsheet in which the feed stream is heated to 45°C by recuperated heat in a heat exchanger and then enters a processing unit. After leaving this unit, the stream is

FIGURE 12.1 Partial flowsheet for the example process.

heated further by two heat exchangers and then enters a reactor. The reactor requires the feed stream to be at 165°C. The question is: How many streams should be extracted?

1. Three streams: 10–45°C, 45–80°C, and 80–165°C
2. Two streams: 10–45°C and 45–165°C
3. One stream: 10–165°C

Choosing option 1 yields a design exactly like the original one: there will again be three heat exchangers with the same heat transfer
duties as before. This case accounts for claims of no process
improvement by some PI critics. Option 2 presents more degrees of
freedom: the first heat exchanger would be the same, but the other
could be modified. Extracting two streams would be appropriate for
cases when the processing unit requires a feed temperature close to 80°C.
Option 3 provides the most degrees of freedom and scope for
improvement, but with this design the processing unit feed could be
at any temperature between the 10°C supply and the reactor target of 165°C. If the processing unit is, say, a filter (as assumed by Smith,
2005), then there would probably be some restriction on the filter
supply temperature to ensure proper operation of the filter. If the
processing unit is storage (as assumed by Linnhoff et al., 1982), then
the supply temperature might be restricted to a different range
(depending, e.g., on whether liquid or gas is stored). This simple
example demonstrates that choosing the right stream from which to
extract data cannot be a fully automatic process. Making this choice
requires assessments related to specific processing units and
performance requirements for the plant.

12.1.2 How Precise Must the Data Be at Each Step?

There are frequently questions about required data precision, and a
common excuse for not applying PI analysis is that the data on a
plant is not sufficiently precise. However, the process of applying PI
is itself based on (initially) rough assumptions, which are revised
during the course of several design loops. At the beginning there are
no specifications for the heat-transferring units, and neither are the
feed temperatures (and several other important factors) fully fixed.
At this stage, then, extremely precise data is not needed. It is important
to recognize that PI and the initial optimization are more about
screening and scoping than about detailed design. In seeking
potential energy savings, it is the general direction in which
optimization should proceed that is the initial concern. If the result
of this first step is that a 15 percent savings in energy is possible, then this figure is sufficient to confirm the design approach, and it doesn't much matter if the precise figure is actually 13 or 17 percent.
It is in the regions close to the Pinch that the data should be as
precise as possible (Linnhoff et al., 1982). Also, it is best to remain
inside the Composite Curves in the plot of temperature versus
enthalpy. At the start of data extraction, we might have only a vague
idea of where (and at what temperature) the Process Pinch will occur;
also, the Composite Curves are based solely on data extraction.
Therefore, data extraction must necessarily start from rough
assessments and then be corrected step by step.

12.1.3 How Can Considerable Changes in Specific Heat Capacities Be Handled?
Further analysis of the flowsheet in Figure 12.1 reveals that phase
changes are very likely to occur when the temperature increases
from 10°C to 165°C. In general, Cp varies with temperature, but latent
heat is also a determining factor. Clearly, using a constant value for
Cp would not be sufficiently realistic.
A segmentation technique has been developed to deal with this
problem. This technique is used, for example, in the software tools
SPRINT (2009) and STAR (2009). The software tools treat the segments
as individual streams that are combined to form input and output
streams. Data extraction is affected by how many segments are used
and their boundary temperatures (see Figure 12.2). Increasing the
number of segments naturally increases the complexity, so this

FIGURE 12.2 How should we linearize?

number should be minimized for industrial problems involving
many streams.
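The segmentation idea can be sketched very simply: the stream is described by its segment boundary temperatures and an average CP per segment, and each segment's heat load is CP × ΔT. The temperatures and CP values below are illustrative, not from any specific plant.

```python
def segment_loads(boundaries, cps):
    """boundaries: increasing segment boundary temperatures (°C);
    cps: average heat-capacity flowrate CP (kW/°C) of each segment.
    Returns the heat load (kW) of every segment."""
    return [cp * (t2 - t1)
            for (t1, t2), cp in zip(zip(boundaries, boundaries[1:]), cps)]

# A cold stream heated from 10 to 165 °C, split into three segments
loads = segment_loads([10, 45, 80, 165], [2.0, 3.5, 2.5])
total_duty = sum(loads)   # each segment is then treated as an individual stream
```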

12.1.4 What Rules and Guidelines Must Be Followed to Extract Data Properly?
As mentioned in Section 12.1.1, data extraction rules were first
introduced by Linnhoff et al. (1982) and frequently used with some
modifications thereafter (Smith, 1995; CPI, 2004 and 2005; Smith,
2005; Kemp, 2007). Most of this work is related to Heat Integration,
but the principles apply as well to mass (water) integration. These
rules are reviewed briefly in this section.
When two or more streams of different temperatures are mixed,
this nonisothermal mixing constitutes a heat exchange with
degradation of the higher temperature. In some cases, such mixing
can also cause cross-Pinch transfer problems, as in Figure 12.3(a),

FIGURE 12.3 Nonisothermal stream mixing extracted as (a) three streams and (b) two streams.

where three streams are extracted. The correct data extraction for this
case involves two streams, as shown in Figure 12.3(b).
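The heat exchange hidden in nonisothermal mixing can be made explicit with a simple adiabatic balance: the junction temperature is the CP-weighted mean of the branch temperatures. The CP values below are illustrative.

```python
def mix_temperature(streams):
    """streams: (CP, T) pairs; returns the adiabatic mixing temperature,
    i.e. the CP-weighted mean of the branch temperatures."""
    return sum(cp * t for cp, t in streams) / sum(cp for cp, _ in streams)

# Two branches of a hot stream cooled to 70 °C and 35 °C before the junction
t_mix = mix_temperature([(2.0, 70.0), (3.0, 35.0)])   # 49.0 °C
```

Extracting the branches down to their individual target temperatures, rather than down to this mixed temperature, keeps the implicit exchange visible to the analysis.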
General guidelines for data extraction may be summarized as follows:
1. Heat losses. In most cases, heat losses can be neglected. However, they should not be neglected when streams (mass and heat flows in pipes) are long or subject to varied temperatures. In such cases, the solution is to introduce hypothetical coolers (or heaters) that represent the heat loss.
2. Extracting utilities. The utilities should never be extracted from
the existing plant or flowsheet, for then the solution would
likely arrive at the same utility values and perhaps neglect
some options that would be more efficient. This rule of thumb
applies especially to cases where utilities can be generated
on-site and thereby (at least partially) replace costly existing
utilities. In this connection it should be remembered that
steam is not always a utility; sometimes it is also a process
stream (e.g., stripping steam in separation columns). Process
streams should remain in place and not be removed.
3. Generating utilities. The Heat Integration analysis may indicate
some valuable options for using otherwise wasted heat or
cold to generate utilities. The Grand Composite Curves can
be used to locate such options. However, when extracting
data it must be recognized that steam requires the boiler
feedwater (BFW) to be heated, water to be evaporated, and
the steam to be superheated; see Figure 12.4. Many mistakes
have been caused by designers who simply matched the

FIGURE 12.4 Extraction of a cold stream including segments for BFW preheating, evaporation, and superheating.
steam generation line without making provisions for
preheating and superheating.
4. Extracting at the effective temperature. In some cases a stream
cannot be extracted directly because it still has to be used by a related process. For example, a hot stream should be
extracted at temperatures at which the heat becomes
available. An example is given in Smith (2005, p. 433) for a
reactor using a quench liquid.
5. Forced and prohibited matches. In almost any process there exist matches that are necessary for technological reasons (the forced matches) as well as matches, such as certain hot and cold stream pairings in a heat exchanger, that should be prohibited (e.g., to prevent contamination of one of the streams). In manual design these constraints have to be observed by the designer; software tools usually offer this option. If not, the constraints can be secured by an appropriate penalty or bonus (as applicable) in the objective function used for the optimization.
6. Keeping streams separate only when necessary. If streams can be
merged, then it may be possible to eliminate some heat-exchanging units. For example, streams that leave the plant
to be treated as wastewater often have some heat content that
can be utilized.

12.1.5 How Can the Heat Loads, Heat Capacities, and Temperatures of an Extracted Stream Be Calculated?
Once a stream has been extracted, the next problem is calculating the
heat-related data. For running plants, there are standard engineering procedures available: taking measurements and then performing data reconciliation (Klemeš, Luťcha, and Vašek, 1979; Minet et al., 2001; VALI III User Guide, 2003). The other option is to develop a flowsheeting simulation model (Klemeš, 1977). An
overview of flowsheeting and balancing simulators was given in
Chapter 9. If a plant is being designed, some data could also be extracted from the process flow diagram (PFD). But all those options
consume time and resources, so in the early design stages (when the
process structure is still under development and likely to be changed
as a result of PI analysis), it is reasonable and easier to use a simplified
approach based on the extracted data. Such an approach is
demonstrated in Figure 12.5 for a part of the flowsheet from
Figure 12.1. The CPs of the stream segments are assumed constant
and calculated from the temperatures and the duties given in the
flowsheet. Experience has indicated that the resulting rough
preliminary data are sufficient and can later be made more precise
by one or more of the procedures listed previously.
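In this simplified approach, a segment CP follows directly from the exchanger duty and the temperature change across it, CP = Q/ΔT. A one-line sketch (the 700 kW duty is an assumed figure, not taken from the flowsheet):

```python
def cp_from_duty(duty_kw, t_in, t_out):
    """Average heat-capacity flowrate CP = Q / |dT| for one flowsheet segment."""
    return duty_kw / abs(t_out - t_in)

# Feed heated from 10 °C to 45 °C by a recovery exchanger of assumed 700 kW duty
cp = cp_from_duty(700, 10, 45)   # 20.0 kW/°C, assumed constant over the segment
```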




FIGURE 12.5 Obtaining rough data from flowsheet heat loads and temperatures.
12.1.6 How Soft Are the Data in a Plant or Process Flowsheet?
Distinguishing soft data from hard data is one of the most important
aspects of data extraction. Inexperienced users usually try to stick to the temperatures shown in the PFD, extract those temperatures, and then perform the PI analysis. However, this approach usually
ends up overlooking many opportunities. A better approach is to
question every temperature, discuss each one with the process
engineer (or plant designer or plant manager), and thereby establish
which temperatures are critical (the hard data) while the rest (the
soft data) can be in some way compromised. In practice, most data
are at least a little soft, and designers can use this fact to their
advantage. Typically, streams that are leaving the plant (see
Figure 12.6) are characterized by soft data and thus are suitable for
optimization via the plus-minus principle (Figure 12.7). Data softness
is closely related to changing conditions and to a design's flexibility,
operability, and resilience.

12.1.7 How Can Capital Costs and Operating Costs Be Estimated?
The need to find cost data arises when the appropriate ΔTmin (which should be close to the optimum) is being selected. The optimum ΔTmin
depends strongly on economic parameters, and its value is important
for both grassroots design and retrofit. Estimating capital costs is
usually a time-consuming procedure. However, it is possible to use

FIGURE 12.6 Soft data for streams leaving a plant.

FIGURE 12.7 The plus-minus principle can be used to optimize application targets by using properly extracted soft data.

approximate methods (Taal et al., 2003) at the initial design stage, when little is known about the types of heat transfer units to be used,
the ultimate design, the materials required, or the temperature,
pressure, and composition of streams. The estimates so derived will
usually suffice until the final detailed design costing, at which time
information is obtained from selected manufacturers. Note, however,
that equipment cost may vary regionally and may also be related to
market conditions (e.g., a large customer can secure discounts, and
prices fall during a recession).
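For such preliminary costing, a power-law correlation of exchanger area of the form C = a + b·A^c is commonly used; this is the general shape of the correlations reviewed by Taal et al. (2003), but the coefficient values below are placeholders, not figures from that review.

```python
def exchanger_capital_cost(area_m2, a=10_000.0, b=350.0, c=0.95):
    """Installed-cost estimate for one exchanger from its area; a, b, and c
    are equipment-, material-, and region-specific coefficients."""
    return a + b * area_m2 ** c

# Rough network cost: sum the correlation over the estimated exchanger areas
network_cost = sum(exchanger_capital_cost(area) for area in [120.0, 80.0, 45.0])
```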
It is much more difficult to establish operating cost, which is
affected by labor, taxation, and so forth but is mainly a function of
energy cost. Here the most obvious problem, and greatest potential pitfall, is using the current price of energy. (It is possible to find
many publications and projects where this rule was not followed.) It
is better to use the anticipated average energy price for the life span
of the plant or, in the case of a retrofit, for the payback period. The
problem then becomes one of estimating this future energy price for
periods that may be as long as five or ten years. It has been shown
(see, e.g., Klemeš and Bulatov, 2001; Donnelly, Klemeš, and Perry, 2005) that even the forecasts of highly qualified experts are frequently inaccurate. One obvious approach is to use scenarios and target the most flexible design, one that would provide a balanced optimum across the various situations.

12.2 Integration of Renewables: Fluctuating Demand and Supply
The integration of renewable energy sources was discussed in
Chapter 6. Renewable resources are usually available on a smaller
scale and are often distributed over a certain region. Their availability
(with the exception of biomass) varies significantly with time and
location. This variability is due to changing weather and geographic
conditions. The energy demands (heating, cooling, and power) of
sites also vary significantly with time of the day and period of the
year. These variations in the supply and demand of renewables can
be predicted in part, and some of the variation is fairly regular: for
instance, day versus night in predominantly cloudless areas for solar
energy. The availability of wind-generated energy can be less
predictable. One approach to dealing with these problems is the
advanced PI technique that employs time as an additional problem
dimension. A basic methodology along these lines (involving Time
Slices and Time Average Composite Curves) was developed for the
Heat Integration of batch processes (Klemeš et al., 1994; Kemp and
Deakin, 1998). This methodology was recently revisited by Foo,
Chew, and Lee (2008).
This methodology has also been extended to the Heat Integration
of renewables. Important steps in this direction were reported by
Perry, Klemeš, and Bulatov (2008) and Varbanov and Klemeš (2010). Dealing with variation and fluctuation brings additional complexity into data extraction: data must be collected for every time slice. In particular, for each case it is necessary to choose the time horizon for the analysis and the number of time slices.
This is a fast-developing field, so it is advisable to monitor recently
published research papers and conference presentations.
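The time-slice bookkeeping behind such methods can be sketched very simply: divide the horizon into slices, record renewable supply and site demand per slice, and compute the surplus or deficit that storage or backup utility must cover. All figures below are illustrative.

```python
# Heat supply from a solar source and site heat demand, kWh per time slice
slices = ["night", "morning", "midday", "evening"]
supply = {"night": 0, "morning": 40, "midday": 90, "evening": 20}
demand = {"night": 30, "morning": 50, "midday": 40, "evening": 60}

balance = {ts: supply[ts] - demand[ts] for ts in slices}    # + surplus, - deficit
backup_needed = -sum(v for v in balance.values() if v < 0)  # if no storage is used
surplus = sum(v for v in balance.values() if v > 0)         # available for storage
```

Comparing `backup_needed` with `surplus` indicates how much of the deficit storage could in principle cover, which is where the choice of horizon and slice count starts to matter.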

12.3 Steady-State and Dynamic Performance

It has been assumed that all analyzed and optimized processes
operate in a steady state. Many industrial processes do operate in this
fashion, and a common control task is to maintain such processes in a
steady state. However, there are also many situations in which the
working regime must be changed. This occurs not only at start-up
and shutdown but also in response to changes in operating conditions
(e.g., outside temperatures) and production conditions (e.g., capacity,
volume). Problems involving such variation are generally addressed
in one of two ways: (1) accommodating various scenarios via a design
objective of minimizing sensitivity to the possible changes or
(2) designing to optimize the total system's overall dynamic
performance. The second option is much more complicated and
requires more time and resources; hence it is used only when
substantial deviations from steady-state operations are anticipated.

12.4 Interpreting Results

After data extraction, the most important aspect of PI analysis and
optimization is the correct interpretation of results. These results are
typically presented in the form of a printout generated by some
software tool; in most cases, the software output is in the form of
a grid diagram (e.g., STAR, 2009) or PFD supported by printout
tablesas with ASPEN (AspenTech, 2009a, 2009b, 2009c) and
gPROMS (PSE, 2009). In order to minimize typos and misinterpretation,
many software tools feature an interface that facilitates the transfer
of data files. This technology is now fairly well developed.
The most challenging aspect of data interpretation is assessing
the results in terms of possible further development or correction of
the important process features. The interpretation will depend on the
extent to which the factors mentioned previously (i.e., data uncertainty,
data softness, flexibility, operability, controllability, safety, availability,
and maintenance) have already been incorporated into the data
extraction. If the data extraction did not reflect these considerations,
then they must be addressed during the data interpretation. At this
stage, the close collaboration of a team that includes the main involved
professionals is highly recommended.
Designers are strongly advised not to stick with just one solution
but rather to explore different potential scenarios associated with
various operating conditions and then test their designs' sensitivity
to the possible variations. For screening and scoping it is helpful to
use all types of Composite Curves as well as information (if provided
by software) on which streams are contributing to specific parts of
the curves.
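As an illustration of how Composite Curves are assembled for such screening, the following sketch builds a hot Composite Curve from stream supply/target temperatures and heat-capacity flow rates (CP) using the standard temperature-interval construction. The stream data are hypothetical.

```python
# Illustrative sketch: hot Composite Curve from stream data via
# temperature intervals (hypothetical stream values).

def hot_composite(streams):
    """streams: list of (T_supply, T_target, CP), T_supply > T_target.
    Returns (T, H) points of the hot Composite Curve, coldest first."""
    temps = sorted({t for s in streams for t in s[:2]})
    H, points = 0.0, [(temps[0], 0.0)]
    for lo, hi in zip(temps[:-1], temps[1:]):
        # Sum CP of all streams spanning this temperature interval.
        cp_sum = sum(cp for ts, tt, cp in streams if tt <= lo and hi <= ts)
        H += cp_sum * (hi - lo)
        points.append((hi, H))
    return points

streams = [(180.0, 80.0, 2.0), (130.0, 40.0, 4.0)]  # degC, kW/degC
curve = hot_composite(streams)
```

Plotting these (T, H) points, together with the analogous cold curve, lets the designer see which streams contribute to each segment, which is exactly the screening information mentioned above.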

12.5 Making It Happen

Even when a sustainable and near-optimum design is developed,
it must still be put into practice. This involves selling the
proposals, which are often unconventional, to plant management,
investors, and contractors. This was a big problem when PI was just
beginning, and great strides in this area were made by UMIST, Bodo
Linnhoff, and his company Linnhoff March in the 1980s and 1990s.
PI has since proven itself and gained in popularity, so decision
makers have become more receptive.
Much of the situation's improvement is due to multinational
companies that have incorporated PI into their design and operational
practice. Because the methodology has become widespread, it is not
possible to list all of these companies. However, among the pioneers
were members of the UMIST and, after the merger, the University of
Manchester Process Integration Research Consortium: Air Products,
Aspen, BASF, Bayer, BOC, BP, Canmet, Degussa, EDF, Engineers
India, Exxon Mobil, Hydro, IFP, JGC, KBC, Mitsubishi Chemical
Corporation, MOL, MW Kellogg, Petrobras, Petroleum Research
Centre, Petrom, Petronas, Saudi Aramco, Shell, Sinopec, Technip,
Total, UOP, and Vito. These firms were joined in the consortium by
several universities, including University POLITEHNICA Bucharest
and Petronas Technological University.
There have also been strong supporters of PI in the United
States, both at universities and in the industry; some of them are
listed in Chapter 13. A major goal remains a close collaboration and
smooth joint effort among PI specialists and the designers,
managers, and owners (or contractors) of processing plants. A
project's best chance of success comes when all these stakeholders share
the goal of developing an optimized and sustainable process.
Information Sources
and Further Reading

There are various sources of information on optimization and
integration in the process industry. The aim of this chapter is
to summarize many of these sources, although their number is
growing rapidly. No such list could be fully comprehensive, but
every attempt has been made to include the most important sources
of information.
The chapter is divided into five sections as follows: (1) general
sources of information, (2) Heat Integration, (3) Mass Integration,
(4) combined analysis, and (5) optimization for sustainable industry.
Within each section, listings are further divided into four groups:
conferences, journals, service providers, and projects. A listing is
repeated when the information is relevant to more than one category;
this makes searching more efficient for users.

13.1 General Sources of Information

13.1.1 Conferences
Conferences that address Process Integration (PI) approaches to
minimizing the use of energy, water, and other resources can be
sorted into three groups. The first group consists of conferences that
are directly related to energy and resource minimization.

Process Integration, Modelling and Optimisation for Energy
Saving and Pollution Reduction (PRES), organized annually
since 1998, <>
European Symposium on Computer Aided Process
Engineering (ESCAPE), organized annually since 1992,

AIChE International Congress on Sustainability Science and
Engineering (ICOSSE), organized since 2009, <www.aiche.
Dubrovnik Conference on Sustainable Development of
Energy, Water and Environment Systems (SDEWES),
organized biannually since 2002, <>
Italian Conference on Chemical and Process Engineering
(ICheaP), organized biannually since 1993, <
Europe Energy Efficiency Conference, <

The second group includes large conferences, organized
throughout the world, whose scientific programs include PI topics.

AIChE annual and spring meetings, specialty conferences,
and cosponsored conferences, <>
European Congress of Chemical Engineering (ECCE),
organized biennially
Chemical Engineering, Chemical Equipment Design and
Automation (CHISA), organized biennially since 1962 (odd
years), <>
World Congress of Chemical Engineering, <
index.html> (the 9th World Congress of Chemical
Engineering, incorporating the 14th APCChE Congress, will
be held in 2013)
World Bioenergy, <>
World Sustainable Energy Days, <>
World Renewable Energy Congress: Innovation in Europe,

The third group of conferences includes those dealing with
specific issues (food processing, pulp and paper, etc.) and at which
energy and resource efficiency are addressed as special cases.

Total Food, organized biennially, <

TAPPI EPE (Engineering, Pulping and Environmental)
Conference, organized annually, <>
CLIMA Congress, official congress of the Federation of
European Heating and Air-Conditioning Societies (REHVA),

13.1.2 Journals
There are many journals that cover topics related to energy, water,
and resource minimization from the Process Integration standpoint;
just a few of them are listed (alphabetically) here.

Chemical Engineering Transactions, <>

Journal of Clean Technologies and Environmental Policy, <www.
Journal of Cleaner Production, <
Resources, Conservation & Recycling, <

13.1.3 Service Providers

The following alphabetical list includes service providers as well as
professional bodies and networks.

ADAS, Inside and Solutions, Woodthorne, Wergs Rd.,
Wolverhampton, WV6 8TQ, UK, <
Artie McFerrin Department of Chemical Engineering, Texas
A&M University, College Station, TX 77843, USA, <www.che.>, contact person: Mahmoud M. El-Halwagi
AspenTech, Aspen Technology, Inc., 200 Wheeler Rd.,
Burlington, MA 01803, USA, <>
Dansk Energi Analyse A/S, <>
Department of Chemical Engineering, Auburn University,
212 Ross Hall, Auburn University, AL 36849-5127, USA, <eng.>
BIS, Department for Business, Innovation and Skills, UK,
Centre for Advanced Process Decision-Making, Carnegie
Mellon University, Department of Chemical Engineering,
5000 Forbes Ave., Pittsburgh, PA 15213, USA, <capd.cheme.>
Center for Engineering and Sustainable Development
Research, De La Salle University, Manila, 2401 Taft Avenue,
1004 Manila, Philippines, <
centers/cesdr/strg.asp>, contact person: Raymond Tan
Centre for Process Integration, School of Chemical
Engineering and Analytical Science, The University of
Manchester (formerly UMIST), Manchester, M13 9PL, UK,
centreforprocessintegration>, contact person: Robin Smith
Centre for Process Integration and Intensification (CPI2),
European Community Project Marie Curie Chair (EXC)
MEXC-CT-2003-042618 INEMAGLOW, Research Institute of
Chemical and Process Engineering, Faculty of Information
Technology, University of Pannonia, Egyetem u.10, Veszprem,
H-8200, Hungary, <>, contact
person: Jiří Klemeš
Centre for Process Systems Engineering, Imperial College
London, C507 Roderic Hill Bldg., South Kensington Campus,
London, UK, <
systemsengineering>, contact person: Efstratios Pistikopoulos
Centre for Technology Transfer in the Process Industries,
University POLITEHNICA Bucharest, 1 Polizu St., Bldg. A,
RO-011061, Bucharest, Romania, <>,
contact person: Valentin Pleşu
Charles Parsons Institute, University of Limerick, Limerick,
Ireland, <
Engineering/Research/Research_Institutes/CPI>, contact
person: Toshko Zhelev
Chiyoda Corporation, Energy Frontier Business Development
Office, 2-12-1 Tsurumichuo, Tsurumi-ku, Yokohama 230-8601,
Japan, <>, contact person: Kazuo
COWI A/S, Parallelvej 2, DK-2800, Kongens Lyngby, Denmark,
tel. +45 45 97 22 11, <>,
DEFRA, Department for Environment, Food and Rural
Affairs, UK, <>
Department of Chemical and Environmental Engineering,
University of Nottingham Malaysia, Broga Rd., 43500
Semenyih, Selangor, Malaysia, <
my/Faculties/Engineering/Research/ENV>, contact person:
Dominic Foo
Department of Chemical Engineering, University of Maribor,
Smetanova ulica 17, Maribor, Slovenia, <>,
contact person: Zdravko Kravanja
Department of Chemical Engineering, Massachusetts
Institute of Technology, 77 Massachusetts Ave., Room 66350,
Cambridge, MA 02139, <>
Department of Chemical Engineering, University of Pretoria,
Lynwood Rd., Pretoria 0002, South Africa, <
default.asp?ipkCategoryID=2063&language=0>, contact person:
Thoko Majozi
Department of Computer Science and Systems Technology,
Faculty of Information Technology, University of Pannonia,
Egyetem u.10, Veszprem, H-8200, Hungary, <www.dcs.vein.
hu>, contact person: Ferenc Friedler
Department of Energy and Climate Change, UK, <www.>
Department of Energy and En