Bechtel Technology Journal
December 2009

Volume 2, No. 1

Foreword
Editorial

CIVIL
Managing Technological Complexity in Major Rail Projects
Siv Bhamra, PhD; Michael Hann; and Aissa Medjber



Measuring Carbon Footprint in an Operational Underground Rail Environment
Elisabeth Culbard, PhD

Intermodulation Products of LTE and 2G Signals in Multitechnology RF Paths
Ray Butler, Andrew Solutions; Aleksey A. Kurochkin; and Hugh Nudd, Andrew Solutions


Cloud Computing—Overview, Advantages, and Challenges for Enterprise Deployment
Brian Coombe


Performance Engineering Advances to Installation
Aleksey A. Kurochkin


Environmental Engineering in the Design of Mining Projects
Mónica Villafañe Hormazábal and James A. Murray


Simulation-Based Validation of Lean Plant Configurations
Robert Baxter; Trevor Bouk; Laszlo Tikasz, PhD; and Robert I. McCulloch

Improving the Hydraulic Design for Base Metal Concentrator Plants
José M. Adriasola; Robert H. Janssen, PhD; Fred A. Locher, PhD; Jon M. Berkoe; and Sergio A. Zamorano Ulloa

Plot Layout and Design for Air Recirculation in LNG Plants
Philip Diwakar; Zhengcai Ye, PhD; Ramachandra Tekumalla; David Messersmith; and Satish Gandhi, PhD, ConocoPhillips Company


Wastewater Treatment—A Process Overview and the Role of Chemicals
Kanchan Ganguly and Asim De


Electrical System Studies for Large Projects Executed at Multiple Engineering Centres
Rajesh Narayan Athiyarath




Options for Hybrid Solar and Conventional Fossil Plants
David Ugolini; Justin Zachary, PhD; and Joon Park


Managing the Quality of Structural Steel Building Information Modeling
Martin Reifschneider and Kristin Santamont

Nuclear Uprates Add Critical Capacity
Eugene W. Thomas


Interoperable Deployment Strategies for Enterprise Spatial Data in a Global Engineering Environment
Tracy J. McLane; Yongmin Yan, PhD; and Robin Benjamins

Site Characterization Philosophy and Liquefaction Evaluation of Aged Sands
Michael R. Lewis; Ignacio Arango, PhD; and Michael D. McHood


Evaluation of Plant Throughput for a Chemical Weapons Destruction Facility
Christine Statton; August D. Benz; Craig A. Myler, PhD; Wilson Tang; and Paul Dent


Investigation of Erosion from High-Level Waste Slurries at the Hanford Waste Treatment and Immobilization Plant
Ivan G. Papp and Garth M. Duncan


Effective Corrective Actions for Errors Related to Human-System Interfaces in Nuclear Power Plant Control Rooms
Jo-Ling J. Chang and Huafei Liao, PhD


Estimating the Pressure Drop of Fluids Across Reducer Tees
Krishnan Palaniappan and Vipul Khosla

The BTJ is also available on the Web at (click on Services, Engineering & Technology, Technical Papers)


© 2009 Bechtel Corporation. All rights reserved.
Bechtel Corporation welcomes inquiries concerning the BTJ. For further information or for permission to reproduce any paper included in this publication in whole or in part, please e-mail us.

Although reasonable efforts have been made to check the papers included in the BTJ, this publication should not be interpreted as a representation or warranty by Bechtel Corporation of the accuracy of the information contained in any paper, and readers should not rely on any paper for any particular application of any technology without professional consultation as to the circumstances of that application. Similarly, the authors and Bechtel Corporation disclaim any intent to endorse or disparage any particular vendors of any technology.



Bechtel Technology Journal
Volume 2, Number 1

Thomas Patterson . . . . . Principal Vice President and Corporate Manager of Engineering
Benjamin Fultz . . . . . Chief, Materials Engineering Technology, Oil, Gas & Chemicals; Chair, Bechtel Fellows
Jake MacLeod . . . . . Principal Vice President, Bechtel Corporation; Chief Technology Officer, Communications; Bechtel Fellow
Justin Zachary, PhD . . . . . Assistant Manager of Technology, Power; Bechtel Fellow

All brand, product, service, and feature names and trademarks mentioned in this Bechtel Technology Journal are the property of their respective owners. Specifically:
Amazon Web Services, Amazon Elastic Compute Cloud, Amazon EC2, Amazon Simple Storage Service, Amazon S3, and Amazon SimpleDB are trademarks of Amazon Web Services LLC in the US and/or other countries. Apache and Apache Hadoop are trademarks of The Apache Software Foundation. Autodesk and AutoCAD are registered trademarks of Autodesk, Inc., and/or its subsidiaries and/or affiliates in the USA and/or other countries. Bentley, gINT, and MicroStation are registered trademarks and Bentley Map is a trademark of Bentley Systems, Incorporated, or one of its direct or indirect wholly owned subsidiaries. Corel, iGrafx, and iGrafx Process are trademarks or registered trademarks of Corel Corporation and/or its subsidiaries in Canada, the United States, and/or other countries. Dell Cloud Computing Solutions is a trademark of Dell Inc. ESRI and ArcGIS are registered trademarks of ESRI in the United States, the European Union, or certain other jurisdictions. ETAP is a registered trademark of Operation Technology, Inc. Flexsim is a trademark of Flexsim Software Products Inc. Google is a trademark of Google Inc. Hewlett-Packard, HP, and Flexible Computing Services are trademarks of Hewlett-Packard Development Company, L.P. IBM is a registered trademark of International Business Machines Corporation in the United States. IEEE is a registered trademark of The Institute of Electrical and Electronics Engineers, Incorporated. Linux is a registered trademark of Linus Torvalds. Mac OS is a trademark of Apple, Inc., registered in the United States and other countries. MapInfo Professional is a registered trademark of Pitney Bowes Business Insight, a division of Pitney Bowes Software and/or its affiliates. Mentum Planet is a registered trademark owned by Mentum S.A. Merox is a trademark owned by UOP LLC, a Honeywell Company. Microsoft, Excel, and Windows are registered trademarks of Microsoft Corporation in the United States and/or other countries.
Oracle is a registered trademark of Oracle Corporation and/or its affiliates.

Justin Zachary, PhD . . . . . Editor-in-Chief
Siv Bhamra, PhD . . . . . Civil Editor
Jake MacLeod . . . . . Communications Editor
William Imrie . . . . . Mining & Metals Editor
Cyrus B. Meher-Homji . . . . . Oil, Gas & Chemicals Editor
Sanj Malushte, PhD . . . . . Power Editor
Farhang Ostadan, PhD . . . . . Systems & Infrastructure Editor

Barbara Oldroyd . . . . . Coordinating Technical Editor
Richard Peters . . . . . Senior Technical Editor
Teresa Baines . . . . . Senior Technical Editor
Ruthanne Evans . . . . . Technical Editor
Brenda Thompson . . . . . Technical Editor
Ann Miller . . . . . Technical Editor
Angelia Slifer . . . . . Technical Editor
Bruce Curley . . . . . Technical Editor

Keith Schools . . . . . Graphic Design
Matthew Long . . . . . Graphic Design
Mary L. Savannah . . . . . Graphic Design
John Connors . . . . . Graphic Design
Diane Cole . . . . . Desktop Publishing

… is a registered trademark of …, inc. Simulink is a registered trademark of The MathWorks, Inc. Sun Microsystems is a registered trademark of Sun Microsystems, Inc., in the United States and other countries. TEAMWorks is a trademark of Bechtel Corporation. Tekla is either a registered trademark or a trademark of Tekla Corporation in the European Union, the United States, and other countries. ULTIMET is a registered trademark owned by Haynes International, Inc.



Foreword

Welcome to the Bechtel Technology Journal! The papers contained in this annual compendium highlight the broad spectrum of Bechtel’s innovation and some of the technical specialists who represent Bechtel as experts in our business. The objective of the BTJ is to share with our clients, fellow employees, and select industry and university experts a sampling of our technical and operational experiences from the various industries that Bechtel serves.

The papers included have been written by individuals from all of the major business units within our company. In some cases, our customers have also made significant contributions as co-authors, and we thank them for that! As you will see when reading the selected papers, Bechtel’s expertise is truly diverse and represents numerous industries, disciplines, and specialties. As can be seen from the variety of topics, Bechtel does represent innovation in our approaches to both solving engineering challenges and managing technical complexity.

The authors, advisory board, editorial board, editorial team, and graphics/design team who have made this publication possible can be truly proud of the outcome. To each go my personal thanks. I am proud to be a small part of this effort and am confident that this Bechtel Technology Journal provides a better understanding of how Bechtel applies our best practices to our work.

Sincerely,

Benjamin Fultz
Chief, Materials Engineering Technology
Bechtel Oil, Gas & Chemicals
Chair, Bechtel Fellows

Editorial

Following our successful publication of the inaugural issue of the Bechtel Technology Journal in 2008, and with much appreciation for the interest it generated in various industry sectors, we are pleased to offer our second annual issue. The BTJ provides a window into the innovative responses of Bechtel’s leading specialists to the diverse technical, operational, regulatory, and policy issues important to our business. We are confident that this collection of papers, selected from a substantial number of worthy submissions, offers useful and interesting information and presents solutions to real problems.

The editorial staff invites your comments or questions germane to the BTJ’s content. Please send them to me at jzachary@bechtel. We wish you enjoyable reading!

Justin Zachary
Assistant Manager of Technology, Bechtel Power
Editor-in-Chief
Bechtel Fellow

Civil Technology Papers

Managing Technological Complexity in Major Rail Projects
Siv Bhamra, PhD; Michael Hann; and Aissa Medjber

Measuring Carbon Footprint in an Operational Underground Rail Environment
Elisabeth Culbard, PhD, JNP Secondment

A pair of Piccadilly line trains rest in Acton Town station, part of the project to renovate three historic lines of the London Underground—Jubilee, Northern, and Piccadilly.


MANAGING TECHNOLOGICAL COMPLEXITY IN MAJOR RAIL PROJECTS

Issue Date: December 2009

Siv Bhamra, PhD
Michael Hann
mchann@bechtel.
Aissa Medjber
amedjber@bechtel.

Abstract—This paper discusses the common technical issues that may arise during the execution of large projects and presents a structured approach to managing the technological complexity of delivering major rail projects that comply with customer requirements. Using a case study example, the paper then sets out means to control and reduce the project risks involved in the design, implementation, and final handover of a major rail project. These risk reduction means are accomplished via the structured provision of assurance evidence combined with continuous validation against requirements. This approach ensures close and continuous adherence to customer requirements, builds confidence in the end product, and counteracts the risk of not meeting final delivery for commercial operation. The major Crossrail Project in London is used as a case study for the application of the approach.

Keywords—integration, rail projects, safety, systems engineering, technology, validation

© 2009 Bechtel Corporation. All rights reserved.

ABBREVIATIONS, ACRONYMS, AND TERMS

CPFR    Crossrail Project Functional Requirements
IM      infrastructure manager
LU      London Underground
NR      Network Rail
PDP     Project Delivery Partner
RAMS    reliability, availability, maintainability, safety
SMS     safety management system

INTRODUCTION

Recent decades have seen a continued increase in the demand for railway passenger and freight services in most regions of the world. This is particularly true in Europe, the Middle East, South Asia, and the Far East, where major investments are now underway to improve railway service performance. As the demand grows for safer, more efficient, operationally flexible, and higher performance railway systems that are well integrated with other forms of transport, customer requirements can be met only through the carefully controlled application of emerging technology. Advanced technologies and shrinking design times are increasingly being seen as a means to assist in responding more quickly to growing customer requirements in a commercially competitive environment. This paper discusses some of the recent trends in the growth of technological complexity and examines root causes for the risks that can interfere with meeting customer requirements and expectations.

BACKGROUND

Increasing the Performance of Railways

Railways have grown and expanded throughout many parts of the world since their invention in the UK two centuries ago. In many areas, trains have become faster and more frequent in response to growth in demand. At the same time, as trackforms have improved and rolling stock has become more resilient, the ability to run at ever-higher speeds has become a viable commercial proposition. In addition, since the 1950s this demand for rail service in some countries has exceeded the performance levels provided by traditional mechanical interlockings to maintain safe distances between trains and has driven the need to develop more sophisticated technology without compromising safety standards. Advanced computer-based signalling and train control technologies are increasingly being specified by customers around the world who seek to gain maximum performance from both existing and future infrastructure. The globalisation of system solutions has given rise to a wide-ranging number of reference sites within the rail industry. Alternative systems for traction power, ventilation, communications, automatic fare collection, passenger information, stations, and railway operational control centres are becoming critical requirements on new projects.

Additional industry challenges arise from the fact that rail projects are often spread over long geographic distances, crossing different communities and even countries, leading to cultural and behavioural issues that prevent technology alone from delivering the desired outcomes. The commercial, political, and environmental restrictions encountered in providing new rail corridors, particularly in heavily populated urban environments, have contributed to making more intense use of existing routes the most beneficial way of delivering improved performance.

Business Drivers for Technology Application

The use of ever-more-sophisticated and complex technology in rail applications has arisen principally as a result of the need to:

• Enhance service capacity by increasing the number and speed of trains to accommodate the growing passenger and freight usage demand
• Meet increasing passenger expectations regarding service quality and punctuality
• Comply with commercial targets for improving operational and maintenance efficiencies and environmental performance and keeping railways affordable for passengers whilst minimising the burden for state subsidies
• Improve safety, reliability, and availability by minimising the frequency and impact of equipment failures
• Integrate new security provisions to protect passengers, staff, and assets against the threat of malicious activity

DEFINING THE PROCESS

Managing Complexity

The increasing complexity of rail systems and the need to ensure their integration into the surrounding infrastructure have created a need for a systems engineering approach. This emerging discipline has become essential to projects around the globe. In this context, complexity encompasses not only engineering technology, but also the human organisation and the wider business and environmental fields within which major rail projects are now delivered. Failure to manage the integration of a complex system results in significant problems not only at its handover to the operator but also during its operational life.

During a project’s development phase, it is necessary for the parties involved to come to agreement regarding the life-cycle planning and baselines that eventually lead to project acceptance. Reaching agreement is the first, and perhaps the most important, step in building the confidence that is so vital when seeking final handover. This agreement is particularly important when dealing with customers that are inexperienced or are new to the delivery organisation. In dealing with complex systems, it has become necessary to develop a systems engineering process that uses a robust suite of tools capable of capturing project requirements and managing organisational interfaces. A simplified diagram of systems engineering activities is shown in Figure 1. The figure illustrates how systems engineering management needs to sit at the heart of a project.

Figure 1. Systems Engineering Activities
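As a concrete illustration of the kind of tool support described here, the sketch below models a minimal organisational interface register. It is purely hypothetical: none of the names, parties, or entries come from the paper or from any real project toolset.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Interface:
    """One organisational interface that must be actively managed."""
    owner: str        # party responsible for the interface
    counterpart: str  # party on the other side of the boundary
    subject: str      # what must be agreed across the boundary

# A toy interface register (illustrative entries only).
register = [
    Interface("Signalling designer", "Rolling stock supplier", "Train detection compatibility"),
    Interface("Civil works contractor", "Track systems contractor", "Tunnel fixing positions"),
    Interface("Operator", "Maintainer", "Access regime for planned maintenance"),
]

def interfaces_for(party: str):
    """List every interface a given party must manage, on either side."""
    return [i for i in register if party in (i.owner, i.counterpart)]

print(len(interfaces_for("Operator")))  # -> 1
```

In practice such a register would live in a requirements-management tool rather than in code, but the point stands: each interface has a named owner, a counterpart, and a subject that must be closed out before integration.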

The management team must have the ability to define clear work processes and understand and integrate inputs of the key people. At the heart of all activities is proactive, highly competent systems engineering management. Therefore, the appointment of key personnel to the leadership team who are both managerially and technically competent in the task at hand is critical.

Process Overview

The systems engineering approach was originally developed and successfully applied in the United States defence and space industries. It has been progressively applied in other industries, and the basic approach is now being increasingly adopted on complex rail projects. Systems engineering is the systematic process that includes reviews and decision points intended to provide visibility into the process and encourage early and regular stakeholder involvement. Successful rail projects require early and continuous involvement of the customer, railway operators, maintainers, regulators, and other stakeholders in project development. Their participation provides stakeholders the opportunity to contribute to the steps in the process where their input is needed. Baselines establish the agreed-upon project development stages, and life-cycle planning for system integration starts at the development phase.

The process places emphasis on four broad stages:

• Stage 1—Leadership and Integration
• Stage 2—Requirements Management
• Stage 3—Design
• Stage 4—Verification and Validation

Figure 2 shows a schematic representation of the four stages.

Figure 2. Simplified Four-Stage Process

Stage 1—Leadership and Integration

Stage 1 manages the concurrent input from all participating customer functions (railway operations, maintenance, finance, legal, etc.) to optimise the railway project’s definition and capital investment objectives.

Stage 2—Requirements Management

Stage 2 comprises two sub-functions:

• Requirements Identification: Definition, modelling, and optimisation of the proposed new railway as it is expected to operate after construction and handover
• Requirements Validation: Robust analytic support to the requirements, design, documentation, and verification functions

Stage 3—Design

Stage 3 generates the engineered designs from the customer requirements. These designs are used for follow-on procurement and construction activities and provide a detailed definition of all project life-cycle risks, along with proposed measures for their control.

Stage 4—Verification and Validation

Stage 4 interactively validates the outputs from the other functions throughout the design and execution phases of the project to ensure that customer requirements have been achieved and that risk to project execution and future users of the operational railway has been managed to the extent practicable.

CASE STUDY: APPLICATION OF SYSTEMS ENGINEERING APPROACH FOR CROSSRAIL

The four-stage systems engineering model is being applied in the UK to the Crossrail Project. The Crossrail Project has an estimated total installed cost of £15.9 billion ($24 billion) and will provide passenger services along a 118 km (73 mile) route from Maidenhead and Heathrow in the west through new twin-bore 21 km (13 mile) tunnels under central London to Shenfield and Abbey Wood in the east. As shown in Figure 3, the project will provide new railway tunnels and stations under London and connect the existing rail routes to the east and west. When it opens for passenger service, Crossrail will increase London’s public transport network capacity by 10%, supporting regeneration across the capital and helping to maintain London’s position as a world-leading financial centre for decades to come. Crossrail will serve an additional 1.5 million people within a 60-minute commuting distance of London’s key business districts.

Figure 3. Crossrail Location and Route

The complexity of the Crossrail Project is a result of many factors, including:

• Constraints over the scope, funding, and programme to deliver the project
• The geographical location of Crossrail
• The involvement of multiple sponsors, stakeholders, and local communities
• The requirement to integrate major complicated systems
• The myriad of technological challenges and opportunities
• The need to co-ordinate project delivery with design consultants, contractors, and local industry participants

Stage 1—Leadership and Integration

The Crossrail Project has the full support of the UK Government and all main political parties. The Crossrail Act 2008 provides the Crossrail Sponsors the powers they need to deliver this major project within the agreed-to constraints (such as controls over the environmental impact).

Stage 2—Requirements Management

The Crossrail Project requirements are captured at the highest level in the Crossrail Act 2008 (a Hybrid Bill). Below this sits the Project Development Agreement amongst the Crossrail Sponsors, which also sets out the funding arrangements and financial controls. Scope, performance, and delivery outputs are set out in a comprehensive list of the Crossrail Project Functional Requirements (CPFR). A dedicated Crossrail Project client team is managing delivery of all works. To the extent possible, key parties involved in designing and delivering the Crossrail Project are located together. This juxtaposition assists in providing consistent leadership, improves communication, and maximises teaming.

Stage 3—Design

Once Crossrail Project requirements have been defined and understood, they are flowed down into the design process. Each detail design consultant, under the supervision of the Project Delivery Partner (PDP), is awarded a competitively tendered contract that includes a set of General Obligations (i.e., contract terms and conditions, the programme for submittals, and reporting arrangements) and a detailed list of Design Outputs (i.e., Scope of Design deliverables and the dependencies and interfaces with the PDP and other design consultants). The PDP supervises and co-ordinates the work of the design consultants to make sure that the requirements of the Crossrail Sponsors are achieved at maximum value for money. The PDP is responsible for co-ordinating the detailed design in accordance with the overall project programme in order to initiate the start of procurement and construction activity.

Delivery of the Crossrail requirements is planned via the industry standard V-cycle shown in Figure 4. By providing control throughout the life cycle, the V-cycle tracks each department’s understanding of its scope of work and ensures that the integrated sum of the many scopes ultimately delivers the project requirements.

Figure 4. V-Cycle Diagram

Stage 4—Verification and Validation

The Crossrail Project systems engineering process produces the evidence needed to support Engineering Safety Cases, which, in turn, support the Operational Safety Case.

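One plausible reading of the V-cycle in Figure 4 is that each left-side specification level is verified by a mirrored right-side test activity. That pairing can be captured in a simple lookup table; the pairings below are a simplified sketch, not the project's actual test plan.

```python
# Hypothetical V-cycle pairings: specification level -> verifying test activity.
V_CYCLE = {
    "project requirements": "acceptance tests and operational trials",
    "project design specification": "system integration tests",
    "systems design specification": "systems testing",
    "detail design": "subsystem testing",
}

def verification_activity(spec_level: str) -> str:
    """Return the test activity that verifies a given specification level."""
    return V_CYCLE[spec_level.lower()]

print(verification_activity("Detail Design"))  # -> subsystem testing
```

The design choice the V-cycle encodes is that nothing on the left arm is complete until its mirrored activity on the right arm has produced evidence, which is exactly the traceability that the assurance regime relies on.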
The Operational Safety Case is used to obtain authorisation to put the new railway into passenger service (see Figure 5). Key systems engineering inputs to the safety cases are the hazard management process and the requirements identification and management processes. It is the certified documentation, in particular the safety cases, that form the project’s backbone. The evidence of adequate systems engineering provides design assurance.

Figure 5. Systems Engineering Assurance Process for the Crossrail Project

The assurance process starts in the top right-hand corner of the diagram. Moving clockwise, the four stages of systems engineering become apparent. Underpinning each stage is the body of assurance evidence that is generated.

In common with most railway projects, Crossrail involves complex relationships amongst civil infrastructure and railway systems, both geographically and system-wide. Crossrail systems engineering is being delivered through a structured assurance regime that sets out the hierarchical design, execution, and commissioning plans. This regime will provide the required assurance evidence throughout the V-cycle, as shown in Figure 6.

Early involvement of the future operator and maintainer (Stage 1 of the process) has already counteracted project risks and allowed the assurance regime to be developed and planned. An integration process and assurance regime has been established that will allow a progressive build-up of evidence, and hence timely acceptance by the future infrastructure managers (IMs) of the Crossrail assets, that will eventually lead to a smooth handover of the railway into commercial service.

Figure 6. Progressive Assurance of the Crossrail Project

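The progressive build-up of assurance evidence shown in Figure 6 can be mimicked with a toy gate model: a level of assurance can only be claimed once every level beneath it is evidenced. The level names below are illustrative, not the project's actual stage gates.

```python
# Hypothetical assurance levels, lowest first (echoing Figure 6's roll-up).
LEVELS = [
    "testing of elements",
    "integrated station tests",
    "systems testing and commissioning",
    "end-to-end dynamic testing",
    "trial operations",
]

def highest_assured(evidence):
    """Return the highest level for which all lower levels are evidenced."""
    assured = None
    for level in LEVELS:
        if level not in evidence:
            break  # a missing lower level blocks every level above it
        assured = level
    return assured

# Evidence for trial operations does not count until the middle levels close out.
done = {"testing of elements", "integrated station tests", "trial operations"}
print(highest_assured(done))  # -> integrated station tests
```

The example deliberately shows evidence arriving out of order: having run trial operations buys nothing until the intermediate testing levels are complete, which is the essence of a progressive assurance regime.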
AVOIDING THE CONSEQUENCES

Minimising Uncertainty

In the early stages of a complex project, there is invariably some cost and schedule uncertainty. As time progresses and the scope of the work becomes better understood, the uncertainty diminishes. This can be depicted by the “cone of cost certainty” shown in Figure 7. The less-experienced project teams in the industry face greater uncertainty. The systems engineering approach focuses on resolving uncertainty in a project’s early stages by providing a better understanding of the requirements and their dependencies. The process of incremental design and verification also reduces the risk of uncertainty in estimates developed early in the project.

Figure 7. Cone of Cost Certainty

Strategic planning for major rail projects is typically 20 to 30 years ahead. Advanced technology applications for rail projects being pursued today typically have been in development for 3 to 6 years and are intended to support passenger operations for 25 to 40 years. As technology changes ever more quickly, so too does the challenge to better plan and more effectively manage the resulting complexity in the design and execution of major rail projects. This has driven the need for a structured systems engineering approach that fosters the competency and leadership skills needed to deliver complex projects.

Avoiding Late Changes and Project Cost Increases

Whilst there are limited instances of no errors in project development, change orders are often issued during construction. If base project requirements are significantly changed during implementation, it can have a disproportional impact on costs. Depending on the nature of the change, the rectification is generally very disruptive and therefore costly. On the other hand, design changes during implementation typically have a lesser impact because they are usually driven by value engineering in the same way as construction changes. Figure 8 graphically depicts how a change in base project requirements can result in the most significant impacts. Figure 8 further shows how a poorly defined or missed requirement can be less costly to fix early, rather than during later stages when the cost of rework can be compounded. Software development projects have also shown that the legacy cost of early design defects can significantly increase the overall project cost if they are identified late. The structured systems engineering process defines the requirements and validates the design documents early and continues to do so throughout the project life cycle, thus maximising the chances of identifying and resolving defects early.

Figure 8. Impact of Late Changes on Costs

CONCLUSIONS

Technological advancements in railway systems are being introduced with increasing rapidity in the effort to meet the business demands of freight customers and to expand and improve passenger services. Delivery has to be achieved whilst providing a high level of confidence to operators and maintainers that the end product is safe, is reliable, and meets requirements. The four key stages of the systems engineering process set out in this paper, and illustrated by a case study example of their application on the UK’s Crossrail Project, provide a means for handling complexity from the wide range of sources typically faced during the design and execution of major rail projects. A strong systems engineering process counteracts the risk of late changes, cost increases, and disappointments during later stages of execution, or the loss of operational service functionality sought by the customer. Whilst the application of this paper is to rail projects, the approach is equally relevant to other complex infrastructure projects.

Recent evidence from many parts of the world suggests that there is a need to deploy a more rigorous approach to specifying project requirements and their means of verification throughout the project life cycle as a means of avoiding cost increases, delays, or the loss of operational service functionality sought by the customer. Whilst the application of this paper is to rail projects, the four structured systems engineering stages set out are also applicable to managing complexity in any industry.

BIOGRAPHIES

Siv Bhamra, PhD, has 28 years of experience in the project management and engineering of major rail projects. His early career was spent delivering integrated transport projects for the London Underground. Siv joined Bechtel in 1999 while on the Jubilee Line Extension Project in London. He made a major contribution to the safe delivery, on time and budget, of High Speed 1 in the UK. More recently, his wide-ranging skills have been ideally suited to the role of manager of Engineering for Tube Lines, the 30-year contract to renew three of London's busiest lines. Currently, as the delivery director for Bechtel on Crossrail, he manages technical functions and oversees the delivery of systems works on this major project. Siv has delivered rail projects in Europe and the Far East and has performed studies for rail operators in the US, Middle East, and South Asia. For 2 years, from 2002, Siv was also a senior transportation advisor to the European Bank for Reconstruction and Development. His numerous technical achievements encompass the development of solid-state traction inverting substations to save energy, the implementation of advanced train control technologies to improve the performance of existing and future railways, and the performance of research into state-of-the-art security management systems. Elected a Bechtel Fellow in 2004, Siv was recognized for his efforts in restoring the Piccadilly Line to passenger service following the terrorist attacks in London in 2005, commended for helping to recover operational service on the Northern Line following a derailment in 2003, and commended for courage after a major fire at Kennington Station in 1990. He won the Enterprise Project of the Year Award in 2006, the London Transport Award in 2004, and a Safety Management Award in 1998, and has twice been accredited with further awards of technical excellence (1984 and 1986). Siv is a guest lecturer to several universities and is also a respected transportation security specialist and advisor. He has presented at several conferences and has written numerous papers on management and technical disciplines. Siv is a member of five professional institutions and three technical societies. Siv has a PhD in Railway Systems Engineering from the University of Sheffield, South Yorkshire; an MBA in Project Management from the University of Westminster, London; and an MSc in Engineering Design from the University of Loughborough, all in the UK.

Michael Hann has over 30 years of experience in the rail industry and has an impressive record of delivering major works. He has worked on the full spectrum of rail projects, from light rail and urban metros to high-speed lines. For a short time in the late 1990s, he was the senior project manager responsible for the delivery of Hung Hom Station in Hong Kong. Before that, he had worked for the London Underground and a number of other railway companies, engaging in activities ranging from conducting feasibility studies to implementing full schemes, and it was there that he developed an interest in all engineering disciplines. Mike returned to London to be the systems acceptance manager for one of the first fully computer-based signal interlocking systems introduced into the UK. Currently Engineering Manager for Crossrail Central, Mike has functional oversight of 400 engineering personnel deployed on capital projects and maintenance activities. Mike is a Fellow of the Institution of Civil Engineers (UK) and a member of the Institution of Railway Signalling Engineers. He gained a BSc in Civil Engineering at the University of Greenwich.

Aissa Medjber has a 25-year career in project engineering, management, and construction on rail and petrochemical projects. He was the project engineering manager for construction of the world's largest refinery and petrochemicals complex in Jamnagar, India. Aissa's prior experience also involved a stint as control systems manager for the Onshore Gas Development 1 & 2 projects in Abu Dhabi. This experience has allowed him to take up the role of system-wide manager on the Crossrail Project. On this major project, Aissa was responsible for all aspects of the delivery of some £350 million worth of system-wide contracts for track, tunnel systems, power supplies, signaling, communications and control, and mechanical and electrical equipment. Aissa received a Post-Graduate Diploma in Systems and Control at the University of Manchester and a BSc in Electrical Engineering at the University of Salford.

MEASURING CARBON FOOTPRINT IN AN OPERATIONAL UNDERGROUND RAIL ENVIRONMENT

Elisabeth Culbard, PhD
eculbard@bechtel.com

Issue Date: December 2009

Abstract—As part of a commitment to meet local and national CO2 emission targets, Tube Lines calculated its carbon footprint (the impact of human activities on the environment based on the amount of greenhouse gasses produced, measured in units of CO2) and instituted measures that achieved the targeted reductions. Tube Lines separated its footprint into the following two components:

• Corporate (direct) footprint – energy and utilities consumption
• Process (indirect) footprint – embedded/indirect CO2 emissions resulting from materials use, waste generation, and transport

Tube Lines had reduced its carbon footprint by 5,277 metric tons (5,817 tons) by the end of 2008 based on its 2006 baseline. In 2009, a further 1,000 metric ton (1,102 ton) reduction target has been set based on the 2008 baseline and will be met.

Keywords—carbon, carbon footprint, emissions, fuel

© 2009 Bechtel Corporation. All rights reserved.

INTRODUCTION

Tube Lines Overview
Tube Lines¹ has a 30-year public private partnership (PPP) contract with London Underground (LU) to maintain and upgrade all infrastructure on the Jubilee, Northern, and Piccadilly underground metro lines. This work encompasses upgrading the signalling on all three lines to increase capacity and reliability and reduce journey times; introducing a new fleet of trains on the Piccadilly line in 2014 and refurbishing the fleets on the other two lines; upgrading 100 stations with an emphasis on improving security, information flow, and the general environment for passengers; and replacing and refurbishing hundreds of kilometres of track and numerous lifts and escalators and improving the general travelling environment for passengers.

¹ Tube Lines is indirectly owned by Bechtel Enterprises (one-third) and Ferrovial (two-thirds).

CO2 Emission Reduction Targets
In the UK, the following CO2 emission reduction targets have been set:

• UK government national target of 1.2% per year until 2050
• London target of 1.7% per year until 2025

To meet these targets, Tube Lines measured its carbon footprint. Working with the Carbon Trust (an organisation created by the UK government to help businesses accelerate the move to a low carbon economy) and AEA Energy and Environment (AEA), which maintains the UK National Atmosphere Emissions Inventory (NAEI) (the official air emissions inventory for the UK), Tube Lines calculated the carbon emissions that arise from the life cycle of all of its operations and projects. The 2006 data (2006 baseline) represents the carbon footprint used to make business and investment decisions and drive change to ensure that carbon management is central to the way Tube Lines undertakes its work. In 2008, the baseline was updated to account for changes in Tube Lines projects and operations.
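The two annual percentage targets above compound over their respective horizons. As a rough illustration of what they imply in total (the 2008 start year and simple multiplicative compounding are this sketch's assumptions, not stated in the text):

```python
# How an annual percentage reduction compounds over a planning horizon.
# The 1.2%/year (UK, to 2050) and 1.7%/year (London, to 2025) rates come
# from the text; the 2008 start year and compounding model are assumptions.

def remaining_fraction(annual_pct, years):
    """Fraction of emissions remaining after reducing annual_pct% each year."""
    return (1 - annual_pct / 100) ** years

uk = 1 - remaining_fraction(1.2, 2050 - 2008)      # 42 years at 1.2%/yr
london = 1 - remaining_fraction(1.7, 2025 - 2008)  # 17 years at 1.7%/yr
print(f"UK target implies roughly a {uk:.0%} total reduction by 2050")
print(f"London target implies roughly a {london:.0%} total reduction by 2025")
```

Under these assumptions, the modest-sounding annual rates amount to cuts of roughly 40% and 25%, respectively, which is why a formal baseline and year-on-year tracking matter.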

ABBREVIATIONS, ACRONYMS, AND TERMS
AEA — AEA Energy and Environment
DSM — distribution services management
ERU — emergency response unit
GPS — global positioning system
GSM — global system for mobile communication
L&E — lifts and elevators
LED — light-emitting diode
LU — London Underground
NAEI — National Atmosphere Emissions Inventory
PPP — public private partnership
P-Way — permanent way
ZWTL — zero waste to landfill

Tube Lines' Carbon Footprint
The corporate and process components of Tube Lines' carbon footprint (see Figure 1) consist of the following emissions:

• Corporate footprint—emissions resulting from Tube Lines' direct operation of its premises, its paper consumption, and its employees' commutes
• Process footprint—emissions resulting from 34 processes that Tube Lines chose to measure, e.g., station modernisation, track replacement, escalator refurbishment, and fleet maintenance. These processes involve materials use and waste generation and the transportation of materials and people during the course of Tube Lines' work.

[Figure 1. Elements Constituting Tube Lines' Carbon Footprint: Tube Lines 2006 baseline of 78,000 metric tons CO2 (85,980 tons), comprising a corporate footprint of 6,000 metric tons CO2 (6,614 tons) covering 15 Westferry Circus and Trackside House, and a process footprint of 72,000 metric tons CO2 (79,366 tons) composed of 34 processes; LU power (traction, depots, stations) of 205,000 metric tons CO2 (226,000 tons)]

To calculate its corporate footprint, Tube Lines used full-year 2006 energy and utilities (gas and water) consumption from utility bills, paper consumption from ordering records, and employee commute emission levels from a study conducted in 2006.

To calculate the process carbon footprint, Tube Lines used a complex set of calculations and AEA's carbon impact tool to perform a life-cycle assessment of each process, e.g., escalator refurbishment (see Figure 2). This assessment evaluated environmental impacts and identified inputs and outputs in terms of resources, materials, waste, and fuel. The impacts in terms of CO2 emissions were also determined. For each process, a process champion was identified and a workshop was held to identify and quantify material types and volumes, waste types and volumes, and transportation. Data was obtained from bottom-up forecasts, method statements, and bills of quantities. This data was then processed and converted into equivalent metric tons of CO2 emissions for an average process, e.g., the carbon footprint for an average station modernisation. The following formula was developed:

(Materials used + waste generated + materials transport + staff transport) x AEA conversion factors = process carbon footprint

The process carbon footprint was then factored up by multiplying the individual carbon footprints by the number of times the process was carried out, e.g.:

(carbon footprint for an average station modernisation x number of station modernisations completed) + (carbon footprint for an average metre of track replacement x number of metres of track replacement completed)

Whilst track replacement can be standardised for a metre of track, the amount of standardisation that can be achieved in station modernisation is less obvious and involved considerable thought and analysis by Tube Lines and AEA; standardising the process footprints has posed a challenge. In identifying the CO2 impact of an activity, Tube Lines has also been able to identify and target efficiency improvements to reduce the CO2 impact of that activity, with the amount of CO2 tied to the control mechanism required to achieve best practicable means.
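The per-process formula and the factoring-up step can be sketched as a small calculation. All quantities and conversion factors below are made-up placeholders for illustration; they are not Tube Lines' or AEA's actual data:

```python
# Illustrative sketch of the process-footprint calculation described above.
# Quantities and per-unit conversion factors are invented placeholders.

def process_footprint(materials_t, waste_t, materials_transport_km,
                      staff_transport_km, factors):
    """CO2 (metric tons) for one execution of a process:
    (materials + waste + materials transport + staff transport)
    x AEA-style conversion factors."""
    return (materials_t * factors["materials"]
            + waste_t * factors["waste"]
            + materials_transport_km * factors["materials_transport"]
            + staff_transport_km * factors["staff_transport"])

# Hypothetical conversion factors (metric tons CO2 per unit of each input).
factors = {"materials": 0.8, "waste": 0.3,
           "materials_transport": 0.001, "staff_transport": 0.0002}

# Footprint for an "average" execution of two of the 34 processes.
station_mod = process_footprint(450, 120, 9000, 30000, factors)  # per station
track_repl = process_footprint(0.12, 0.05, 3.0, 1.5, factors)    # per metre

# Factor up by the number of times each process was carried out.
total = station_mod * 8 + track_repl * 12000
print(f"station modernisation: {station_mod:.1f} t CO2 each")
print(f"total process footprint: {total:.0f} t CO2")
```

The factoring-up line mirrors the worked formula in the text: an average footprint per process, multiplied by the number of completions, summed across processes.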

[Figure 2. Escalator Cross-Section: components include the handrail, balustrade, newel wheels, steps, comb-plate, trailer wheel tracks, main drive shaft assembly, main drive chain, gearbox, truss, tension carriage shaft assembly, step chain wheel tracks, D-tracks, and drive machine; process inputs and outputs include materials, energy, waste, deliveries, and travel]

ENVIRONMENTAL BUSINESS OBJECTIVES

"Go Green" is the name of Tube Lines' environmental management system. Environmental business objectives are set by Tube Lines' Executive Committee and are tied into the employee bonus scheme.

2006–2008 Objectives
Over the past few years, the following objectives were set:

• 2006 Energy Consumption (kWh)—5% reduction at Head Office by the end of 2006. A 3.5% reduction was achieved!

• 2007 Energy Consumption (kWh)—2% reduction at Head Office, Trackside House, Stratford Training Centre, and Piccadilly line depots by the end of 2007. This target was set based on the technical potential for reducing energy consumption, expectations of potential reductions in energy consumption at each location, achievements in reducing energy consumption during the preceding year, and forecasted weather. The data was collected for 15 Westferry Circus, Trackside House, Stratford Training Centre, and other smaller sites. During 2007, lighting banks and controls were re-set, movement detectors were fitted, air-conditioning thresholds were adjusted at office locations, lift use was restricted during off hours, and employee awareness was raised. A 20% reduction was achieved!

• 2008 Energy Consumption (kWh)—2% reduction at Head Office, Trackside House, Stratford Training Centre, and Piccadilly line depots by the end of 2008. During 2008, new personal computer equipment was provided to all Tube Lines sites. This new equipment automatically goes into a "sleep" state when left unattended, which saves energy. Blade-type computer servers were also employed. Tube Lines increased its energy savings by giving up occupancy of two floors at its Westferry Circus offices and sharing other areas with other tenants (79/21 split based on floor space occupied). Tube Lines also converted to a green energy tariff so that all Tube Lines-sourced electricity (Head Office, Trackside House, and Stratford Training Centre) comes from renewable energy. A 5% reduction was achieved!

• 2007 Paper Consumption—15% reduction in white A4 paper usage. A 44% reduction was achieved!

• 2008 Paper Consumption—5% reduction in white A4 paper usage. This target was established after measuring the number of A4 and A3 reams of paper purchased in 2007.

Energy consumption was reduced by 20% in 2007 and 5% in 2008 against 2% annual targets.

[Table 1. Tube Lines CO2 Reductions: CO2 reduction, in metric tons (tons), for each activity that reduces CO2, including reduction in electricity consumption; reduction in paper consumption; more fuel-efficient DSM road fleet; dedicated paper recycling by DSM; ERU fleet; 2007 DSM fleet improvements; using Acton for storage; ZWTL P-Way civils embankment works; platform resurfacing by infrared method (2007 and 2008); P-Way sleeper popping; ZWTL at Kingsbury Embankment; GSM; installation of GPS on ERU fleet; reduced waste seat covers; in situ wheel turning; hose nozzle to reduce water; information technology computer refresh; procurement of fire doors; gas consumption at Westferry Circus and Trackside House; water consumption at Westferry Circus and Trackside House; overtiling on stations; L&E metal savings; Borough lifts; and truss escalator replacement at Heathrow Airport]

The target for 2008 was to reduce paper consumption by 5% across the business based on the total amount ordered in 2007. A 22% reduction was achieved!

• 2007 Fuel Efficiency—5% improvement in fuel efficiency of commercial road fleet over 7.5 metric tons (8.3 tons). A 14% reduction was achieved!

• 2008 Fuel Efficiency—Maintain achievement of 2007 target for commercial road fleet over 7.5 metric tons (8.3 tons). This level was maintained! (To be third-party verified in 2009.)

Current or Recently Completed Objectives
The following activities have also been completed or are under way:

• Corporate
— Inclusion of an energy assessment in the investment application form
— Updates to the environmental training courses to include energy management
— Quantification of the financial implications of climate change on Tube Lines

• Stations
— Installation of long-life lamps and light-emitting diodes (LEDs) during station modernisations
— Trial use of 360-degree cameras
— Installation of waterless urinals
— Identification of 22 energy-saving initiatives and their estimated savings

LU has provided funding to review all available low-carbon technologies for use in stations. The purpose of this review is to prioritise those technologies for use in practical trials in the underground energy-efficient "model" station proposed to LU.

• P-Way
— Extension of zero-waste-to-landfill (ZWTL) pilot to the entire permanent way (P-Way)

• Jubilee and Northern Lines Upgrade Project
— Investigation of power supply upgrades
— Investigation of reconfiguring distribution network

— Installation of 20 kilometres (12.4 miles) of composite conductor rail
— Reorganisation of track power segments
— Discussions with LU regarding coasting
— Modelling of temperature and humidity on Jubilee and Northern lines
— Sustainable design of supporting infrastructure:
– Northern line control centre: green roof/intelligent lighting
– Stratford train crew accommodation: zero-maintenance cladding/optimal use of natural light/intelligent lighting/green roof/flexible floor design

• Piccadilly Line Upgrade Project
— Consideration of environmental innovation and CO2 impacts embedded in the prequalification process for new rolling stock
— Investigation of energy storage
— Investigation of composite materials

CARBON CONSUMPTION

By the end of December 2007, the CO2 impact of all processes was determined, and the 2007 reduction was estimated to be 1,085 metric tons (1,196 tons) against the 2006 baseline data. The 2008 reduction target of 5,000 metric tons (5,512 tons) was then agreed to by the Executive Committee and planned to be achieved through potential reductions in electricity, gas, water, paper, and fuel consumption and waste production, as well as increases in recycling, transportation efficiencies, and energy efficiency. This 2008 target was equivalent to a 6% reduction of Tube Lines' 2006 (baseline) footprint.

Figure 3 and Table 1 show that by the end of Period 13 2008, the target had been met: a reduction of 5,006 metric tons (5,518 tons) of CO2 was achieved, as depicted in Table 1. The original 2008 achieved reduction of 5,006 metric tons was subsequently increased to 5,277 metric tons (5,817 tons) during the verification process.

Tube Lines is not able to provide monthly figures for plant and equipment fuel usage, since the use of diesel/petrol plant and equipment is discouraged and restrictions exist for its use in Section 12 stations (fire regulated). However, fuel usage has formed part of the CO2 management process assessments.

[Figure 3. 2008 CO2 Reductions by Period: carbon dioxide saved, in metric tons, by period P1 through P13, plotted against action level, warning level, target, and performance]
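The metric-ton figures quoted through this section each carry a US short-ton equivalent in parentheses. Those pairs follow from the standard mass conversion, as a quick check confirms:

```python
# The paper quotes each mass in metric tons with the US short-ton equivalent
# in parentheses. A metric ton is 1,000 kg; a short ton is 2,000 lb
# (907.18474 kg), so 1 metric ton is about 1.1023 short tons.

SHORT_TONS_PER_METRIC_TON = 1000 / 907.18474

def to_short_tons(metric_tons):
    """Convert metric tons to US short tons, rounded to the nearest ton."""
    return round(metric_tons * SHORT_TONS_PER_METRIC_TON)

# Reproduce the paired figures from this section.
for metric, quoted in [(1085, 1196), (5000, 5512), (5006, 5518), (5277, 5817)]:
    converted = to_short_tons(metric)
    print(f"{metric} metric tons -> {converted} short tons (paper quotes {quoted})")
```

Each converted value matches the parenthetical figure in the text, which is a useful sanity check when descrambled numbers must be re-paired.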

In 2008, a 22% reduction in paper consumption led to cost savings in paper purchased.

[Figure 4. 2008 Target and Average Energy Usage: electricity usage in MWh by month, January to December, with target and 12-month rolling average]

[Figure 5. 2008 Paper Usage: reams of paper purchased by period, P1 through P13, with target and 13-period rolling average]

[Figure 6. 2008 Fleet Fuel Consumption: fuel consumption in mpg by month, January to December, with target, 12-month rolling average, and monthly average mean temperature in °C]

As shown in Figure 4, energy consumption at the end of 2008 was 5% below 2007 performance levels and surpassed the 2% reduction target. By the end of 2008, the yearly paper consumption was 22% below 2007 levels and surpassed the 5% reduction target (see Figure 5). As shown in Figure 6, fleet fuel consumption was on target to maintain or slightly improve upon its 2007 performance, achieving 13 miles (21 km) per UK gallon (11 miles [18 km] per US gallon).

Through the Tube Lines business objective targets, corporate and process footprint activities and improvements that target a reduction against the LU power footprint can also be tracked (see Table 2).

[Table 2. LU CO2 Reductions: activities that reduce the LU CO2 level include installation of green roofs and creation of a "model" station (proposal submitted to LU) featuring automatic/controlled lighting, use of waste heat in tunnels for heating, reduced use of escalators during off hours, and installation of wind turbines for auxiliary supply; the listed CO2 reductions are 6, 188, 52, and 19 metric tons (6.6, 207, 57, and 21 tons)]

CONCLUSIONS

Tube Lines has developed a mechanism for evaluating the carbon footprint of its day-to-day operations and created a baseline against which it can make focussed operational and investment decisions to reduce CO2. The aim of this programme is to introduce CO2 management into the business case process.

ACKNOWLEDGMENTS

The author would like to thank Charlotte Simmonds and the rest of the Tube Lines "Go Green" Environment team for their valuable contributions to this paper. Since Charlotte has now left Tube Lines, all enquiries on this topic and associated environmental management issues should be forwarded to Steven Judd at steven.judd@tubelines.com.

BIOGRAPHY

Elisabeth Culbard, PhD, is a technical expert in the field of sustainability. Elisabeth's responsibilities include developing sustainability and responsible procurement programs across a range of Bechtel Civil's business portfolio of projects, providing problem-solving technical solutions, and offering strategic environmental and sustainability policy advice. She has more than 25 years of experience on transport and infrastructure projects in London, the UK, and internationally, and she is experienced in finding workaround solutions to problems that can cause project delays or budget overruns.

Her project experience includes route optioneering, environmental and social impact assessment, project stakeholder engagement, contaminated land remediation, design integration, and construction management; 4 years of environmental planning on Crossrail, leading the project through the Hybrid Bill; construction supervision and sustainability planning for the High Speed 1 Temple Mills Depot; and Autostrada Transylvania. She also worked on the Channel Tunnel as environment manager. Before joining Bechtel, Elisabeth was responsible for the day-to-day environmental and social performance of the International Finance Corporation's global portfolio of infrastructure construction projects. This work won her the prestigious James Wolfensohn Excellence Award for due diligence on global railway projects.

Elisabeth was team leader on Bechtel's Strategy Working Group on Climate Change and a Steering Group Member on the UK Construction Industry Research Information Association project on "How to Deliver Socially Responsible Construction Projects." She is also an Expert Member on a Joint Institute of Civil Engineering/Engineers Against Poverty Panel on "Promoting Social Development in International Procurement."

Elisabeth received her DIC (Diplomate of Imperial College of Science and Technology) from Imperial College of Science and Technology, University of London; her PhD in Environmental Engineering from the Royal School of Mines, University of London; and her BSc with Combined Honours in Geology and Environmental Science from the University of Aston in Birmingham.


Communications Technology Papers

21 Intermodulation Products of LTE and 2G Signals in Multitechnology RF Paths
Ray Butler, Aleksey A. Kurochkin, and Hugh Nudd

33 Cloud Computing—Overview, Advantages, and Challenges for Enterprise Deployment
Brian Coombe

45 Performance Engineering Advances to Installation
Aleksey A. Kurochkin

BTC Mobile: With global reach and a local touch, Bechtel is the perfect partner for rapidly growing communications companies. In Bulgaria, we are managing site deployment for a nationwide wireless network.


INTERMODULATION PRODUCTS OF LTE AND 2G SIGNALS IN MULTITECHNOLOGY RF PATHS

Ray Butler, Ray.Butler@andrew.com, Andrew Solutions
Aleksey A. Kurochkin, aakuroch@bechtel.com
Hugh Nudd, Hugh.Nudd@andrew.com, Andrew Solutions

Issue Date: December 2009

Abstract—Communication signal distortion in wireless network radio frequency (RF) paths has been predicted theoretically and studied experimentally by numerous authors for various signals. Research and testing have demonstrated that, while most vital signal parameters remain within the limits specified by applicable standards, an increasing level of spurious emissions caused by cross- and intermodulation (IM) can result when the composite signal power approaches the specified maximum power. This and other signal quality issues must be considered when designing and deploying multitechnology systems. To illustrate these points, this paper examines the reaction of a multiband antenna system to a wideband long-term evolution (LTE) signal and a narrowband 2G signal. In doing so, the paper discusses the causes of passive IM, its harmful effects at the system level, methods of measuring it, and design principles to reduce its generation to acceptable levels.

Keywords—accessibility, antenna system, antenna testing, component selection, field testing, intermodulation (IM), key performance indicator (KPI), long-term evolution (LTE), multitechnology wireless system, passive intermodulation (PIM), passive RF component, receiver desensitization, retainability, signal distortion, spurious emissions, voice quality

© 2009 Bechtel Corporation. All rights reserved.

INTRODUCTION

Multichannel radio communication systems have been deployed for many years. In radio communication systems such as terrestrial microwave links, which have been deployed for many decades, and mobile phone systems, which have been deployed for about 25 years, a common requirement has been to suitably manage spurious signals produced by intermodulation (IM). [1, 2, 3]

IM is the interaction (or mixing) of the fundamental signal frequencies in a nonlinear circuit. Signals at different frequencies in any nonlinear circuit create a large number of additional, unwanted signals at other frequencies. These other frequencies are the harmonics (integer multiples) of the fundamental frequencies and the sum and difference frequencies of any combination of these fundamentals and harmonics.

Consider two fairly close frequencies f1 and f2. When carried in a circuit together, they can be caused to generate a number of related frequencies, such as 2f1 – f2 (third order), 3f1 – 2f2 (fifth order), or 4f1 – 3f2 (seventh order). (The harmonic numbers in the two terms of these difference frequencies differ by 1.) Introducing a third nearby frequency f3 can cause products such as f1 + f2 – f3 (also third order) to be generated.

These various products are close to, and can even be the same as, the system's receive channel frequencies. Depending on the particular frequency plan, some of these signals can appear in the desired communication channels and cause interference. If they coincide with the receive frequencies, they cause additional noise in the system or, at high enough magnitudes, occupy one or more channels and make them unavailable for traffic. Even in more recent mobile phone systems whose wideband modulation schemes (e.g., code division multiple access [CDMA]) do not have discrete "channels" in the frequency domain, IM can still produce unwanted noise that reduces system efficiency.

These interfering signals can be generated by nonlinear behavior in either the active circuits (amplifiers and signal processing circuits) or the passive circuit elements (antennas, cables, connectors, filters, etc.). The main sources of nonlinearity are usually in the active circuits, but appropriate system design typically limits the number of frequencies handled by, say, a single amplifier. However, some of the passive circuit elements, such as antennas, are required to carry many frequencies. To avoid degrading both existing and new systems, it is important to understand the sources of IM and to minimize the levels of IM generated; the necessity to minimize generated IM levels has long been recognized.
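The low-order products listed above can be enumerated directly. The short sketch below lists the odd-order difference products of two carriers and flags any that land in a receive band; the carrier frequencies and band edges are arbitrary illustrative values, not taken from any particular frequency plan:

```python
# Enumerate odd-order intermodulation products of two carriers f1 and f2,
# of the form (n+1)*f1 - n*f2 and (n+1)*f2 - n*f1 (orders 3, 5, 7), and
# flag any that fall inside a receive band. All frequencies are
# illustrative placeholders.

def im_products(f1_mhz, f2_mhz, max_order=7):
    """Return {order: [low-side product, high-side product]} in MHz."""
    products = {}
    for n in range(1, (max_order - 1) // 2 + 1):
        order = 2 * n + 1
        products[order] = [(n + 1) * f1_mhz - n * f2_mhz,
                           (n + 1) * f2_mhz - n * f1_mhz]
    return products

f1, f2 = 935.0, 950.0      # two transmit carriers, MHz (hypothetical)
rx_band = (890.0, 915.0)   # hypothetical receive band, MHz

for order, freqs in im_products(f1, f2).items():
    for f in freqs:
        hit = rx_band[0] <= f <= rx_band[1]
        print(f"order {order}: {f:7.1f} MHz{'  <-- in Rx band' if hit else ''}")
```

With these example numbers, the fifth-order product 3f1 – 2f2 (905 MHz) and the seventh-order product 4f1 – 3f2 (890 MHz) fall inside the receive band, illustrating how higher-order products spread progressively further from the carriers and can reach receive frequencies even when the third-order products do not.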

Modern transmission paths carry many frequencies, making it necessary to also control passive intermodulation (PIM). In long-term evolution (LTE) systems, PIM affects one or several 180 kHz blocks, and PIM also increases LTE intercell interference on the affected band and overall in the system. PIM can cause the system to operate at maximum power instead of under power control, causing undesirable increased power dissipation in the components; this reduces cell and neighbor capacity.

This paper describes the causes of PIM, its harmful effects at the system level, methods of measuring PIM, and design principles to reduce its generation to acceptable levels.

ABBREVIATIONS, ACRONYMS, AND TERMS

2G – second generation digital mobile phone service
ARQ – automatic repeat request
BCCH – broadcast control channel
BS – base station
BTS – base transceiver station
CDMA – code division multiple access
DIN – Deutsches Institut für Normung (German Institute for Standardization)
Eb/No – ratio of signal energy to additive noise
EDGE – enhanced data rates for global evolution
GPRS – general packet radio service
GSM – global system for mobile communication
IM – intermodulation
KPI – key performance indicator
LTE – long-term evolution
MIMO – multiple input, multiple output
MU – multiple user
PIM – passive IM
RF – radio frequency
RSSI – received signal strength indication
Rx – receive/receiver
SNR – signal-to-noise ratio
SU – single user
Tx – transmit/transmitter
UMTS – universal mobile telecommunications system

CAUSES OF PASSIVE INTERMODULATION

PIM is generated in a circuit carrying more than one frequency whenever a nonlinearity occurs, i.e., whenever the voltage is not exactly proportional to the current or the output power is not exactly proportional to the input power. The greater the degree of nonlinearity (i.e., the greater the curvature of the voltage/current or output power/input power characteristic), the greater the level of the PIM signal generated. In practice, the two fundamental causes of nonlinearity and PIM generation are (a) some degree of current-rectifying action at the conductor joints and (b) a varying magnetic permeability because of the presence of ferromagnetic materials in or near the current path. [2]

Current-Rectifying Action

In the passive RF circuits of typical components, multiple metallic parts form the conduction path, so the complete transmission path has many metallic junctions that the RF currents must cross. The metals used in transmission paths usually have very thin oxide layers on the surface. At the typical signal levels found in communications systems, electrons still cross the very thin surface layer by tunneling. The oxide itself may be semiconducting (e.g., copper oxide), but even if it is insulating (e.g., aluminum oxide), the result is some degree of rectifying action and hence a nonlinear mechanism for generating PIM products. [2] This nonlinear process produces a small diode effect.

The severity of PIM generation depends on the degree of nonlinear-to-linear current flow, which, in turn, depends on how well true metal-to-metal contact is created at the junction. Low contact pressure increases the proportion of nonlinear current flow and, correspondingly, the magnitudes of the PIM products generated. The best contact is achieved by welding, soldering, or brazing the two metal parts. If doing this is impractical, then there should be some means of creating high pressure across the contacting surfaces.

Nonlinearity at conductor joints can also be produced by the presence of corrosion products. These may form over time, especially at junctions of dissimilar metals in the presence of moisture. Note also that when a direct current exists in the transmission path along with the RF signal (to power a tower-mounted amplifier, for instance), the RF signal may be moved to a more nonlinear portion of the voltage/current characteristic, causing an increase in the magnitudes of the PIM products generated.

Presence of Ferromagnetic Materials

The second mechanism of PIM generation arises from the nonlinear permeability that occurs when RF currents flow in ferromagnetic materials (e.g., iron or nickel). In their nominally unmagnetized state, these materials consist of small regions (domains) that are highly magnetized but oriented in different directions so that there is no net magnetism. An RF field causes the domains to oscillate, and the domains experience internal forces resisting the motion (hysteresis). The result is a magnetic permeability that depends strongly on the field strength. [2] Thus, ferromagnetic materials show a high degree of nonlinearity to RF currents, and the levels of PIM generation are correspondingly high. Because the skin depth in a conductor (the effective thickness of the surface layer in which the RF current flows) depends on permeability, so does the RF resistance.

The mechanisms for PIM generation are considered in more detail in the following descriptions of design principles and methods for minimizing PIM generation.

Additional Considerations

When the input signal to a passive device includes more than one frequency, various products of the input harmonics are generated. Because PIM products are caused by nonlinear processes, their amplitudes depend in a nonlinear way on the amplitudes of the generating signals; the relative level of the PIM products with respect to that of the driving signals depends on the signal level. Basic considerations suggest that third-order products increase by 3 dB for every 1 dB increase in signal level. In practice, the increase varies somewhat from case to case, but the measured increase is usually somewhat less than the predicted figure; a rate of about 2.5 dB per 1 dB is typical. Fortunately, the amplitudes of higher order PIM products (fifth order, seventh order) are lower by 10 or 15 dB between successive orders. Laboratory tests show that PIM also increases as device temperature rises, for instance when a component becomes overheated.

EFFECTS OF PIM IN WIRELESS SYSTEMS

PIM Generation and Levels of Tolerance

As described in the previous section, passive RF components usually experience some level of PIM, which must be specified. The differences between PIM products are in their magnitudes and in their effects on system performance. For example, if the transmission path passes both global system for mobile communication (GSM)/general packet radio service (GPRS) and LTE signals (as shown in Figure 1), product signals falling within the receive signal band pass through the transceiver filter and could easily desensitize the receiver. While tolerable for systems with reverse-link-limited link budgets, such issues reduce the RF coverage of forward-link-limited systems. In severe cases, either or both technologies may be affected to the point that the transmit signal cannot be detected by the mobile device, the systems stop working partially or completely, and all major key performance indicators (KPIs) are worsened in the area of the affected cell.

PIM can also be generated in the tower, especially in outdoor device connectors, by a phenomenon called "rusty bolt noise," which is caused by erosion of the devices in harsh environments. Some RF path failure modes have been shown to increase insertion loss in the path and also often result in PIM generation. These issues manifest themselves mostly in the transmit path, where the power is high. PIM testing often catches these issues.

Figure 1. Schematic of Cell Site [4] (a GSM BTS and a UMTS Node B, each with Tx/Rx paths combined through duplexers, feed two RF coax cables through diplexers to a dual-band or single-band antenna: GSM Tx1/Rx1 and UMTS Tx3/Rx3 on one cable, GSM Tx2/Rx2 and UMTS Tx4/Rx4 on the other)
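The two-tone arithmetic behind these product orders can be sketched in a few lines. The helper below is an illustrative sketch, not from the paper; the carrier and band values are hypothetical 1800 MHz-band numbers chosen only to show how a third-order product can land in a receive band.

```python
def im_products(f1, f2, max_order=7):
    """Two-tone IM products m*f1 - n*f2 and m*f2 - n*f1, grouped by odd order."""
    products = {}
    for m in range(1, max_order + 1):
        for n in range(1, max_order + 1):
            order = m + n
            if order > max_order or order % 2 == 0:
                continue  # odd orders (3rd, 5th, 7th) fall nearest the carriers
            for f in (m * f1 - n * f2, m * f2 - n * f1):
                if f > 0:
                    products.setdefault(order, []).append(f)
    return products

def in_band(freqs, lo, hi):
    """Products that land inside a receive band [lo, hi]."""
    return sorted(f for f in freqs if lo <= f <= hi)

# Hypothetical 1800 MHz downlink carriers (MHz) and a 1710-1785 MHz uplink band
prods = im_products(1805.0, 1880.0)
hits = in_band(prods[3], 1710.0, 1785.0)  # 2*1805 - 1880 = 1730 MHz falls in-band
```

With these two carriers, the third-order pair is 1730 MHz and 1955 MHz; only the first lands in the protected uplink band, which is exactly the situation that desensitizes the receiver.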

Antennas usually have very stringent PIM requirements because they carry almost the full power of the base station (BS) and carry signals with various frequencies. For example, the specifications for an antenna might be indicated as –107 to –110 dBm IM3 power, measured with two 43 dBm carrier tone inputs. On the other hand, the PIM tolerance of the receiver filter located on the other side of the diplexer is not as stringent because it receives only a very low signal power in a single frequency band. The maximum tolerable PIM performance for an RF network depends on the signals that traverse the individual devices, the locations of the devices in the RF system, and the signal power that is transmitted. RF engineers should indicate the maximum allowable PIM level for each component in the RF path based on a complete analysis of the feeder system.

Effects on Legacy 2G Systems

Legacy 2G systems suffer when PIM reduces the signal-to-noise ratio (SNR). PIM affects most wireless systems with balanced or uplink-limited link budgets by increasing the base station noise floor, thereby reducing receiver sensitivity and the ratio of signal energy to additive noise (Eb/No) in the system. A receiver becomes desensitized when PIM power is comparable to thermal noise or other interferences; typically, this power is measured in decibels and is added to the noise floor of the BS. The degree of desensitization varies depending on the power of the generated harmonics.

To illustrate, suppose an LTE receiver works over the 10 MHz band with 90% utilization. In this example, if receiver desensitization is the typical 0.8 dB and the noise figure is 5 dB, the receiver has a noise floor of –99.5 dBm and can still detect signals with an interference level at the equipment antenna port of –106.5 dBm.

Accessibility

Mobiles in narrowband legacy systems such as GSM experience receiver desensitization as reduced cell radius and increased interference areas. By increasing the BS noise floor, PIM decreases the maximum allowable path loss for the mobile. The GSM mobile may not be able to read the broadcast control channel (BCCH) and access the BS when the mobile is located on the planned edge of the cell. [5] Effectively, a cell affected by PIM shrinks from its planned radius, further decreasing accessibility. Uplink power control may try to combat some level of PIM in the BS by increasing mobile power, but this results in decreased battery life and an increased level of uplink interference.

Retainability

PIM causes decreased retainability in a GSM/GPRS/enhanced data rates for global evolution (EDGE) cell. Assume that a GSM mobile originates a call near a BS in an area with a relatively good SNR. The mobile moves to the cell edge, where the BS, affected by PIM, stops recognizing the mobile. However, the mobile continues to read a stronger received signal strength indication (RSSI) from the originating cell than from the neighbor and does not initiate the handoff. The BS drops the call, dropping the mobile connection before it reaches another cell, thus reducing retainability and worsening the statistics.

Quality

Quality is also affected. Again assume that a GSM mobile originates a call near a BS in an area with a relatively good SNR. The mobile again moves closer to the cell edge, without crossing it. There, although the mobile still reads a stronger RSSI from the originating cell than from the neighbor, the BS, affected by PIM, barely recognizes the mobile. The mobile neither initiates the handoff nor drops the call. Instead, it experiences voice clipping, slow data rates, and other poor quality effects, while still demanding full power from the mobile and transmitting the full power from the BS.

Effects on LTE

An LTE mobile experiences PIM problems similar to those of a legacy mobile despite designed resiliency to interference. The LTE downlink, which consists of 180 kHz frequency blocks, can generate PIM even without the presence of 2G signals. This type of PIM affects most networks, because PIM affects the BS receiver by also increasing the overall noise floor; PIM further increases the receiver noise floor by 1 to 3 dB. When overall power in the band of the affected cell increases, the PIM levels grow, which in turn exacerbates the problem. In severe cases, the receiver is overloaded and blocked [5] and the LTE BS loses its reference signal and resets. The PIM level drops during the reload and the cell starts working normally, but as soon as it starts picking up more users, the power increases, PIM levels grow, and the cycle is repeated.
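The –99.5 dBm and 0.8 dB figures in this example follow directly from standard thermal-noise arithmetic. A minimal sketch (assuming the usual –174 dBm/Hz thermal noise density, and treating 90% utilization of the 10 MHz channel as 9 MHz of occupied bandwidth):

```python
import math

def noise_floor_dbm(bw_hz, nf_db):
    """Receiver noise floor: thermal noise (-174 dBm/Hz) + bandwidth + noise figure."""
    return -174.0 + 10.0 * math.log10(bw_hz) + nf_db

def desense_db(noise_dbm, interference_dbm):
    """Rise in the effective noise floor when interference power adds to thermal noise."""
    total_mw = 10 ** (noise_dbm / 10.0) + 10 ** (interference_dbm / 10.0)
    return 10.0 * math.log10(total_mw) - noise_dbm

# LTE receiver: 10 MHz channel at 90% utilization, 5 dB noise figure
floor = noise_floor_dbm(0.9 * 10e6, 5.0)  # about -99.5 dBm
rise = desense_db(floor, -106.5)          # about 0.8 dB desensitization
```

An interference level of –106.5 dBm sits 7 dB below the –99.5 dBm floor, which raises the effective floor by about 0.8 dB, the typical desensitization quoted above.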

When the LTE downlink experiences path nonlinearity, the tones interact with each other and their products distort the downlink signal and leak into the adjacent channels, which may decrease channel throughput. The affected cell may get a high interference indicator over a large portion of the band; LTE spectrum efficiency decreases and inter-cell interference increases. The neighboring cells start losing capacity in an attempt to accommodate the interference issue of the affected cell.

Power Control and Inter-Cell Interference Coordination

An LTE mobile's power control pushes up the mobile's power in an attempt to combat PIM in the BS receive path. [6] In extreme cases, the mobile may stop recognizing even an otherwise strong downlink channel.

Scheduler and ARQ

In addition, usage of the automatic repeat request (ARQ) mechanism may increase because more transmission errors are being caused by PIM-generated noise in the channel. More data frames would need to be retransmitted, and it can be predicted that the second layer of the ARQ mechanism would be used more often than intended. The scheduler may not be able to fight this increased noise and the resulting SNR reduction because the scheduler is better designed to combat temporary RF signal fading. In this case, instead of working against fading and capitalizing on multipath, PIM may affect the whole scheduler algorithm.

Multiple Antenna Element Transmission

Multiple antenna element transmission is designed to create diversity gain in LTE. Transmission to/from two or more antennas in both multiple-user multiple-input, multiple-output (MU-MIMO) and single-user (SU-MIMO) modes is affected, albeit to various extents. [6] If both antennas are connected to the same receiver, PIM from one antenna receive path would desensitize the whole receiver. If the antennas are connected to different receivers, the PIM levels will be different in each, and the diversity would cause the path less affected by PIM to be chosen. For example, if SU-MIMO transmission is established with four antenna paths (layers), four times the data transmission speed of the single connection can be expected. [6] However, if one or more of these paths is dropped or affected by PIM, the top speed could be significantly less, resulting in slower data rates than would be expected with multilayer transmission.

MINIMIZING PIM GENERATION

As stated, the main mechanisms that generate IM products in passive components are poor (low pressure) contacts between conductors, the presence of corrosion products, and the presence of ferromagnetic materials in or near the conducting path. The steps to minimize PIM are twofold: through proper planning and through better physical conditions.

Proper Planning

Frequency Planning
The RF engineer should conduct IM analyses to identify possible frequency combinations that could produce harmful products. Several IM study software packages are available that can quickly run through combinations of frequencies and point out which ones can affect the receivers. The RF engineer can then try to avoid these combinations by using advanced options in 2G frequency hopping or the LTE scheduler.

Spatial Planning
If the harmful frequency combinations overlap the carrier frequencies, site selection and antenna placement should be considered. The site engineers should be advised of the appropriate minimum vertical and horizontal separation between carrier antennas. The same analysis may need to be repeated at each site in question to determine site-specific antenna placement.

Cable Utilization
Cable utilization to help avoid PIM generation involves a systemwide decision. When there is a need to transmit two or more transmit signals (Figure 1), the RF/systems engineer should be encouraged to distribute these signals over different cables. This decreases the maximum transmit power in a path and decreases the PIM. At the same time, the greater the number of frequencies passing through shared components, the greater the possibility that harmful frequency combinations may occur. However, this decision would also be locked in due to PIM considerations.

Better Physical Conditions

PIM products are minimized by improving the physical connections: by paying attention to the detailed design of all conductor junctions so that they remain solid and reliable for all service conditions, by eliminating the possible formation of corrosion products over time, and by eliminating ferromagnetic materials from any region where significant RF currents flow.

Contacts
A low-pressure conductor junction actually has true metal-to-metal contact at only a relatively low number of contact points, which means that some current flows across the nonlinear oxide layers on the conductor surfaces and generates IM products.
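What the IM study packages automate can be illustrated with a brute-force scan. This sketch is hypothetical (the carrier list and protected band are invented for illustration) and checks only the third-order products 2f1 – f2 and 2f2 – f1:

```python
def third_order(f1, f2):
    """Third-order two-tone products, in the same units as the inputs."""
    return (2 * f1 - f2, 2 * f2 - f1)

def harmful_pairs(carriers_mhz, rx_lo, rx_hi):
    """Return carrier pairs whose IM3 products land inside [rx_lo, rx_hi] MHz."""
    flagged = []
    for i, f1 in enumerate(carriers_mhz):
        for f2 in carriers_mhz[i + 1:]:
            if any(rx_lo <= p <= rx_hi for p in third_order(f1, f2)):
                flagged.append((f1, f2))
    return flagged

# Hypothetical downlink carrier plan; uplink band 1710-1785 MHz to be protected
plan = [1805.2, 1815.0, 1860.0, 1875.0]
avoid = harmful_pairs(plan, 1710.0, 1785.0)
```

The flagged pairs are the combinations the frequency plan (or the hopping/scheduler configuration) should avoid assigning simultaneously on a shared path.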

The degree of nonlinearity and the generation of IM are reduced, then, by improving the contact mechanism. Methods of making RF electrical contacts, in approximately descending order of quality in terms of minimizing IM generation, include:

• Soldering, brazing, or welding
• Clamped joints (with suitable support arrangements)
• Butt joints
• Spring fingers
• Crimped joints

All of these find application in typical passive components except possibly the last (crimped joints), which can loosen with repeated thermal expansion and contraction. Soldering, brazing, and welding establish a continuous metallic path for the current flow, but they are often impractical: some junctions must be able to be disassembled (e.g., connector interfaces), and the configuration of others may be determined by product assembly or field assembly requirements. In these cases, the mechanical arrangements for the contact design must be capable of generating high contact pressures (up to 1,000 psi). Such pressures force a higher proportion of the contact area to be true metal to metal, thereby reducing the degree of nonlinear current flow. Also, the design of the contact and the arrangement of the associated mechanical support must be such that high contact pressure is maintained throughout the fluctuations in ambient temperature and vibration that occur under various operating conditions.

Corrosion Products
Corrosion products, typically metal oxides, hydroxides, carbonates, and other salts, form on metal surfaces in the presence of water. Their formation can be especially severe at the junctions of dissimilar metals. For this reason, it is most important to prevent the ingress of even small amounts of water into an RF transmission path (where it would also cause immediate electrical performance degradation). It is also good design practice to use compatible metals in the electrochemical series at all junctions.

Ferromagnetic Materials
As noted earlier, ferromagnetism gives rise to a field-dependent permeability and generation of IM products. To minimize IM products, only nonferromagnetic materials (such as copper, silver, gold, aluminum, brass, and phosphor–bronze) can be used for conductors or as plating or under-plating materials for conductor surfaces. Before the deployment of modern cellular phone systems, nickel (which is ferromagnetic) was often used as a plating material on conductor surfaces (e.g., on RF connector bodies) because of its excellent environmental stability and reasonably low cost. This practice was soon found to cause IM generation, and nickel has generally not been used on RF connectors in these systems for some time. Nickel was sometimes used, too, as under-plating for gold, and doing so was also found to give rise to IM products.

Figure 2 shows the variation of current density with distance into the conductor for 100 microinches of gold (which is a typical plating thickness) on nickel at three frequencies (0.1, 1.0, and 10.0 GHz). At 1 GHz (close to cellular operating frequencies), the current density at the top of the under-plating is about 40% of that at the outer current-carrying surface; hence, the nonlinearity and the generation of IM products would be high. The situation improves somewhat at higher frequencies because of the reduction in skin depth, but even at 10 GHz (much higher than present operating frequencies), the current density at the nickel under-plating surface is still 5% of that at the outer conductor surface.

It may be necessary to use a ferromagnetic material for strength or other reasons. For example, very small diameter coaxial cables may use a copper-clad steel wire for the inner conductor. In these cases, the current-carrying surfaces must have a thickness of several skin depths over the ferromagnetic material.

Figure 2. Current Distribution for Composite Plating [7] (current density versus distance from the outer surface, 0 to 200 microinches, for the total current at 0.1, 1.0, and 10.0 GHz)
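The trend in Figure 2 can be approximated from the classical skin-depth formula. The sketch below assumes a handbook resistivity for gold and a simple exp(–d/δ) decay (ignoring reflections at the gold/nickel interface), which reproduces the behavior described above: roughly 35% to 40% of the surface current density remains at the under-plating at 1 GHz, but only a few percent at 10 GHz.

```python
import math

MU0 = 4e-7 * math.pi         # permeability of free space, H/m
RHO_GOLD = 2.44e-8           # resistivity of gold, ohm-m (handbook value)
PLATING_M = 100e-6 * 0.0254  # 100 microinches of gold plating, in metres

def skin_depth_m(freq_hz, rho=RHO_GOLD, mu_r=1.0):
    """Classical skin depth: sqrt(2*rho / (omega * mu0 * mu_r))."""
    return math.sqrt(2.0 * rho / (2.0 * math.pi * freq_hz * MU0 * mu_r))

def density_fraction(freq_hz, depth_m=PLATING_M):
    """Current density at a given depth relative to the surface: J/J0 = exp(-d/delta)."""
    return math.exp(-depth_m / skin_depth_m(freq_hz))

frac_01 = density_fraction(0.1e9)   # most of the current still reaches the nickel
frac_1 = density_fraction(1.0e9)    # roughly 35-40% remains at the interface
frac_10 = density_fraction(10.0e9)  # only a few percent remains
```

The same calculation shows why "several skin depths" of nonferromagnetic plating are required: each additional skin depth cuts the current density at the ferromagnetic material by another factor of e.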

PIM MEASUREMENT CONSIDERATIONS

As indicated earlier, it is important to measure PIM, not only to validate the integrity of the components in the RF path, but also to verify that the connectors have been properly attached and connections properly torqued. A PIM test can be used to identify loose connectors as well as poorly assembled or designed RF components and modules that might be missed by a swept return loss or other test. At the same time, a PIM test can also be more difficult to perform; this means that care must be taken in specifying and executing the tests. This section explores the best practices for testing PIM and provides recommendations on how best to proceed.

Measurement Sensitivity

The successful measurement of PIM products in the factory or in a field setting is difficult because of the sensitivity of the measurement to factors in the environment as well as in the test setup and instrumentation itself. One of the first things to understand about PIM measurements is how the difference in the transmitter power and receiver sensitivity of the tester can affect the accuracy and sensitivity of the measurements.

Test Tone Power Level

Test tone power level is important because the effect of transmitter power on the level of the PIM products is nonlinear. Two test tones, each +43 dBm (20 W), constitute the industry standard measurement. Note that the specified power level applies to each of the two transmitter signals that the instrument generates; a PIM analyzer specified to output 43 dBm signals actually outputs two signals at 20 W (43 dBm) each, for a total output power of 40 W (46 dBm).

If an antenna, for example, is measured to have PIM products of –150 dBc with two 20 W test tones, then the performance of that same antenna would be –164 dBc when measured with two 2 W test tones. This is because reducing the input signal by 10 dB reduces the absolute PIM from the same antenna by 24 dB, while the carrier reference drops by only 10 dB; relative to the carrier, the measured products improve by 24 dB – 10 dB = 14 dB. Figure 3 helps to explain this concept. In Figure 3, the upper bar represents the high power measurement, where 43 dBm of signal power per tone is used; the measured signal level is –107 dBm, and the difference between these two is –150 dBc. In the lower bar, test tones with 33 dBm of signal power are used, and the measured PIM signal from the same antenna is 24 dB lower, which is –131 dBm. The difference between –131 dBm and +33 dBm is –164 dBc.

Theoretically, if the transmitter carrier changes by 1 dB, the corresponding change in PIM power level is 3.0 dB, which is typical for a third-order product. In practice, this change ranges between 2.4 dB and 2.8 dB for every 1 dB change in transmitter signal power level.

Antenna manufacturers, for example, typically use a PIM test chamber with RF-absorptive material to avoid the effects of the environment. Note also that the 50 ohm dummy loads used to calibrate the test set for a swept return loss test generate a high PIM level; a load supplied by the test equipment provider must instead be used. Typically, such a load consists of a length of small diameter coaxial cable.

Figure 3. Equivalent PIM Tests for Different Power Levels – 20 W vs. 2 W (for every 1 dB of carrier reduction, PIM reduces ~2.4 dB; a 10 dB margin above the tester's typical residual PIM and noise floor is needed for a valid measurement)
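The Figure 3 arithmetic is easy to reproduce. The sketch below uses the nominal 2.4 dB-per-dB third-order slope quoted above (the measured slope varies between about 2.4 and 2.8 dB per dB):

```python
SLOPE_DB_PER_DB = 2.4  # nominal measured third-order slope (theory: 3.0)

def pim_dbc(pim_dbm, carrier_dbm):
    """PIM level expressed relative to one carrier tone."""
    return pim_dbm - carrier_dbm

def scaled_pim_dbm(pim_dbm, carrier_delta_db, slope=SLOPE_DB_PER_DB):
    """Absolute PIM after changing each carrier by carrier_delta_db."""
    return pim_dbm + slope * carrier_delta_db

# Antenna measured with two 43 dBm (20 W) tones: PIM of -107 dBm -> -150 dBc
high_power_dbc = pim_dbc(-107.0, 43.0)
# Drop both tones by 10 dB (to 33 dBm / 2 W): absolute PIM falls 24 dB
low_power_dbm = scaled_pim_dbm(-107.0, -10.0)
low_power_dbc = pim_dbc(low_power_dbm, 33.0)
```

The same device therefore reports –150 dBc at 20 W but –164 dBc at 2 W, which is why a PIM specification is meaningless unless the test tone power is stated alongside it.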

Insertion Loss

It is important to clearly define the reference point of the test tone power level because there is insertion loss in the test cables. Typically, 0.5 dB to 1.5 dB of insertion loss exists in cables, depending on their length and quality. If, as is usually the case, the specification calls for testing at the input terminal of the unit under test, then the loss of the cables must be compensated for by increasing the output power of the instrument. Otherwise, significantly worse errors can result, because the PIM signals change 2.4 dB for every 1 dB of change in the test tones, which is typical for a third-order product. This compensation is typically accomplished via a built-in calibration routine that automatically accounts for the loss in the test cables and increases the PIM testing output power exactly enough to meet the requirements at the output of the test cables.

Residual PIM of Analyzer

Another significant measure of the performance of a PIM analyzer is its own internal residual PIM level; each point in the signal path where there is a connector or contact creates some small level of PIM. It is recommended that the measured PIM value of the unit under test be at least 10 dB above the residual PIM level of the analyzer. Figure 4 shows the measurement error due to residual PIM as a function of the PIM signal's proximity to the residual PIM noise floor. The chart is derived by vector addition; the in-phase and anti-phase levels depend on the relative phases of the individual PIM sources. As seen in Figure 4, the error resulting from measuring within 10 dB of the residual PIM floor is about +2.4/–3.3 dB in the best and worst phase cases; for signal levels closer to the instrument's noise floor, significantly worse errors can result. The low power tester shown in Figure 3 is incapable of accurately measuring a device with performance of –150 dBc (as measured at 20 W) and should not be used for PIM measurements any better than –140 dBc (at 20 W). A PIM analyzer with at least 20 W per tone is recommended.

Swept Measurements

PIM signals change phase as they travel down a cable, and signals from multiple sources add or subtract depending on their relative phases at all points along the cable, their frequencies, and the frequency being observed. The only way to get an accurate measurement is to sweep one of the tones across the band of interest; otherwise, an error almost always results. Thus, a key factor in selecting a PIM analyzer is its ability to perform swept measurements, and a swept test is recommended. A practical alternative is to step one of the test tones across the band in four or five steps when using analyzers that do not allow swept tests. Because the frequencies required to perform a swept test are in commercial use, the test must, in most cases, be performed in a factory setting.

Mechanical Shock

A common practice among top antenna vendors is to induce mechanical shock to an antenna while it is undergoing testing. Doing this in a controlled, repeated manner has been demonstrated to catch poorly performing units that would have otherwise passed the test.

Figure 4. PIM Measurement Minimum Signal Level (interference effect of two PIM sources: measurement uncertainty, Resultant – IM1, plotted against the margin IM1 – IM2, where IM1 is the true PIM of the device under test (not known, estimated by Resultant), IM2 is another PIM source such as the residual IM of the test equipment, and Resultant is the measured PIM; curves span in-phase superposition through anti-phase cancellation)
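The worst-case error bounds at a given margin come straight from the phasor addition used to build Figure 4: at a 10 dB margin, the interfering residual has 10^(–10/20) ≈ 0.32 of the signal's amplitude. A minimal sketch:

```python
import math

def pim_error_bounds_db(margin_db):
    """Worst-case reading error when the true PIM sits margin_db above another
    PIM source (e.g., the analyzer's residual PIM), via phasor addition."""
    r = 10.0 ** (-margin_db / 20.0)      # relative amplitude of the weaker source
    return (20.0 * math.log10(1.0 + r),  # in-phase: resultant reads high
            20.0 * math.log10(1.0 - r))  # anti-phase: resultant reads low

err_hi, err_lo = pim_error_bounds_db(10.0)  # about +2.4 dB / -3.3 dB
```

Doubling the margin to 20 dB shrinks the uncertainty to under ±1 dB, which is the rationale for the 10 dB-minimum (and preferably larger) margin recommended above.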

Test Equipment Setup

The test equipment setup, as used for swept tests and shown in Figure 5, must first be verified before it is used. Fundamentals that must be followed include:

• Use low IM jumper cables.
• Torque connectors per specification to keep them tight.
• Inspect connector faces for damage and cleanliness. Clean or replace them as needed.
• Use a low IM 50 ohm load.
• Use low IM adapters and minimize the number of adapters.
• Do not use braided cables for test applications.
• Use 7/16 DIN connectors as the high performance PIM connectors of choice.
• To prevent mechanical strain on the connectors, use low PIM jumpers to connect heavy or stiff items (antennas and large diameter coaxial cables).
• Perform swept tests as noted above.

High performance jumper cables are a must to correctly measure PIM; otherwise, the cable PIM dominates the measurement and masks the actual performance of the system being tested. The use of a connector saver at the test equipment connection is also encouraged. Resistive loads typically generate high levels of PIM; low PIM loads typically contain lengths of coiled cable and have excellent PIM performance.

Figures 6, 7, and 8 illustrate inappropriate setup configurations and the poor results they lead to. In Figure 6, braided test cables are used. These generally have poor PIM performance, and even if they are good initially, they can degrade with repeated flexing. In Figure 7, the problem is too many adapters and old, worn connectors.

Figure 5. Test Equipment Setup

Figure 6. Use of Braided Test Cables (swept IM3 response measured on a Summitek Instruments SI-2000E (UMTS) analyzer)

Figure 7. Use of Too Many Adapters and Old, Worn Connectors (swept IM3 response measured on a Summitek Instruments SI-2000E (UMTS) analyzer)

In Figure 8, the antenna on the left is properly torqued, while the antenna on the right is loosely torqued. As the figure depicts, the performance of the loose connector is roughly 20 dB to 25 dB worse than that of the properly torqued connector.

Figure 8. Proper Torque for Connections (carrier sweeps with F1 swept up from 1805.0 MHz against F2 fixed at 1880.0 MHz, and F2 swept down from 1880.0 MHz against F1 fixed at 1805.0 MHz; the properly torqued connection measures IM3 near –121 to –125 dBm, the improperly torqued one near –94 to –99 dBm)

Testing Standalone Antennas

When unmounted standalone antennas are to be tested, the best, most repeatable measurements are obtained in an RF anechoic chamber. If such a chamber is unavailable, special care must be taken to guarantee a free sky view for the antenna. This means that the antenna must be tested in an area clear of metal objects and obstructions and within the antenna's main beam pattern. A 90-degree antenna requires a wider clear area than a 33- or 65-degree antenna.

The antenna shown in Figure 9 meets its PIM specification, has a good test setup, and was validated. Then, it was leaned on its side and pointed in different directions. The measured results indicated that extreme variations can be caused by the test surroundings.

Figure 9. Environmental Impacts on Antenna PIM Testing Results (under clear sky: –123 dBm [–166 dBc]; pointed toward forklift: –84 dBm [–127 dBc]; next to person with phone, keys, and badge: –94 dBm [–137 dBc]; near shelter: –102 dBm [–145 dBc]; pointed at fence: –102 dBm [–145 dBc]; near cabinet and test equipment: –96 dBm [–139 dBc])

Inc. No. A. PIM testing is sensitive and can identify faulty components and poor workmanship during installation. Reprinted as Andrew Corporation Bulletin No. 7/16 DIN and Type N.D. April 2009. minimizing PIM generation is critical to achieving optimum system performance. B. [4] [5] [6] Measuring PIM should be added to the standard site testing and troubleshooting procedures. Jading. “Coaxial Cables as Sources of Intermodulation Interference at Microwave Frequencies. GA. “Importance of Antenna and Feeder System Testing in Wireless Network Sites. Intertech Publishing Corp. the wireless business unit of CommScope. September 2007. 2. “LTE: The Evolution of Mobile Broadband. M. However.bechtel. 61–68. Vol. 47. RADC-TR-82-240. Ray Butler is vice president of Engineering.” IEEE Communications.pdf. and S. 91–98. Clearly. Base Station Antenna Systems.html. Rome Air Development Center. Smith. pp. Parkvall.” Final Technical Report. access via http://dl. Astely. [2] December 2009 • Volume 2. and 3. measuring PIM should be added to the standard site testing and troubleshooting procedures. Georgia Institute of Technology. April 1995 and May 1995. 4.jsp?isYear=1978&isnumber= 4091185&Submit32=View+Contents. Chugunov (Bechtel) for his thorough paper review and thoughtful comments. making accurate measurements requires good test equipment. responsible for worldwide technical sales engineering and marketing solutions for wireless tocresult. access via http://dl. if followed. Naval Research Laboratory. the environment in which an antenna is tested must be free of objects and people to avoid this type of problem. “RF/Microwave Connector Design for Low Intermodulation Generation. Kurochkin and E. Atlanta. J. in fact. G. Stauss. Chugunov. PIM measurement must be undertaken using suitably reliable and accurate test equipment. San Jose. September 1982. Furuskär. pp.. “Studies on the Reduction of Intermodulation Generation in Communication Systems. No. The recommended steps. and S. 
depending on where the antenna is pointing.A. “Coexistence Studies for 3GPP LTE with Other Mobile Systems. he is responsible for overall base station antenna design. http://www. when. 153–166. July 1980. Smirnov. see http://www. Kurochkin. As such. EMC-20.M. No. S. Summary To summarize. as well as Igor A. [3] J. assets/files/TechnicalJournals/September2007/ BTTJv5n2. 3652.” IEEE Communications.” Bechtel Telecommunications Technical Journal.” Mobile Radio Technology. Ng. “Investigation of Intermodulation Products Generated in Coaxial Cables and Connectors.G. the poor readings are caused by the environment. No. 4. • of as much as 40 dB or more can be observed. A. Benson. Y. Previously at Andrew. Lin. At the same time. and care throughout. Tatesh. produce valid test results and sound RF paths and components. and the measurements must be conducted with care.H. 47. Paynter and R. Vol. In taking steps to minimize PIM generation.bechtel.” IEEE Transactions on Electromagnetic Compatibility. particular attention must be paid to the quality of the system components and interconnections. Li.A. http://www.pdf. Ed.” Proceedings of IICIT CONN-CEPT ‘92: 25th Annual Connector and Interconnection Technology Symposium. June 2006.comsoc. pp. Bani Amin and F. 1992. 376–384. Woody and T. I ADDITIONAL READING Additional information sources used to develop this paper include: • I. Carlson. 4. D. September 30–October 2. CA. Vol.ieeexplore. pp.” NRL Memorandum Report 4233. “Experimental Study of 3G Signal Interaction in Nonlinear Downlink RAN Transmitters. Washington. us/43/4362/ communications/assets/files/ TechnicalJournals/June2006/Article10. 5. ACKNOWLEDGMENTS The authors gratefully acknowledge the managements of Bechtel Corporation and of Andrew Solutions (a CommScope Company) for permission to publish this paper. A. “Coaxial Connectors: BIOGRAPHIES REFERENCES [1] M. Number 1 31 .comsoc. appropriate procedures.A. 
These results illustrate how an antenna that is better than the specification by 16 dB can be shown to apparently fail by more than 20 dB. M. Dinan. Ray was vice president of Systems Engineering and Solutions Marketing. May 1995. Vol.A. Shands. August 1978. No. access via http://www. Lindström. [7] CONCLUSIONS n modern wireless communication systems. April 2009.” Bechtel Communications Technical Journal.A. Dahlman.H.stormingmedia. at Andrew Solutions. 2. DC.

mainly for trunk radio systems. Kurochkin. Before joining Andrew. project manager for Bechtel Communications. he worked at Marconi Space and Defence Systems on transmission components and subsystems for satellite communications. He is also a member of Bechtel’s Global Technology Team and was a member of the Bechtel Telecommunications Technical Journal Advisory Board. Russia. Scotland. rigid waveguides. a smart antenna company. Hugh has managed groups responsible for product design and development for air and foam dielectric coaxial cables. Ray holds an MS in Electrical Engineering from Polytechnic University and a BS in Electrical Engineering from Brigham Young University. Ray was also technical manager of Systems Engineering for Lucent Technologies Bell Laboratories and has held other management positions with responsibility for the design of RF circuits. Research and Development. In addition to his North American experience. Before joining Bechtel. Aleksey managed the Site Acquisition and Network Planning Departments and oversaw the functional operations of more than 300 telecommunications engineers.Ray has over 25 years of RF engineering experience. he originated the RF Engineering and Network Planning Department in Bechtel’s Telecommunications Technology Group. As a member of Bechtel’s Chief Engineering Committee. is currently responsible for the product testing. and amplifiers. Provo. Utah. Aleksey has an MSEE/CS in Telecommunications from Moscow Technical University of Communications and Informatics. Hugh Nudd is cable product development manager with Andrew Solutions. Hugh has worked for more than 40 years on RF and microwave components and transmission lines. transferring to the United States in 1982. Aleksey A. filters. and managers. and their connectors and other components. His initial job was at the General Electric Company (UK). where he worked on the design and development of microwave components. 
and of International Operations at Metawave Communications. and entire site implementation cycle of EV–DO and LTE technology in one of the most important regions for a US cellular operator. Aleksey established an efficient multiproduct team at Hughes Network Systems. Formerly. 32 Bechtel Technology Journal . semiflexible elliptical waveguides. system design. specialists. Earlier in his career. In addition. fiber cables. He joined the firm at its Lochgelly. Hugh graduated with Honors in Physics from the University of Oxford. he was director of National RF Engineering at AT&T Wireless and vice president of Engineering. Aleksey introduced the Six Sigma continuous improvement program to this group. focused on RF planning and system engineering. and his engineering and marketing background gives him both theoretical and hands-on knowledge of most wireless technologies. he has worked in Russia and the Commonwealth of Independent States. He is experienced in international telecommunications business development and network implementation. England. as executive director of Site Development and Engineering for Bechtel Telecommunications. plant in 1978. Since 1983.

CLOUD COMPUTING—OVERVIEW, ADVANTAGES, AND CHALLENGES FOR ENTERPRISE DEPLOYMENT

Issue Date: December 2009

Brian Coombe
bcoombe@bechtel.com

Abstract—Cloud computing is a paradigm shift that enables scalable processing and storage over a network of distributed, commodity computing resources. This paper provides a basic overview of cloud technology and reviews several deployment options that can be described as instantiations of the cloud. Potential advantages of using cloud computing—including scalability, flexibility, performance, and reduced capital and operating expenses—are reviewed, as are hurdles to successful deployment. The latter include regulatory, security, and availability issues. A brief economic analysis of cloud computing and an overview of key players offering cloud services are also provided. Enterprises that want to reap the benefits of cloud computing must realize that the decision to migrate is neither quick nor easy. Key enterprise personnel must fully understand the cloud services provider's offering and be ready to discuss the challenges and obstacles both organizations will face together as the enterprise migrates to the cloud.

Keywords—cloud computing, grid computing, scalability, security, software as a service (SaaS), virtualization

INTRODUCTION
The extensibility and flexibility of software architectures and the promise of distributed computing have created a concept known as cloud computing. Cloud computing builds on the grid computing concept that created a "virtual supercomputer" through distributed, networked commodity machines. [1] Grid computing was generally used to run a few processor-intensive tasks that would normally be run on a high-performance machine. Cloud computing extends this concept to perform multiple tasks for numerous users in a distributed fashion. The network (intranet or Internet) is employed to interconnect commodity machinery and to deliver services to disparate users. The cloud thus shifts the centralized, owned-and-operated computing infrastructure model to a fully distributed, decentralized paradigm. To enable the cloud, data centers leverage commodity hardware, virtualization techniques, open frameworks, and ubiquitous network access. Figure 1 illustrates the cloud infrastructure in contrast to the grid infrastructure.

[Figure 1. The Cloud Infrastructure (contrasted with the grid infrastructure)]

© 2009 Bechtel Corporation. All rights reserved.

ABBREVIATIONS, ACRONYMS, AND TERMS
API: application programming interface
Cloud: computing architecture with distributed, scalable machines providing services via a network
CPU: central processing unit
FISMA: Federal Information Security Management Act
HIPAA: Health Information Portability and Accountability Act
IT: information technology
OS: operating system
OSI: Open Systems Interconnection (International Organization for Standardization Standard 35.100)
SaaS: software as a service

The cloud computing concept is not new. As early as 1961, John McCarthy, a Massachusetts Institute of Technology professor, proposed a time-sharing computing model in which hardware and services were:
• Centrally hosted and managed
• Sold and billed like utilities such as electricity and water
However, the limited telecommunications services bandwidth of the day prevented wide-scale adoption of this model. Furthermore, the steady decrease in the size and cost of general-purpose computing hardware pushed widespread adoption of the traditional enterprise-owned-and-operated hardware and software paradigm. [2]

Almost five decades later, the development of new management capabilities, the commoditization of computing, the availability of open-source and low-cost software, and the increase in telecommunications services bandwidth at reduced costs allow the cloud computing model to make technical and financial sense for some enterprises. [3]

CLOUD ARCHITECTURE
Several fundamental components make up the cloud architecture:
• Computing resources are located off site in a data center that is not owned or managed by the enterprise using the cloud services.
• Resources are available on demand.
• Resources often leverage virtualization for ease of management and interoperability.
• Virtualization can enable multiple customers and applications to share the same physical machines.
• Infrastructure is often shared.
• Services are generally provisioned on demand and scaled up or down as required.
• Services are usually subscription based, with a variety of tiered service offerings as well as flat-rate and per-use pricing models.
These components, fundamentally tied together into an architecture, produce a cloud services offering.

The architecture of cloud computing can be described using a layered model, in a manner similar to that of the Open Systems Interconnection (OSI) seven-layer model developed to provide an abstract description of layered communications and computer network protocol design. Figure 2 depicts the relevant layers. At the top of the cloud model is the client layer, which interfaces directly with cloud environment end users. Below the client layer is the cloud applications layer; applications that run on the cloud reside here and are generally accessed by application developers. Next is the software infrastructure layer, where basic infrastructure services, including computing, storage, and communications, are performed. Below these three layers are the actual cloud environment software and hardware layers. At the software layer resides the kernel that translates and executes the cloud applications' instructions on the cloud hardware; in many architectures, this cloud software kernel can include a hypervisor for executing virtualized applications. Finally, underpinning all of the cloud layers is the hardware layer, which includes processor, memory, storage, and communications hardware. [4]

CLOUD OFFERINGS
Cloud computing is really about two fundamental concepts: leveraging economies of scale and improving hardware use. In its current buzzword status, many cloud offerings are quick to associate a service or solution with the cloud architecture. While taking many forms, cloud offerings can generally be sorted into several categories, including:
• Software as a service (SaaS)
• Utility computing
• Disaster recovery
• Application programming interfaces (APIs)
• Managed services
• Large dataset processing
• Cloud integration
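The layered model described in this section can be made concrete with a small sketch. The layer names follow the text and the role strings paraphrase it; this is an illustration only, not any standard's or vendor's formal definition:

```python
# Sketch of the five-level cloud layer model described above, top to bottom.
# Layer names follow the text; role strings paraphrase it (illustrative only).

CLOUD_LAYERS = [
    ("client", "interfaces directly with cloud environment end users"),
    ("cloud applications", "hosts applications; generally accessed by developers"),
    ("software infrastructure", "basic computing, storage, and communications services"),
    ("software kernel", "executes application instructions; may include a hypervisor"),
    ("hardware", "processor, memory, storage, and communications hardware"),
]

def layer_below(name):
    """Return the layer directly beneath the named layer, or None at the bottom."""
    names = [layer for layer, _ in CLOUD_LAYERS]
    index = names.index(name)
    return names[index + 1] if index + 1 < len(names) else None

print(layer_below("software kernel"))  # hardware
print(layer_below("hardware"))         # None
```

The ordering mirrors Figure 2: each layer depends only on the layer beneath it, which is what lets a cloud provider swap hardware or kernels without disturbing client-facing services.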

[Figure 2. Cloud Layer Model: Client; Cloud Applications; Software Infrastructure (Computing, Storage, Communications); Software Kernel (e.g., Microsoft® Windows® OS, Linux® OS, Mac OS®); Hardware]

Software as a Service
SaaS allows an application to be delivered through a Web browser. The application is generally software and hardware agnostic and relies on server components outside the user's network. The server hardware can be owned and managed by the firm selling the SaaS application, or it can be further removed (in other words, hosted and managed by another party). Traditional software firms are beginning to provide SaaS offerings to small- and medium-sized enterprises that often do not have the infrastructure and resources to run larger-scale enterprise applications. Other SaaS offerings include typical productivity and office suites, which can reduce an enterprise's licensing, maintenance, and infrastructure requirements. SaaS is generally characterized in four maturity levels, with the most mature allowing the greatest flexibility. [5]

Utility Computing
The term utility computing predates widespread use of the term cloud, but utility computing is a classic example of the benefits of a cloud infrastructure. Providers offer solutions that enable a virtual data center—that is, an Internet-enabled commodity processing and storage hardware environment. Current offerings range from on-demand processing and storage up to entire remotely hosted and managed data centers. Enterprises can outsource some or all data center needs to the utility computing provider, often starting with lower-value, routine processing and storage. [6]

Disaster Recovery
Along with offering traditional server replacement and utility computing options, the cloud also enables a new way to deliver disaster recovery services. Disaster recovery often requires dedicated, specific hardware for data storage and remote applications operation. The cloud paradigm allows these services to be offered via a distributed, connected topology, rather than through a single data center, thereby reducing the costs to provide this service and, potentially, increasing reliability. [7]
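The tiered, flat-rate, and per-use pricing models mentioned for utility computing differ most for bursty workloads. A toy comparison (all rates are invented for illustration and are not any provider's actual pricing):

```python
# Toy comparison of flat-rate vs. per-use (utility) pricing for a bursty
# workload. All rates are invented for illustration; they are not any
# provider's actual prices.

def flat_rate_cost(months, monthly_fee):
    """Subscription cost: pay the same fee whether or not capacity is used."""
    return months * monthly_fee

def per_use_cost(cpu_hours, rate_per_cpu_hour):
    """Metered cost: pay only for the hours actually consumed."""
    return cpu_hours * rate_per_cpu_hour

annual_flat = flat_rate_cost(months=12, monthly_fee=500.0)
annual_metered = per_use_cost(cpu_hours=2000, rate_per_cpu_hour=0.10)

print(annual_flat)                    # 6000.0
print(annual_metered < annual_flat)   # True for this lightly loaded example
```

For a workload that runs near capacity all year, the comparison flips, which is why utility offerings are tiered rather than one-size-fits-all.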

[7] Application Programming Interfaces APIs are becoming a popular way to provide new service offerings while leveraging the cloud infrastructure. and other back-end offerings. recent advances allow services offerors to distribute the deployment and management of their offerings. In a recent example of its application. Apache Hadoop™ is a software product that provides a distributed computing platform for sharing and processing large amounts of data. scalable. When the datasets became too large for a single. [7] Managed Services Managed services. making it available to a wider range of clients. the enterprise used Hadoop (including MapReduce) and other Apache™ products to deploy and process the data across 10 commodity nodes. Large Dataset Processing Cloud architectures have enabled significantly improved processing of large datasets. an enterprise needed to process terabytes of data and analyze server use logs to provide better troubleshooting and optimization. The commercially developed Google™ MapReduce—a programming model and associated implementation for processing and generating large datasets—is particularly efficient and allows distribution. These applications are software and hardware agnostic and can be run via a Web browser. An application provider can use another provider’s Web-enabled services and software and either manage and host the application itself.use distributed networked commodity devices to replace dedicated data centers for disaster recovery.. resulting in flexibility. available for general use. open-source version of MapReduce. as shown in Figure 3. distributed computing. Performance was greatly improved. thereby reducing the costs to provide this service and. security services.. anti-virus scanning. [8] Map1 Da ta Da ta Map2 Da ta Da Da ta ta Map3 Data Data Data Data Start Da ta Da ta Map4 Data Da ta Reduce R d Finish . potentially. have been prominent for over a decade. 
or allow the existing provider to host the application on its own network. independent processing. such as remote monitoring and administration. The Hadoop project develops open-source software for reliable. one of its subprojects is a free. However. APIs allow unique applications to be written and offered via the Web using existing software and services. and reduced costs for both the provider and the user. Da ta Da ta MapN Figure 3. processing times were reduced. and the system demonstrated significant scalability. increased reliability. The MapReduce Process 36 Bechtel Technology Journal . APIs allow unique applications to be written and offered via the Web using existing software and services. high-performance machine to handle efficiently. and consolidation of analysis.
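The map, shuffle, and reduce flow shown in Figure 3 can be sketched in a few lines. This toy word count runs each phase as an ordinary function on one machine; the value of Hadoop or Google MapReduce is running the same structure across many nodes:

```python
# A toy MapReduce-style word count, sketching the map -> shuffle -> reduce
# flow of Figure 3. Real MapReduce frameworks distribute these phases across
# many nodes; here each phase is a plain single-machine function.

from collections import defaultdict

def map_phase(chunk):
    """Emit (key, 1) pairs for each word in one input chunk."""
    return [(word, 1) for word in chunk.split()]

def shuffle(pairs):
    """Group intermediate values by key, as the framework does between phases."""
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(grouped):
    """Sum each key's values to produce the final counts."""
    return {key: sum(values) for key, values in grouped.items()}

chunks = ["cloud computing scales", "cloud storage", "computing on the cloud"]
pairs = [pair for chunk in chunks for pair in map_phase(chunk)]
counts = reduce_phase(shuffle(pairs))
print(counts["cloud"])  # 3
```

Because each map call touches only its own chunk and each reduce call touches only one key's values, both phases parallelize naturally, which is what made the 10-node deployment described above practical.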

CLOUD ADVANTAGES
The overarching benefit of the cloud is simple in theory—it offers computing that is better distributed and managed as a result of applying economies of scale. These economies of scale are realized by moving the computing hardware from a myriad of enterprise data centers to centrally managed, distributed data centers run by firms that exist for the purpose of operating such data centers. The distributed hardware is less expensive for enterprise users because they share purchasing relationships, end-user licenses, and maintenance costs, while the cloud operators leverage their expertise and management abilities. This model results in advantages, discussed in the following paragraphs, that potentially benefit enterprises using cloud services as part of their information infrastructure. Table 1 summarizes the benefits to be derived from taking advantage of the seven cloud offerings just discussed.

Table 1. Benefits of Cloud Offerings
Cloud Offering           | Benefits over Traditional Model
SaaS                     | Flexible, lightweight client support
Utility Computing        | On-demand storage and service with lower overall costs
Disaster Recovery        | Increased flexibility and reduced costs
APIs                     | Easier deployment of new services using existing cloud
Managed Services         | Flexibility, increased reliability, and reduced costs
Large Dataset Processing | Increased performance and scalability
Cloud Integration        | Linking of disparate services using cloud infrastructure

Cost and Operational Advantages
For large enterprises, on-demand computing offers significant cost and operational advantages. Traditionally, hardware has been deployed in a way that allows individual users, departments, and groups to own and manage this hardware according to their specific needs, and the enterprise plans and designs its data center hardware and network to handle the maximum computing requirement, which results in idle capacity. This approach not only is inefficient from a capital and operational perspective, but can also prove to be an engineering challenge. In a recent example, a large enterprise set up, executed, and tore down a computationally intensive process on a virtual server in only 20 minutes, at a cost of $6.40. To achieve the same results with dedicated physical hardware, the process would have taken 12 weeks and cost tens of thousands of dollars. When the process was complete, the enterprise no longer had to manage or disposition the hardware. [9]

Desktop Support
Cloud infrastructure can flexibly support and reduce costs for both desktop virtualization and traditional desktop services. Enterprises do not have to maintain a software baseline, version control, drivers, and multiple images for either type of deployment.

Mobility and Flexibility
Cloud infrastructure can potentially support better mobility and flexibility. Cloud computing can enable an enterprise to detach end-user hardware from its managed services, giving end users a consistent look and feel while providing them access to the same set of services and resources from disparate locations. A user's applications, documents, and services thus look the same in the office, at home, or anywhere else an Internet-connected computer is accessed.

Scalable Services
An enterprise's computing requirements are never static and often peak around a specific time or event. When peak demand is greater than forecasted, capacity cannot easily be added to the infrastructure to immediately address that demand. This can result in lost revenues, unhappy customers, and, potentially, a strategic competitive disadvantage. The cloud utility computing model can make scalable services available on demand. The enterprise pays only for services used, while knowing that the scalability to offer additional resources is available. [10]
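The virtual-server example in this section can be put in rough numbers. The $6.40 and 20-minute figures come from the text; the dedicated-hardware figure below is an assumed $30,000, standing in for "tens of thousands of dollars":

```python
# Rough cost contrast for the elasticity example above. The $6.40 and
# 20-minute figures are from the text; the dedicated-hardware figure is an
# assumed $30,000, standing in for "tens of thousands of dollars."

on_demand_cost = 6.40        # dollars, for the whole 20-minute run
dedicated_cost = 30_000.0    # dollars, assumed up-front hardware spend
provisioning_weeks = 12      # lead time quoted for dedicated hardware

cost_ratio = dedicated_cost / on_demand_cost
print(cost_ratio > 4000)  # True: thousands of times cheaper for a one-off job
```

The ratio is extreme precisely because the job was a one-off; for sustained load, the dedicated hardware would amortize and the gap would narrow.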

In a recent US presidential town hall meeting, the White House used a commercial provider to augment network and server capacity to support the additional demand on its network. [11]

Cloud Software and Applications
Cloud benefits will be further realized as software is designed around the cloud. Today, many applications have been designed and written to run as a single instance serving one set of users from a single server. New applications can be written to perform better in a cloud environment and purpose-built to work in that distributed architecture. Software and interfaces developed to allow cloud resources to perform in a shared environment can be obtained, licensed, and deployed on an enterprise's hardware and network. These applications can be designed to scale so that additional servers, users, or capacity can be added without modifying a single line of code. New applications can also be written to balance the load so that multiple, identical instances of the application exist on multiple servers, with varying users accessing these servers without noticing a difference. These identical instances allow efficient and elegant recovery from failure or disruptions.

Private and Hybrid Clouds
Large enterprises that own and manage their own infrastructure have the ability to leverage private clouds, allowing scalable deployment of a cloud infrastructure without relying on an outside service provider. Processing advantages using both proprietary and open source algorithms (such as Hadoop) can be realized by deploying private clouds. Private clouds can also leverage desktop virtualization, SaaS, and other flexible information technology (IT) offerings. However, the private cloud still forces an enterprise to retain ownership of and maintain server hardware, which removes the single largest advantage of cloud computing.

A potential new approach may be a hybrid of a private and public cloud. In this hybrid, hardware manufacturers or cloud offerors construct a private hardware and software network, either collocated with the customer or in a separate managed environment, that is purpose-built and maintains the integrity of the customer's data in an air-gap environment. This model can leverage the provider's hardware and software pricing economies of scale, as well as the provider's expertise in managing such services, while providing the customer the advantages of a private cloud.

CLOUD CHALLENGES
With the advantages that cloud computing offers, enterprises should be lining up to adopt the new model and to begin migrating infrastructure to the cloud or purchasing new cloud services. While there has been an uptick in demand, significant challenges still need to be addressed before an enterprise can effectively leverage the advantages of the cloud.

Security
A key concern for any enterprise deploying cloud computing is security. Is data protected now that it is no longer within the confines of the data center, but is instead distributed and traversing the Internet? In a recent poll, IT executives cited security as the number one hurdle to deploying cloud services, as seen in Figure 4. [12]

Enterprises interested in deploying cloud services must consider the security of both the service provider and any hosting services the provider uses. An independent analysis of the hosting provider may afford better insight into the end-to-end security of the implementation. The jurisdiction where the data is held poses another security issue: an enterprise may be operating in Europe, where privacy laws are strong, yet use a provider that hosts services in the United States, a jurisdiction more favorable to lawful data interception. A potential solution is to have the cloud services provider act as the data custodian only, not as the owner; end-to-end data encryption further removes the cloud provider from data ownership. This may increase the level of security. [13]

While some contend that cloud computing services can lead to less security, several logical arguments point, instead, to enhanced security. Human error is the single largest cause of security breaches; to remove humans from the equation, large-scale cloud operators automate as many processes as possible. Employee access to data is another significant security issue that can be managed by removing large amounts of readily accessible data. In addition, cloud proponents advocate moving data that is at rest off of personal machines because documents, databases, and other files are often the target of enterprising thieves.

Moving this data to the cloud, where it can be accessed only by legitimate, authenticated users, increases the security level. To the enterprise, a stolen laptop is then simply a piece of hardware that needs to be replaced, not a data breach that costs thousands of hours and dollars to mitigate and potentially results in lost revenue and customers. [12]

Another argument put forth by cloud proponents is that the economies of scale leveraged by the cloud extend to security. Large-scale providers of cloud infrastructure have significant expertise, resources, and capital to address security issues. Purpose-built, secure architectures maintained by security experts have the potential to outperform those developed by firms whose core competencies lie outside of IT. [14]

[Figure 4. "Why Not Cloud Services?" Poll (Source: IDC Enterprise Panel, August 2008, n=244). Q: Rate the challenges/issues of the "cloud"/on-demand model (1=not significant, 5=very significant); security ranked first (88.5% responding 3, 4, or 5), followed by performance, availability, difficulty of integration with in-house IT, limited ability to customize, concern that the cloud will cost more, difficulty of bringing services back in house, and lack of major suppliers.]

Regulatory Hurdles
Regulatory hurdles may also prove challenging for cloud providers. Legislation may be enacted that can affect location, use, and disclosure requirements for personal and financial data. For example, the Health Information Portability and Accountability Act (HIPAA) of 1996 requires personal health and identification information to be protected and encompasses protecting it from unencrypted transition over open networks or from downloading to public or remote computers. HIPAA also requires security controls, including access control and audit policies. Thus, enterprises dealing with this type of data must thoroughly understand the regulatory requirements and ensure that the cloud provider meets or exceeds them. One cloud storage provider claims to offer a solution that meets all HIPAA requirements for privacy and security. [15]

Availability and Reliability
Along with security, IT executives cite availability and reliability as key concerns when migrating to cloud services. Large providers of cloud services have suffered from outages and performance bottlenecks. While service-level agreements may include penalties for downtime, enterprises are not reimbursed for lost revenues and lost customers. The choice for enterprises comes down to whether they feel they can operate their infrastructure more reliably than an outside party, in terms of risk and availability, as well as what components and services they want to control in house versus extending to a cloud infrastructure.
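The availability side of that choice can be quantified: if one instance is up a fraction a of the time and failures are independent, n identical instances yield composite availability 1 - (1 - a)^n. The 99% single-instance figure below is an assumed value for illustration, not a number from the text:

```python
# Composite availability of n identical, independently failing instances:
# the service is down only when every replica is down simultaneously.
# The 99% single-instance availability is an assumed illustrative value.

def composite_availability(single_instance, replicas):
    """Return 1 - (1 - a)^n for per-instance availability a and n replicas."""
    return 1 - (1 - single_instance) ** replicas

single = 0.99
for n in (1, 2, 3):
    print(n, composite_availability(single, n))
```

Independence is the load-bearing assumption: replicas sharing one data center, power feed, or control plane fail together, which is why large cloud providers spread instances across zones.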

Compliance with FISMA requires an agency to explicitly define the security controls of any new IT system prior to its authorization for use. When an IT system is outsourced to a contractor, the new system must have security processes and procedures identical to those of the system being replaced. This extends to any services provided by "any other agency, contractor, or other source." This hurdle can be overcome by close collaboration between the agency and the provider and the application of clear, thorough requirements. [16]

Interoperability and Portability

Finally, data and information interoperability and portability can pose a challenge for those wanting to use cloud services. Enterprises will want the option to:

• Change cloud providers without reconfiguration
• Migrate between self-managed and cloud-managed services
• Deploy new applications and services, either on their own hardware or on cloud hardware, that will have to interface with the existing services

This challenge also extends to applications and data management with respect to the need to use different management tools and techniques to interface with data on and off the cloud. Standard, well-defined and documented interfaces, including appropriate existing and adopted standards, as well as new standards when warranted, can reduce potential interoperability issues.

Addressing the Challenges

The potential pitfalls associated with cloud service deployment have given pause to some major cloud services providers. Several of these providers, along with enterprises wishing to benefit from cloud computing, have joined together to publish the Open Cloud Manifesto. This document is intended to encourage a dialogue among the providers and users of cloud computing services about the infrastructure requirements and the need for open, interoperable standards. [17]

ECONOMIC ANALYSIS OF CLOUD MODELS

Table 2 compares and contrasts the traditional and cloud infrastructures with respect to some of the advantages and challenges of cloud computing just discussed. While the cloud model may make sense financially, other performance parameters may prevent deployment in the cloud.

Table 2. Traditional Infrastructure Versus Cloud Infrastructure (relative ratings of performance, scalability, cost, security, flexibility, manageability, and availability)

To determine whether migration to the cloud is economically viable, an enterprise must compare the costs of the cloud's wide-area network bandwidth, storage, and central processing unit (CPU), both for application and management traffic, with the costs of its own network, hardware, and software, as well as the cost of operations with this equipment. Complicating this process is the amortization of power, space, and cooling costs across the enterprise equipment. Calculating and allocating both of these costs is challenging. One estimate is that computing, storage, and bandwidth costs actually double when the facilities costs are amortized across them. [18]

Since data centers are often provisioned for peak load, actual server use can average as low as 5%. In a model cited by the University of California at Berkeley, a data center with peak use of 500 servers, trough use of 100 servers, and average use of 300 servers would spend 1.7 times as much on server hardware as is actually needed to meet the average demand over a 3-year period. [19] In this context, a strong supporting argument for migration can be made that the economic benefits of the cloud architecture come from its elasticity—the ability to add or remove resources quickly and easily as required. Any economic analysis of cloud models must be performed against this backdrop.

Along with its financial analysis, the study by the University of California at Berkeley cited above recommends that a detailed analysis be conducted of the performance, bandwidth, storage, and processing time requirements of any application to be executed via a cloud. [19]
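The overprovisioning arithmetic in the Berkeley model just cited can be reproduced in a few lines. The 500/300/100 server counts come from that model; the code itself is only a back-of-the-envelope sketch, not part of the cited study.

```python
# Back-of-the-envelope sketch of the overprovisioning argument in the text
# (Berkeley model: peak 500 servers, average 300, trough 100). A data center
# sized for peak load buys peak/average times the hardware actually needed
# to serve average demand over the amortization period.

def overprovisioning_factor(peak_servers: float, average_servers: float) -> float:
    """Ratio of hardware bought (sized for peak) to hardware needed (average)."""
    return peak_servers / average_servers

peak, average, trough = 500, 300, 100
factor = overprovisioning_factor(peak, average)
print(f"Hardware overprovisioning: {factor:.1f}x")  # 500/300 rounds to 1.7x

# Elasticity removes that multiplier: a pay-per-use cloud bills for the
# area under the demand curve rather than the rectangle at peak height.
```

The same ratio can be recomputed for any measured demand profile before comparing in-house and cloud costs.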

SUMMARY OF CURRENT CLOUD PROVIDERS

A variety of cloud computing services providers exists today, with new entrants and modifications in services offerings common. Current key players include:

• Amazon Web Services LLC (AWS)
• Dell Inc.
• Hewlett-Packard™ (HP™)
• International Business Machines (IBM®)
• Salesforce.com®
• Sun Microsystems®

Amazon, famous for the infrastructure that powers its online retail service, has leveraged that infrastructure to provide a portfolio of services known as Amazon Web Services™. AWS has stated that its cloud services have now used capacity beyond the excess needed for internal operations. AWS's offerings include:

• Amazon Elastic Compute Cloud™ (Amazon EC2™), which provides configurable, on-demand computing capacity in the cloud
• Amazon Simple Storage Service™ (Amazon S3™), which provides scalable, distributed storage
• Amazon SimpleDB™, which leverages the cloud to provide database services

[20] Oracle® has also partnered with AWS to deploy Oracle software and back up Oracle databases in the AWS cloud computing environment. [21]

Dell, the personal computer manufacturer, recently established its Data Center Solutions (DCS) division to offer design-to-order cloud computing hardware and services under the rubric Dell Cloud Computing Solutions™. One DCS offering includes hosted rental of CPU cycles per hour. [22]

Salesforce.com, the online customer relationship management firm, is offering its Force.com platform, which provides a programming model and cloud-based run-time environment where developers can pay on a per-login basis to build, test, and execute code. Underpinning this platform is the already widely used Salesforce.com infrastructure, which is scalable, a capability essential to providing on-demand capacity. [23]

HP also offers cloud computing services. Dubbed by HP as Flexible Computing Services™, this offering provides computing power as a service in a utility model, as well as technical assistance. [24]

Hardware and software giant IBM has introduced a suite of services known as IBM Smart Business. IBM also offers hardware, software, and configuration assistance for enterprises interested in deploying private clouds. [25]

Sun has announced its forthcoming Sun Cloud, a public compute and store infrastructure targeted at developers, students, and startups. Sun, like other companies, also provides hardware and software for private clouds. [26]

In addition to these established firms, many small- and medium-sized startups are providing a variety of competitive cloud offerings by leveraging their infrastructure or building services on top of the infrastructures provided by the key players. This review of current providers is by no means all inclusive. For up-to-date information on vendors and services, refer to reference materials or contact providers directly.

CONCLUSIONS

As enterprises realize the benefits of the cloud, it is possible that cloud computing will become as ubiquitous as client-server computing, virtualization, and other architectures that brought about a technological paradigm shift. It is also possible that cloud computing is overhyped and oversold, much like previous concepts that claimed to be the next big thing. However, enterprises are experiencing real benefits from using cloud computing.

Enterprises that want to reap the benefits of cloud computing must realize that the decision to migrate is neither quick nor easy. Key enterprise personnel must fully understand the provider's offering and be ready to discuss the challenges and obstacles they will face together in migrating to the cloud. Through close, open collaboration with cloud infrastructure providers, enterprises can focus on delivering the right technologies and services to their personnel and customers while using new technologies and architectures and realizing cost and operational benefits.

TRADEMARKS

Amazon Web Services, Amazon Elastic Compute Cloud, Amazon EC2, Amazon Simple Storage Service, Amazon S3, and Amazon SimpleDB are trademarks of Amazon Web Services LLC in the US and/or other countries.

Apache and Apache Hadoop are trademarks of The Apache Software Foundation.

Dell Cloud Computing Solutions is a trademark of Dell Inc.

Google is a trademark of Google Inc.

HP, Hewlett-Packard, and Flexible Computing Services are trademarks of Hewlett-Packard Development Company, L.P.

IBM is a registered trademark of International Business Machines Corporation in the United States.

Linux is a registered trademark of Linus Torvalds.

Mac OS is a trademark of Apple Inc., registered in the United States and other countries.

Microsoft and Windows are registered trademarks of Microsoft Corporation in the United States and/or other countries.

Oracle is a registered trademark of Oracle Corporation and/or its affiliates.

Salesforce.com is a registered trademark of salesforce.com, inc.

Sun Microsystems is a registered trademark of Sun Microsystems, Inc. in the United States and other countries.

REFERENCES

[1] D. Chappell, "A Short Introduction to Cloud Platforms: An Enterprise-Oriented View," Chappell & Associates, August 2008, http://www.davidchappell.com/CloudPlatforms--Chappell.pdf, accessed June 15, 2009.
[2] E. Knorr and G. Gruman, "What Cloud Computing Really Means," InfoWorld, http://www.infoworld.com/print/34031, accessed June 16, 2009.
[3] A. Mohamed, "A History of Cloud Computing," ComputerWeekly.com, March 27, 2009, http://www.computerweekly.com/Articles/2009/03/27/235429/a-history-of-cloud-computing.htm, accessed June 16, 2009.
[4] M. Arrington, "White House Using Google Moderator for Town Hall Meeting," TechCrunch, March 24, 2009, http://www.techcrunch.com/2009/03/24/white-house-using-google-moderator-for-town-hall-meeting/, accessed June 25, 2009.
[5] R. Mullin, "The New Computing Pioneers," Chemical & Engineering News, Vol. 87, pp. 10–14, http://pubs.acs.org/cen/coverstory/87/8721cover.html, accessed June 25, 2009.
[6] F. Chong and G. Carraro, "Architecture Strategies for Catching the Long Tail," Microsoft Corporation paper, April 2006, http://msdn.microsoft.com/en-us/library/aa479069.aspx, accessed June 25, 2009.
[7] "A Definition of Cloud Computing," InformationWeek, September 26, 2008, http://www.informationweek.com/cloud-computing/blog/archives/2008/09/a_definition_of.html, accessed June 25, 2009.
[8] "Cloud Computing: A Five-Layer Model," BlueLock blog, http://www.bluelock.com/blog/bluelock/0/0/cloud-computing-a-five-layer-model, accessed July 11, 2009.
[9] R. Martin and J.N. Hoover, "Guide to Cloud Computing," InformationWeek, June 21, 2008, http://www.informationweek.com/services/hosted_apps/showArticle.jhtml?articleID=208700713, accessed June 25, 2009.
[10] "MapReduce at Rackspace," Mailtrust, January 23, 2008, http://blog.racklabs.com/?p=66, accessed June 25, 2009.
[11] B. Gourley, "Cloud Computing and Cyber Defense," white paper provided to the National Security Council and Homeland Security Council as input to the White House Review of Communications and Information Infrastructure, March 21, 2009, cyber/Gourley_Cloud_Computing_and_Cyber_Defense_21_Mar_2009.pdf, accessed June 25, 2009.
[12] D. Yachin, IDC Emerging Technologies PowerPoint presentation, August 2008, dbsAttachedFiles/IDC_Cloud_Computing_IGT_final.pdf, accessed June 16, 2009.
[13] A. Croll, "Cloud Security: The Sky is Falling!," The GigaOM Network, http://gigaom.com/, accessed June 25, 2009.
[14] R. Preston, "Down To Business: Customers Fire a Few Shots at Cloud Computing," InformationWeek, June 14, 2008, http://www.informationweek.com/services/data/showArticle.jhtml?articleID=208403766, accessed June 25, 2009.
[15] "Creating HIPAA-Compliant Medical Data Applications with Amazon Web Services," AWS white paper, April 2009, http://awsmedia.s3.amazonaws.com/, accessed July 19, 2009.
[16] "A Federal Cloud Computing Roadmap," http://www.scribd.com/doc/13565106/A-Federal-CloudComputing-Roadmap, accessed June 25, 2009.
[17] Open Cloud Manifesto, http://www.opencloudmanifesto.org, accessed June 25, 2009.
[18] J. Hamilton, "Cost of Power in Large-Scale Data Centers," Perspectives blog, November 28, 2008, http://perspectives.mvdirona.com/2008/11/28/CostOfPowerInLargeScaleDataCenters.aspx, accessed June 25, 2009.
[19] M. Armbrust et al., "Above the Clouds: A Berkeley View of Cloud Computing," Electrical Engineering and Computer Sciences, University of California at Berkeley, 2009, http://www.eecs.berkeley.edu/Pubs/TechRpts/2009/EECS-2009-28.pdf, accessed July 19, 2009.

[20] Amazon Web Services LLC, accessed July 19, 2009.
[21] Oracle Technology Network, cloud/index.html, accessed July 19, 2009.
[22] Dell Cloud Computing Solutions™, global.aspx/sitelets/solutions/cluster_grid/project_management?c=us&l=en&cs=555, accessed July 19, 2009.
[23] R.B. Ferguson, "Salesforce.com Unveils Cloud Computing Architecture," eWeek, January 17, 2008, accessed July 19, 2009.
[24] Hewlett-Packard – Utility Computing Services, 284425-0-0-0-121.html, accessed July 19, 2009.
[25] IBM Cloud Computing, business/, accessed July 19, 2009.
[26] Sun Microsystems – Sun Cloud Computing, solutions/cloudcomputing/offerings.jsp, accessed July 19, 2009.

Brian is a member of the IEEE™; the Project Management Institute; SAME; ASQ; NSPE; MSPE; INSA; AFCEA; the Order of the Engineer; and Eta Kappa Nu, the national electrical engineering honor society. He authored six papers and co-authored one in the Bechtel Communications Technical Journal (formerly the Bechtel Telecommunications Technical Journal) between August 2005 and September 2008. His most recent paper—"Desktop Virtualization and Thin Client Options"—appeared in the December 2008 (inaugural) issue of the Bechtel Technology Journal, a compilation of papers from each of Bechtel's six global business units. Brian's tutorial on Micro-Electromechanical Systems and Optical Networking was presented by the International Engineering Consortium. In 2009, he was selected to attend the National Academy of Engineering's Frontiers of Engineering conference, an annual 3-day meeting that brings together 100 of the nation's outstanding young engineers from industry, academia, and government to discuss pioneering technical work and leading-edge research in various engineering fields and industry sectors. Brian earned an MS in Telecommunications Engineering and is completing an MS in Civil Engineering, both at the University of Maryland. He holds a BS with honors in Electrical Engineering from The Pennsylvania State University. Brian is a licensed Professional Engineer in Maryland and a certified Project Management Professional. He is a Six Sigma Yellow Belt.

Brian Coombe, deputy project manager for the Ground-Based Midcourse Defense project based in Huntsville, Alabama, oversees the execution of Bechtel's scope, which includes the design and construction of facilities and test platforms designed to protect the US from attack by long-range ballistic missiles.

Previously, as a program manager in the Bechtel Federal Telecoms organization, Brian supported a major facility construction, information technology integration, and mission transition effort. Prior to holding this position, he was the program manager of the Strategic Infrastructure Group, which included overseeing work involving telecommunications systems and critical infrastructure modeling, simulation, analysis, and testing. As Bechtel's technical lead for all optical networking issues, Brian draws on his extensive knowledge of wireless and fiber-optic networks. He was the Bechtel Communications Training, Demonstration, and Research (TDR) Laboratory's resident expert for optical network planning, evaluation, and modeling, and supported planning and design efforts for AT&T's wireless network and Verizon's Fiber-to-the-Premises architecture. Before joining Bechtel in 2003, Brian was a systems engineer at Tellabs®.

December 2009 • Volume 2, Number 1



Bechtel Technology Journal

Issue Date: December 2009 Abstract—Site integration and wireless system launches have always been intense and demanding activities for both the operator and the general contractor performing the work. The new network is typically overlaid on the existing network, which must remain fully operational during the overlay process. Ideally, the upgrade should not noticeably affect customer experience in any part of the network. Inevitably, however, some of the existing sites must be altered to accommodate the new technology’s equipment. These changes, in turn, may cause existing coverage patterns to degrade. Such degradation can arise from high-level network design problems, poor workmanship in installation, or unexpected hardware defects in the new components. It is imperative to quickly identify the source of the degradation and move toward a resolution. Failure to do so may result in significant loss of revenue and a reduced level of customer satisfaction due to the amount of time taken to resolve the problem. Much attention is being given to using performance engineering to quickly and accurately isolate and classify degradation problems based on key performance indicator (KPI) statistics for the existing network. Also, processes exist that can reduce the time needed to rectify problems, thus improving overall customer experience during the network upgrade. Keywords—component failure, degradation, global system for mobile communications (GSM), key performance indicator (KPI), legacy network, performance engineering, sector verification testing, site failure, system launch, tower-mounted amplifier (TMA), universal mobile telecommunications system (UMTS) overlay, workmanship

INTRODUCTION

Today's competitive cellular environment forces operators to upgrade their networks more often than they did in the past. Moreover, the new technology being implemented is typically only a preliminary version based on a new standard and will eventually be upgraded. As a result of delays in the full-service launch of new systems, operators must rely on the legacy system to generate revenue. Meanwhile, upgrades cause the legacy network to be in an almost constant state of change. These changes can produce service degradation and make it challenging for the operator to maintain consistently high levels of service quality. Some statistics even show dangerously high failure rates in networks being upgraded. Since the legacy system generates revenue and is the base for any future upgrades, it must be protected.


Aleksey A. Kurochkin

This paper explores service degradation issues and provides some useful key performance indicator (KPI) changes. If compared correctly, these KPI changes can help the operator to identify and mitigate the causes of degradation in global system for mobile communications (GSM)/general packet radio service (GPRS)/enhanced data rates for GSM evolution (EDGE) networks. To the extent that degradation can be detected and resolved during the installation process, as proposed in this paper, the legacy system may be protected from the adverse effects introduced by the new technology installation.

A cellular system upgrade typically occurs on a market level, when hundreds, or even thousands, of cell sites start to operate cohesively under a new technology to serve a new type of subscriber or provide the existing subscribers with a new type of service. When state-of-the-art technology requires coaxial cable and antenna sharing [1] on many sites, the launch becomes not only critical to the new system, but also very intrusive to the revenue-generating legacy system. For example, a universal mobile telecommunications system

© 2009 Bechtel Corporation. All rights reserved.


BEAST: Basic Engineering Analysis and Statistical Tool
BSS: base station subsystem
EDGE: enhanced data rates for GSM evolution
GPRS: general packet radio service
GSM: global system for mobile communications
HML: human-machine language
ISDN: integrated services digital network
KPI: key performance indicator
LAPD: Link Access Procedure–D
NMS: network management system
NOC: network operation center
PDU: protocol data unit
RF: radio frequency
RFDS: RF data sheet
RXAIT: receive antenna interface tray
TCH: traffic channel
TMA: tower-mounted amplifier
TRx: transceiver
UMTS: universal mobile telecommunications system

(UMTS) technology overlay is intrusive to the existing GSM network when both technologies share the antenna systems. Wireless network operators measure the impact of this sharing by monitoring changes in the legacy system KPIs such as traffic, retainability, dropped calls, and accessibility. These KPIs can be measured on various levels (network, cluster, and individual cell site) to provide operators with enough detail to troubleshoot problems occurring during the launch. Since cell sites are typically upgraded one at a time before the launch, there is a small window of opportunity to resolve an issue developing at a legacy site due to upgrade before it negatively affects subscribers in a wide geographic area. The problem should be corrected during a period of low traffic (typically lasting only a few hours). When the site goes back on line, wireless customers should not notice any difference in service during their daily commute.

The wireless network operators employ performance engineers to check critical alarms and the KPIs immediately after a site returns to service after the upgrade. If all alarms are clear, the site is accepted back into the network. Otherwise, troubleshooting begins as soon as enough statistics are gathered or as soon as practical. Various data-processing tools are available for use in interpreting the raw data from the base station subsystem (BSS). Because a service provider cannot afford to have its service degraded even for a short time, onsite testing and human-machine language (HML) terminals are used to immediately evaluate service after changes are implemented at a site. Hourly reports generated by software such as the Basic Engineering Analysis and Statistical Tool (BEAST) are used to monitor short-term BSS statistics, and operator- and industry-developed 24-hour and longer-term average trending tools are used to support more in-depth analysis. These tools provide better trending capabilities and alert the performance engineer to the changes at adjacent sites, which can signal abnormal site performance in the highly interdependent cellular system.

The performance engineer's primary challenge is to differentiate between degradation due to changed design parameters and degradation due to a fault introduced in the radio frequency (RF) path during the installation process. Marginal degradation across all KPIs indicates the first scenario, while significant deviation from the pre-upgrade performance requires a field diagnosis of RF paths of the underperforming sector. However, the following questions arise: How much degradation requires troubleshooting in the field, and what issues need to be diagnosed to fix the problem?

The decision regarding which problems require field troubleshooting has to be made judiciously, since such troubleshooting is costly and may divert resources needed for the new network rollout. The performance engineer's role is to assess the problem quickly and make recommendations based on the initial classification of the source of the problem. The following sections describe the classifications of the common causes of legacy service degradations.
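The pre-/post-upgrade KPI screening described above can be sketched as a small classifier. The KPI names, sample values, and the 10% relative-change threshold below are illustrative assumptions, not operator policy or figures from this paper.

```python
# Illustrative sketch of the KPI comparison described in the text: marginal
# movement across all KPIs suggests a design-parameter change, while a large
# deviation suggests an RF-path fault that warrants a field diagnosis.
# All values and the 10% threshold are assumed examples.

PRE_UPGRADE = {"traffic_erlangs": 120.0, "dropped_call_pct": 1.1, "accessibility_pct": 98.5}
POST_UPGRADE = {"traffic_erlangs": 61.0, "dropped_call_pct": 1.3, "accessibility_pct": 98.1}

def classify(pre: dict, post: dict, field_threshold: float = 0.10) -> str:
    """Flag a sector for field troubleshooting if any KPI moved by more than
    field_threshold (relative); otherwise treat it as a design-level change."""
    for kpi, before in pre.items():
        relative_change = abs(post[kpi] - before) / before
        if relative_change > field_threshold:
            return f"field diagnosis: {kpi} changed {relative_change:.0%}"
    return "marginal change: review design parameters"

print(classify(PRE_UPGRADE, POST_UPGRADE))
```

In practice the thresholds would come from the operator's trending tools and would be tuned per KPI rather than applied uniformly.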

SERVICE DEGRADATION CLASSIFIED BY SOURCE

Service degradation needs to be classified by source so that different resources can be allocated to rectify problems quickly and efficiently.

Design-Related Degradation

Although every RF design engineer and systems engineer intends to make an overlay of new technology non-intrusive to the legacy system, this goal is not achievable in real networks. Both the new and legacy technologies have to share antenna systems to leverage the real estate at the site. To share the same coaxial cable, various diplexers, duplexers, and combiners need to be introduced on existing sites (as shown schematically in Figure 1). On average, each of these new RF components introduces a 0.1 to 0.6 dB loss to the legacy system path.

Figure 1. Feeder Configuration for UMTS and GSM Antenna Sharing (a dual-band or single-band antenna fed by two RF coax cables, with diplexers and duplexers combining the GSM BTS and UMTS Node B transmit/receive paths)

While the impact of a 1 to 2 dB loss on the individual sector link budget may seem insignificant, the current legacy system was simply not optimized to support this extra loss across all sites. Even if the performance of most of the legacy sites is not hindered by this loss, some sites with stretched coverage will develop a traffic issue due to the cumulative effect of this loss and shrinking coverage. Therefore, during the technology upgrade, the legacy system must also be altered to maintain service levels. Some legacy sites may receive new azimuth and/or downtilt designs. However, the legacy system alterations can cause other problems. Ultimately, either more sites are required to support the legacy system, or some level of quality degradation has to be accepted, and the legacy cluster itself needs to be optimized.

Component Failure

As discussed earlier, during the technology upgrade, the new RF components introduced into the legacy network path share antenna systems at a site. Throughout the industry, there are many examples of intermodulation issues caused by various active or even passive components [2] that have been used in combination but have not been tested to the site-specific conditions prior to rollout. While this paper does not cover this source of legacy system service degradation, the reader is encouraged to check references [2] and [3], as well as the paper "Intermodulation Products of LTE and 2G Signals in Multitechnology RF Paths" (pages 21–32 of this issue of the Bechtel Technology Journal), which explain the processes and the ways to avoid component failure or non-performance issues during the network launch.

Workmanship

Figure 1 does not adequately illustrate the complexity of the state-of-the-art cellular site.
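As a rough illustration of how the per-component losses described in the preceding section accumulate on the legacy RF path, the sketch below sums assumed example values (each within the 0.1 to 0.6 dB range quoted in the text; they are not measurements) and converts the total to a retained-power ratio.

```python
# Sketch of the cumulative insertion-loss effect described in the text.
# Each diplexer/duplexer/combiner added for antenna sharing costs roughly
# 0.1 to 0.6 dB on the legacy path; the per-component values below are
# assumed examples, not measured figures.

component_loss_db = {"diplexer": 0.3, "duplexer": 0.4, "combiner": 0.5, "extra jumper": 0.2}

total_loss_db = sum(component_loss_db.values())
# Losses in dB add along a chain; as a power ratio the path keeps 10^(-loss/10).
power_retained = 10 ** (-total_loss_db / 10)

print(f"Added path loss: {total_loss_db:.1f} dB "
      f"({power_retained:.0%} of power retained)")
```

Run against a site's actual RFDS component list, the same arithmetic shows whether the legacy link budget still closes after the overlay.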

As a result of many network consolidations and technology upgrades, a cell site is a conglomerate of RF components from various manufacturers connected by coaxial cables marked with color schemes from previous owners (see Figure 2). Poor workmanship can affect performance significantly. One connector tightened loosely can introduce as much service loss as 200 feet (60 meters) of cable. If one connector has substandard weatherproofing, moisture entering the connector will degrade performance in that sector within a couple of months. One crossed jumper reduces site traffic at least twice. Thus, much depends on the skill of the technicians and the quality of their workmanship on each connector.

For example, an RFDS may call for new tower-mounted amplifiers (TMAs) to be installed on the existing coaxial lines. Installation of TMAs should increase the traffic in the sector because the TMAs compensate for the transmission line losses and effectively increase the site coverage. If a performance engineer has discovered compromised performance instead of the expected increase in traffic, TMA issues should be considered along with other component failures or workmanship issues. For example, if a TMA is malfunctioning or not powering up, accessibility failures and handover failures to the sector from neighboring sectors increase. These failures not only affect the performance of the sector under consideration, but also that of neighboring sectors.

Figure 2. Coaxial Cable Runs on an Existing Tower

TIMELY DETECTION AND TROUBLESHOOTING

If degradation problems are discovered while the original crew is still on site, these problems can usually be corrected within a couple of hours. However, after the crew has left the site, it may take days or even weeks and several site visits to troubleshoot the same problems because the same crew may not be available and another crew needs time to learn the site details. Meanwhile, the operator is losing revenue from the legacy system and the implementation team is losing money on troubleshooting.

Typically, this type of analysis has been carried out after the site has been brought back on line and has experienced performance issues for some time. The reports generated by the data-processing tools discussed previously, along with a record of changes at the site, can provide insight into reasons for the performance degradation at that site. Changes in the KPIs shown by these reports have to be interpreted in conjunction with the impact of the changes carried out at the site as per the RF data sheet (RFDS). Table 1 lists major KPIs, reasons for degradation, typical causes, and troubleshooting methods. Though this approach entails considerable expense for the operator in terms of both revenue and service quality, it would be interesting to explore whether steps in the analysis could be introduced earlier in the installation process, before the site resumes serving customers and capital expenditures are increased.

SECTOR VERIFICATION TESTING

The transmit-path-testing troubleshooting method listed in Table 1 has a protective effect on network quality if it is used during the final hours of installation instead of during troubleshooting. In this scenario, the performance engineer checks the reported sector for critical alarms after physical work at the site is finished but the installation crew is still on site. If the sector is clear of alarms, an implementation team member travels to a designated spot in the middle of the sector and makes several test calls to the network operation center (NOC). At the NOC, the performance engineer shuts down and starts up the sector transceivers (TRx's) in sequence to test both RF portions of the call path. If any of the calls fail or alarms are discovered, the crew checks the suspected connection in a specific path and rectifies the issue immediately. Thus, most service-related issues are eliminated before the site comes on line and starts interacting with other cell sites and serving customers.

Table 1. KPI Degradation, Typical Causes, and Troubleshooting Methods

Dropped Calls or Retainability (increase in percentage of dropped calls or decrease in percentage of calls successfully completed, usually due to drop in signal level or inability to hand over)
• Design-Related Degradation. Causes: change in coverage due to downtilt, change in antenna pattern, or change in azimuth; change in relation of serving cell to its neighboring cells. Troubleshooting: compare new RF coverage with existing coverage to check for deficiencies; compare RFDS for this site and its neighbors.
• Component Failure. Causes: failures in backhaul network, such as T1 malfunction due to transmission equipment fault or microwave fading; higher insertion loss of RF transmission cable system. Troubleshooting: check alarms at the site for radio failure, T1 failure (LAPD¹ failure), or transmission equipment failure; check cable sweep results for return loss and insertion values; compare RFDS and link budget.
• Workmanship. Causes: TMA not powered up; loose or faulty connectors on radio ports, RXAIT, or TMA. Troubleshooting: verify that the PDU for the TMA is working; check for loose or faulty connectors.

Accessibility (ability to successfully assign a TCH to a mobile phone)
• Design-Related Degradation. Causes: link imbalance between transmit and receive signal (receive signal is weaker than transmit signal). Troubleshooting: compare the before and after RFDS values for change in downtilts; account for additional insertion losses in the link budget; adjust the transmit power level or document the reduced coverage footprint.
• Component Failure. Causes: not all radios functioning; one of two transceivers not working; TMA faulty; diplexer/RXAIT faulty. Troubleshooting: check alarms at the site for radio failure; check TMA functionality.

Traffic (measure of channel usage; usually, a change in traffic is measured)
• Design-Related Degradation. Causes: reduced or changed coverage footprint due to downtilt, change in azimuths, or insertion loss from additional components introduced in the transmit path; another band added to the sector (e.g., GSM 1900 added to GSM 850, or vice versa). Troubleshooting: determine if new bands were added to accept some of the traffic; check for locked or blocked radios.
• Component Failure. Causes: not all radios functioning, which results in blocking of TCHs and fewer Erlangs being picked up by the sector. Troubleshooting: check alarms at the site for radio or transmission equipment failure.
• Workmanship. Causes: receive or transmit jumpers crossed with another sector. Troubleshooting: test the RF transmit paths by making test calls from the middle of the sectors while locking and unlocking TRx's in the NMS.

¹ LAPD (Link Access Procedure–D), a part of the integrated services digital network (ISDN) Layer 2 protocol.

(Acknowledgment: Table 1 was originally created by Harbir Singh, formerly associated with Bechtel.)

In general, the sector verification test consists of the following steps:

1. Performance engineer prepares for on-air site integration by reviewing the RFDS and comparing it with the actual system configuration in the network management system (NMS).
2. Performance engineer checks for pre-existing alarms to alert the installation crew of any pre-existing conditions.
3. When sector construction is complete, the installation crew brings the sector on line in private mode.
4. Performance engineer checks for the sector alarms again and directs the installation crew according to the findings.
5. A member of the installation team travels to a planned location (see Figure 3) and verifies that the test phone is locked on this sector.

Some issues may not be addressed immediately due to weather changes or nighttime hours. If the installation crew cannot fix the problem during the maintenance window, it remains assigned to the site and can troubleshoot the issue quickly the next day. Moreover, the same crew maintains its momentum and is already aware of what needs to be checked or fixed as soon as practical.

Figure 3. Sector Verification Test Locations with Latitude/Longitude and Broadcast Control Channel Information (map showing the planned test phone location, e.g., Bay Hill Drive, and alternative locations, with broadcast control channel data for each site)

Figure 4. Sample Site Verification Data Sheet (fields include site name, CONST ID, location ID, site address, and latitude/longitude; per-sector Cell ID, BCCH, and RSSI entries for the GSM 850 and GSM 1900 Tx paths in idle and dead modes; Cell ID, PSC, and RSCP entries for UMTS 1900; comments per sector; and PE name, War Room coordinator, tester name, and date. Data are recorded per the "Site Verification Field Test Procedure.")
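The configuration-comparison and alarm-check portions of the verification flow can be sketched in code. Everything below is hypothetical: the data fields, class, and function names are invented for illustration and do not correspond to any real NMS interface.

```python
# Hypothetical sketch of the pre-integration checks described above;
# field and function names are invented, not from any real NMS API.
from dataclasses import dataclass, field

@dataclass
class Sector:
    name: str
    rfds_config: dict                  # planned configuration from the RFDS
    nms_config: dict                   # actual configuration reported by the NMS
    alarms: list = field(default_factory=list)

def verify_sector(sector: Sector) -> list:
    """Return a list of findings; an empty list means the sector passes."""
    findings = []
    # Compare the RFDS (planned) against the NMS (actual) configuration.
    for key, planned in sector.rfds_config.items():
        actual = sector.nms_config.get(key)
        if actual != planned:
            findings.append(f"{key}: RFDS={planned}, NMS={actual}")
    # Flag any alarms for the installation crew.
    findings.extend(f"alarm: {a}" for a in sector.alarms)
    return findings

s = Sector("Site1-S2",
           rfds_config={"downtilt_deg": 4, "band": "GSM 850/1900"},
           nms_config={"downtilt_deg": 6, "band": "GSM 850/1900"},
           alarms=["TMA power"])
print(verify_sector(s))
```

A real implementation would, of course, pull both configurations and the alarm list from the operator's NMS rather than from hand-built records.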

6. The performance engineer shuts down the sector TRx's to test all bands and RF paths.
7. The installation team member makes 850 MHz and 1,900 MHz band calls from the test phone to the performance engineer at the NOC.
8. The test results and any issues are documented (see Figure 4).
9. Issues are qualified using the methodology discussed in the previous section.
10. If an issue affecting service qualifies as a workmanship issue, the performance engineer discusses the plan for correction with the crew foreman.
11. Depending on the situation, some testing steps may need to be repeated.

As depicted in Figure 3, the planned location should be easily identifiable and accessible by car. Alternative locations (marked with a green cross) may have to be provided as well.

Sector Verification Test Results

This section discusses statistics analyzed for 919 completed sector verification tests. These tests were performed after physical work was completed and the sector(s) was deemed ready to resume service. In some cases, more than one sector and both the 850 MHz and 1,900 MHz bands with GSM technology were tested together.

Figure 5 illustrates the initial sector failure rate.

Figure 5. Initial Sector Failure Rate During Verification Test (Test Passed: 82%; Test Failed: 18%)

The 18% failure rate depicted in Figure 5 is not acceptable for the legacy network. To explain further, if the new technology upgrade rate is 10 sites (30 sectors) per day, the legacy system will lose five to six sectors each morning, stretching existing resources and creating a false need for additional temporary manpower (with all its associated inefficiencies). In this case, most failures were rectified immediately or by the next day. For those failures that took longer to rectify, the right resources (performance engineers and troubleshooting crews) were assigned to the issues immediately and were used effectively.

Figure 6 illustrates the initial classification of issues by source.

Figure 6. Preliminary Issue Source Classification (Workmanship: 76%; Component: 22%; Design: 2%)

This data has a flaw in that when the cause of an issue is not immediately apparent, it is most likely classified as workmanship related. Typically, loose connectors, kinked cable, and swapped jumper cables are suspected as being at fault. As the troubleshooting process proceeds, the team identifies more instances of failed components and design issues where workmanship had initially been blamed. It can be especially difficult to pinpoint a design issue, since it often takes time to compare the performance indicators for the site and its neighbor before and after the change, even if it is experienced over several months. Although the source ratio changes as troubleshooting progresses, poor workmanship remains a significant source of the issues encountered during the launch and, as such, a focus for improvement. Special attention should be given to the coaxial cable connector and weatherproofing workmanship.

It is believed that conducting immediate testing while the crew is still on site instills an increased sense of ownership among the crew members, which in turn improves overall workmanship quality. For example, a crew member may exercise more care in performing tasks when he/she knows that his/her work will be put to the test immediately, with any defects in workmanship likely to be exposed in front of the supervisor and co-workers.

It is important to note how many issues would adversely affect network and customer experience if performance engineers, together with the installation team, were not involved in testing during the early stages of the implementation process.
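The "five to six sectors each morning" estimate is straightforward arithmetic on the Figure 5 failure rate:

```python
# Arithmetic behind the daily sector-loss estimate: an 18% initial
# failure rate applied to a 10-site (30-sector) daily upgrade rate.
sites_per_day = 10
sectors_per_site = 3
failure_rate = 0.18          # initial failure rate from Figure 5

failed_sectors = sites_per_day * sectors_per_site * failure_rate
print(f"Expected failed sectors per day: {failed_sectors:.1f}")  # 5.4
```

At 5.4 expected failures per day, rounding up or down gives the five to six sectors quoted in the text.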

Site Alarm Check Statistics

Alarm checks can be performed jointly or independently during verification testing; alarm checks are common in the industry. All alarms were checked at the 125 sites, not just those affecting service. Typically, tri-sector sites have higher alarm rates than any of the individual sectors. Figure 7 shows the ratio between pre-existing and post-installation alarms for a 125-site sample.

Figure 7. Results of Site Alarm Check (Alarms found: post-installation, 56%; pre-existing, 7%; none, 37%)

In this example, the percentage of issues is seen to increase dramatically in comparison with verification test issues because the sector verification process inherently detects only major issues affecting service. However, no operator would want to place hundreds of legacy sites with a 63% alarm ratio back into service. The performance engineer could have helped to rectify these alarms by directing the crew to the most likely causes before the sites were returned to service. When any alarm persisted, the crew left the site with some troubleshooting directions and often was assigned to future troubleshooting efforts.

An installation crew that turns over consistently alarm-free sites gets immediate recognition, accolades (possibly), and personal satisfaction from the work it does, all of which sustain willingness to take ownership of quality. The foregoing assertions notwithstanding, no statistics have been compiled that prove an increased level of workmanship quality inevitably results from an increased sense of ownership among installation crew members. This could be a good topic to explore in another paper.

CONCLUSIONS

New technology overlay is intrusive to the existing legacy system. Introduction of the new technology decreases the performance levels of the legacy sites (to varying degrees); it is, however, an inseparable part of life of any state-of-the-art network. The business plan necessity of leveraging existing network infrastructure and sharing the same coaxial lines between the new and the legacy systems introduces new components into the optimized legacy network.

The test results presented in this paper demonstrate that, under a traditional launch scheme, an unacceptable percentage of sectors or sites brought on line is likely to have issues affecting service. Therefore, the subscribers in the area experience service degradation due to these issues immediately (the next morning) after a site is returned to service. These problems lead to customer churn and decreased revenue from the legacy communications network.

This paper proposed a more integrated approach to solving these problems whereby sector verification testing and performance engineering are introduced into the installation process to identify the performance issues early. When performance engineering is integrated into the installation process, most of the issues are addressed before the site is brought back to service. Other discussions centered on the methodology used to quickly classify issues by source and how to address the issues with the right resources, with the goal of reducing troubleshooting and rectification time so that the customer can enjoy the same level of services as before the update.

REFERENCES

[1] E. Dinan and S. Kettani, "Importance of Antenna and Feeder System Testing in Wireless Network Sites," Bechtel Telecommunications Technical Journal, Vol. 2, No. 1, January 2004, pp. 21–30, http://www.bechtel.com/assets/files/TechnicalJournals/January2004/Article3.pdf (also available at http://www.bechtel.com/assets/files/TechPapers/importance-of-antenna.pdf).

[2] A.A. Kurochkin and E. Dinan, "RF Engineering Approach to Site and Antenna Sharing," Bechtel Telecommunications Technical Journal, Vol. 4, No. 2, June 2006, pp. 61–68, http://www.bechtel.com/assets/files/TechnicalJournals/June2006/Article10.pdf.

[3] A.A. Kurochkin, I. Chugunov, and E. Smirnov, "Experimental Study of 3G Signal Interaction in Nonlinear Downlink RAN Transmitters," Bechtel Communications Technical Journal, Vol. 5, No. 2, September 2007, pp. 91–98.

BIOGRAPHY

Aleksey A. Kurochkin, project manager for Bechtel Communications, is currently responsible for the product testing, system design, and entire site implementation cycle of EV–DO and LTE technology in one of the most important regions for a US cellular operator. He is experienced in international telecommunications business development and network implementation, and his engineering and marketing background gives him both theoretical and hands-on knowledge of most wireless technologies. In addition to his North American experience, he has worked in Russia and the Commonwealth of Independent States.

Formerly, as executive director of Site Development and Engineering for Bechtel Telecommunications, Aleksey managed the Site Acquisition and Network Planning Departments and oversaw the functional operations of more than 300 telecommunications engineers, specialists, and managers. In addition, he originated the RF Engineering and Network Planning Department in Bechtel's Telecommunications Technology Group. As a member of Bechtel's Chief Engineering Committee, Aleksey introduced the Six Sigma continuous improvement program to this group. He is also a member of Bechtel's Global Technology Team and was a member of the Bechtel Telecommunications Technical Journal Advisory Board.

Before joining Bechtel, Aleksey established an efficient multiproduct team at Hughes Network Systems, focused on RF planning and system engineering.

Aleksey has an MSEE/CS in Telecommunications from Moscow Technical University of Communications and Informatics, Russia.


Mining & Metals Technology Papers

TECHNOLOGY PAPERS

Environmental Engineering in the Design of Mining Projects
Mónica Villafañe Hormazábal and James A. Murray

Simulation-Based Validation of Lean Plant Configurations
Robert Baxter; Trevor Bouk; Laszlo Tikasz, PhD; and Robert I. McCulloch

Improving the Hydraulic Design for Base Metal Concentrator Plants
José M. Adriasola; Robert H. Janssen, PhD; Fred A. Locher, PhD; Jon M. Berkoe; and Sergio A. Zamorano Ulloa

Los Pelambres Repower 2: The copper concentrator process area includes process water ponds (foreground) and tailings impoundment (far background). Cerro Nocal mountain is in the distance. The strong safety record at Los Pelambres helped Bechtel win new work in Latin America.

ENVIRONMENTAL ENGINEERING IN THE DESIGN OF MINING PROJECTS

Issue Date: December 2009

Abstract—The application of environmental engineering (including pollution control) from the inception of a project's study phase through final completion has a significant effect on the project outcome. Good environmental practices, owner policies, international agreements, and requirements of financial institutions are applied from the conceptualization of a project through its subsequent design phases. In practice, this means that the environmental engineering discipline works closely with the other engineering disciplines in preparing engineering designs that mitigate environmental impacts. In their team roles, the environmental engineers exchange project information during the owner's environmental impact assessment (EIA) process, wherein the environmental impacts are assessed, evaluated, and submitted to the local authorities. However, common misunderstandings and confusion over the distinctions between an EIA and environmental engineering design can have a detrimental effect on the development of mining projects. The uncertainty regarding the differences between these functions can confuse the engineers responsible for delivering a successful project outcome.

Keywords—engineering design, environmental engineering, environmental impact assessment (EIA), environmental impact, mining projects

Mónica Villafañe Hormazábal
mvillafa@bechtel.com

James A. Murray
jamurray@bechtel.com

INTRODUCTION

General

Rachel Carson's book, Silent Spring (Houghton Mifflin, 1962), was widely credited with launching the environmental movement in the US. Before 1962, citizen complaints were handled under general nuisance control regulations, and most regulations covered worker industrial hygiene or a few special pollution control districts (boards created to enforce local, state, and federal regulations and laws); otherwise, pollution control complaints fell under the same regulations as complaints about a neighbor's barking dog! In the US, pollution control regulations were enforced under a number of programs administered by various agencies, with different agencies or departments within agencies independently responsible for regulations pertaining to air, water, solid, or hazardous wastes. The US Environmental Protection Agency (EPA) was created in 1970 to establish a national environmental policy, replacing the smaller programs. At approximately the same time, environmental impact assessments (EIAs) originated under the National Environmental Policy Act (NEPA) of 1969 to predict the combined effect of a project on the environment.

By the mid-1980s, many other countries or jurisdictions within countries were developing pollution control regulations, and with growing environmental awareness on the part of the World Bank, some multinational mining companies and/or financial institutions were preparing EIAs for projects even if such documentation was not required by the host country. EIA requirements and regulations have been adopted in most countries following the United Nations Conference on the Environment and Development (UNCED) (known as the Earth Summit), held in Rio de Janeiro, Brazil, in 1992. Different jurisdictions require similar documents, such as environmental impact statements, environmental impact reports, or estudios de impacto ambiental. These documents are noted here for environmental specialists, who often concentrate on the differences among the documents instead of the considerable similarities.

While some universities have been developing environmental engineering programs within the last two decades, the training is mostly aimed toward EIA and environmental management and not toward the actual practice of environmental engineering design.

© 2009 Bechtel Corporation. All rights reserved.

ABBREVIATIONS, ACRONYMS, AND TERMS

EIA    environmental impact assessment
EPA    (US) Environmental Protection Agency
EPC    engineering, procurement, and construction
GBU    (Bechtel) global business unit
M&M    (Bechtel) Mining & Metals (GBU)
NEPA   National Environmental Policy Act
NGO    nongovernmental organization
O&M    operation and maintenance
TIC    total installed cost
UNCED  United Nations Conference on the Environment and Development

The seamless integration of environmental engineering into the execution of a major capital project is important to the owner's financial and operational success and to the efficiency of an engineering, procurement, and construction (EPC) project's planning and works. This paper uses the observations, lessons learned, and approaches of the M&M GBU to describe the role of the Environmental Engineer on major projects and to contrast that role with the important work performed by other engineers and scientists with environmental training. This paper also describes the role of formal education and the need for on-the-job training.

Environmental Regulations and the Environmental Engineer

For the purposes of the discussions in this paper, the term Environmental Engineer (as indicated by capitalization) is used to refer to a Bechtel environmental engineer engaged in the Mining & Metals (M&M) Global Business Unit's (GBU's) engineering design activities as described herein. It is likely that the other Bechtel GBU environmental engineers have similar experience.

Environmental regulations that govern the Environmental Engineer's work are somewhat similar to the codes applicable to other disciplines, such as mechanical (boiler), electrical, and seismic. The most obvious difference is that the environmental regulations often are written by legal professionals in a manner that can be difficult for individuals outside the legal profession to understand. On the other hand, engineering discipline codes are written by engineers for engineers and may be applied uniformly throughout a country or in a region that consists of multiple countries. These codes may be more than 100 years old and typically can be changed only after a thorough technical peer review. The environmental requirements contained in other engineering codes, incorporated only by reference, are equally binding and enforceable. In contrast, environmental engineering is a new field that is changing rapidly, and the applicable regulations can change rapidly at the discretion of elected or appointed political bodies. Furthermore, the overall environmental requirements applicable to various projects can differ based on unique site-specific objectives; even requirements for projects separated by only a few kilometers can vary.

For most engineering disciplines, a building permit with an associated plan check from the local government department may be required. While this requirement may be waived for major heavy industrial capital projects, the application or checking fees usually are not. Depending on the type of project, typically more than 100 separate environmental permits or other documents must be approved by various governmental agencies before construction and/or operations can begin. Public hearings must be held before the EIA is approved or permits are acquired. For most projects, some permits are also on the critical path for the release of funding by the financial institution and/or the owner's board of directors. Compared with other engineering disciplines, the cost consequences are also large: the pollution control and other mitigation capital costs can range from 3% to more than 50% of the total installed cost (TIC). An Environmental Engineer on a project therefore has critical schedule and budget duties, only some of which are similar to those of any other engineering discipline.

EIA requirements and regulations developed along similar lines in South America. In 1994, Chile established the General Law for the Environment, which was further institutionalized with the promulgation of the Environmental Impact Bylaw in 1997. In Peru, the Environmental and Natural Resources Code (1990) established the types of activities that must be performed in an EIA; these are now governed by the General Law for the Environment (2005). Argentina's environmental policy developed in a similar manner, starting from the UNCED Treaty of 1992 and culminating in the finalization of the current regulatory framework in 2002.
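To put the quoted cost range in perspective, the arithmetic below applies the 3% to more than 50% of TIC span to a hypothetical project; the dollar figure is invented for illustration and is not from the paper.

```python
# Illustration of the mitigation-cost range quoted above, applied to a
# hypothetical total installed cost (TIC). The $1.5B TIC is invented
# for illustration only.
tic = 1_500_000_000          # hypothetical TIC, USD
low, high = 0.03, 0.50       # 3% to more than 50% of TIC

print(f"Mitigation capital cost: ${tic * low:,.0f} to ${tic * high:,.0f}+")
```

Even at the low end of the range, environmental mitigation on a project of this scale represents tens of millions of dollars of engineered scope.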

BACKGROUND

Before discussing the role of environmental engineering on major EPC projects, it would be useful to briefly describe how M&M Environmental Engineers view an owner's project. As seen in Figure 1, a typical project undergoes a series of phases, from study (conceptual through feasibility) through execution (construction and operation). Following the operational phase, the facilities are removed or other closure activities are implemented, often with a new beneficial use of the land. After the end of the closure activities, the property is in the post-closure phase.

Figure 1. Environmental Activities by Project Phase

Owner's perspective (project phase; overall duration in years; nominal cash flow*; owner environmental needs; environmental level of effort):
• Conceptual: 1 to 10 years; –2%; go/no-go (a judgment call); $10K to $30K
• Pre-Feasibility: 1 to 10 years; –5%; generic pollution control requirements and costs, baseline studies; $10K to $400K
• Feasibility and Financing: 1 to 10 years; –15%; major equipment and facility specifications (for permit input), capital and operating costs, mitigation plans, baseline documentation, EIA approvals, major permits; $100K to $5M
• Engineering and Construction: 1 to 4 years; –200%; pollution control equipment and facilities, monitoring and reporting, federal and local permits, engineering and construction compliance; 3% to 60% of TIC
• Operation: 5 to 50 years; 1,000%; monitoring, reporting, and compliance; 2% to 10% of TIC
• Closure: 100 to 10,000 years; –2%; monitoring, reporting, compliance, and agency approval for abandonment; 0.1% to 5% of TIC

Bechtel's perspective (environmental support for execution, spanning business development, engineering, procurement, and project controls for estimating cost and schedule):
• Study phases: bid/no-bid input; go/no-go recommendation; generic and major pollution control requirements and costs; definition of areas for baseline studies (level of effort: $1K to $50K)
• Study to basic engineering: bid/no-bid input; preparation of descriptions of environmental and pollution control facilities; cost and schedule input; mitigation planning; coordination with environmental consultants
• Project EPC: bid/no-bid input; conversion of EIA requirements and Bechtel design guides to design criteria; input to pollution control equipment/facilities material requisitions; construction contracts' technical data and field requirements; input to O&M procedures; conversion of EIA requirements to a construction environmental control plan; environmental compliance during constructibility review; definition of cost and planning for the construction environmental compliance plan (level of effort: roughly 1% to 5% of TIC)
• O&M and remediation: bid/no-bid input; construction and environmental compliance support

Third-party perspective (environmental consultants and/or unbundled Bechtel services): level of effort of none at the conceptual phase, $20K to $400K at pre-feasibility, $100K to $5M at feasibility and financing, and more than 2% of TIC during execution.

* 100% = TIC for typical Bechtel engineering and construction scope, excluding owner's costs (e.g., exploration, property acquisition, process royalties, early staffing, and startup costs).

Several items are noted in reference to Figure 1:

• Although the study phase is quite similar from company to company, various organizations define it differently. In Figure 1, the study phase is essentially defined by the type of capital cost estimate being developed for the study. The cost engineer is the internal target customer for the engineering efforts and provides input to the owner's go or no-go decision(s) about advancing to the next phase, expanding the current phase, or withdrawing from the investment prospect.

• The owner's environmental requirements and efforts are shown, by phase, in the first gold-colored row. As can be seen from that row, the owner's typical environmental costs are given as a factor of the TIC in the later phases.

• Bechtel's typical services costs for environmental support are shown in the second gold-colored row; most of the environmental-related services costs expended on a project are shown in this row. These are complemented by the Bechtel Environmental Engineering support shown in the blue row, while the red row shows the requirements to perform construction activities while maintaining environmental compliance.

• Finally, the third gold-colored row shows typical services by outside (third-party) consultants that prepare the EIA baseline study and impact evaluation documents. During the pre-feasibility and feasibility study phases, most of the costs are for the EIA work. During the engineering and construction phase, most costs are for installation of the pollution control equipment and facilities, with a relatively small cost for the permit, monitoring, and reporting requirements associated with the construction.

• As a consequence of the adoption of more advanced environmental requirements, the cost of each successive phase increases significantly, and smaller companies that cannot afford to pay for the baseline studies and environmental approval processes no longer take on these projects. The smaller companies may need to engage larger companies to provide the financial resources for more extensive drilling efforts to define an ore body. A project might go through several different owner companies because smaller companies cannot afford the cost of a subsequent phase, until the project can attract financial backing from the international banking community. In addition, the international banking community can require that the selected companies have demonstrated large-project operating experience that is often not found outside the multinational mining companies. In general, multinational mining houses have the internal resources necessary to assume the financial risks through the feasibility study and reporting phase.

• The owner's involvement in the project life cycle could last from several decades to more than a century. Bechtel's involvement may last only a few years when Bechtel's role is limited to the EPC phase, although in some cases our involvement in these projects has continued for several decades. When Bechtel's role includes supporting the owner's project development studies, the total job might continue for a decade. Bechtel has also participated in other types of project scopes that entail facility operation and maintenance (O&M) or the remediation of US Department of Energy sites that are a legacy of the Manhattan (atomic weapons) Project.

The main point to be taken from Figure 1 is that the Environmental Engineer's role starts at the earliest phases of a project, affects the critical path, and defines a significant portion of the capital expenditures.

THE ENVIRONMENTAL ENGINEER'S ROLE

Conceptual Design Phase

During the conceptual design phase, little engineering detail is required, and the cost estimates are often factored. In general, most of the basic production facility capital costs can be factored from previous projects, while the mine, waste rock, and tailings disposal areas; access corridors for road, rail, and utilities; camps; and ports require only a limited engineered definition to obtain the degree of accuracy required for the capital cost estimate. Since Bechtel has a rather extensive library of cost information for M&M projects, the individual cost estimates needed to develop the overall capital cost estimate to the requisite degree of accuracy for the conceptual design phase can be produced with a relatively small amount of engineering effort.

The cost estimate accuracy required to make these initial determinations is low, and the contingency is high. There is little need to optimize the designs because the purpose is to develop the rough capital and operating cost estimates that are the basis from which to start defining the minimum net income needed to pay for the infrastructure and the operating facilities versus the size of the ore body needed to support a production rate necessary to generate the required income. When Bechtel's role includes supporting the owner's project development studies, the overall engineering role includes providing practical definitions for the mining, production, and waste-handling facilities, as well as practical definitions of the infrastructure requirements.

The Environmental Engineer's role can be quite limited in this phase, with most efforts consisting of assisting the cost estimating engineers with factoring in information from previous projects and the historical database. A primary role can be to assist the owner with the early identification of potential environmental issues and/or fatal flaws, characterize the associated waste rock, and/or initiate the environmental baseline studies that are part of the pre-feasibility study phase. The Environmental Engineer prepares the pollution control sections of the conceptual design report. These sections include Bechtel's assumptions about the environmental setting, the extent of pollution control facilities, costs for any special control equipment, and recommendations for addressing future environmental items that might affect schedule or capital costs during the next phase.
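A factored estimate of the kind described above can be sketched as follows. This is a generic illustration using the classic capacity-exponent ("six-tenths") scaling rule, not Bechtel's actual estimating method; the reference cost, capacities, exponent, and environmental allowance are all invented for illustration.

```python
# Hypothetical sketch of a factored capital cost estimate with an
# environmental allowance; all numbers below are invented, and the
# 0.6 exponent is the generic "six-tenths" scaling rule.
def factored_cost(ref_cost: float, ref_capacity: float,
                  new_capacity: float, exponent: float = 0.6) -> float:
    """Capacity-exponent scaling from a historical reference plant."""
    return ref_cost * (new_capacity / ref_capacity) ** exponent

base = factored_cost(ref_cost=800e6,        # reference concentrator, USD
                     ref_capacity=50_000,   # reference throughput, t/d
                     new_capacity=90_000)   # subject plant throughput, t/d

env_allowance = 0.08   # assumed extra allowance for a stricter siting
total = base * (1 + env_allowance)
print(f"Factored estimate: ${total / 1e6:,.0f}M")
```

In practice, the Environmental Engineer's contribution at this stage is exactly the kind of judgment captured crudely by `env_allowance` here: how the subject plant's environmental scope differs from the plants in the historical database.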

Pre-Feasibility Study Phase

During the pre-feasibility study phase, an engineering study is usually prepared to examine alternative approaches for developing the resources and to provide the associated capital and operating costs for these alternatives. Some projects cover more than a hundred alternatives for improving rates of return and containing risks. Additional exploration drilling is used to acquire detailed information on the ore body and its extent, and some pilot testing is performed to determine mineral recovery efficiencies. The objective of this phase is to eliminate several of the alternatives and then define the scope of one or a few of the alternatives to be carried forward to the next study phase.

The Environmental Engineer's role can be quite limited in this phase, with most efforts consisting of assisting the cost estimating engineers with factoring in information from previous projects and the historical database. The Environmental Engineer details to the cost engineers how the subject plant might differ from the plants in the historical database and how the factored costs can be adjusted accordingly. For example, if several previous concentrator projects were sited in remote areas of Chile's Atacama Desert, a concentrator project located in the Santiago Metropolitan Region or agricultural areas would require more extensive air and water pollution control facilities. In most cases, these adjustments might be handled as an additional percentage allowance, or one or more of the critical facilities might require that some preliminary engineering be performed.

In addition, a primary role can be to assist the owner with the early identification of potential environmental issues and/or fatal flaws. The Environmental Engineer works with the owner's environmental team to suggest EIA and permit acquisition strategies, because the long-lead-time activities for the EIA and permit acquisition processes have to be started in this phase. In most cases, the EIA scientists are employed by a third-party contractor engaged by the owner. The Environmental Engineer needs to prepare project and pollution control descriptions so that the EIA scientists can start or advance their baseline studies before all of the project information is known, let alone finalized. These descriptions often have to include several of the alternatives before the final project is defined. The project footprint has to be defined broadly enough to include variations among the alternatives, but be narrow enough to preclude excessive baseline study costs and/or the perception of impacts.

The Environmental Engineer prepares the pollution control sections of the pre-feasibility design report(s). These sections include Bechtel's assumptions about the environmental setting, the extent of pollution control facilities, the costs for any special control equipment, and recommendations for addressing future environmental items that might affect schedule or capital costs during the next phase.

Feasibility Study Phase

During the feasibility study phase, the engineering study is specifically directed toward providing information, drawings, and discussions for those individuals tasked with preparing a bankable document to be used by an international financial institution (or other funding source) in determining whether to invest in the project. The bankable document outlines project risks, delineates methods to eliminate those risks, and measures potential economic returns. It includes a certified evaluation of the project ore reserves; information regarding ownership of the land, mineral rights, and process patents; evaluation of pertinent commodity market(s) and factors related to the project revenue stream; pro forma contracts for the reagents, utilities, and transportation costs; capital costs; operating costs; and the status of EIA and major permit acquisition activities. Internally, the engineering study is prepared to assist the cost engineers responsible for developing capital and operating costs for the study report and to prepare the overall project schedule. Bechtel prepares the capital cost estimate and frequently prepares operating cost estimates. Emission and effluent inventories are estimated based on a preliminary screening of alternatives and partially completed metallurgical process information.

Again, the Environmental Engineer continues to work with the owner's environmental team on the EIA and permit acquisition strategies. Depending on the experience of the Environmental Engineer, the input to the engineered facilities in this phase of a project might be rather limited. The Environmental Engineer prepares the pollution control sections of the conceptual design report. These sections include Bechtel's assumptions about the environmental setting, the extent of pollution control facilities, and the costs for any special control equipment.
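At this stage, emission and effluent inventories are typically factor-based estimates rather than engineered values. The following sketch is illustrative only; the sources, emission factors, and control efficiencies are invented placeholders, not project data or regulatory factors.

```python
# Hypothetical factor-based dust inventory (illustrative values only):
# emission = throughput * emission_factor * (1 - control_efficiency)

SOURCES = {
    # source: (throughput t/h, emission factor kg dust per t, control efficiency)
    "primary crusher": (2500.0, 0.02, 0.95),
    "conveyor transfer": (2500.0, 0.005, 0.90),
    "fine ore stockpile": (1200.0, 0.01, 0.50),
}

def dust_emission_g_per_s(throughput_tph, factor_kg_per_t, control_eff):
    """Controlled dust emission rate in grams per second."""
    kg_per_h = throughput_tph * factor_kg_per_t * (1.0 - control_eff)
    return kg_per_h * 1000.0 / 3600.0

total = 0.0
for name, (tph, ef, eff) in SOURCES.items():
    rate = dust_emission_g_per_s(tph, ef, eff)
    total += rate
    print(f"{name:20s} {rate:6.3f} g/s")
print(f"{'total':20s} {total:6.3f} g/s")
```

In a real study, the factors would come from the metallurgical process information and published emission-factor compilations, and the screening would be repeated as alternatives are narrowed.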

The Environmental Engineer often prepares the environmental sections of the bankable document; Bechtel is seldom responsible for the overall bankable document submitted to the financing institution. The purpose of the environmental section of the feasibility study is to summarize the environmental aspects of the project, identify risks that could materially and adversely affect the technical, environmental, and financial success of the project, and describe the mitigation provisions that have been included in the estimated cost basis.

Although the baseline studies and impact evaluations are usually performed by third-party specialty contractors hired by the owner, the Environmental Engineer reviews the EIA documents for consistency among (a) the various baseline studies and the facility footprints, (b) the engineered pollution control and other mitigation measures, and (c) the engineered features and the capital or operating cost sections. The Environmental Engineer also reviews alternative design concepts proposed and/or evaluated by the parties conducting the EIAs, the adequacy of the project's proposed pollution control measures (to verify that they comply with project alternatives suggested by agency personnel or commissioners), and the technical or economic feasibility of alternative concepts proposed by nongovernmental organizations (NGOs).

Detailed Engineering and Construction Phase

During the detailed engineering and construction phase, engineers prepare criteria, drawings, calculations, specifications, etc., for use in purchasing equipment and bulk material. Construction forces also use these documents to execute the project. These duties constitute the traditional role of engineers on an EPC project.

On M&M projects, the work is divided so that each engineering discipline is responsible for the work that it traditionally performs; the engineering discipline tasked with designing a specific facility is also responsible for designing the pollution control measures for that facility. In this manner, the pollution control designs are integrated into the overall design and are not just a pollution control unit operation added on at the end of the process by the environmental engineering discipline.

The Environmental Engineer issues an environmental engineering design criteria document that, in conjunction with a process design criteria document, constitutes the performance basis for the other disciplines' designs. The Environmental Engineer supplements the environmental engineering design criteria issued during the feasibility phase, adding data on new conditions imposed by the EIA and permit approvals, and prepares a project environmental compliance matrix. Both of these documents are technical interpretations of commitments made by the owner during the EIA and associated processes. Special conditions resulting from the approval process have to be incorporated into the design criteria and the compliance matrix. Written primarily for internal use by the other engineering disciplines, they are prepared in a manner that precludes these disciplines from having to conduct their own respective investigations into the EIA/regulations or references thereto.

For example, the mechanical discipline designs a belt conveyor transfer station in accordance with the metallurgical/process engineering discipline's flow requirements and the environmental engineering discipline's pollution control requirements. If the air regulations require a baghouse, the environmental design criteria delineate the grams-per-second limits of dust emitted and the concentration of dust. Using this information, the mechanical engineer uses the discipline's guidelines to calculate the air flow and coordinates with the layout/plant design discipline to route the ductwork between the platework at the transfer point and the baghouse.

In another example, involving the civil engineering discipline's responsibility for sedimentation ponds and site drainage, the environmental design criteria establish the minimum sizing requirements for the ponds. These requirements are usually defined by a storm event and freeboard and by the discharge concentration and turbidity limitations. The civil engineer then plans the diversion and/or interception channels and determines the locations and number of ponds. In both of these examples, the environmental design criteria also identify the specific instrumentation and the sampling points.

The Environmental Engineer also prepares the technical documentation for the owner's submittal to the pollution control agencies. To maintain the schedule, such documents often have to be prepared before the designs are complete; therefore, the Environmental Engineer has to understand the pollution control system and the agency review process well enough to make an approvable application.
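In practice, criteria like those in the two examples above reduce to simple numeric checks that the responsible discipline applies to its design. The sketch below is illustrative only; the limits, storm depth, and coefficients are invented placeholders, not values from any project's permits or criteria documents.

```python
# Illustrative checks against hypothetical environmental design criteria.

def baghouse_ok(outlet_flow_m3_per_s, outlet_dust_mg_per_m3,
                limit_g_per_s=1.0, limit_mg_per_m3=20.0):
    """Check a transfer-point baghouse against mass-rate and concentration limits."""
    mass_rate_g_per_s = outlet_flow_m3_per_s * outlet_dust_mg_per_m3 / 1000.0
    return mass_rate_g_per_s <= limit_g_per_s and outlet_dust_mg_per_m3 <= limit_mg_per_m3

def pond_volume_ok(pond_volume_m3, catchment_area_m2, design_storm_m,
                   runoff_coeff=0.8, freeboard_fraction=0.2):
    """Check a sedimentation pond against a design storm plus a freeboard allowance."""
    required_m3 = catchment_area_m2 * design_storm_m * runoff_coeff
    usable_m3 = pond_volume_m3 * (1.0 - freeboard_fraction)
    return usable_m3 >= required_m3

# 40 m3/s of exhaust at 15 mg/m3 gives 0.6 g/s, within both limits.
print(baghouse_ok(40.0, 15.0))
# 12,000 m3 pond, 100,000 m2 catchment, 80 mm storm: 9,600 m3 usable vs. 6,400 m3 required.
print(pond_volume_ok(12000.0, 100000.0, 0.08))
```

The point of the design criteria document is precisely that other disciplines can apply checks like these without reading the EIA or the regulations themselves.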

Of course, an Environmental Engineer has to have sufficient knowledge of the other engineering disciplines' technical design details to be able to assist with preparing the design, data sheets, technical specifications, and guarantee clauses, and to participate in the design coordination and checking (e.g., code checking) to maintain the quality of both processes. Needless to say, the Environmental Engineer has to understand the other disciplines' work practices well enough to be able to clearly inform those disciplines of the engineering requirements without using the legalistic writing style of the EIAs or environmental regulations.

Operation and Closure Phases

During the operation and closure phases, the Environmental Engineer prepares brief descriptions of the EIA and other documents, including an operating plan for the mine and plant pollution control systems and facilities. These descriptions may not be the actual plans; however, they could constitute representative plans acceptable to the respective agencies and may remain in effect until the owner's plant environmental personnel come onto the project. At this time, final closure plans can be prepared based on plant environmental personnel staffing levels and more current knowledge of the plant. These descriptions may also include feasible closure and post-closure plans for the project.

Environmental Engineers also assist with startup and commissioning of the pollution control equipment because they know how this equipment works and how the overall plant production system functions and are knowledgeable about the commissioning testing and reporting requirements of the governing agencies.

The Environmental Engineer also helps the cost engineers estimate the closure costs, if requested. Since the project might operate 25 to 100 years into the future, there could be capital or operating cost implications if these costs must be secured through lines of credit or bonding by a third party.

DEVELOPING THE ENVIRONMENTAL ENGINEER

Imprecisely Understood Role
The Environmental Engineer's role on a major capital project is often imprecisely understood by the general public as well as by project management. Furthermore, the role comes to encompass responsibility for regulations that are not managed by other engineering disciplines or project functions, e.g., industrial hygiene, plan approvals, and operations monitoring and reporting.

Differing Educational Philosophies
In educational institutions in North America, Europe, and Australia, Environmental Engineering and Environmental Science are separate, standalone departments established to meet the demand for engineers in these disciplines coming from the industry and the environmental regulators and policy makers. A degree in environmental sciences is usually awarded by another university department (often dealing with natural sciences or resources), typically under the leadership or grantsmanship of individual professors within those departments. Quite often, the scope is not the same from university to university, possibly reflecting the interests of those individuals. These programs cover a wide variety of environmental subjects, e.g., environmental laws and regulations, natural resources management, land use planning, pollution control, impact evaluation, socioeconomic mitigation, plant closure, and site remediation, with emphasis on the basic sciences and technology. These graduates are very knowledgeable about how to evaluate, guide, plan, and control environmental policies and regulations. This foundation prepares the graduates to work with consultants and agencies, with the goal of creating a sustainable development program.

In Chile, Peru, and Argentina, the educational programs for Environmental Engineers are very similar. In these countries, an environmental engineering degree is usually awarded by the Civil Engineering department because large public works (water supply and sewerage) programs were traditionally run by civil engineers.

Divergent Demands
In the Western Hemisphere, whereas individuals graduating with degrees in electrical, mechanical, or civil engineering (as well as fellow graduates in other disciplines or fields) have a general sense about their prospects after graduation, Environmental Engineers tend to be far less certain about their post-graduation plans. Obviously, these two divergent educational philosophies can lead to a culture clash when engineers from the different traditions are brought together in a single, multinational design office.

University curricula emphasize the need for graduates to be able to monitor, analyze, and report impacts to community planners, enforcement agencies, and regulatory policy makers. Depending on the type of project, the impacts evaluated may include groundwater and surface water pollutant travel and attenuation, water and air contamination, ground-level air pollutant concentrations and visibility, the habitat of endangered species, etc. However, this training does not cover EPC work processes or the budget and schedule information needed to prepare the studies required to develop and finance projects. This means that on-the-job training is required to prepare graduate Environmental Engineers to perform EPC work processes, and only some of these graduates can be trained in the EPC work processes.

The emphasis given by the universities is largely understandable. Compared with the number of engineers engaged in environmental design, a far larger number of environmental scientists evaluate project impacts on the flora and fauna. Conversely, there is a significant demand for engineers to design, construct, and operate the world's water supply and sewerage systems, while only a relatively small number of companies throughout the world (including Bechtel M&M and its competitors) undertake large EPC efforts such as mining, beneficiation, and smelting projects. As a result, there is little demand for the type of Environmental Engineer described in this paper, and universities do not emphasize formal curricula for such engineers. More often than not, only one or two Environmental Engineers within a project EPC organization work with their counterparts in the owner's organization to define what will actually be constructed. Also, cyclical engineering backlogs make developing and retaining trained personnel problematic.

To compensate for the lack of Environmental Engineers, Bechtel M&M's approach has been to train engineers who have expertise in a certain area (dust control, soil remediation, etc.) in the legal requirements applicable to their field. This approach has been successful but time consuming. The benefits of this approach are that fewer Environmental Engineers are needed and, more importantly, that engineers, regardless of discipline, can and will take ownership of the pollution control equipment and facilities instead of passing the problem to an Environmental Engineering group to handle. This can, however, lead to a different type of culture clash between the reflective, analytical environmental graduates and the intense, budget/schedule-driven EPC team.

CONCLUSIONS

Environmental considerations and activities form a substantial portion of work performed on M&M projects worldwide. Depending on the project, the pollution control and other mitigation capital costs can range from 3% to more than 50% of the TIC. More than 100 separate environmental permits and other documents must be approved by various governmental agencies before the construction and/or operations can begin. Essentially, all projects must have an approved EIA before release of funds, and some of the permits are also on the critical path for the release of funding by the financial institution and/or board of directors.

There are two key conclusions from the discussions in this paper:

• First, there is a lack of environmental engineering graduates who can perform EPC design work, because most of the engineers graduating from universities have been trained to do environmental assessments. While many scientists and engineers who are highly skilled in the environmental arena contribute to a project's environmental documentation, on-the-job training is required to prepare graduate Environmental Engineers to perform EPC work processes, or an EPC engineer's specialty must be modified to ensure that he or she understands the environmental compliance issues.

• Second, proper communication tools need to be in place to inform all the engineering disciplines working on a project of the environmental requirements and to ensure that an Environmental Engineer is able to work as effectively as possible. The primary tools are the environmental engineering design criteria and the environmental compliance matrix. These documents present the requirements in a manner that can be integrated into each discipline's work process.

BIOGRAPHIES

Mónica Villafañe Hormazábal is the chief representative Environmental Engineer for Bechtel's Mining & Metals business based in Santiago, Chile. She is functionally responsible for environmental engineering executed from this office and provides environmental expertise to major copper projects. Mónica develops environmental design criteria and verifies that projects being developed in the Santiago office comply with environmental regulations (compliance matrix) and client environmental requirements. She specializes in environmental regulations and permitting, the environmental impact procedure and its use in the mining industry, and mine closure/post-closure management/procedures. She is also very knowledgeable about the environmental legislation of Chile and, to a lesser extent, the legal regulations of Mexico, Peru, and Argentina.

Mónica has 26 years of experience, including 3 years with Bechtel Chile. She has completed specialization courses and internships in environment, safety, occupational health, and quality. Mónica has presented and published more than 18 technical papers on a wide variety of environmental topics, such as technological alternatives for wastewater management, water treatment, and solid waste management in northern Chile. She assisted the local authorities in Antofagasta, Chile, after the high-magnitude earthquake that affected the city in 1995 and was recognized by its mayor for her contributions. Mónica holds a degree in engineering sciences and is a Civil Engineer from the University of Concepción, Chile.

James A. Murray retired in 2008 from Bechtel's Mining & Metals Global Business Unit after serving as the GBU's chief environmental engineer for approximately 25 years. He continues with M&M as a senior principal engineer in an in-house consulting role supporting pollution control engineering and environmental permit acquisition programs for a range of commodities and technologies. Jim has performed environmental engineering activities in connection with the mining, beneficiation, and smelting of light, heavy, precious, and base metals; cement; fertilizer; coal; coke (metallurgical and petroleum); industrial minerals; petroleum and petrochemical; fossil power; ports; pipelines; tailings dams; hazardous waste treatment, storage, and disposal; solid waste treatment; water treatment, recovery, and discharge permitting; and subway and railroad tunnel ventilation. Before joining Bechtel, Jim was manager for air pollution control at Kaiser Engineers, in California.

Jim holds four US patents and six related foreign patents. He also authored or co-authored 16 technical papers and authored Chapter 15, "Economic Impact of Current Environmental Regulations on Mining," in Mining Environmental Handbook: Effects of Mining on the Environment and American Environmental Controls on Mining. He is a licensed Professional Mechanical Engineer in California and a Diplomate of the American Academy of Environmental Engineers. Jim earned his MS and BS, both in Mechanical Engineering, from Stanford University.


SIMULATION-BASED VALIDATION OF LEAN PLANT CONFIGURATIONS

Issue Date: December 2009

Abstract—Bechtel's Aluminium Centre of Excellence (part of the company's Mining & Metals Global Business Unit) has developed advanced simulation modeling methods and tools that can be used to validate configurations for aluminium smelter plants. This paper illustrates a simulation-based approach to validating the capability of a lean configuration for receiving, storing, and conveying the green, baked, and rodded anodes produced by a smelter's carbon plant to support the operational needs of the potline. ACE used discrete element modeling of process elements to predict the dynamic response of the system to ensure that the proposed lean configuration can meet customer needs during normal, maximum, and upset operating conditions. The results demonstrate that adequate anode inventories could be maintained under all expected operating, maintenance, and transient conditions for the proposed lean carbon plant configuration. Compared with the baseline design offered by leading technology suppliers to the aluminium industry, the result is a safer, leaner (particularly in terms of eliminating waste), more efficient plant for storing and handling carbon anodes. Reduced storage space, a single anode stacker crane, and appropriate anode inventories were targeted and achieved. The measure—a lower life-cycle cost for the optimised lean design—was achieved.

Keywords—aluminium, anode, hydrocarbons, lean design, maintenance, modeling, pallet, paste plant, potline, rodding, simulation, Six Sigma, smelter, smelting

Robert Baxter rfbaxter@bechtel.com
Trevor Bouk tbouk@bechtel.com
Laszlo Tikasz, PhD ltikasz@bechtel.com
Robert I. McCulloch rimccull@bechtel.com

INTRODUCTION

Bechtel's Aluminium Centre of Excellence (ACE) Knowledge Bank is the repository of the company's institutional knowledge, historical information, and lessons learnt on the design and construction of smelter projects. [1] ACE applies knowledge-based teams (headquartered in Montreal, Canada, but deployed to projects worldwide) to train, organise, and assign staff that enhance Bechtel's ability to execute world-class primary aluminium industry projects. To achieve excellence, ACE:

• Performs feasibility studies and leads development of project basic engineering for Bechtel primary aluminium projects globally
• Maintains a cadre of primary aluminium technology specialists to provide state-of-the-art knowledge and leadership to studies and technical support to projects
• Evaluates primary aluminium industry technology-based projects and products
• Develops and maintains relationships with primary aluminium industry leaders in technology supply, technical capability, technical specialty, and technology-based equipment and systems supply

The primary objectives of ACE's mandate to develop a simulation-based approach to validating lean plant configurations were to:

• Deliver certainty of outcome
• Make projects and operating plants lean, reliable, and cost-efficient
• Deliver value by applying simulation knowledge and skills to the configuration aspects of smelter projects

Integral to Bechtel's continuous improvement process, modeling and simulation are used during the basic engineering phase to help designers and engineers visualise the impact of a proposed solution, with all the technical specifications and operational strategies implied, before it is implemented.

© 2009 Bechtel Corporation. All rights reserved.

The challenges to achieving a lean, cost-efficient carbon plant configuration include:

• Capturing and articulating customer needs (as opposed to wants), with proper regard for the system's capability to be reliably operated and safely maintained
• Understanding and respecting industry experience and practices
• Developing and validating the lean plant configuration early, during the project definition phase, and having a high certainty of the outcome
• Identifying and quantifying risks associated with driving a lean configuration
• Demonstrating the capability of the proposed lean configuration to mitigate the identified risks and communicating the results to clearly address customer needs

ABBREVIATIONS, ACRONYMS, AND TERMS

ACE — (Bechtel) Aluminium Centre of Excellence
BDD — basic design data
CAD — computer-aided design
FMEA — failure mode and effect analysis
M&M — (Bechtel) Mining & Metals Global Business Unit
MTBF — mean time between failures
MTTR — mean time to repair
R&D — research and development
VSM — value-stream mapping

BACKGROUND

The configuration, inventory, and operational requirements of carbon plants at aluminium smelters have been targeted by Bechtel's continuous improvement process as areas for lean optimisation. A typical aluminium smelter produces approximately 350,000 metric tons (385,800 tons) per year of molten metal and consumes more than 450 metric tons (almost 500 tons) per day of baked carbon anodes. The carbon plant receives shipload quantities of petroleum coke and liquid pitch that are blended and formed into metric-ton-sized green anode blocks. To drive off and burn volatile hydrocarbons and to improve the physical properties of the anode, green anodes are baked in a furnace at temperatures in excess of 1,000 ºC (1,800 ºF). The baked anodes are then rodded with electrical connections and transferred to the potline for consumption in the reduction process. Blending and creating usable prebaked anodes is a 2-week operation.

Figure 1. Proposed Lean Carbon Plant Configuration (pitch tanks, coke silos, paste plant, green anode cooling, anode handling and storage, rodding, pallet storage area, fume treatment centre, anode bake furnace, bath treatment and storage)

With appropriate countermeasures, the lean configuration must convince the process owner that system stability, product quality, and ultimately the customer's needs can be achieved over all operating conditions. Design changes and variations that occur later, during the execution, startup, and operation phases, destroy value. To mitigate and overcome these challenges, a model-based approach was developed and integrated with the lean improvement process. The resulting enhanced process was used to develop a lean carbon plant configuration (see Figure 1) and to predict and validate the proposed system's capability to meet operational and maintenance needs.

SIMULATION-BASED APPROACH—KEY ELEMENTS FOR SUCCESS

Understanding of Customer Needs
A successful lean design begins by defining customer needs under the following categories and task requirements:

• Specify the Process Data—Process data for the overall smelter and subsystems is captured and presented in the basic design data (BDD) mass balance model (see Figure 2). This model is a standard tool that ACE uses to coherently summarise and communicate key process data to the team. The example shown in Figure 2 is the BDD model used to define and summarise the key process data for the project that is the subject of this paper.

Figure 2. Sample Basic Design Data Model
and operation phases destroy value. a model-based approach was developed and integrated with the lean improvement process. Number 1 69 . SIMULATION-BASED APPROACH— KEY ELEMENTS FOR SUCCESS Understanding of Customer Needs A successful lean design begins by defining customer needs under the following categories and task requirements: • Specify the Process Data—Process data for the overall smelter and subsystems is captured and presented in the basic design data (BDD) mass balance model (see Figure 2). Figure 2. This model is a standard tool A successful lean design begins by defining customer needs.and variations that occur later during the execution. To mitigate and overcome these challenges. The resulting enhanced process was used to develop a lean carbon plant configuration (see Figure 1) and to predict and validate the proposed system’s capability to meet operational maintenance needs. Sample Basic Design Data Model December 2009 • Volume 2. startup.

The process of working with the process owner to develop and document the work design defines the critical customer and supplier interfaces and consumer needs. [2. • Define Questions that the Model Must Answer—With input from the process owner. constructing process maps and flow charts forces alignment between the process owner and the design engineer. It helps to close the information. Ultimately. • Develop and Document the Work Design— A detailed understanding of how a system will be operated. or recover from a transient event. The example shown in Figure 2 is the BDD model used to define and summarise the key process data for the project that is the subject of this paper. Read Schedule Load Closest Butt Pallet Is Full Pallet Available? N D Move to Butt Storage Area (Via Assigned Route) Y Unload Butt Pallet Load Pallet Move to Pot Location (Via Assigned Route) Second Pallet? N Move to New Anode Storage Y Unload Pallet Execute Bath Bin Delivery Logic 5 N Sequence Completed? Y End Figure 3. An example of just one of the subsystems for moving anodes from the carbon plant to the potline is shown in Figure 3. contributes to continuous process improvement. anode production and system maintenance). Constructing these maps and charts provides a solid foundation for the modeling phase. for example. sustain inventory. reliably deliver product. and staffed. this step forces alignment between the process owner and the design engineer. maintained. The questions must be quantitative or binary so that the system’s capability can be determined. Ultimately. Questions should be based on the system’s capability to. concise questions that the dynamic model needs to answer are developed. and is invaluable to the learning process. and planning gaps that typically exist early in a project. data. 
• Construct Process Maps and Flow Charts—These logic visualisation tools are used to capture the inputs resulting from the above activities and to analyze and discuss the process being evaluated (in this case. is an essential input to the lean system design.that ACE uses to coherently summarise and communicate key process data to the team. The flow of the value stream map created during this phase will later help determine where improvements are necessary. Flow Chart—Pallet Delivery 70 Bechtel Technology Journal . 3] Risk Assessment Failure mode and effect analysis (FMEA) is used to identify design and process risks associated with the proposed lean configuration and to quantify these risks in terms of their severity. coupled with the desired organisational culture. It also forms the basis for model validation. • Develop Key Metrics for Success—Simple measures must be developed for each question to determine what output variable is to be measured and what the criteria are for acceptance or failure.

Risk Assessment
Failure mode and effect analysis (FMEA) is used to identify design and process risks associated with the proposed lean configuration and to quantify these risks in terms of their severity, likelihood of occurrence, and detectability. This FMEA activity is performed with the process owner (the operations team in this case). Mitigating actions and countermeasures are then developed, applied to the models, and the results are evaluated with the process owner. Equipment and system reliability-based risks are entered into the models in the form of probabilities assigned to process units as mean time between failures (MTBF) and mean time to repair (MTTR) parameters. Other risks associated with operator error and external factors are identified and quantified with the process owner and entered into the model as worst-case scenarios. The development of countermeasures to mitigate the identified risks requires advanced simulation and modeling skills to understand cause-and-effect relationships and to identify a problem's root cause.

ADVANCED PROCESS MODELING AND SIMULATION CAPABILITY

Recently, process modeling and software simulation of systems have become integral parts of smelter studies and projects. The ACE Knowledge Bank provides codified information captured from an extensive suite of aluminium smelter projects executed by Bechtel (and others), and ACE specialists apply any available tacit knowledge and other relevant information to the initial analysis. As a result, an ever-growing, comprehensive model library has been developed that covers the main process sectors of an aluminium smelter. Though extensive, the model library serves as a collection of building blocks with causes and effects that may guide the creator on building future models; the modeling effort for each application must start with project-specific customer needs and inputs.

The models, which range from mass balance spreadsheets to discrete dynamic simulations, support sensitivity analyses and answer key questions regarding a system's capability to meet customer needs. A process owner's confidence in model inputs and outputs is paramount for the owner's acceptance of a proposed lean configuration. To achieve this desired outcome, each model must be verified and validated before it is used as a predictive tool. Verification, testing of model input parameters and boundary conditions, and validation of model dynamic outputs are essential steps in model development. More importantly, model outputs are extensively tested against the BDD and known baseline performance from similar operating systems before the model is used to predict system performance.

Core Competencies
Key core competencies required to analyze the system performance predicted by the model include:

• In-depth knowledge of a carbon plant's subsystem technologies
• The overall system operational and maintenance requirements
• Knowledge of system break points, sensitivities, and limits

All of these core competencies are essential to ensure the success of the overall modeling activity.

INTEGRATED MODEL-BASED LEAN PROCESS

Combining the best of Six Sigma and lean manufacturing methods is an established and widely accepted improvement process. While Six Sigma reduces variation and shifts the mean to improve the output of a process, Lean focuses on the relentless pursuit of identifying and eliminating waste, which, from the end customer's point of view, adds no value to a product or service. Figure 4 lists Lean's eight forms of waste. Simulation tools complement and enhance improvement results by incorporating system reliability, variability, and risk into the design and optimisation process.

Figure 4. Lean's Eight Forms of Waste (1 Waiting, 2 Over-Production, 3 Rework, 4 Excess Motion, 5 Over-Processing, 6 Excess Inventory, 7 Unused Intellect, 8 Excess Conveyance)

and develop future-state VSM • Control—Develop control strategy. and dynamic value-stream mapping (VSM). structure. process model. cost-efficient plant. Model-Based Lean Approach (After El-Haik and Al-Aomar [4]) reliability. test control plans. implement control plans. simulation experiments. Lean Measures DEFINE Process Analysis. These tools provide a cost-effective. Model Building MEASURE Simulation tools complement and enhance improvement results by incorporating system reliability. Applying the simulation tools intended to corroborate the outcomes of proposed improvements is done in the Analyze. Dynamic simulations also increase confidence that the proposed solution will deliver a lean. and Control phases (see Figure 5). building. Process Optimisation IMPROVE N Satisfied? Tuning Loop Y CONTROL Implement Figure 5. variability. and process flow • Improve—Optimise process parameters. and monitor performance over time Conceptualising. Parameter Study ANALYZE Simulation Tools Lean Techniques. apply lean techniques. and risk into the design and optimisation process. flexible way to reduce and even eliminate scope changes and design variations in the proposed system beyond the project’s early definition phase. Improve. and variables • Measure—Quantify current state. Inputs N Complete? Y Experiment Design. lean measures. and validating the process model are linked to the Define and Measure phases of the Six Sigma process. and risk into the design and optimisation process. identify sources of variation and waste • Analyze—Examine the plan and design. 72 Bechtel Technology Journal . There are five steps to these software simulation exercises: • Define—Characterise project scope. validate improvement.Project Scope. variability.
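The FMEA scoring described under Risk Assessment can be sketched in a few lines of Python: each failure mode is rated for severity, occurrence, and detectability, and the product (a risk priority number) ranks where countermeasures are needed first. The failure modes, 1–10 scales, and scores below are illustrative assumptions, not data from the study.

```python
# Sketch of FMEA risk prioritisation: each failure mode is scored for
# severity, occurrence, and detectability (1-10 scales assumed here)
# and ranked by risk priority number (RPN = S x O x D).
# The failure modes and scores are illustrative, not project data.

def rpn(mode):
    return mode["severity"] * mode["occurrence"] * mode["detectability"]

failure_modes = [
    {"name": "stacker crane drive failure", "severity": 8, "occurrence": 3, "detectability": 4},
    {"name": "conveyor belt tear",          "severity": 6, "occurrence": 4, "detectability": 3},
    {"name": "operator mis-sequencing",     "severity": 5, "occurrence": 2, "detectability": 7},
]

# Rank highest-risk modes first; these drive countermeasure development.
ranked = sorted(failure_modes, key=rpn, reverse=True)
for m in ranked:
    print(f"{m['name']}: RPN = {rpn(m)}")
```

The highest-RPN modes are the ones fed back into the model as reliability parameters or worst-case scenarios.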

MODELING AN IMPROVED CARBON AREA CONFIGURATION

Various simulation modeling tools can be used to validate carbon area configuration and operation. The two we used were:

• Anode baking furnace fire-train model (built and animated in a Microsoft® Excel® spreadsheet format)
• Carbon area operation, a discrete-event model (built using Flexsim™ dynamic simulation software from Flexsim Software Products [www.flexsim.com])

Anode Baking Furnace Fire-Train Logic Validation—Excel-Based Model

We dynamically analyzed the operating sequence, fire movements, and empty pit locations for an anode baking furnace. The objective was to determine if a simulated group of sufficiently cooled, empty pits could be made available for an extended period so that maintenance and repair activities could be performed safely. To address the issue, we created an Excel spreadsheet model built with simple interfaces and integrated detailed operating logic (see Figure 6). We kept all main process parameters adjustable (for example, definition of a fire train, required baking time, location of burners and bridges, fire move direction, fire configuration, cycle times, initial positions, and the types and extents of countermeasures).

Figure 6. Fire Movement Animation—Excel Model

An iterative tuning loop refines the system operating parameters that define the optimum design for the required inventories to be carried and the robustness of the proposed lean configuration. By simulating several months of operation, the model helped us to develop and debug the operational logic. Once the simulated model verification and validation testing was completed, we transferred the operational logic to the more detailed dynamic discrete-event model for the anode baking furnace.

Combined Anode Plant Facilities

We built a dynamic simulation model of the anode plant facilities to validate the basic operating capability of our proposed lean configuration (see Figure 7). The objective of the modeling effort was to identify potential fatal flaws, system weak links, and other operational conditions that could interrupt or delay the process of delivering baked anodes to the rodding shop and rodded anodes to the potroom.

Figure 7. Modeling of Proposed Lean Carbon Plant Facilities (paste plant, G&B anode storage, anode baking furnace, rodding shop, and pallet storage)

We designed the model to simulate the production and subsequent handling and storage of green anodes up to the anode baking furnace. Since the anode baking furnace operation had been previously modeled and proven using other software tools, we incorporated only the summary logic for green anode consumption and baked anode production into the discrete-event software simulation model. The model also simulated the handling and storage of anodes from the baking furnace to the anode rodding shop and then on to the pallet storage area before their removal for use in the potrooms.

We designed the model to answer the following core plant safety, design, and operation questions:

• Could the proposed anode supply system sustain normal potroom operation without interruption?
• Does the proposed storage capability (combined indoor and outdoor) of green and baked anodes support the baking furnace and paste plant operating and maintenance plans as specified in the BDD?
• Could a single automated stacker crane reliably manage the inventory for a common green and baked anode storage facility?
• Could the proposed system recover from a transient event within reasonable time to sustain the potroom demand for carbon without depleting the rodded anode inventory in the pallet storage area?
• Does the rodded anode buffer (pallet storage area) between the reduction area and the rodding shop have sufficient capability to ensure that cooled product was available to sustain the scheduled rodding shop operations?
• Is the system capable of sustaining the carbon supply operation over the long term when equipment reliability and system availability are considered?

We identified what, if any, weak links existed that could interrupt or delay the process, or that could render the system unable to recover from potential transient events in the paste plant, baking furnace, or rodding shop.

Metrics for Success

We used the following metrics to determine whether the core plant safety, design, and operation questions had been correctly and accurately answered and whether the proposed optimised configuration was suitable for the project:

• Feed potroom based on “pull” (i.e., demand); do not interrupt pot-tending activity
• Maintain minimum anode inventory for normal operation, with short, minor dips below 10% during random breakdowns (8 hours for either green or baked anodes) in the anode storage facility
• Do not deplete rodded anode inventory in the pallet storage area during critical events
• Demonstrate the system’s capability to recover within a specified period without disrupting production in the potroom or rodding shop

Model Inputs

Model inputs included the following information:

• The work design (working and down periods) applied to the paste plant, rodding shop, anode baking furnace, and potlines as defined in the project BDD
• Preventive maintenance schedules
• Component breakdowns, implemented by MTBF and MTTR, driven by random functions and based on historical data, applied to all components of the anode handling system (conveyors, elevators, stacker crane, and turn tables) as defined by the project BDD

Model Granularity

Addressing the whole carbon area, we set the model granularity in accordance with the particular interest in the sector studied. We also used modeling blocks and complete sector models with different granularities, for example:

• Green and baked anode storage was an area that presented significant optimisation opportunity; thus, all major components (stacker crane, accumulating conveyors, rotating units, pushers, anode tilters, anode turners, elevators, and turn tables) were captured and modeled individually (see Figure 8).
• Where the cold butt and pallet inventories were monitored, a simple, pallet-based representation of anodes was applied. Color codes marked the status (red = hot butt, yellow = cold butt) and pallet type (grey = rodded anodes) (see Figure 9).

Figure 8. Green and Baked Anode Storage Area—Detailed

Figure 9. Pallet Storage Area—Simplified (butt pallets stored: 274 [hot—203, cold—71]; anode pallets stored: 190)

Model Simulation Validation

Before the model was used to run any production scenarios, it was fully validated using data from the BDD.
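The component-breakdown inputs listed above (random failures parameterised by MTBF and MTTR) can be mimicked with simple random draws. This sketch assumes exponentially distributed run and repair times and illustrative parameter values; the actual study drove breakdowns from BDD data and historical records.

```python
import random

# Sketch: generate a random breakdown history for one component from its
# MTBF/MTTR parameters, then estimate its long-run availability.
# Exponential up/down times and the parameter values are assumptions.

def simulate_availability(mtbf_h, mttr_h, horizon_h, seed=42):
    rng = random.Random(seed)
    t = up_time = 0.0
    while t < horizon_h:
        run = rng.expovariate(1.0 / mtbf_h)     # time to next failure
        repair = rng.expovariate(1.0 / mttr_h)  # repair duration
        up_time += min(run, horizon_h - t)      # clip last run at the horizon
        t += run + repair
    return up_time / horizon_h

est = simulate_availability(mtbf_h=200.0, mttr_h=8.0, horizon_h=8000.0)
theory = 200.0 / (200.0 + 8.0)  # steady-state availability = MTBF/(MTBF+MTTR)
print(f"simulated {est:.3f} vs theoretical {theory:.3f}")
```

Checking the sampled history against the closed-form availability is a small example of the kind of verification step applied before the model was trusted as a predictive tool.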

We tested each section of the model (paste plant and anode cooling, anode storage and handling, rodding shop, anode baking furnace, and pallet storage) individually with inputs from the BDD and constants for availability and reliability. Using these known data inputs, we were able to modify and debug each section until it reliably produced the predicted outputs and mass balance. After all sections were tested, we reintegrated the model and then tested it again under known conditions in order to:

• Test the handshakes between the various sections
• Verify that repeatable results could be obtained against known outputs

Finally, with all data per the BDD, we made a full model run to simulate a year of production. We then compared the outputs over this time period with the predicted 1-year values in the BDD. To improve the certainty of model predictions, we also parametrically checked predicted outputs against the ACE Knowledge Bank. We introduced reliability and statistical variability into the model runs only after the model was fully tested.

Effect of Transient Events in Rodding Shop

In selected model runs, we introduced transient events into the model. For example, we simulated an extended shutdown of the rodding shop. The normal scheduled rodding shop maintenance shift is 8 hours. To test restart problems, we increased the restart time by 8 hours, 16 hours, and 24 hours. As a worst-case scenario, we shut down the rodding shop for 32 straight hours (8 scheduled plus 24 unscheduled). We then observed the impacts on the pallet storage area and the green and baked anode storage area (see Figures 10 and 11, respectively). During these transient events, the green and baked anode storage area was able to accommodate the storage of baked anodes that could not be sent to the rodding shop, without affecting baking furnace production. From these model runs, we concluded that reasonable transient events within the rodding shop have no effect on potroom production.

Figure 10. Changes in Pallet Storage Area Capacity (hot butt, cold butt, and rodded anode pallets; rodding shop down for 32 hours; full recovery in 2 weeks)

Countermeasures

To manage the planned long-term system interruptions that would be needed to perform certain proposed actions—a reduction of the covered storage building area, a reduction of inventory costs, and the elimination of the second stacking crane—we developed countermeasures that included scope changes or actions such as:

• Developing a planned outdoor green anode storage area to accommodate the paste plant’s annual shutdown
• Allocating identified critical spare parts on site
• Taking steps to increase equipment reliability and reduce MTTR
• Configuring the equipment so that handling, storage, and conveyance operations could be performed manually
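The transient-event runs can be illustrated with a shift-by-shift inventory balance: stop rodding production for a fixed outage and track the rodded anode buffer until it refills. All rates and capacities below are illustrative assumptions, not BDD values.

```python
# Sketch of a transient-event run: a rodding shop outage drains the rodded
# anode buffer while the potroom keeps pulling anodes, then the buffer
# refills once the shop restarts. Rates and capacities are illustrative.

SHIFT_H = 8
POTROOM_DEMAND = 90      # rodded anodes consumed per shift
RODDING_RATE = 120       # rodded anodes produced per shift when running
BUFFER_CAPACITY = 2000   # pallet storage area, in anodes

def run(outage_start, outage_shifts, n_shifts=120):
    buffer = BUFFER_CAPACITY  # start with a full rodded anode buffer
    history = []
    for shift in range(n_shifts):
        running = not (outage_start <= shift < outage_start + outage_shifts)
        if running:
            buffer += RODDING_RATE
        buffer -= POTROOM_DEMAND               # potroom "pull" continues
        buffer = max(0, min(buffer, BUFFER_CAPACITY))
        history.append(buffer)
    return history

# 32-hour outage = 4 consecutive shifts without rodding production.
h = run(outage_start=10, outage_shifts=32 // SHIFT_H)
assert min(h) > 0  # potroom demand is never unmet in this scenario
recovery = next(i for i, v in enumerate(h) if i > 13 and v == BUFFER_CAPACITY)
print("buffer low point:", min(h), "anodes; full again at shift", recovery)
```

With these assumed rates the buffer dips but never empties, and the refill margin (production minus demand) sets the recovery time, which is the behaviour the model runs above were designed to quantify.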

Results

Based on the proposed aluminium smelter plant design, the dynamic model results predicted that all defined metrics for success could be met. For the lean carbon plant configuration discussed in this paper, simulation submodels of the facilities used for anode fabrication, anode baking, rodding, anode storage, and pallet storage were linked by an overall anode handling, storage, and conveyance system model.

Figure 11. Changes in Green and Baked Anode Storage Area (baked anodes accumulate in storage; empty rows indoors; full recovery in 2 weeks)

To investigate the availability of existing countermeasures outside the plant (and possibly the company) for emergencies having a low probability of occurrence but a high severity, we elevated risk to a corporate level. This assessment included considering countermeasures such as supplying anodes from sister plants or from outside suppliers.

• Recommendation—We recommend that the proposed lean carbon plant configuration, along with the identified countermeasures, be implemented for the aluminium smelter project.
• Benefits—The benefits of adopting a validated lean carbon plant configuration—compared with using conventional designs—to handle, store, and convey green, baked, and rodded anodes include:
  – Significant reductions in green and baked anode inventories
  – An anode storage configuration sharing a common area for green and baked anodes and serviced by one stacker crane
  – Reduced conveyor lengths and handling operations so that customer and supplier connections are direct and short
  – Reduced maintenance costs as a result of simplifying and reducing the handling and conveyance equipment
• Projected Cost Savings—Based on the data generated by the study team, the cost savings that would be realised by using the proposed optimised lean anode storage facility, as validated by the discrete-event modeling methodology described in this paper, would be approximately US$2 million in capital cost savings and US$400 thousand in operating cost savings (expressed in terms of present value).

CONCLUSIONS

Integrating simulation-based tools with Lean and Six Sigma quality improvement methods is an effective approach to validating and improving lean configurations, including process design, work flow, and waste elimination (redundant material handling equipment and storage capacity, for example). Typically, with owners expressing the need for competitive life-cycle costs, scope stability during project execution, and reliable startup and operational performance, simulation may be considered an essential design component.

To implement our simulation-based approach to validating the lean carbon plant configuration, we used the following general methodology:

• Scenarios were performed under projected normal, transient, and extreme operating conditions.
• Reliability and other risks were applied as worst-case scenarios.
• The system dynamic response was recorded.
• Findings were fed back to designers and process owners and then analyzed against the lean design criteria.
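The operating savings quoted above are expressed in present-value terms; converting a recurring annual saving to a present value follows the ordinary annuity formula. The discount rate, horizon, and annual amount below are illustrative assumptions, not figures from the study, which reports only the resulting totals.

```python
# Sketch: present value of a recurring annual operating saving.
# PV = sum over t = 1..n of A / (1 + r)^t  (ordinary annuity)
# Rate, horizon, and annual saving are illustrative assumptions.

def present_value(annual_saving, rate, years):
    return sum(annual_saving / (1.0 + rate) ** t for t in range(1, years + 1))

pv = present_value(annual_saving=50_000, rate=0.08, years=12)
print(f"PV of US$50k/yr over 12 yr at 8%: US${pv:,.0f}")
```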

A simulation-based approach delivers confidence to the process owners and project team that:

• The proposed solution will meet or exceed expectations.
• Uncertainties, operational risks, and worst-case scenarios are identified, quantified, and mitigated for the proposed system.
• The simulation software model is fully verified and validated before it is used as a predictive tool.
• Value can be delivered in the form of scope reductions.
• Alignment with the process owner is maintained at each step to ensure that customer needs are understood, captured, and integrated.

In any simulation effort, it is essential to follow the key elements for success identified near the beginning of this paper, to ensure that customer needs are fully defined and captured in a manner that can be transferred into the simulation model and measured against defined metrics for success. An in-depth knowledge of subsystem technologies, overall system operational and maintenance requirements, and system break points, sensitivities, and limits is critical to the modeling success. These core competencies must be applied throughout the simulation model development, testing, and output analysis.

Ongoing collaboration with the process owner is a learning process by itself and serves as a solid foundation for the modeling phase; thereby, it builds confidence in the model results. This is an essential step for acceptance of the proposed lean configuration. A substantial productivity and cost-savings benefit of this early and ongoing collaboration is that it can reveal lessons learnt that may not otherwise be readily evident. Typically, these lessons remain overlooked until they are rediscovered after the plant is built.

Taking the holistic approach presented in this paper to developing and validating a lean, cost-efficient configuration early enough in the project development cycle results in significantly added value, including:

• Reduced waste
• Lower capital and operating costs
• Improved productivity
• Assured reliability
• Certainty of outcome
• Alignment with customer needs

Our advanced simulation and 3D modeling methods and tools enabled us to create, develop, test, and validate a sequence of operations that add value to the customer with the least amount of waste. We targeted and achieved reduced storage space, a single anode stacker crane, and appropriate green, baked, and rodded anode inventories. We further demonstrated that adequate inventories could be achieved under all operating, maintenance, and transient operating conditions for the proposed lean carbon plant configuration.

REFERENCES

[1] R. Baxter, L. Tikasz, and R.I. McCulloch, “Global Delivery of Solutions to the Aluminium Industry,” Proceedings of the 45th International Conference of Metallurgists (Aluminium 2006), COM 2006, MetSoc of CIM, Montreal, Quebec, Canada, October 1–4, 2006, pp. 31–44, access via http://www.metsoc.org (contents/1-894475-65-8.pdf).

[2] L. Tikasz, “Process Supervision and Decision Support Performed by Task-Oriented Program Modules,” presented at the 4th International Conference on Industrial Automation, Montreal, Quebec, Canada, June 9–11, 2003.

[3] E. Turban and J.E. Aronson, “Decision Support Systems and Intelligent Systems,” 6th Edition, Prentice Hall, Upper Saddle River, NJ, 2001, access via http://cwx.prenhall.com/pubbooks/turban2/.

[4] B. El-Haik and R. Al-Aomar, “Simulation-Based Lean Six-Sigma and Design for Six-Sigma,” Wiley-Interscience, John Wiley & Sons, Inc., Hoboken, NJ, 2006, access via http://www.wiley.com/WileyTitle/productCd-0471694908.html.

TRADEMARKS

Flexsim is a trademark of Flexsim Software Products Inc. Microsoft and Excel are registered trademarks of Microsoft Corporation in the United States and/or other countries.

BIOGRAPHIES

Robert Baxter is a technology manager and technical specialist in Bechtel’s Mining & Metals Aluminium Centre of Excellence in Montreal, Quebec, Canada. Bob has 26 years of experience in the mining and metals industry, including 20 years of experience in aluminum electrolysis. He is currently the carbon area specialist providing technical and process expertise for the development of the carbon facilities required to support the aluminum electrolysis process. Most recently, he was the carbon lead on the Ras Az Zawr aluminum smelter FEED study and the lead carbon area engineer on the Fjarðaál project in Iceland, where he was responsible for the overall design and layout of the carbon facilities as well as involved with the onsite construction and startup. He provides expertise in the development of lean plant designs, materials handling, carbon plants, and environmental air emission control systems for aluminum smelter development projects and is a recognized specialist in smelter air emission controls and alumina handling systems. Before joining Bechtel, Bob was senior technical manager for Hoogovens Technical Services, where he was responsible for the technical development and execution of lump-sum, turnkey projects for the carbon and reduction areas of aluminum smelters. Bob holds an MAppSc in Management of Technology from the University of Waterloo and a BS in Mechanical Engineering from Lakehead University, both in Ontario, Canada.

Trevor Bouk is a technical specialist in Bechtel’s Mining & Metals Aluminium Centre of Excellence in Montreal, Quebec, Canada, with 17 years of experience in the mining and metals industry. Trevor provides support for aluminum smelter development studies and projects, as well as in smelter expansion and upgrade studies. Before joining Bechtel, Trevor was involved with the design and supply of automated process equipment to the aluminum industry, ranging from single machines at existing plants to multiple systems on large greenfield projects, both in anode rodding shops and casthouses. He also supported the installation, startup, and early operation of many jobs. Trevor has also performed lead roles on multiple projects and provided technical and process troubleshooting services to operating smelters. Trevor has a BE in Mechanical Engineering from McMaster University in Hamilton, Ontario, Canada, and is a licensed Professional Engineer in that province.

Laszlo Tikasz, PhD, is the senior specialist for Bechtel’s Mining & Metals Aluminium Centre of Excellence in Montreal, Quebec, Canada. He has 29 years of experience in advanced aluminum process modeling and is an expert on aluminum production and transformation, process modeling, and simulation. Laszlo has developed flexible process models and studies to provide information needed to support engineering and managerial decisions on aluminum smelter designs, upgrades, and expansions. Before joining Bechtel, Laszlo worked in applied research and industrial relations at the University of Quebec and the Hungarian Aluminium R&D Center. Laszlo holds a PhD in Metallurgical Engineering from the University of Miskolc, Hungary. His Doctor of Technology in Process Control and MSc degrees in Electrical Engineering and Science Teaching are from the Technical University of Budapest, Hungary.

Robert I. McCulloch is manager of Bechtel’s Mining & Metals Aluminium Centre of Excellence in Montreal, Quebec, Canada. Bob has over 40 years of experience in engineering and project management with Bechtel. He has global responsibility for aluminum smelter technology projects and studies, including reduction technology, casting facilities, and related infrastructure or systems, and is also responsible for the execution of aluminum industry projects and studies assigned to Bechtel’s Montreal office. Bob is one of Bechtel’s technology leads for the Ras Az Zawr and Kitimat aluminum smelter studies. His experience includes projects in the Canadian Arctic and management assignments in Montreal, Toronto, Massena, and Santiago, Chile. He recently returned to Canada after several years in Australia, where he had lead project management roles on two major projects. Bob is a member of the Association of Professional Engineers of Ontario and was previously a member of the Canadian Standards Association Committee on Structural Steel and a corporate representative supporting the Center for Cold Oceans Research and Engineering in Newfoundland, Canada. Bob holds a BEng in Civil Engineering from McGill University, Montreal, Quebec, Canada, and is a licensed Professional Engineer in that province.


IMPROVING THE HYDRAULIC DESIGN FOR BASE METAL CONCENTRATOR PLANTS

Issue Date: December 2009

José M. Adriasola (jmadrias@bechtel.com); Robert H. Janssen, PhD (rjanssen@bechtel.com); Fred A. Locher, PhD (falocher@bechtel.com); Jon M. Berkoe (jberkoe@bechtel.com); and Sergio A. Zamorano Ulloa (sazamora@bechtel.com)

Abstract—Mine owners—particularly copper mine owners—are seeking the economic benefits realized from adopting larger-scale mineral concentrator plants with increased process capacity. One of the consequences of the move toward increased capacity is the need to review design approaches and criteria for slurry handling. This paper presents a number of hydraulic design approaches adopted by Bechtel’s Mining & Metals Global Business Unit as base metal concentrator plants have grown larger and slurry flow volume has increased, with an eye toward accommodating larger flows than have been handled with previous designs. The discussion centers on the following key design elements: (1) combining high velocity supercritical flows, (2) bends in launders, (3) minimum launder slope for coarse solids transport, (4) flow conditions approaching sampler boxes and drop pipes, and (5) use of computational fluid dynamics modeling for distribution boxes. Design concepts, example applications, and potential added value of the approaches are addressed, and guideline plots for preliminary design are provided. All examples presented are drawn from copper concentrator design.

Keywords—base metal, computational fluid dynamics (CFD), concentrate, concentrator, copper, hydraulic design, hydraulic jump, launder, sediment transport, slurry, slurry handling, supercritical flow

© 2009 Bechtel Corporation. All rights reserved.

INTRODUCTION

Overview—Slurry Handling in a Copper Concentrator

A single line in a modern copper concentrator plant typically receives about 75,000 metric tons (82,673 tons) per day of ore at about 1% copper content to produce a concentrate containing approximately 30% copper. The remaining ore is disposed of as tailings in engineered storage facilities. Since the mineral ore and gangue (commercially worthless mineral matter found with the valuable metallic minerals) pass through different processing steps as a slurry, the hydraulic design requirements for properly handling large slurry flows are critical to the overall plant design. Individual ore-water slurry flows can be up to 15,000 m3/hr (66,000 gpm) and comprise solids concentrations of up to 70% by weight. The buildings containing these operations are large (17,000–35,000 m2 [183,000–377,000 ft2]).

To allow the mineral slurry to be transferred from one process step to another by gravity in pipes or open channels (launders), concentrators are designed on a downward slope from the initial grinding process to the final concentrate and tailings facilities, and significant capital savings in civil-structural works can be achieved by optimizing this slope and the overall elevation difference in the plant. Pumping raises the slurry into tanks at specific stages of the process.

A key requirement, then, in designing a slurry handling system is to not only design for a maximum flow but also consider in the design a minimum flow rate less than half that of the maximum. To illustrate this point, a plant with a nominal capacity of 75,000 metric tons (82,673 tons) per day may be required to process more than 120,000 metric tons (132,277 tons) per day of softer ore during the initial several years of mine life. The same plant must also function effectively when turned down to process only 56,250 metric tons (62,000 tons) per day during an outage at the mine or of a mine conveyor. In performing process design, the responsible engineer must consider a design range that may extend from 160% of nominal design flow down to 75%, depending on conditions. This aspect of design is an important consideration for transporting solids and is discussed later in this paper.
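The quoted turndown range follows directly from the tonnages, as this quick arithmetic check shows (using the standard conversion 1 metric ton = 1.10231 short tons):

```python
# Check of the design range quoted in the introduction.
# 1 metric ton = 1.10231 short tons (standard conversion factor).

NOMINAL = 75_000       # metric tons per day, nominal capacity
PEAK = 120_000         # soft-ore processing in early mine life
TURNDOWN = 56_250      # during a mine or mine-conveyor outage

print(f"peak     = {PEAK / NOMINAL:.0%} of nominal")      # upper design bound
print(f"turndown = {TURNDOWN / NOMINAL:.0%} of nominal")  # lower design bound
print(f"nominal  = {NOMINAL * 1.10231:,.0f} short tons/day")
```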

With these changes, it has been necessary to review the hydraulic design criteria and methods used in the recent past and develop an updated toolkit for hydraulic design reflecting the new process requirements.

ABBREVIATIONS, ACRONYMS, AND TERMS

2D: two-dimensional
3D: three-dimensional
CAD: computer-aided design
CFD: computational fluid dynamics
SAG: semi-autogenous grinding
USACE: US Army Corps of Engineers
VL: limit velocity

Trend Toward Larger Concentrator Plants, and Resulting Design Challenges

Figure 1 illustrates the continuing trend of constructing larger copper concentrator plants to benefit from economies of scale. This trend is represented by the increasing power applied to single semi-autogenous grinding (SAG) mills. [1]

Figure 1. Copper Concentrator Size Trend, Represented by Applied SAG Mill Power (MW per SAG mill, 1950–2010) [1]

One consequence of the move toward increased capacity is that slurry-handling equipment must now be able to convey much larger flows than handled with previous designs. Not only must a system be able to accommodate maximum flow, it must meet the key requirement of transporting solids particles at all times. Generally, the hydraulic design of in-plant gravity slurry handling systems is based on Newtonian hydraulic engineering principles regarding turbulence, as is common in most hydraulic engineering applications. High velocities are therefore required in pipelines and launders to avoid deposition and blockage, as described in Discussion 3 below. The required high velocities result in launders for slurry handling that are typically designed for a supercritical flow regime. Unique design considerations come into play with this regime, particularly when combining flow streams (see Discussion 1), at launder bends (see Discussion 2), and at launder interfaces with other equipment (see Discussion 4).

Some of the sequential process steps occur in parallel trains, which necessitates combining some slurry flows and splitting others. Some of the splits require control of both gross volume and solids volume, as well as a uniform slurry particle size distribution in the split(s). Designing distributors and slurry transfer boxes for high turbulence is important to ensure that solids remain suspended and uniformly distributed (see Discussion 5).

UPDATING THE HYDRAULIC DESIGN TOOLKIT: KEY DESIGN ELEMENTS

Discussion 1—Combining High Velocity Supercritical Flows

General

Given the supercritical flows in the launders, combining them in slurry systems poses a particularly difficult design challenge. Launders must be designed so that a hydraulic jump (a transition from supercritical to subcritical flow with an attendant increase in the flow depth) does not occur and cause overtopping of the sides of the launder.

To help visualize the difficulties, an analogy can be made between subcritical and supercritical flows in launders and subsonic and supersonic flows in gas dynamics. The surface waves generated in supercritical open channel flows can be thought of as a visual representation of the shock wave patterns in gas dynamics. In supercritical launder flows, the analogy with gas dynamics is the transition from supersonic to subsonic flow with a shock wave, with the change in pressure across the shock corresponding to the change in flow depth. Just as the design of supersonic aircraft (recalling the Concorde) mandated a shape significantly different from that of the subsonic Boeing 747 or Airbus 380, the design of bends and junctions for supercritical flow must consider free-surface waves and shapes very different from those of subcritical flows. Waves and splashing in supercritical flows require increases in the height of launder sides and also affect the overall layout of the system.

Launder Junctions at Grade

Joining launders at the same elevation with a straightforward tee connection simply does not work. Prior to today's modern systems with large flows, it was common to join launders converging at 90 degrees (or nearly at right angles) by dropping the combining flow vertically into the main collection launder with a drop box. Ninety-degree changes in launder alignment were also handled with drop boxes because of unsatisfactory experience with bends in launders. Not only did this approach require a difference in elevation that translated into taller buildings, more elevated equipment, and longer structural support columns throughout the facility, but it also created problems with splashing and overtopping of the main launder when the vertical stream joined the supercritical flow in the collection launder. In one Bechtel project, eliminating the vertical changes in elevation by suitable design of launder connections at grade reduced the height of the entire mill building by more than 600 millimeters (24 inches), resulting in a cost savings of about US$600,000.

Fortunately, there are designs that can be used successfully, with guidance from the literature on supercritical junctions for flood control channels by the US Army Corps of Engineers (USACE). [2] For 90-degree connections, Bechtel has successfully designed 90-degree bends in the joining launder upstream from the junction so that the two flows join as parallel streams. For this to work, the flow depths and velocities of the two streams should be nearly equal. Generally, this condition is satisfied only at the design flow rate; however, if the velocity and depth requirement is met at the design flow, for all practical purposes it remains satisfied over a range of flows encompassing the plant operating conditions. The full range of flows should be considered in the design.

If the angle is less than 90 degrees, the junction downstream from the two joining streams acts as an expansion. In addition, if the flow in the joining launder is less than the design flow rate, Bechtel has had to design launder contractions in the combining launder to achieve the correct flow conditions for the two joining streams. The design of supercritical flow expansions and contractions is well documented in the literature. [2] (These designs are complex and are not discussed here.) Expansions should be gradual, not abrupt, to avoid flow separation from the walls of the transition. A separated flow leads to excessive cross-waves in the launder, poor flow distribution in the launder downstream from the expansion, and zones in which sanding followed by plugging can occur. Expansions in supercritical flow are generally the least susceptible to unacceptable cross-waves. Particular attention needs to be paid to cross-waves in contractions, since the cross-waves are very dependent on the approach flow Froude number (a dimensionless parameter that characterizes the importance of gravitational effects such as waves in open channel flow), which often varies significantly with approach flow velocity and depth. A properly designed junction operates satisfactorily without launder covers.

Vertical Combining of Flows

In-plant arrangements incorporate multiple slurry launders and pipes. Many in-plant processes are composed of several individual units that deliver separate slurry streams downstream or upstream in the production line. For example, grinding area hydrocyclones (devices that classify, separate, and sort particles in a liquid suspension based on particle densities) work in parallel, each one discharging its overflow at discrete points into a launder to be transported to the flotation process. The underflow from the hydrocyclones is returned for further grinding. The challenge for Bechtel is to improve the hydraulic design of this equipment, which traditionally has been designed based on old "rules of thumb" that result in conservative over-sizing and/or poor hydraulic performance in large, modern plants.

The vertical combining of high velocity supercritical flows is a typical case of rapidly varied flow; therefore, an approach using the momentum equation is needed. A poorly designed supercritical combining flow can result in the formation of a hydraulic jump, which can lead to spillage and uncontrolled overflows. A sound hydraulic design leads to improved hydraulic performance under these critical conditions.

Figure 2 is a graphical depiction of vertical combined flows and the main variables involved. For a defined control volume, the momentum equation can be stated as follows: [3]

ρ1·Q1²/A1 + ρ1·g·η1·A1 + (ρ2·Q2²/A2)·cos(φ2 − θ) + W·sin θ = ρ3·Q3²/A3 + ρ3·g·η3·A3 + τ0·Pm·L    (1)

Where:
ρ   = fluid density in a section (designated by numerical subscript)
Q   = flow in a section (designated by numerical subscript)
A   = cross-section area in a section (designated by numerical subscript)
g   = gravity constant
η   = vertical distance between flow surface and mass center of cross-section area (designated by numerical subscript)
φ2  = slope of incoming flow jet with respect to the horizon
θ   = slope of main collection launder
W   = weight of fluid contained in control volume
τ0  = mean shear stress in collection launder
Pm  = mean wetted perimeter in control volume
L   = length of control volume

Figure 2. General Sketch of Vertical Combined Flows Showing Main Variables Involved [3] (incoming launder with ρ2, V2, Q2 entering at angle φ2; collection launder of constant width B or diameter D on slope θ; sections 1 and 3 with depths y1 and y3; control volume of length L)
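The momentum balance above can be sketched numerically. The snippet below is a minimal illustration, not the project design tool: the function names are mine, it assumes a rectangular collection launder with equal density in all sections, and it neglects the weight and wall-shear terms (W·sin θ and τ0·Pm·L), which are small for a short control volume. It bisects Equation 1 for the supercritical depth y3 below the downstream critical depth; when no such root exists, the upstream momentum is insufficient and the depth-increase case of Figure 3(c), with a hydraulic jump, is to be expected.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def momentum_deficit(y3, B, y1, Q1, Q2, V2, phi2, theta, rho=1000.0):
    """Left side minus right side of Equation 1 for a rectangular launder.

    Illustrative simplifications: equal density in all sections, and the
    W*sin(theta) and tau0*Pm*L terms are dropped.
    """
    Q3 = Q1 + Q2
    A1, A3 = B * y1, B * y3
    A2 = Q2 / V2  # cross-section area of the incoming jet
    left = (rho * Q1 ** 2 / A1
            + rho * G * (y1 / 2.0) * A1  # hydrostatic term, eta1 = y1/2
            + (rho * Q2 ** 2 / A2) * math.cos(phi2 - theta))
    right = rho * Q3 ** 2 / A3 + rho * G * (y3 / 2.0) * A3
    return left - right

def solve_y3(B, y1, Q1, Q2, V2, phi2, theta):
    """Supercritical root y3 of Equation 1, or None if the momentum supply
    is insufficient (section 1 then influenced by downstream conditions)."""
    Q3 = Q1 + Q2
    yc = (Q3 ** 2 / (G * B ** 2)) ** (1.0 / 3.0)  # critical depth at section 3
    f = lambda y: momentum_deficit(y, B, y1, Q1, Q2, V2, phi2, theta)
    if f(yc) < 0.0:
        return None  # no supercritical solution: expect a hydraulic jump
    lo, hi = 1e-4, yc  # f rises from very negative at y3 -> 0 up to f(yc) >= 0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if f(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For example, a fast, shallow approach flow (y1 = 0.2 m) in a 0.8 m wide launder yields a supercritical y3, while the same flows with a deeper, slower approach (y1 = 0.3 m) return None, signaling the hydraulic jump case.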

In solving Equation 1, Q1, Q2, ρ1, ρ2, θ, and φ2 are known data, as are B1 (launder width in section 1) and B3 (launder width in section 3) for rectangular launders, or D1 (launder diameter in section 1) and D3 (launder diameter in section 3) for U-shaped or circular launders. V2 is calculated separately, considering special arrangements in each case (for this purpose, gradually varied flow calculations are usually required). Typically, the y3 value (the flow depth immediately downstream of the confluence) is of interest to ensure that the launder has been designed with enough freeboard and to check that the flow remains supercritical. Especially when dealing with slurry flows, it is important to have enough freeboard and to maintain the supercritical condition: maintaining the supercritical condition is essential to keep the sediments moving, while subcritical flow is associated with the risk of sanding. It is also important to check the maximum velocity in section 3 to control local abrasion. These hydraulic conditions can be established by solving Equation 1.

Figure 3 illustrates three possible solutions for the vertical combining of flows. Figures 3(a) and 3(b) show combining of flows with enough momentum to ensure that section 1 is not influenced by downstream conditions; the solution is found without varying y1 (flow depth in section 1, which is equal to y1a—the depth of flow approaching section 1—in these cases), taking into account just the approaching depth y1a.

In Figure 3(a), the confluence of these flows produces the increase of y3 to reach the momentum equilibrium. Downstream of section 3, the flow tends to accelerate to reach the normal depth, so section 3 presents decelerated flow conditions. In Figure 3(b), the confluence produces the decrease of y3 to reach the momentum equilibrium. Downstream of section 3, the flow tends to decelerate to reach the normal depth, so section 3 presents accelerated flow conditions. These cases represent the preferred hydraulic design condition.

In cases different from those above, the left side of Equation 1 (momentum in the direction of the flow) is not enough to reach the momentum equilibrium, and it is not possible to reach the momentum equilibrium without changing the depth y1. Hence, y1 increases, generating subcritical conditions. This means that there is a third possible water surface profile, as shown in Figure 3(c), which shows combining of flows with section 1 influenced by downstream conditions. The subcritical conditions force a hydraulic jump to occur upstream at a distance X that depends mostly on the slope of the collecting launder. From section 3 and upstream until the hydraulic jump, decelerated flow conditions are presented. This condition should be avoided where possible.

Figure 3. Three Possible Solutions for Vertical Combining of Flows [3]: (a) section 1 not influenced by downstream conditions, section 3 decelerated flow; (b) section 1 not influenced by downstream conditions, section 3 accelerated flow; (c) section 1 influenced by downstream conditions, with decelerated flow from section 3 upstream to the hydraulic jump
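Where a jump does form, its downstream depth sets the freeboard requirement. A first estimate can be made with the classical Bélanger sequent-depth relation for a rectangular channel — a clear-water approximation offered here only as a sketch (the slurry density cancels from the depth ratio); it is not the full momentum analysis of Equation 1:

```python
import math

G = 9.81  # m/s^2

def froude(V, y):
    """Froude number for a rectangular section of depth y."""
    return V / math.sqrt(G * y)

def sequent_depth(y1, V1):
    """Subcritical depth downstream of a hydraulic jump in a rectangular
    channel, from the Belanger momentum relation:
        y2/y1 = 0.5 * (sqrt(1 + 8*F1**2) - 1)
    Clear-water approximation for preliminary freeboard checks."""
    F1 = froude(V1, y1)
    if F1 <= 1.0:
        raise ValueError("approach flow must be supercritical (F1 > 1)")
    return 0.5 * y1 * (math.sqrt(1.0 + 8.0 * F1 ** 2) - 1.0)
```

For instance, a 0.15 m deep approach flow at 4 m/s (F1 about 3.3) jumps to roughly 0.63 m, a fourfold depth increase that would overtop a launder sized only for the supercritical depth.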

Besides the detailed approach presented above, Bechtel has developed practical design guideline tools for the plant designer to use during the first steps of studies and design, when design changes are common, to check space availability (three-dimensional [3D] models) and estimate costs.

Design Curves for Rectangular Concrete Launder

Figures 4 and 5 show design curves for a rectangular concrete launder that can be used as a first approach to evaluate whether flow conditions upstream of the confluence are affected by the incoming flow and to provide a preliminary estimate of flow depths for launder sizing. The data points were generated based on numerical solution of Equation 1, and the curves were fitted to these data points. The example presented is for a receiving launder with a 1% slope and an incoming flow velocity of 3.0 m/s (9.8 ft/s) entering at an angle φ2 = 45°, with ρ1 = ρ2. Flow rates Q1 and Q2 vary from 0.025 to 2.000 m³/s, and launder width B varies from 0.50 to 1.25 m (1.64 to 4.10 ft) (same width for section 1 and section 3).

Figure 4. Design Curves of y3/B Versus y1a/B for Various Q1/Q2 Ratios (slope = 1%, V2 = 3.0 m/s, φ2 = 45°, ρ1 = ρ2, 0.025 < Q1 and Q2 < 2.000 m³/s, 0.50 m < B1 = B3 < 1.25 m)

Discussion 2—Bends in Launders

Introduction

Any change in launder alignment with supercritical flow causes shock waves in the channel that can lead to splashing and overtopping of the sides of the launder. Launder bends are a case in point. Failure to recognize the problems with supercritical flow in launder bend design has resulted in extensive use of launder covers and continual operation and maintenance problems that have cost the owner both time and money. Adhering to the methods outlined in this paper makes it possible to design bends that offer assurance that launder covers are not necessary, the bend will not sand up, and the launder sides will not be overtopped by splashing.

General Characteristics of Supercritical Flow in Launders

Figure 6 shows a definition sketch for supercritical flow in a launder bend. The approach flow velocity is V1, the depth is y1, and the approach flow Froude number is F1 = V1/(gy1)^1/2, where g is the acceleration of gravity. When the flow enters the bend, two shock waves form due to the change in direction. A positive wave forms at the outside of the bend, and a negative wave forms at the inside; β represents the angle that the shock wave makes with the tangent to the circular curve. The height of the waves cannot be determined with Manning's equation (an equation used to analyze open channel flow) or any of the methods used for subcritical flow.

Figure 5. Regions Influenced and Not Influenced by Downstream Conditions (y1/B versus y3/B; slope = 1%, V2 = 3.0 m/s, φ2 = 45°, ρ1 = ρ2, 0.025 < Q1 and Q2 < 2.000 m³/s, 0.50 m < B1 = B3 < 1.25 m)

As illustrated in Figure 6, the first maximum from the beginning of the bend occurs at C at an angle θ0. The first minimum on the inside of the bend occurs at D, directly across the channel from the maximum located at C. The pattern of successive maximums and minimums then repeats along the length of the bend, at angular intervals of θ0.

Figure 6. Supercritical Flow Conditions in Bends (definition sketch: approach velocity V1, launder width B, shock-wave angle β1, and alternating wave maximums and minimums along the outside and inside walls at θ0, 2θ0, 3θ0, 4θ0)

Figure 7. Layout for Compound Curves for Supercritical Flow Conditions in Bends (inlet transition curve, central curve of radius rc, and exit transition curve, sections A through G)

Analogous to a highway curve that is banked or superelevated, the free surface in a launder bend is superelevated. In subcritical flow, the rise in liquid surface at the outside of the bend in a launder of width B is given by Equation 2.

Δy = V²·B / (2·g·rc)    (2)

where V is the average velocity of the flow approaching the bend and rc is the radius of curvature of the bend.

In supercritical flow, this rise in liquid level is twice the value for subcritical flow in bends. Equally important, the decrease in liquid level on the inside of the bend is twice as much as for subcritical flow. This low liquid level can lead to a significant decrease in the ability of the liquid to transport the coarser fraction of the slurry, resulting in sanding at the bend and potential plugging of the launder. Sanding or plugging in launders requires consideration of both the flow depth (for example, in bends) and the flow velocity. As a first approximation, the minimum flow depth on the inside of a bend should not be less than 100 millimeters (4 inches).

As illustrated in Figure 7, the effects of the cross-waves in a bend can be reduced by providing a compound curve that consists of an inlet transition curve with radius 2rc, a central curve with radius rc, and an exit transition curve of radius 2rc. [4] With a compound curve, the superelevation in the bend for a rectangular-shaped launder is the same as for the subcritical flow given by Equation 2. The use of a compound curve is required where space limitations restrict the radius of curvature or the flow velocities in the launder are high. Process systems often require the compound curve, whereas tailings launders may be designed with a large radius of curvature. The design of bends in launders using the methods presented here is restricted to bends with a radius-of-curvature to launder-width ratio of rc/B = 10. Too sharp a bend leads to separation of the flow from the inside of the bend and potential hydraulic jumps at the bend, with the potential for overtopping the launder sides.

Discussion 3—Minimum Launder Slope for Coarse Solids Transport

Launder Slope and Limit Velocity

Key design parameters for slurry launders are related to the minimum flow velocity needed to transport solids particles and the minimum longitudinal launder slope needed to ensure that this velocity is maintained across the full range of input conditions.

Slurry Transport Mechanisms

Slurry transport mechanisms can be divided into three fractions [5]:
• In the pseudo-homogeneous fraction, the fine particles are uniformly suspended in the liquid to form a pseudo-homogeneous equivalent fluid.
• In the heterogeneous fraction, the intermediate-size particles are transported in suspension with a vertical concentration gradient.
• In the stratified fraction, the coarse particles are transported as bed load by sliding or bouncing along the bottom of the channel.

The particle size distribution for the slurry is required for a complete analysis, but for preliminary design, the maximum size of the pseudo-homogeneous fraction can be taken as 0.15 millimeter (0.006 inch). The methods given by Green, Lamb, and Taylor [6] can be used to estimate the limit velocity to maintain the solids particles in suspension. The limit velocity required to maintain the stratified fraction in motion is discussed next.

Limit Velocity for Stratified Fraction Transport

Incipient motion of non-cohesive sediment particles in alluvial channels is typically evaluated based on the Shields number (a non-dimensional expression of the relative mobility of a sediment particle), discussed extensively in the sediment transport literature. [7] However, transport of solids in launders differs from transport in alluvial channels in two main respects:
• Limit velocity in launders is the velocity required to avoid stationary deposits. In alluvial channels, incipient motion is the start of motion of the deposited bed.
• Bed load transport in launders takes place over a rigid bed. In alluvial channels, bed load transport is over an alluvial bed.

Methods traditionally used to estimate the minimum velocity in launders are generally based on the premise that solids are transported as suspended load. However, when applied to transport of coarse solids, these methods result in excessively high limit velocities. An alternative design approach is presented here that allows finer solids to be transported as suspended load and coarser solids as bed load.

Experimental studies conducted at the University of Newcastle upon Tyne in the UK in the early 1970s [8] investigated the incipient motion and transport of solids in pipes and rigid bed channels. Nalluri and Kithsiri [9] extended these investigations to develop empirical relations for determining the minimum velocity in a rigid bed channel to avoid stationary deposits of solids.
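Equation 2 and the supercritical doubling rule are simple enough to script. The sketch below is illustrative only: the function names are mine, and the 100 mm minimum-inside-depth check encodes the rule of thumb stated above as an assumption, not a universal criterion.

```python
G = 9.81  # m/s^2

def superelevation(V, B, rc, supercritical=False):
    """Rise of the liquid surface at the outside of a launder bend
    (Equation 2):
        dy = V**2 * B / (2 * G * rc)
    In a simple circular bend carrying supercritical flow, the rise at the
    outside (and the drawdown at the inside) is twice this value; a
    properly proportioned compound curve restores the subcritical value."""
    dy = V ** 2 * B / (2.0 * G * rc)
    return 2.0 * dy if supercritical else dy

def inside_depth_ok(y_approach, V, B, rc, supercritical=True, y_min=0.10):
    """Check the rule of thumb that the flow depth at the inside of the
    bend should not fall below about 100 mm (y_min, in metres)."""
    return (y_approach - superelevation(V, B, rc, supercritical)) >= y_min
```

For V = 4 m/s, B = 0.5 m, and rc = 5 m (rc/B = 10), the supercritical drawdown is about 0.16 m, twice the 0.08 m subcritical value; an approach depth of 0.25 m then fails the inside-depth check in a simple bend but passes with a compound curve.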

The limit velocity for their methods is established at the limit condition of a stationary bed deposit. It is good design practice to avoid this condition by designing launders with a minimum design velocity at least 10% greater than the limit velocity.

Using the equivalent fluid to transport the heterogeneous and stratified fractions, numerical simulations are conducted to generate curves of limit velocities for suspended and bed load transport for the range of solids particle sizes plotted in Figure 8. Also identified in Figure 8 are regions of different sediment transport mechanisms.

Referring to Figure 8, flow velocities required to transport solids particles in suspension increase rapidly as the particle size increases. For example, 10 millimeter (0.4 inch) solids particles require a velocity of 6 m/s (about 20 ft/s) to be transported in suspension. This is often impractical for launder design, and such high velocities cause high wear on the launder lining. It can be seen that particles smaller than approximately 3 millimeters (0.1 inch) will always be transported in suspension, while larger particles can be transported by either bed load or suspended load, depending on the actual launder flow velocity. Therefore, when coarse solids are present, it is more desirable to transport the larger particles as bed load, since the limit velocity required for bed load transport is lower than the limit velocity required to suspend the particles. The launder design limit velocity (VL) occurs at the intersection of the suspended and bed load limit velocity curves.

Figure 8. Plot of Transport Regimes for Various Particle Sizes (flow velocity, m/s, versus solids particle size, mm; the suspended transport and bed load transport limit velocity curves bound regions of no transport, bed load transport, and suspended transport; the launder design limit VL lies at the intersection of the two curves)

Case Study: Launder Design Based on Foregoing Methodology

The methodology just presented in Discussion 3 is used here to assess the hydraulic conditions in a ball mill feed launder in a copper concentrator. The process design data for the launder is given in Table 1. The analysis presented is for a rectangular launder with an internal width of 500 millimeters (about 20 inches). Based on the fact that 20% of the solids material is finer than 0.15 millimeter (0.006 inch), the relative density of the pseudo-homogeneous fraction is evaluated to be 1.26.

Table 1. Ball Mill Feed Launder Process Design Data

Process Parameter             | Design Value
Minimum Discharge             | 2,000 m³/hr
Solids Concentration          | 70% by weight
Solids Relative Density       | 2.8
Particle Size Distribution:
  d20                         | 0.15 mm
  d50                         | 1.30 mm
  dmax                        | 40 mm
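The location of VL at the curve intersection, and the 10% design margin, can be sketched as below. The two limit-velocity curves here are hypothetical placeholder shapes chosen only to illustrate the procedure — the real curves come from the numerical simulations described above for the actual slurry — and the function names are mine.

```python
def design_limit_velocity(v_susp, v_bed, d_lo=0.1, d_hi=50.0, iters=80):
    """Particle size d_star (mm) where the suspended-transport and
    bed-load limit-velocity curves cross, and the velocity VL there.
    v_susp(d) and v_bed(d) are callables; v_susp must rise through
    v_bed over [d_lo, d_hi], as in Figure 8."""
    f = lambda d: v_susp(d) - v_bed(d)
    lo, hi = d_lo, d_hi
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if f(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    d_star = 0.5 * (lo + hi)
    return d_star, v_bed(d_star)

# Hypothetical curve shapes, for illustration only:
v_susp = lambda d: 1.5 * d ** 0.5      # rises quickly with particle size, m/s
v_bed = lambda d: 3.0 + 0.02 * d       # nearly flat, m/s

d_star, VL = design_limit_velocity(v_susp, v_bed)
v_design = 1.10 * VL                   # 10% margin above the limit velocity
```

With these placeholder curves the crossing falls near 4 mm and VL near 3.1 m/s; the design minimum velocity is then 10% above VL, mirroring the practice described above.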

For this launder, the design limit velocity VL is approximately 3.5 m/s (about 11.5 ft/s). Should the flow velocity be lower than this limit velocity, stationary solids deposits would form, potentially leading to blockage of the launder. Designing this launder for a minimum velocity of 4 m/s (13 ft/s) would result in particles up to about 4.5 millimeters (0.2 inch) being transported in suspension, while coarser particles would be transported as bed load. Thus, the limit condition of stationary deposits is avoided.

Discussion 4—Flow Conditions Approaching Sampler Boxes and Drop Pipes

Sampler Boxes

Sampler boxes (see Figure 9) are typically provided by specialized vendors who are not involved in the overall project and therefore may not know the specific hydraulic conditions of the flow approaching the sampler box. Integrating these samplers into the plant layout without integrating the hydraulic design can lead to inadequate sampling performance.

As described previously, launders and pipes with free surface flow (in this case, approaching sampler boxes) are usually designed for supercritical conditions in view of solids transportation issues (see Discussion 3). Sampler approach flow conditions are typically calculated using the Manning or Darcy-Weisbach formulas when Newtonian fluids are involved and using gradually varied flow equations when needed, considering the design flow, the launder slope, and the geometric characteristics of the launder.

When the launder reaches the sampler box, the supercritical flow encounters a hydraulic control characterized by subcritical conditions (the upstream chamber of the sampler box). The flow conditions necessarily change from supercritical to subcritical, producing a hydraulic jump. This is similar to the case involving a hydraulic jump that is presented in the Vertical Combining of Flows subtopic of Discussion 1. Depending on the momentum provided by the approaching flow, the hydraulic jump is either into the sampler box itself (upstream chamber) or into the launder (closer to or farther from the sampler box inlet). Use of the momentum equation can help determine whether the hydraulic jump is located outside or inside the box. If the hydraulic jump is into the launder, it is necessary to determine the distance from the box inlet to the point upstream where it occurs, as well as the respective dimensional profiles. Engineering design can then ensure enough freeboard to avoid spillages. To obtain enough turbulence to avoid settling of particles or segregation of particle size distribution, a short distance is best.

Figure 9. Sketch of Typical Sampler Box (launder or pipe inlet with supercritical free surface flow, hydraulic jump, subcritical hydraulic control, sampler operation level (1), internal level (2), outlet weir, and underflow opening; operation level (2) is constant for a given flow, and liquid level (1) is equal to level (2) plus head losses caused by the flow through the opening)
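As a sketch of the approach-flow calculation mentioned above, the snippet below solves Manning's equation for the normal depth of a rectangular launder and checks that the approach to the box is supercritical. The roughness n = 0.013 and the example numbers are assumptions for illustration; Darcy-Weisbach or gradually varied flow computations would replace this in detailed design.

```python
import math

G = 9.81  # m/s^2

def normal_depth(Q, b, S, n=0.013):
    """Normal depth (m) of a rectangular channel from Manning's equation
    (SI form): Q = (1/n) * A * R**(2/3) * S**0.5, solved by bisection.
    n = 0.013 is an assumed roughness for concrete."""
    def capacity(y):
        A = b * y
        R = A / (b + 2.0 * y)  # hydraulic radius
        return A * R ** (2.0 / 3.0) * math.sqrt(S) / n
    lo, hi = 1e-6, 10.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if capacity(mid) < Q:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def is_supercritical(Q, b, y):
    """Froude number check for the approach flow."""
    V = Q / (b * y)
    return V / math.sqrt(G * y) > 1.0
```

For example, 0.5 m³/s in a 0.5 m wide launder on a 2% slope runs at a normal depth of roughly 0.33 m with a Froude number above 1, i.e., supercritical as required for solids transport.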

Drop Pipes

Drop pipes are typically used for vertical discharges. The hydraulic characteristics of the flow approaching the vertical discharge are calculated in the same way as described previously for sampler boxes or by using typical equations for full pipe flow. In such cases, the hydraulic control could be the entrance of the vertical pipe, modeled as an orifice (see Figure 10). Considering the orifice diameter and assuming that there is full pipe flow immediately upstream of the vertical discharge, head losses can be calculated for the design flow required. When a hydraulic jump occurs upstream of the orifice, it is necessary to determine its location and to verify if the flow condition affects solids transport. Once again, the momentum equation helps determine whether the hydraulic jump is located right above the orifice or upstream into the pipe.

Figure 10. Sketch of Typical Drop Pipe (supercritical free surface pipe flow (1), hydraulic jump, full pipe flow (2), and orifice at the entrance to the drop pipe)

Discussion 5—Use of Computational Fluid Dynamics Modeling for Distribution Boxes

The Need for Computational Fluid Dynamics

Computational fluid dynamics (CFD) is used to model fluid dynamics in three dimensions. CFD eliminates the need for many simplifying assumptions because the physical domain is replicated in the form of a computerized "prototype." CFD software employs computer-aided design (CAD) tools to construct a computational grid (i.e., mesh), advanced numerical solution techniques, and state-of-the-art graphic visualization. A typical CFD simulation begins with a CAD rendering of the geometry, adds physical and fluid properties, and then prescribes natural system boundary conditions.

Distribution boxes are used in many areas of a concentrator to distribute liquid and solids evenly among several process trains without incurring excess deposition (sanding) and successfully passing material of considerable size variation without segregation. Non-uniform flow distribution of slurry—the solid particle component in particular—downstream of the distribution box can result in less-than-optimum production yields. CFD can be used during detail design to analyze expected performance of a distribution box and to optimize its design.

The following CFD analysis was based on a specific case study from an operating plant in Malaysia. In this plant, the slurry flow spills off the launder into the middle of the distribution box and impinges on the back wall of the center column, which induces a nonuniform velocity upon entrance into the SAG mill discharge sump, which acts as a distribution box among four ball mills. Onsite plant engineer reports indicated that equal pulp distribution (volume, solids, and particle size) was not being obtained from the distribution box. The distribution box baffling and size were selected to create mixing that, at minimum, would tend to even out the distribution of the coarse fraction to the ball mills. A limited amount of data taken at the plant was made available for comparison with the CFD model. Site observations were also reported to provide additional guidance and feedback.

CFD Model Description

The CFD model was set up using design drawings obtained from the plant. The model was composed of an upstream, open-channel launder through which slurry flows with increasing velocity due to a downward slope. In this case, the launder is also curved. The flow drops into the midsection of the box, where it continues under the baffle created by the center column and then exits via four openings into launders feeding the four ball mills.
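The orifice head-loss calculation mentioned above can be sketched in a few lines. The discharge coefficient Cd = 0.61 is a typical sharp-edged orifice value assumed for illustration; vendor or test data for the actual entrance geometry would be used in design.

```python
import math

G = 9.81  # m/s^2

def orifice_head(Q, d, Cd=0.61):
    """Head (m) over an orifice of diameter d (m) passing flow Q (m^3/s),
    from the orifice equation Q = Cd * A * sqrt(2 * G * h).
    Cd = 0.61 is an assumed sharp-edged value."""
    A = math.pi * d ** 2 / 4.0
    return (Q / (Cd * A)) ** 2 / (2.0 * G)
```

Passing 0.3 m³/s through a 300 mm orifice implies roughly 2.5 m of head over the opening under these assumptions; because head varies with the square of flow, doubling the flow quadruples the required head.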

The distribution box model includes a portion of the launder that starts from a plane located sufficiently upstream of the box to prescribe a velocity boundary condition calculated from the launder model. The effect of sand buildup on the bottom of the box was accounted for in a simplified manner by shortening the section below the four outlets. Sand buildup is an important operational issue for the slurry, and it is believed that the manner in which buildup occurs can affect slurry flow distribution.

Figure 11 shows the resulting surface geometry of the distribution box model. Each of the four outlets is identified for reference.

Figure 11. Distribution Box Model (outlets FR-L = front-left, FR-R = front-right, BA-L = back-left, BA-R = back-right)

The solids passing through the distribution box are not uniform in size. The SAG screen undersize distribution, as provided by plant operators, is shown in Figure 12.

Figure 12. Particle Size Distribution for Solids Plot (cumulative distribution, %, versus measured particle size, microns)

CFD Model Results

Figure 13 illustrates the results of the CFD model of the distribution box. Figure 13(a) shows an iso-surface of fluid volume fraction, which represents the calculated two-dimensional (2D) position of the fluid surface in the box in a 3D view. The progression of the flow inside the box is shown in this plot, and the transient, sloshing behavior of the fluid is realistically captured. The relatively uniform level indicates that the box is performing its function of mixing and distributing flow effectively to all four outlets. However, if the CFD solution is examined closely, the non-uniform aspects of the flow distribution to the outlets can be observed.

Figure 13(b) shows flow-stream lines started from the launder exit, which are calculated in the CFD post-processor. Figure 13(b) also shows how the back wall of the mid-section, where most of the turbulent mixing seems to occur, causes the flow to deflect toward the front region of the box, and that the resulting distribution of stream lines is weighted more heavily toward the front outlets (FR-L and FR-R in Figure 11). This effect has been observed in practice, as plant operators reportedly have had to "throttle" the front outlets (using valves) to achieve a more balanced flow rate between the front and back outlets.
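The front/back imbalance reported by the operators can be quantified from particle-trace counts with a few lines of post-processing. The counts below are hypothetical placeholders, not the plant data, and the function names are illustrative.

```python
def outlet_percentages(counts):
    """Per-outlet share (%) of particle traces reaching each outlet.
    counts: dict like {"FR-L": n, ...} from Lagrangian particle tracking."""
    total = sum(counts.values())
    return {k: 100.0 * v / total for k, v in counts.items()}

def front_back_bias(counts):
    """Percentage-point excess of the two front outlets over a perfectly
    even 50/50 front/back split."""
    pct = outlet_percentages(counts)
    return (pct["FR-L"] + pct["FR-R"]) - 50.0

# Hypothetical trace counts for illustration only:
traces = {"FR-L": 310, "FR-R": 290, "BA-L": 210, "BA-R": 190}
```

With these placeholder counts the front outlets capture 60% of the traces, a 10-percentage-point bias of the kind the stream-line plot suggests.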

Number 1 93 . Size representations of 75. % 30 25 20 15 10 5 0 75 300 850 1.700 3.(a) Iso-Surface Profile of Slurry in Box (b) Flow-Stream Lines Inside Distributor Box Started at Launder Outlet The CFD solution replicates observed non-uniform aspects of the flow. 1 A mathematical model used to compute trajectories of a large number of particles to describe their transport and dispersion in a medium December 2009 • Volume 2. In this case. although the distribution for smaller particles is more uniform. The differences between the left and right sides are likely caused by the upstream curvature of the launder. and the corresponding results are shown in Figure 14. slightly more solids flow out the left side from both the front and the back of the box. 1. 300. This behavior was in agreement with observations at the plant.360 Total Particle Size. CFD Model of Distribution Box — Results 40 35 Particles Through Outlet.360 microns were used for the calculations. and 3.700. It appears that the trends are consistent between larger and smaller particles. The statistics were computed for the four outlets as shown in Figure 11. 850. microns Front-Left Front-Right Back-Left Back-Right Figure 14. Figure 13. Distribution of Solid Particles from Distributor Box Outlets Based on CFD Predictions The stochastic method of tracking particles is generally in good agreement with observations made at the site. These results augment the earlier fluid flow model results showing a higher percentage of flow to the front outlets. Lagrangian model 1 particle traces were executed in the CFD program for a range of particle sizes approximating the range of measured particles reported from site data.
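The particle tracing described here was performed in the CFD program itself. As a rough illustration of the stochastic (Lagrangian) approach, the sketch below implements a minimal discrete-random-walk tracer in pure Python: each particle is advected by an assumed mean flow, perturbed by random turbulent fluctuations, and settles at its Stokes terminal velocity. The geometry, velocities, and fluid properties are invented for illustration and are not the study's values.

```python
import random

def stokes_settling_velocity(d_microns, rho_p=2650.0, rho_f=1000.0, mu=1.0e-3):
    """Terminal settling velocity (m/s) of a sphere in the Stokes regime."""
    d = d_microns * 1.0e-6
    return (rho_p - rho_f) * 9.81 * d ** 2 / (18.0 * mu)

def fraction_reaching_front(d_microns, n=500, u_mean=0.5, u_rms=0.2,
                            dt=0.01, box_length=2.0, depth=0.5, seed=1):
    """Track n particles through a notional box; a particle 'reaches the
    front' if it is carried the box length before settling to the bottom."""
    rng = random.Random(seed)
    w_s = stokes_settling_velocity(d_microns)
    front = 0
    for _ in range(n):
        x, y = 0.0, depth                # released at mid-depth of the inlet
        while y > 0.0 and x < box_length:
            x += (u_mean + rng.gauss(0.0, u_rms)) * dt  # advection + eddies
            y -= w_s * dt + rng.gauss(0.0, u_rms) * dt  # settling + mixing
        if x >= box_length:
            front += 1
    return front / n

# Coarse particles settle quickly and drop out near the inlet; fine
# particles stay suspended and are carried farther, consistent with the
# more uniform distribution reported for the smaller size classes.
f_fine = fraction_reaching_front(75)
f_coarse = fraction_reaching_front(3360)
```

With these assumed parameters, essentially all 75 micron particles stay suspended over the notional box length while the 3,360 micron particles settle out, which is the qualitative trend the study reports between size classes.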

emc. and A.html. 113.” Mining Engineering.asce. 2005/proceedings2005cont. McNown and P. May 15–17. Inc.CONCLUSIONS ith major mining companies continuing to plan for new or expanded mining and ore concentrating projects. ERDC/CHL TR-07-10. 3. Bechtel’s engineering specialists have developed and applied detailed methods of hydraulic analysis to meet the challenge of designing ever larger slurry handling systems for the August 1978. Boston. pp. pp. • T.htm. [8] [9] ADDITIONAL READING Additional information sources used to develop this paper include: • C.cgi?8700317. XVII (abstract). 111. pp. • T. Valiani and V. 912–916. ASCE. Paper E4.’” Journal of Hydraulic Engineering. 1976..erdc. Chile. and R. p.” Journal of Hydraulic ?fullText=a+new+launder+design+procedure. Canada. • Fluent. No. G. Novak and C. E4-45–56. Fort Collins. 3rd Edition. • J. ?aModele=afficheN&cpsidt=4474106.W.” 1st Edition. CO.” Invited Plenary Lecture. “Extended Data on Sediment Transport in Rigid Bed Rectangular Channels. October 21–24. R. http://www. “Correlation of Sediment Incipient Motion and Deposition in Pipes and Open Channels With Fixed Smooth Beds. The net result was a 600 millimeter (24 inch) reduction in overall building height at a savings of US$600. Sturm. pp. “New Developments in the Production of Non-Ferrous Metals. 1992. [2] [3] [4] 94 Bechtel Technology Journal . New York. Imrie.N. 871–875. 2001. http://chl. pp.cgi?0527792. C.B. 1985.A. Smith. http://cedb. access via http://nla. 117–119. October 2005. 401–411. WWWdisplay. Dresden. LLC. 5. access via cgi/ cgi/WWWdisplay. Reprint No.P. Viña del Mar.congresohidraulica. Addie. “Slurry Transport Using Centrifugal Pumps.cgi?8501175.R.M. The Ohio State University. Lin. September 18–21. A. pp.” Proceedings of the XIX Congreso Chileno de Ingeniería Hidráulica. 2009 (on CD). State University of Iowa Reprints in index2. No. Taylor. Caleffi. “Simplified Design of Contractions in Supercritical Flow. 
“A New Launder Design 9/5/4/ERDC-CHL%20TR-07-10.S. ASCE.D. 2005. Lamb.asce. Wilson.M.D. “Sediment Concentration and Fall Velocity. reducing the length of structural steel columns and the amount of elevated equipment needed. May” Report No. The approaches in this updated toolkit for hydraulic design are applicable to all stages of the project design life cycle. P. access via http://www. 10.” University of Saskatchewan Printing Services. A. “Open Channel Hydraulics. September 2007. Stockstill. Green. Kithsiri. Golden.” Springer Science+Business Media. see http://www. OH. Simons and F. J. No.onemine. Vol. 131. Saskatoon. and savings in civil-structural capital costs realized by avoiding the use of overly conservative designs. Proceedings of European Metallurgical Conference (EMC 2005).pdf. Clift.” Water Resources Publications. Sturm. March 1987. 425–427. [5] W K. Benefits afforded by the effective use of these methods include improved hydraulic performance and reliability. optimized system design. [6] [7] Bechtel’s engineering specialists have developed and applied detailed methods of hydraulic analysis to meet the challenge of designing ever larger slurry handling systems. H. ASCE. 1999.wrpllc.U. “Lateral Inflow in Supercritical Flow. from preliminary process plant layout design to system commissioning and operation. No. “Sediment Transport Technology – Water and Sediment Dynamics. access via http://cat..pdf. 6. “Closure of ‘Simplified Design of Contractions in Supercritical Flow.W. REFERENCES [1] W. Sellgren. pp. http://www. 2006. 851–856. “Brief Analysis of Shallow Water Equations Suitability to Numerically Simulate Supercritical Flow in Sharp Bends.W. 30. • T. “Hydraulic Structures.L. the effective design of launder connections in a mill building enabled the use of lower slopes and lower launder side-walls. Nalluri. Vol. CO.000 in total installed cost. http://www. In one example of how benefits may be derived.gdmb. Sentürk. 
109/1952.” Proceedings of the 2nd Midwestern Conference in Fluid Mechanics. McGraw-Hill Book Company. US Army Corps of Engineers. Sturm. Nalluri and M.” Proceedings of the Third International Conference on the Hydraulic Transport of Solids in Pipes (Hydrotransport 3).” IAHR Journal of Hydraulic Research. Fluent 5 Software User’s Manual. “Confluencia Vertical de Flujos Supercríticos de Alta Velocidad – Modelación Preliminar y Análisis. Inc. D. D. http://cedb. MA. Coastal and Hydraulics Laboratory.bib-an40253556. Vol.” Journal of Hydraulic Engineering.usace.

BIOGRAPHIES

José M. Adriasola, a civil engineer with 10 years of experience in hydraulic engineering, joined Bechtel in 2008. He currently serves as a technical specialist with the Mining & Metals Global Business Unit and participates in hydraulic and hydrologic engineering analysis and design on multiple projects. José's technical knowledge and skills have been applied to hydropower and mining projects in Chile, including the Ralco hydropower plant and the Los Pelambres, Escondida, and Los Bronces copper concentrator plants. He has taught courses in fluid mechanics and urban hydrology and hydraulics at Universidad de los Andes (Santiago, Chile) since 2005, where he has also supervised and performed research related to the efficient use of water in urban environments. His particular research interests are sediment and slurry transport. José is a member of the leadership team for the Hydraulics Structures Committee of the International Association for Hydro-Environment Engineering and Research and is chair of the organizing committee for the International Junior Researcher and Engineer Workshop on Hydraulic Structures, to be held in Edinburgh, Scotland, in May 2010. His professional memberships include the Chilean Society of Hydraulic Engineering, the Colegio de Ingenieros de Chile, and the International Association for Hydro-Environment Engineering and Research. José has an MS in Hydraulic Engineering from Pontificia Universidad Católica de Chile, Santiago.

Robert H. Janssen, PhD, is a principal engineer for Bechtel Chile, with 20 years of experience in conceptual and detailed design, engineering management, study management, technical oversight, and construction of complex hydraulic and water resources infrastructure for worldwide power, water, hydroelectric, mining, highway, rail, airport, petrochemical, and wastewater projects. Robert is active in international hydraulics organizations, where he has published more than a dozen technical papers. Robert has a PhD in Civil Engineering – Environmental Water Resources from the University of California at Berkeley, an MSc in Civil Engineering – Hydraulics from the University of Michigan, Ann Arbor, and a BSc (Hons) in Engineering Science – Civil Engineering from Durham University, in the UK.

Fred A. Locher, PhD, is a principal engineer in Bechtel's Geotechnical and Hydraulic Engineering Services Group, with over 35 years of experience in the hydraulic design of structures, including spillways, energy dissipators, flood control channels, pumps, and tanks. His experience includes analysis of hydraulic transients in process systems for the mining, petrochemical, and power industries; analyses of non-Newtonian flows in pipelines and open channels; evaluation of scour and sediment deposition in structures and conveyance systems; and methodology and design guidelines for slurry transport systems. Fred served on the Intake Design Committee for development of the ANSI/HI 9.8 American National Standard for Pump Intake Design published in 1998 and is the author of more than 30 publications in technical journals and conference proceedings. He was a member of ASCE's Task Committee on Standards in Hydraulics and received the ASCE Freeman Award, 1967–1968, and the Karl Emil Hilgard Hydraulic Prize in 1975. Fred is a member of the American Society of Civil Engineers and the International Association for Hydro-Environment Engineering and Research, and he is a licensed Professional Civil Engineer in California. Fred received a PhD in Hydraulics and Fluid Mechanics and an MS in Mechanics and Hydraulics, both from the University of Iowa, Iowa City, and a BS in Civil Engineering from Michigan Technological University, Houghton.

Jon M. Berkoe is a senior principal engineer and manager for Bechtel Systems & Infrastructure, Inc.'s, Advanced Simulation and Analysis Group. He oversees a team of 20 technical specialists in the fields of CFD, finite element structural analysis, virtual reality, and dynamic simulation in support of Bechtel projects across all business sectors. Jon is an innovative team leader with industry-recognized expertise in the fields of CFD and heat transfer. During his 21-year career with Bechtel, he has pioneered the use of advanced engineering simulation on large, complex projects encompassing a wide range of challenging technical issues and complex physical conditions. Jon has presented and published numerous papers for a wide variety of industry meetings and received several prominent industry and company awards, including the National Academy of Engineering's Gilbreth Lecture Award, the Society of Mining Engineers' Henry Krumb Lecturer Award, and three Bechtel Outstanding Technical Paper awards. Jon holds an MS and a BS in Mechanical Engineering from the Massachusetts Institute of Technology, Cambridge, and is a licensed Professional Mechanical Engineer in California.

Sergio A. Zamorano Ulloa joined Bechtel in 2008 and is a mechanical engineer with the Xtrata Mechanical Group. He has 4 years of engineering experience in the field of mining. In 2008, he was selected as principal mechanical engineer on Bechtel's Los Bronces project, near Santiago, Chile. Previously, Sergio was a mechanical engineer for Vector Chile Limitada, Innovatec YNC Ltda., Sergio Contreras y Asoc., and Inconsult, all in Santiago, Chile; in these assignments, he assisted with and prepared mainly hydraulic calculations concerning channel-free surface fluid flow. He is affiliated with the Colegio de Ingenieros de Chile. Sergio co-authored the short paper "On the Use of the Weibull and the Normal Cumulative Probability Models in Structural Design," published online in January 2007 for Elsevier in Materials & Design 28 (2007) 2496–2499. Sergio received both Mechanical Civil Engineer and Material Civil Engineer degrees from the Universidad de Chile, Santiago.

Oil, Gas & Chemicals Technology Papers

TECHNOLOGY PAPERS

 99  Plot Layout and Design for Air Recirculation in LNG Plants
     Philip Diwakar; Zhengcai Ye, PhD; Ramachandra Tekumalla; David Messersmith; and Satish Gandhi, PhD, ConocoPhillips Company

109  Wastewater Treatment—A Process Overview and the Role of Chemicals
     Kanchan Ganguly and Asim De

119  Electrical System Studies for Large Projects Executed at Multiple Engineering Centres
     Rajesh Narayan Athiyarath

Sabine Pass LNG Terminal: A tanker with a cargo of liquid energy is moored at the Sabine Pass liquefied natural gas receiving terminal in southern Louisiana.


PLOT LAYOUT AND DESIGN FOR AIR RECIRCULATION IN LNG PLANTS

Issue Date: December 2009

Philip Diwakar (pmdiwaka@bechtel.com); Zhengcai Ye, PhD (zye@bechtel.com); Ramachandra Tekumalla (rptekuma@bechtel.com); David Messersmith (dmessers@bechtel.com); and Satish Gandhi, PhD, ConocoPhillips Company (satish.gandhi@conocophillips.com)

Abstract—The disposition of waste heat in liquefied natural gas (LNG) plants has become increasingly important as train sizes approach 5 million tons per annum (MTPA). Even with state-of-the-art equipment and thermally efficient designs employing combined cycle power and process integration, a large multi-train facility releases a significant amount of heat. A major contributor to this problem is the large number of fin-fan, air-cooled heat exchangers (ACHEs) typically used to cool the gas to liquid phase. Since ACHEs reject heat to the atmosphere, their effect on local ambient temperature and wind conditions can contribute to loss of LNG production, particularly from the impact on the turbine drivers of refrigeration compressors. To develop and optimize plant layouts that minimize the effects of air recirculation, Bechtel uses computational fluid dynamics (CFD) models. This paper discusses typical air recirculation issues and mitigation measures and presents case studies.

Keywords—air flow, air-cooled heat exchanger (ACHE), computational fluid dynamics (CFD), crosswind, data comparison, heat exchanger, liquefied natural gas (LNG), mitigation, multi-train, propane condenser, simulation methodology, skirts, stacks, temperature contamination, temperature rise, terrain, virtual reality, wind rose

INTRODUCTION

As the typical train size in liquefied natural gas (LNG) plants has grown from 2 million tons per annum (MTPA) in 1990 to 4.5 MTPA today, with trains of more than 5 MTPA capacity being considered by several projects, the disposition of waste heat has become increasingly important. Thus, facility design economics will be driven not only by normal equipment and operating costs, but also by the need to optimize design margin with overall facility arrangement and capacity requirements. Figure 1 illustrates how this waste heat can affect a facility's ability to produce at relative design capacities for all potential ambient conditions.

The production rate at LNG plants can be very sensitive to the inlet temperatures of compressor turbine drivers and plant air-cooling equipment. The inlet temperatures of this equipment depend on the local wind conditions, terrain, and climate. And solving this problem will only become more critical in the future.

© 2009 Bechtel Corporation. All rights reserved.

Figure 1. Impact of Plant Capacity on Plant Footprint and Heat Release (plant area, m², versus capacity, MTPA, fitted by a linear trend, y = Ax + B; heat released, MW, versus capacity, MTPA, fitted by a power law, y = Cx^0.8029; reported correlation coefficients R = 0.982 and R = 0.9833)
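Figure 1's heat-release trend is a power law whose exponent can be recovered by least squares in log-log space. The sketch below shows that procedure; the capacity points and the prefactor C = 20 MW are synthetic stand-ins for illustration, since the paper reports only the exponent and R values.

```python
import numpy as np

# Capacities spanning the 2-5.5 MTPA range discussed in the text, with heat
# release generated from an assumed prefactor C = 20 (illustrative only).
capacity_mtpa = np.array([2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0, 5.5])
heat_mw = 20.0 * capacity_mtpa ** 0.8029

# A power law y = C * x**b is linear in log space:
# ln(y) = ln(C) + b * ln(x), so a degree-1 fit recovers b and C.
slope, intercept = np.polyfit(np.log(capacity_mtpa), np.log(heat_mw), 1)
exponent = slope               # recovers ~0.8029
prefactor = np.exp(intercept)  # recovers ~20
```

The same `np.polyfit` call applied directly to area-versus-capacity data (no logarithms) would recover the linear trend y = Ax + B shown for plant footprint.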

ABBREVIATIONS, ACRONYMS, AND TERMS

ACHE  air-cooled heat exchanger
CFD   computational fluid dynamics
LNG   liquefied natural gas
MTPA  million tons per annum

THE IMPACT OF AIR-COOLED HEAT EXCHANGERS

LNG plants typically use a large number of fin-fan, air-cooled heat exchangers (ACHEs) to cool natural gas to the liquid phase. These ACHEs use large axial-flow fans to blow air over finned tubes, thereby removing heat and condensing the process gas. Since ACHEs reject heat to the atmosphere, they can affect the local ambient temperature and wind conditions. These effects can lead to loss of LNG production, particularly from their impact on the turbine drivers of the refrigeration compressors used to maintain stored LNG in the liquid phase.

The performance guarantee of each condenser unit is based on the availability of sufficient air at design temperatures at its inlet face. Most manufacturers design and test an ACHE based on a single-bay setup. However, the performance of each bay in a multiple-bay design may differ from that of a single bay because of variations in air flow distribution and hot air recirculation. The total number of condenser units required for a given duty, space considerations, and the equipment layout plan govern the actual construction of bays in the field. As train size increases, the potential impact of ACHEs on plant performance becomes even more significant and further establishes the value of performing CFD studies to evaluate ways of mitigating the effects of air recirculation through plant layout and other measures.

To design a plant layout for optimal production, an engineer must have a good understanding of the phenomenon of air recirculation within the facility. Bechtel uses computational fluid dynamics (CFD) for this purpose. CFD enables air recirculation impacts to be analyzed for greenfield as well as brownfield sites. Bechtel performs CFD analyses at two scales: macro, to evaluate overall siting requirements such as orientation and spacing, and micro, to examine the air velocity profiles around individual pieces of equipment, as well as the air recirculation caused by exhaust from plant equipment. This approach is useful in evaluating a site's impact on process performance and in developing design margins for equipment. To demonstrate how CFD is applied in designing LNG plant layouts, this paper studies the interaction of a multi-train LNG facility with its environment. Validation of the modeling methodology is also described.

Figure 2. CFD Analysis Work Process (Data collection/LNG input sheets: terrain data, wind roses, drawings with dimensions [top and elevation views], component specifications, fan curve characteristics, flow rates, and heat requirements. Pre-processing: generate the grid; mesh the propane condenser, air cooler, and compressor as individual components, since separate components reduce the time required for geometry or component modifications. Solution: merge all components into one grid file, set appropriate boundary conditions, and run calculations for individual wind directions. Post-processing: obtain contour plots and animation sequences showing air recirculation patterns. If the temperature rise is more than 2 °C above ambient, add stacks and skirts to individual components and study the effects of horizontal and vertical skirts of varying length, and of longer stacks, on hot air recirculation patterns; otherwise, present temperature contours, animation, and write-up; benchmark and validate if data are available; and use lessons learned to reduce costs, risks, and environmental effects.)
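The decision loop in this work process (run each wind direction; if the predicted inlet temperature rise exceeds 2 °C above ambient, add mitigation such as skirts or stacks and re-run) can be sketched as follows. The `run_cfd_case` function and all of its numbers are hypothetical stand-ins; in practice that step is a full CFD solve.

```python
def run_cfd_case(wind_dir_deg, mitigations):
    """Hypothetical stand-in for a CFD solve: returns the peak inlet
    temperature rise (degC above ambient) for one wind direction."""
    base_rise = {0: 1.2, 90: 3.1, 180: 0.8, 270: 2.6}[wind_dir_deg]
    return base_rise - 0.9 * len(mitigations)  # assume each measure helps

def evaluate_layout(wind_directions,
                    available=("horizontal skirt", "longer stack")):
    """Re-run each direction, adding mitigations until the 2 degC
    criterion is met or the mitigation options are exhausted."""
    results = {}
    for wd in wind_directions:
        applied = []
        rise = run_cfd_case(wd, applied)
        while rise > 2.0 and len(applied) < len(available):
            applied.append(available[len(applied)])  # add next measure
            rise = run_cfd_case(wd, applied)         # and re-run the case
        results[wd] = (rise, tuple(applied))
    return results

results = evaluate_layout([0, 90, 180, 270])
```

With these invented numbers, the 0 and 180 degree cases pass unmitigated, the 270 degree case needs one measure, and the 90 degree case needs both, mirroring how the chart routes failing cases back through the mitigation step.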

TYPICAL CFD STUDY OF AIR RECIRCULATION

A process flow chart for an LNG air recirculation study using CFD simulation is provided in Figure 2. The chart depicts how mitigation measures are included in the evaluation process to reduce inlet air temperatures to the compressors and air-cooling equipment. With answers to these questions, projects can better understand the design envelope of an analysis and design the equipment accordingly.

VALIDATION OF CFD METHODOLOGY

As in most simulation procedures, certain assumptions and simplifications are inherent in CFD models. While CFD has been used extensively in air recirculation studies, a question may be raised as to the validity of the models. The accuracy of CFD predictions may also be questioned. Comparing the results obtained from CFD with measurements taken in the field is instrumental in assessing both model validity and prediction accuracy. The following example illustrates this validation process.

Measurements

In one case study at a three-train LNG plant whose CFD grid is shown in Figure 3, a plan was developed to statistically analyze the air-cooling equipment inlet temperature rises and determine their variation with wind direction, wind speed, and ambient temperature. Wind and ambient temperature data and air-cooling equipment inlet temperatures were to be recorded every 15 minutes for 6 months. To collect the inlet temperature data, 25 sensors were installed on the ethylene and propane condenser racks. The condensers were located 12 to 18 m (39 to 59 ft) above ground, and the sensors were mounted about 1.2 to 1.5 m (4 to 5 ft) below the tube bundles. Because of a wind vane problem at the site, wind measurements were only obtained for a 10-day period. Using the data obtained, a plot of the air-cooling equipment inlet temperature rises at this plant versus wind direction, wind speed, and ambient temperature is shown in Figure 4. Because of the scattered nature of the measurement data, no definite trend can be identified.

Figure 3. Facility CFD Grid

Figure 4. Measured Air-Cooling Equipment Inlet Temperature Rise (Measured Inlet Temperature Minus Ambient Temperature) Versus Wind Direction, Wind Speed, and Ambient Temperature

Data Comparison for East Wind Direction

The 10-day wind data was filtered for the east wind direction (270 degrees ±10 degrees). The wind speed and ambient temperature were also filtered so that there was less than 10% variation in wind speed and less than 0.5 °C (0.9 °F) variation in ambient temperature. Twelve measurement conditions from the 10-day measurements fit these criteria. The corresponding wind speeds and ambient temperatures were averaged, and the results (east wind, 1.9 m/sec [6.2 ft/sec] ±0.2 m/sec [±0.7 ft/sec], 25.2 °C [77.4 °F] ±0.5 °C [±0.9 °F]) were used as inputs to the CFD model. The air-cooling equipment inlet temperature data for these wind conditions was then used for comparison with the CFD model results.

Figure 5 shows the results of this comparison. The CFD results were compared with two instantaneous temperature measurements. The discrepancies between the CFD model results and the temperature measurements for the east wind direction are shown to be within 1 °C (1.8 °F) in 12 out of 16 locations. The two largest differences occur at locations E11 and P11. At location E11, the two cooling water pipes that circulate water from the compressors to the nearby vessel and air-cooling equipment may have contributed the higher local temperature rise measurements.

Figure 5. Comparison of CFD Results with Inlet Temperature Measurements for East Wind Direction (temperature rise, °C, at locations E11, E12, P11–P17, E31, E32, and P31–P36; light blue bars = all measured data, navy blue bars = range within 2 sigma, within which 69% of measurements fall; 12 data points for each measurement location)

CATEGORIZING AIR RECIRCULATION CONTAMINATION

When a large amount of heat-generating equipment is located in a limited plot space, interactions among that equipment are unavoidable. Many types of equipment at an LNG plant involve air intake and exhaust. When the pieces of equipment are close together, a limited amount of fresh air is available, and some pieces may start to draw in exhaust air from other pieces or from themselves. Such contamination results in loss of cooling surface area and/or higher inlet temperature above the ambient. The temperature contamination can be classified into two categories:

• Contamination from self-recirculation
• Contamination from other exhausts
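Before examining those cases, note that the station-data filtering used for the east-wind comparison (direction within 270 ±10 degrees, wind speed within 10%, ambient temperature within 0.5 °C) is straightforward to express with NumPy. The six records below are invented for illustration and are not the plant's data; the reference values are taken as medians of the retained group, which is one reasonable reading of the stated criteria.

```python
import numpy as np

# Each record: wind direction (deg), wind speed (m/s), ambient temp (degC).
records = np.array([
    [268.0, 1.9, 25.1],
    [272.0, 2.0, 25.3],
    [265.0, 1.8, 25.2],
    [300.0, 1.9, 25.2],   # rejected: direction outside 270 +/- 10 deg
    [269.0, 3.5, 25.2],   # rejected: speed far from the retained group
    [271.0, 1.9, 27.0],   # rejected: ambient temperature too warm
])
direction, speed, ambient = records.T

# 1) East wind only: within 270 +/- 10 degrees.
keep = np.abs(direction - 270.0) <= 10.0
# 2) Less than 10% variation in wind speed (about the retained median).
ref_speed = np.median(speed[keep])
keep &= np.abs(speed - ref_speed) <= 0.10 * ref_speed
# 3) Less than 0.5 degC variation in ambient temperature.
ref_temp = np.median(ambient[keep])
keep &= np.abs(ambient - ref_temp) <= 0.5

# Averages of the retained records become the CFD inlet conditions.
cfd_speed = speed[keep].mean()
cfd_ambient = ambient[keep].mean()
```

With these stand-in records, three conditions survive the filters, and the averaged inputs (about 1.9 m/sec and 25.2 °C) have the same form as the east-wind values quoted for the case study.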

Temperature Contamination from Self-Recirculation

Because of site layout or wind conditions, exhaust air can sometimes recirculate to the inlets of the same unit. To illustrate this activity, a CFD model was constructed based on the use of a double bank of air-cooled propane condenser units. Different wind speeds and directions were tested, and the effects on the propane condensers were studied.

Figure 6 shows the temperature profile just underneath the propane condensers' fans. The simulations predicted that when the wind direction is perpendicular to the length of the propane condensers (crosswind), the largest heated zone is created (Figure 7), resulting in the greatest self-recirculation. In the crosswind case, the turbulence created from the corners can swirl back into the inlets of the propane condensers. But when the wind blows along the length of the propane condensers, much less self-recirculation occurs because the edge facing the wind is shorter, as shown in Figure 7. Based on these simulations, the LNG plant may be reoriented or mitigation measures studied to minimize self-recirculation during crosswinds.

Figure 6. Temperature Profile Underneath Double-Bank Propane Condensers (temperature scale 28.9 to 32.5+ °C; wind direction and self-recirculating zone indicated)

Figure 7. Stream Lines Showing Self-Recirculation in Crosswind and Parallel Wind

Self-recirculation may be offset by using horizontal or vertical plates (skirts) bolted to the sides of the propane condenser units. Since the crosswind approaches the air-cooling equipment from one side, the entire airflow has to enter the coolers from that side. As shown in Figure 8, under strong wind conditions the vertical skirt causes downward flow outside the skirt, accelerating flow below the first one or two rows of fans; the downward pull of the airflow upstream of the vertical skirt renders the first row of fans nearly dysfunctional. The vertical skirt also offers larger resistance to airflow, resulting in a wider exhaust plume. The horizontal skirt, in contrast, offsets this flow away from the air-cooling equipment by a distance as wide as the skirt, resulting in a narrower plume and causing more lift with less possibility of recirculation back into the inlet. While the same amount of fan throughput is predicted in both cases, the horizontal skirt results in a lower inlet recirculation temperature into the air-cooling equipment.

Figure 8. 10-Foot Horizontal Versus Vertical Skirt in a Crosswind (the vertical skirt offers a larger obstruction area, causing intense pull-down of flow near the skirt; the horizontal skirt pushes the pull-down further upstream, preventing recirculation back into the inlet)

What, then, should be the size of the skirt? Figure 9 shows a comparison of 1.5, 3, and 4.5 m (5, 10, and 15 ft, respectively) horizontal skirts in the crosswind case. The first rows of fans appear to be severely affected when exhausting flow for the 1.5 m (5 ft) skirt case. The fan performance gradually improves as the skirt width is increased to 4.5 m (15 ft). An increase in flow and a decrease in inlet temperature are also apparent as skirt width increases, resulting in higher average and peak velocity below the skirts. As can be seen in the figure, the widest skirt (4.5 m [15 ft]) results in the least amount of recirculation because the downward flow is pushed away from the air-cooling equipment. However, flutter, structural stability, and fatigue limit skirts to a maximum of about 3 m (10 ft).

Figure 9. Comparison of Various Size Skirts in a Crosswind (1.5, 3, and 4.5 m skirt cases)

Temperature Contamination from Other Exhausts

When multiple large pieces of heat-releasing equipment (such as ACHEs) are located within a limited plot area, temperature contamination from these various sources is also possible in addition to self-recirculation. Downstream equipment can draw heated exhaust from upstream equipment, a scenario that is more common in multi-train LNG plants and harder to remedy once a plant is built.

Unlike self-recirculation, there is no typical worst-case scenario associated with this type of contamination. In Figure 10, the flow pattern shows how some exhaust air is drawn into the downstream equipment. For a multi-train LNG plant simulation, the CFD model shows the airflow path and the temperature increase, as shown in Figure 11.

Figure 10. CFD Simulation—Exhaust Air from One Equipment Item Recirculating to Another's Inlet (velocity vectors colored by magnitude, scaled from 0–10 m/s; temperature contours on a cut slice through all units, scaled from 2–50 °C; air coolers and double-bank propane condensers shown)

Figure 11. CFD Simulation of a Multi-Train LNG Plant (wind directions from the wind rose; Train 1, Train 2, buildings, and compressor building with turbine stack shown; SWW wind direction at 4.2 m/sec [13.7 ft/sec]; stream lines from all units colored by temperature, scaled from 20–35 °C [68–95 °F])

Mitigation solutions to avoid cross-contamination among equipment or trains include the use of skirts and the use of fan hoods. Fan hoods help by ensuring that equipment exhausts are released at higher elevations. This approach is effective when horizontal wind speeds are closer to moderate than strong. Another mitigation approach may include adjusting the height of various equipment items, within reason. Such an approach may work for propane condensers. However, when one piece of equipment is tightly connected to another, as in the case of refrigeration system intercoolers, altering heights without incurring significant costs from piping changes may not prove feasible. The optimal mitigation solution is one that does not require significant mechanical alterations. Depending on the plot constraints, a proper plot orientation may minimize the impact of air recirculation under extreme conditions.

The terrain and wind rose (wind speed and direction), apart from neighboring equipment and plants, also have a significant effect on air-cooling equipment air intake. If the plant is located in a valley, downwash from neighboring hills causes an abrupt temperature rise in some units.

As a mitigation tool, recirculation effects can be minimized by using hoods or horizontal or vertical skirts. The fluid dynamics of the plume emanating from the air-cooling equipment change significantly with the type of skirt (vertical/horizontal) and wind direction (parallel/crosswind). A horizontal skirt is a more aerodynamic design that helps air flow into the air-cooling equipment while reducing air recirculation and improving air-cooling equipment performance under all wind speeds and directions. However, as skirt width is increased beyond 10 feet, the return on improved performance diminishes while the cost increases.

A simple starting point is to consider orienting the plot so the propane condensers are axial to the prevailing wind direction at high-temperature conditions. However, it should be noted that this is only a starting point that is by no means absolute. CFD can provide valuable insights to the plant designers or engineers and enable informed decisions to be made about these and similar design factors.

CONCLUSIONS

With multi-train LNG plants becoming more common, the impacts from the surrounding environment are a vital consideration for production. Large vessels, towers, and buildings can block air feeding into air-cooled equipment. Turbulence generated behind these structures can also create local recirculation zones. If air-cooling equipment is located within these zones, the equipment can be starved for fresh air. Although no complete solutions exist for these problems, recirculation effects can be minimized by using hoods or horizontal or vertical skirts. Using CFD in the design phase can help to minimize recirculation problems, and analyzing different scenarios can minimize the effects of recirculation on LNG production.

CFD can be a faster, cheaper way to analyze the fluid dynamics around plants. Manipulating the design of a virtual plant is much less expensive than making changes in a real facility. In a simulation, the relationships among, and orientations of, open spaces and buildings can be evaluated, and different kinds of weather conditions can be assessed. Different mitigation measures can also be examined, such as changing the orientation of the plant or relocating some of the equipment, where feasible. CFD enables various parameters to be changed in the virtual space so the most effective solution can be found. In the authors' experience, results are usually available in less time than from testing using a physical model. Computing technology continues to advance, and increasingly complex problems can be solved in ever shorter amounts of time using CFD simulations. Considering the combined value of its cost and timing benefits, CFD is certain to play an increasingly important role in the design and construction of large LNG plants.

REFERENCES

[1] A. Avidan, B. Martinez, and W. Yee, "LNG Liquefaction Technologies Move Towards Greater Efficiencies, Lower Emissions," Oil & Gas Journal, Vol. 100, Issue 33, August 19, 2002, access via http://www.ogj.com.

ADDITIONAL READING

Additional information sources used to develop this paper include:

• "Fluid Dynamics Visualization Solves LNG Plant Recirculation Problem," Oil & Gas Journal, Vol. 97, Issue 13, March 29, 1999, access via http://www.ogj.com.

• "Validation of the Air Recirculation CFD Simulations on a Multi-Train LNG Plant," AIChE Spring National Meeting, New Orleans, LA, April 25–29, 2004, access via http://www.aiche.org.

• "Predicting Environmental Impacts on Multi-Train LNG Facility Using Computational Fluid Dynamics (CFD)," AIChE Spring National Meeting, Atlanta, GA, April 10–14, 2005, access via http://www.aiche.org.

106 Bechtel Technology Journal

BIOGRAPHIES

Philip Diwakar is a senior engineering specialist in Bechtel Systems & Infrastructure, Inc.'s Advanced Simulation and Analysis Group. He employs state-of-the-art technology to resolve a wide range of complex engineering problems on large-scale projects and to ensure the successful completion of projects worldwide. Philip has more than 15 years of experience in CFD and finite element analysis for structural mechanics. His more recent experience includes work on projects involving fluid-solid interaction and explosion dynamics.

During his 8-year tenure with Bechtel, Philip has received two full technical grants. One was used to determine the effects of blast pressure on structures at LNG plants, with a view toward an advanced technology for designing less costly, safer, and more blast-resistant buildings. The other grant was used to study fluid-structure interaction in building structures and vessels. Philip has also received four Bechtel Outstanding Technical Paper awards, as well as two awards for his exhibit on the applications of fluid-solid interaction technology at the 2006 Engineering Leadership Conference in Frederick, Maryland. He is a licensed Professional Mechanical Engineer and a Six Sigma Yellow Belt.

Before joining Bechtel, Philip was a project engineer with Caterpillar, Inc., where he was part of a Six Sigma team. He applied his CFD expertise to determine the best approach for solving issues involving the cooling of Caterpillar heavy machinery.

Philip holds an MTech in Aerospace Engineering from the Indian Institute of Science, Bengaluru, India; a BTech in Aeronautics from the Madras Institute of Technology; and a BS in Mathematics from Loyola College.

Zhengcai Ye, PhD, is a CFD engineering specialist with more than 15 years of research and industrial experience in chemical engineering and related areas. Much of his work has focused on CFD modeling of chemical reactors and industrial furnaces. He is currently engaged in air recirculation modeling of LNG plants and CFD modeling of chemical equipment. Before joining Bechtel, Zhengcai was a senior project engineer for IGCC projects at Mitsubishi Power Systems Americas, Inc., and a chemical and software engineer at Shanghai Baosteel Group Corporation, People's Republic of China. Zhengcai contributed a book chapter, "Mathematical Modeling and Design of Ultraviolet Light Process for Liquid Foods and Beverages," to Mathematical Modeling of Food Processing (Taylor & Francis, 2009) and has authored more than 20 journal papers. He is a senior member of the American Institute of Chemical Engineers (AIChE). Zhengcai holds a PhD in Chemical Engineering from the Georgia Institute of Technology; an MS in Chemical Engineering from Florida State University, Tallahassee; and an MS in Inorganic Materials from East China University of Science & Technology, Shanghai, China.

Ramachandra Tekumalla is chief engineer for OG&C's Advanced Simulation Group, located in Houston, Texas. He leads a group of 10 experts in Bechtel's Houston and New Delhi offices in developing advanced applications for various simulation technologies (real-time control, dynamic simulation, CFD, FEA, and virtual reality) for the Oil, Gas & Chemicals Global Business Unit and projects. Ram has more than 11 years of experience in applying these technologies, such as APC, OTS, dynamic simulation, performance monitoring, and optimization, as well as in real-time optimization and chemical process modeling. Before joining Bechtel, Ram was an applications engineer with the Global Solutions Group at Invensys Process Systems, where he developed applications for refineries and power plants. Ram holds an MS from the University of Massachusetts, Amherst, and a BE from the Birla Institute of Technology & Science, Pilani, India, both in Chemical Engineering.

David Messersmith is deputy manager of Bechtel's LNG and Gas Center of Excellence, responsible for the LNG Technology Group and Services. He has held various lead roles on LNG projects for 15 of the past 18 years. Dave's experience includes various LNG and ethylene assignments during his 18 years with Bechtel and, previously, his 10 years with M.W. Kellogg, Ltd., including work on the Atlantic LNG project conceptual design through startup, as well as many other LNG and FEED studies. Dave holds a BS in Chemical Engineering from Carnegie Mellon University, Pittsburgh, Pennsylvania.

Satish Gandhi, PhD, is LNG Product Development Center (PDC) director and manages the center for the ConocoPhillips-Bechtel Corporation LNG Collaboration, located in Houston, Texas. He is responsible for establishing the work direction for the PDC to implement strategies and priorities set by the LNG Collaboration Advisory Group. Satish has more than 35 years of experience in technical computing and process design. He was previously process director in the Process Technology & Engineering Department at Fluor Daniel, with responsibilities for using state-of-the-art simulation software for the process design of gas processing, LNG, CNG, and refinery facilities, as well as troubleshooting of process plants in general and LNG plants in particular. Satish also was manager of the dynamic simulation group at M.W. Kellogg, responsible for technology development and management and implementation of dynamic simulation projects in support of LNG and other process engineering disciplines. Satish received a PhD from the University of Houston, Texas; an MS from the Indian Institute of Technology, Kanpur; and a BS from Laxminarayan Institute of Technology, Nagpur, India, all in Chemical Engineering. He is a licensed Professional Engineer in Texas.


WASTEWATER TREATMENT—A PROCESS OVERVIEW AND THE ROLE OF CHEMICALS

Kanchan Ganguly (kganguly@bechtel.com) and Asim De (akde@bechtel.com)

Issue Date: December 2009

Abstract—Whether in reference to a refinery, a chemical process, or a utility plant, zero discharge or minimum discharge of effluent from the plant boundary is a present-day motto in safeguarding our environment. The need for sustained development and industrial continuity calls for a systematic and comprehensive treatment of effluents to reduce all contaminants to acceptable limits, making the effluents environmentally safe before they are discharged outside the plant boundary. The enforcement of stringent environmental norms has spurred scientists and process owners to develop comprehensive wastewater treatment programmes to constantly improve effluent discharge quality, promote water savings through recycling, and eventually minimise plant life-cycle costs. This paper provides an overview of the major wastewater treatment processes and the roles different chemicals play in these processes.

Keywords—activated sludge, biochemical oxygen demand (BOD), chemical oxygen demand (COD), clarifier, coagulation, colloidal particle, dissolved air flotation, dissolved oxygen, emulsion, flocculation, neutralisation, oily wastewater, polyelectrolyte, precipitation, redox reaction, suspended solids (SS), turbidity, wastewater treatment

INTRODUCTION

Most industrial processes give rise to polluting effluents from the contact of water with gas, liquids, and solids. The release of effluents to water bodies and soil renders them unsafe for drinking, fishing, agricultural use, and aquatic life. Based on the complexity of the process and the process industry, industrial wastewater requires specialised treatment to remove one or more of these pollutants:

• Suspended solids (SS) and/or turbidity
• Oil and grease
• Colour and odour
• Dissolved gases
• Soluble impurities and contaminants
• Heavy metals
• Germs and bacteria

An example of typical discharge quality requirements for effluent water is provided in Table 1. These standards are usually governed by legislative bodies and are modified from time to time.

[Table 1. Typical Limits for Effluents Discharged into the Environment — desirable and maximum limits (mg/L) for ammoniacal nitrogen, arsenic (As), biochemical oxygen demand (BOD), cadmium (Cd), residual chlorine, total chromium (Cr), copper (Cu), chemical oxygen demand (COD), cyanide (CN), oil, total iron (Fe), lead (Pb), manganese (Mn), mercury (Hg), nickel (Ni), phenols, total phosphate (P), selenium (Se), silver (Ag), sulphide, suspended solids, turbidity, and zinc (Zn); pH limited to 6–9 standard units (SU). Note: Some countries provide limits on dissolved solids in treated effluent.]

© 2009 Bechtel Corporation. All rights reserved.
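To make the use of such discharge limits concrete, the check an operator performs against a table of this kind can be sketched in a few lines. This is an illustrative sketch only: the limit values below are placeholders, except the 6–9 SU pH range, which comes from the source.

```python
# Illustrative compliance check against Table-1-style discharge limits.
# Only the pH range (6-9 SU) is from the source; other limits are examples.

LIMITS = {
    "pH_SU": (6.0, 9.0),        # acceptable range (from the source)
    "BOD_mg_L": (None, 30.0),   # illustrative maximum only
    "oil_mg_L": (None, 10.0),   # illustrative maximum only
}

def exceedances(sample: dict) -> list:
    """Return the names of constituents outside their permitted range."""
    failed = []
    for name, (low, high) in LIMITS.items():
        value = sample[name]
        if (low is not None and value < low) or (high is not None and value > high):
            failed.append(name)
    return failed

sample = {"pH_SU": 8.2, "BOD_mg_L": 45.0, "oil_mg_L": 4.0}
print(exceedances(sample))  # only BOD exceeds its illustrative limit
```

In practice the full constituent list of Table 1 would be loaded from the governing legislative standard rather than hard-coded.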

ABBREVIATIONS, ACRONYMS, AND TERMS

API — American Petroleum Institute
BOD — biochemical oxygen demand
COD — chemical oxygen demand
CPI — corrugated plate interceptor
MSDS — material safety data sheet
NTU — nephelometric turbidity unit
redox — oxidation reduction
sp. gr. — specific gravity
SS — suspended solids
SU — standard units
WTP — wastewater treatment plant

AN OVERVIEW OF EFFLUENT TREATMENT PROCESSES

All effluent treatment involves a few fundamental chemical and physical processes for isolating the impurities/contaminants. A brief description of these processes helps in understanding the overall treatment philosophy. In general, except for sewage treatment, industrial wastewater treatment programmes differ from industry to industry. Generally, wastewater treatment is performed in three stages:

• Primary treatment, which consists of grit and floating oil removal, takes care of most of the pollutants and toxic chemicals that can be easily removed from raw wastewater at this stage. Such pretreatment creates conditions suitable for secondary treatment.
• Secondary treatment, which removes major pollutants to achieve the disposal quality, is designed to substantially diminish the pollutant load. SS, emulsified oil, and dissolved organics are the major pollutants removed at this stage.
• Tertiary treatment, which is carried out for recycle or reuse of the treated effluent, polishes it to bring the biochemical oxygen demand (BOD) and SS levels down to a range of 10–20 milligrams per litre (mg/L).

Neutralisation (pH Control)

The removal of excess acidity or alkalinity by treatment with a chemical of the opposite composition is termed neutralisation. Primary acidic agents are hydrochloric or sulphuric acids. Primary base agents are caustic soda, sodium bicarbonates, and lime solutions. Dosing rates are decided based on treated effluent pH level. As a general philosophy, all treated wastewaters with excessively low or high pH require neutralisation before they can be disposed of in the environment.

Coagulation

Coagulation destroys the emulsifying properties of the surface-active agent or neutralises the charged oil droplets. The free acid of alum breaks the emulsion by lowering pH. The most difficult SS to remove are the colloids, which, due to their small size, easily escape both sedimentation and filtration. The key to effective colloid removal is to reduce the zeta potential with coagulants such as alum, ferric chloride, etc. Zeta potential [1] is a convenient way to optimise the coagulation dosage in water and wastewater treatment. Figure 1 illustrates the effect of alum dosing on zeta potential versus turbidity. The dosage at which turbidity is lowest determines the target zeta potential.

[Figure 1. Example of the Effect of Alum Dosing on Zeta Potential Versus Turbidity — turbidity of finished water (NTU) and zeta potential (mV) plotted against alum dose (mg/L)]

Filtration

Filtration is a purely physical process for separating SS in which the effluent is passed through a set of filters with smaller pores than the contaminants, thus physically separating them out. The collected material is removed by backwashing for reuse of the filter element.
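The dose-selection rule from the coagulation discussion (the dose giving the lowest settled turbidity fixes the target zeta potential) amounts to a one-line reduction over jar-test data. A minimal sketch, with hypothetical dose-turbidity numbers loosely shaped like the curve in Figure 1:

```python
# Hypothetical jar-test results: alum dose (mg/L) -> settled-water
# turbidity (NTU). These numbers are illustrative, not from the paper.
jar_test = {10: 0.5, 20: 0.35, 30: 0.2, 40: 0.12, 50: 0.18, 60: 0.3}

def optimal_dose(results: dict) -> float:
    """Return the dose giving the lowest residual turbidity."""
    return min(results, key=results.get)

best = optimal_dose(jar_test)
print(best)  # 40 -- the zeta potential measured at this dose becomes the target
```

The zeta potential measured in the jar at that dose then serves as the control target for the full-scale coagulant feed.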

Zeta potential (Smoluchowski's formula) is dependent on the property of the SS and is calculated as:

ζ = (4πη/ε) × U × 300 × 300 × 1,000    (1)

where:
ζ = zeta potential (mV)
η = viscosity of solution
ε = dielectric constant
U = v/(V/L) = electrophoretic mobility
v = speed of particle
V = voltage
L = electrode distance

Flocculation

A flocculent gathers floc particles together in a net and helps bind individual particles into large agglomerates. Aluminium hydroxide [Al(OH)3] produced after hydrolysis of alum [Al2(SO4)3] forms a net in water to capture fine SS. Once the charge is reduced or eliminated, no repulsive forces exist, and gentle agitation in a flocculation basin causes numerous successful colloid collisions. Microflocs form and grow into visible floc particles that settle rapidly and filter easily.

Metal Precipitation

Wastewater containing dissolved metals needs to be treated to reduce the metal concentration to below the toxicity threshold for organisms potentially exposed to the wastewater. A few toxic and nontoxic metals, such as iron, aluminium, titanium, lead, zinc, chromium, cadmium, copper, nickel, mercury, and beryllium, can be precipitated within a certain pH range. Four main processes are available to accomplish this:

• The soluble metal ions can be converted to insoluble metal salts by chemical reaction to allow physical separation. Typical precipitation reactions are described by the following equations:

M2+ + 2OH- → M(OH)2 (solid)    (2)

M2+ + S2- → MS (solid)    (3)

• The metal ions can be oxidised to produce insoluble metal oxides. For example, the reactions that occur during the oxidation of iron and manganese by oxygen are:

4Fe2+ + O2 + 8OH- → 2Fe2O3 + 4H2O    (4)

2Mn2+ + O2 + 4OH- → 2MnO2 + 2H2O    (5)

• The pH of the effluent can be conditioned.

• Liquid polymerised aluminium can be used as a coagulant. This has been found to be extremely effective in heavy metal precipitation processes in industrial wastewater.

The insoluble compounds resulting from the application of any of the above processes are subsequently removed through the coagulation and clarification process by gravity settling, filtration, centrifugation, or a similar solid/liquid separation technique. More than one stage may be required to reach the discharge quality.

Oil/Water Separation

Oily wastewater is common in any industry because oil and grease are universally used as lubricants and solvents. Oil is required to be separated before the wastewater is discharged or recycled. In addition, recycling the recovered oil adds some value apart from pollution control. Oil remains present in wastewater in two forms: floating and emulsified. Floating oil is separated during primary treatment, and emulsified oil is removed during secondary treatment.

By virtue of oil being immiscible in water and having a density difference, the bulk of oil in wastewater remains in suspended form and can be separated through a settlement and skimming process. Oil is lighter and thus floats on top of the water surface, so it can be skimmed out through a mechanical separation process using mechanised skimmers in American Petroleum Institute (API) or corrugated plate interceptor (CPI) separators.

Emulsified oil needs the addition of cationic or anionic polymer, increased temperature, or coalescing media to break the emulsion so that the oil particles can be subsequently removed by normal separation processes. For a stringent requirement, treated water may be passed through activated carbon filter adsorbers, which retain the oil particles within the carbon molecular space and provide for clear water to be discharged.
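Smoluchowski's formula in Equation (1) can be exercised numerically. The sketch below assumes CGS-style inputs (viscosity in poise, mobility in cm²/V·s); the property values are illustrative figures for water near room temperature and a typical colloid, not numbers from the paper:

```python
import math

def zeta_potential_mV(viscosity_poise: float, dielectric: float,
                      mobility_cm2_per_V_s: float) -> float:
    """Smoluchowski's formula as written in Equation (1):
    zeta = (4*pi*eta/eps) * U * 300 * 300 * 1000, where U = v / (V/L)."""
    return (4 * math.pi * viscosity_poise / dielectric) \
           * mobility_cm2_per_V_s * 300 * 300 * 1000

# Illustrative inputs: water at ~25 degC (eta ~ 0.0089 poise, eps ~ 80)
# and a measured electrophoretic mobility of 2e-4 cm2/V/s.
zeta = zeta_potential_mV(viscosity_poise=0.0089, dielectric=80.0,
                         mobility_cm2_per_V_s=2.0e-4)
print(round(zeta, 1))  # about 25.2 (mV)
```

Values of this magnitude (tens of millivolts) indicate a stable colloid; coagulant is dosed to drive the zeta potential toward the target found in the jar test.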

Oxidation

Chemical/biological oxidation processes use (chemical) oxidants to reduce chemical oxygen demand (COD)/BOD levels and to remove both organic and oxidisable inorganic components (metals). These processes can completely oxidise organic materials to carbon dioxide and water, although it is often not necessary to operate the processes to this level of treatment. Oxidation via aeration of the effluent significantly reduces the COD of the treated liquid. Aerobic digestion (in the presence of activated sludge) is effected when the BOD in the effluent ranges from 100–1,000 mg/L. As an example, the newly developed processes employ ozone and ultraviolet rays, among others, in the treatment processes. Ozone, a powerful oxidising agent mainly used to oxidise certain industrial wastewaters that cannot be treated effectively by conventional biological oxidation processes, can also simultaneously disinfect the effluent, thereby producing very low amounts of sludge.

Redox Reaction

An oxidation–reduction (redox) process is used to transform and destroy targeted water contaminants. Substances such as chlorine, cyanide, chromium, and nitrogen dioxide can be removed by redox reaction. In this method, sodium hypochlorite solution is used to treat dilute cyanide in wastewater:

NaCN + NaOCl → NaCNO + NaCl    (6)

Odour Control

Odour problems—mainly from gases such as hydrogen sulphide, ammonia, and methane present in a wastewater facility—are a concern for wastewater treatment personnel. The primary treatment for odour control is oxidation, which converts these gases to an odourless compound and inhibits formation of anaerobic bacteria that produce gases. Specially treated activated carbon may be used as an odour control medium to absorb hydrogen sulphide.

Disinfection

Wastewater before discharge, or particularly for reuse, needs to be disinfected. Chlorination is one of the most commonly used disinfecting methods. Ultraviolet radiation, a kind of electromagnetic radiation, is also used to disinfect wastewaters to avoid any material addition.

ROLE OF CHEMICALS IN TREATMENT

Increasing concern about environmental damage from industrial pollutants poses new challenges daily to the discharge effluent quality requirement for industry. Zero/stringent quality of discharge from the industrial unit is often a prerequisite to establishing a new plant, demanding more complex and controlled treatment of wastewater that cannot be achieved by standard treatment processes and chemicals. Researchers are engaged in upgrading the treatment processes and developing new chemicals to meet the stringent environmental norms while ensuring that the treatment cost remains reasonable. As an outcome, there have been significant developments in manufacturing proprietary chemicals, inorganic and organic polymers, and blended chemicals with polymers for wastewater treatment. These chemicals are designed to work in the specific treatment processes described in the overview section and require significantly lower dosing rates.

It is not the intent of this paper to discuss the types and characteristics of all chemicals and polymers available from different manufacturers. This section includes a general discussion of the types of chemicals and polymers and their applications, advantages, etc., in the treatment processes. Use of any specific brand or proprietary chemical in a project must be evaluated considering the inlet effluent characteristics, project environmental criteria, and recommendations from the chemical vendor. Table 2 lists typical inorganic treatment chemicals and their feed rates.

Coagulants/Flocculants

Adding coagulants to the wastewater creates a chemical reaction in which the repulsive electrical charges surrounding colloidal SS are neutralised, allowing the free particles to stick together and create lumps or flocs. The aggregation of these particles into larger flocs permits their separation from solution by sedimentation, filtration, or straining.

Chemical Conditioning

Chemical conditioning improves sludge dewatering in sludge thickening devices. Chemical coagulants/polymers are dosed to promote agglomeration of floc particles, which is convenient for disposal. The choice of chemical conditioners depends on the characteristics of the sludge and the type of dewatering device.
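The cyanide redox treatment above fixes a minimum oxidant demand by stoichiometry. A back-of-envelope sketch for the equivalent first-stage chlorination (NaCN + Cl2 + 2NaOH → NaCNO + 2NaCl + H2O, the balanced alkaline-chlorination form); this is a stoichiometric minimum only, since real plants dose in excess:

```python
# Stoichiometric chlorine demand for the first (cyanate) stage of
# alkaline chlorination: one mole of Cl2 per mole of cyanide.
MW_CL2 = 70.91   # g/mol, chlorine
MW_CN = 26.02    # g/mol, cyanide ion

def chlorine_demand_mg_L(cyanide_mg_L: float) -> float:
    """Minimum Cl2 (mg/L) to oxidise cyanide to cyanate, 1:1 molar ratio."""
    return cyanide_mg_L * MW_CL2 / MW_CN

print(round(chlorine_demand_mg_L(1.0), 2))  # about 2.73 mg/L Cl2 per mg/L CN-
```

Complete destruction of cyanide (oxidation of cyanate through to carbon dioxide and nitrogen) requires additional oxidant beyond this first stage.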

[Table 2. Typical Inorganic Chemicals Used in Wastewater Treatment — dosing levels (mg/L) for disinfection (chlorine and chlorine dioxide dosed to primary treatment and activated sludge effluents), ammonia removal (chlorine), oxidation of sulphides (chlorine, hydrogen peroxide, sodium nitrate), coagulant feed (aluminium sulphate (alum), ferric chloride, lime, ferrous sulphate, ferric sulphate), and pH control to maintain alkalinity (CaCO3, lime)]

Alum—Al2(SO4)3•18H2O

The role of aluminium sulphate—alum—in water treatment is known historically. It is an inorganic coagulant/flocculent (see Table 2). When alum and lime are added to the treatment process, the chemical reaction produces the following:

Al2(SO4)3•18H2O + 3Ca(HCO3)2 → 2Al(OH)3 + 3CaSO4 + 18H2O + 6CO2    (7)

The dosing rate of alum depends on:
• Concentration (mg/L) of SS
• Nature of SS
• pH of effluent
• Type of flocculating equipment

The normal dosing rate works out to be 40–50 ppm. Primary disadvantages of alum are:
• High dosing requirement, excessive sludge formation (self-sludge) and its treatment, and loss of water with the sludge
• Lowering of the pH, needing lime dosing for pH control

Ferric Chloride

Ferric chloride is used as an inorganic emulsion breaker (described in the next section), especially to remove oil from water.

Polyelectrolytes

Polyelectrolytes are one of the most widely used chemicals serving as coagulants/flocculents in the modern wastewater treatment process. Their primary advantages are their very low dosing requirement and their applicability over a wide range of pH compared with alum or other inorganic coagulants/flocculents. Polyelectrolytes are categorised based on their product origin. Natural polyelectrolytes include polymers of biological origin derived from starch, cellulose, and alginates. Synthetic polyelectrolytes consist of single monomers polymerised into a high-molecular-weight substance.

The action of polyelectrolytes changes according to their type. Cationic polymers, in which the cations (positive charges) form the polymer, reduce or reverse the negative charges of the precipitate and therefore act as a primary coagulant. Anionic polymers, based on carboxylate ions and polyampholytes, carry primarily negative charges and help in interparticle bridging along the length of the polymer. A third type of polymer, developed from cationic polyelectrolytes of extremely high molecular weight, is capable of offering both coagulation and bridging, resulting in three-dimensional particle growth and thereby easy settlement. Some of the widely used polyelectrolytes are described next.

Polyaluminium chloride is well-suited as a primary coagulant in a wide variety of industrial and domestic wastewater treatment plants (WTPs). Typical applications include removal of organic impurity, metals, and phosphate, and treatment of domestic and oily waste. Efficient and effective in coagulating particles with a wide range of
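The stoichiometry of Equation (7) gives two numbers a designer often wants: the natural bicarbonate alkalinity consumed and the Al(OH)3 floc produced per unit of alum dosed. A rough sketch under the balanced equation (molar masses rounded; the 45 mg/L dose is illustrative, within the coagulant ranges typical of Table 2):

```python
# Per mole of Al2(SO4)3.18H2O, Equation (7) consumes 3 mol Ca(HCO3)2
# (= 6 equivalents of alkalinity) and produces 2 mol Al(OH)3.
MW_ALUM = 666.4    # g/mol, Al2(SO4)3.18H2O
MW_ALOH3 = 78.0    # g/mol, Al(OH)3
EQ_CACO3 = 50.0    # g/equivalent, alkalinity expressed as CaCO3

def alkalinity_consumed(alum_mg_L: float) -> float:
    """mg/L of alkalinity (as CaCO3) consumed by the alum dose."""
    return alum_mg_L * 6 * EQ_CACO3 / MW_ALUM

def floc_produced(alum_mg_L: float) -> float:
    """mg/L of Al(OH)3 floc formed by the alum dose."""
    return alum_mg_L * 2 * MW_ALOH3 / MW_ALUM

print(round(alkalinity_consumed(45.0), 1))  # about 20.3 mg/L as CaCO3
print(round(floc_produced(45.0), 1))        # about 10.5 mg/L Al(OH)3
```

The alkalinity figure explains the third disadvantage listed for alum: if the raw water lacks this much bicarbonate alkalinity, the pH falls and lime dosing is needed.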

pH, the chemical offers very good turbidity removal and leaves no residual colour. Typical properties are:
• Available in 25%–40% concentrate
• pH (neat) = 2.3–2.9 SU
• Freezing point = –5 °C (23 °F)
• Odourless and colourless
• Specific gravity (sp. gr.) = 1.2

Liquid polymerised aluminium coagulants are extremely effective in heavy metal precipitation processes and in treatment of combined industrial wastewater. These coagulants have a low impact on process water pH.

Aluminium hydroxide chloride/polymer is a technically advanced, high-performance coagulant based on aluminium hydroxide chloride blended with an organic polymer. The active component of the reagent is a highly cationic aluminium polymer present in high concentration, which is represented as [Al13O4(OH)24(H2O)12]7+. Typical properties are:
• Sp. gr. = 1.30
• pH = 1.0–2.0 SU
• Charge = +1950

High-molecular-weight cationic polymer (liquid) does not contain oil or surfactants and is designed for SS removal. It is specially recommended for use in non-potable raw water clarification, primary and secondary effluent clarification, and oil wastewater clarification. Dosing rate varies from 1–10 ppm for wastewater treatment. Typical properties are:
• Available in liquid form
• Sp. gr. = 1.21–1.23
• Viscosity = <700 centipoise
• pH = 3.0–4.2 SU
• Dosing concentration = 0.01%–0.1% aqueous solution

Cationic guanidine polymer, a cationic liquid organic polymer based on an aqueous solution of cyanoguanidine, is designed to coagulate colloidal solids and SS and is therefore recommended for use in non-potable raw water clarification, primary and secondary effluent clarification, oil wastewater clarification, and enhanced organics removal.

Organic polyamines are used as cationic emulsion breakers.

Apart from the above, there are commercially available proprietary anionic polyelectrolytes. The important ones are polystyrene sulphonic acids and 2-acrylamido-2-methyl sulphonic acids.
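The dosing figures quoted above (a neat polymer dose of a few ppm, fed as a 0.01%–0.1% aqueous makedown solution) imply a simple feed-rate calculation for the dosing pump. A sketch with illustrative flow and dose values:

```python
# Feed rate of a dilute polymer makedown solution needed to achieve a
# target neat-polymer dose. Flow and dose numbers are illustrative.

def makedown_feed_L_h(flow_m3_h: float, dose_ppm: float,
                      solution_pct: float) -> float:
    """Litres/hour of makedown solution for a given neat dose.
    dose_ppm is g of neat polymer per m3 of wastewater;
    solution_pct is the makedown strength (e.g., 0.05 for 0.05% w/v)."""
    polymer_g_h = flow_m3_h * dose_ppm   # neat polymer demand, g/h
    solution_g_L = solution_pct * 10.0   # % w/v converted to g/L
    return polymer_g_h / solution_g_L

# 100 m3/h of wastewater, 5 ppm neat dose, fed as a 0.05% solution:
print(round(makedown_feed_L_h(100.0, 5.0, 0.05), 1))  # 1000.0 L/h
```

The low makedown strength keeps the viscous polymer pumpable and well dispersed at the injection point.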


Alkyl-substituted benzene sulphonic acids and their salts are used as anionic emulsion breakers. Carbamate solution and liquid thiocarbonate compound are used to precipitate chelated metals.

Emulsion Breakers

In the coagulation process, chemicals help break the emulsion that keeps oil particles floating in water. The chemicals neutralise the stabilising agents that keep the oil particles floating, allowing them to settle and be removed as sludge. Alum, ferric chloride, sodium aluminates, and acids are common inorganic chemicals used as emulsion breakers. However, they have some disadvantages. The primary ones are:
• Their effectiveness is restricted to a narrow pH range; therefore, a higher dosing rate is normally required.
• A large quantity of watery sludge is produced, necessitating elaborate and expensive disposal.

Organic polyelectrolytes, mostly available as proprietary chemicals from different manufacturers, are highly efficient as emulsion breakers because of their cationic charges and effectiveness over a wide pH range. These chemicals help produce a lower quantity of sludge for easy and economical disposal and also add a lower level of chemicals to the treated effluent. As discussed in the polyelectrolytes section, a few popular polyelectrolytes are polyaluminium chloride, aluminium hydroxide chloride/polymer, high-molecular-weight cationic polymer (liquid), and cationic guanidine polymer.

Metal Precipitants

Some process wastewaters include complexing and chelating agents that bond to the metal ions, making precipitation difficult, if not impossible, for many precipitating reagents. Commercially available proprietary precipitants are capable of breaking many of these bonding agents, thereby precipitating the metal ions without adding other chemicals. In some instances, a combination of pH adjustment and varying reaction times may be required along with precipitants and flocculants for optimum results.
Liquid polymerised aluminium coagulants are extremely effective in heavy metal precipitation processes and are popular in treatment of combined industrial wastewater. These coagulants have a low impact on process water pH. Organosulphide compounds can be used to precipitate divalent metals in the form of insoluble metal sulphides. Hydrogen peroxide (H2O2), ozone, and oxygen convert metals into oxides, which are insoluble in water and hence separated out through coagulation and the settlement process.

Oxidants

Typical oxidation chemicals are:
• Hydrogen peroxide, widely used as a safe, effective, powerful, and versatile oxidant. The main applications are oxidation to aid odour and corrosion control, organic oxidation, metal oxidation, and toxicity oxidation.
• Ozone, primarily used as a disinfectant but also an aid in removing contaminants from water by means of oxidation. Ozone purifies water by breaking up organic contaminants and converting them to inorganic contaminants in insoluble form that can be filtered out. An ozone system can remove up to 25 contaminants, including iron, manganese, nitrite, cyanide, nitrogen oxides, and chlorinated hydrocarbons.
• Oxygen, which can be applied as an oxidant to realise the oxidation of iron and manganese (see Equations 4 and 5). The method is popular because of oxygen's abundant availability in the atmosphere.

Chemicals for pH Control and Odour Control

For pH control, lime solution, caustic soda, and sulphuric and hydrochloric acids are commonly used. For odour control, hydrogen peroxide is widely used as a safe, effective, powerful, and versatile oxidant. Other chemicals used are ozone, hypochlorite, permanganate, and oxygen. Activated carbon filters are also used to absorb bad odours.

Disinfecting Agents

Chlorine gas or sodium hypochlorite is used as a primary disinfecting agent because of its easy availability and residual protection. However, because chlorine is reactive to some metals, ozone is also used as an alternative.
Ultraviolet radiation is preferred in some cases because it does not add new chemicals to the process.

Other Chemicals
Antifoam is primarily used as a process aid. Antifoam blends contain silicone oils combined with small amounts of silica; they break down foam based on two of silicone's properties: incompatibility with aqueous systems and ease of spreading.

Lime, alum, ferric chloride, and polyelectrolytes are commonly used chemical conditioners for effective sludge thickening and dewatering.

Organophosphorous/polysulphonate compounds are used as antiscalants, dispersants, and corrosion inhibitors. Typical compositions are phosphonates and organophosphorous carboxylic acids and their salts. Organophosphorous carboxylic acid compounds are water soluble; usual dosing rates vary from 15 to 25 mg/L.
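A quick calculation shows how such dosing rates translate into chemical consumption: the daily chemical mass is simply the treated flow multiplied by the dose. The sketch below is illustrative only; the flow rate and dose are hypothetical values chosen within the 15–25 mg/L range quoted above.

```python
def daily_chemical_mass_kg(flow_m3_per_h: float, dose_mg_per_l: float) -> float:
    """Daily chemical consumption in kg for a given flow and dose.

    1 mg/L equals 1 g/m3, so kg/day = flow (m3/h) * 24 h * dose (g/m3) / 1000.
    """
    return flow_m3_per_h * 24.0 * dose_mg_per_l / 1000.0

# Example: a hypothetical 200 m3/h stream dosed with antiscalant at 20 mg/L
mass = daily_chemical_mass_kg(200.0, 20.0)
print(f"{mass:.0f} kg/day")  # 96 kg/day
```

The same arithmetic applies to any of the dosing rates quoted in this paper once the units are expressed in mg/L.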

DESIGN APPROACH

Designing a WTP for any project is always a unique and challenging process for WTP personnel because:
• Process flow data is inaccurate; normally, most source data is estimated, with a wide variation in minimum and maximum flows and flow durations.
• The input characteristics of the flow are largely guesswork.
• Effluent disposal criteria are specific to the project and guided by local norms.

Plant designers should consider the following aspects in conceptualising a WTP:
• Optimally size the plant by integrating continuous and intermittent flows. Too small a plant cannot deliver the required discharge quality, while too conservative a design requires high capital cost and leads to ineffective treatment at lean flow conditions.
• Sequence and integrate the treatment processes for maximum effectiveness based on estimated effluent characteristics and their variations in different plant operating regimes.
• Conduct a jar test, described in more detail below, to optimise the selection and dosing rate of chemicals. Use vendor information to validate the design.
• Use a material safety data sheet (MSDS), described in more detail below, to build safety into the process design, giving due consideration to safe handling, storage, and disposal.


Designing a good WTP for a project is always a unique and challenging task for WTP personnel.

December 2009 • Volume 2, Number 1


Figure 2, a flow diagram for typical wastewater treatment in a refinery, highlights the processes and dosing chemicals.

Jar Test
Before a prototype plant is developed, it is customary to conduct a laboratory test to study the behaviour of an effluent, its response to chemical treatment, and the chemical dosing rate. The jar test is a common laboratory procedure used to determine the optimum operating conditions, especially the dosing rate of chemicals, for water or wastewater treatment. This method allows adjustments in pH, variations in coagulant or polymer dose, alternative mixing speeds, and testing of different coagulant or polymer types on a small scale to predict the functioning of a large-scale treatment operation. A jar test simulates the coagulation and flocculation processes that encourage the removal of suspended colloids and organic matter that can lead to turbidity, odour, and taste problems.

Material Safety Data Sheet
An MSDS provides relevant data regarding the properties of a particular substance or chemical. It is intended to provide designers, operators, and emergency personnel with procedures for safely handling and working with a substance, and it includes information such as physical data (melting point, boiling point, flash point, etc.), toxicity, health effects, first aid, reactivity, storage, disposal, protective equipment, and spill-handling procedures. The format of an MSDS can vary from source to source and depends on the safety requirements of the country.
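Returning to the jar test: its dose-selection step reduces to picking the jar with the lowest residual turbidity after settling. A minimal sketch of that bookkeeping (all dose and turbidity figures are invented for illustration):

```python
def optimum_dose(jar_results):
    """Return the coagulant dose (mg/L) whose jar showed the lowest residual turbidity (NTU)."""
    return min(jar_results, key=jar_results.get)

# Hypothetical six-jar run: dose (mg/L) -> residual turbidity (NTU) after settling
jars = {10: 8.2, 20: 4.1, 30: 1.9, 40: 1.2, 50: 1.4, 60: 1.8}
print(optimum_dose(jars))  # 40
```

In practice the choice also weighs chemical cost and sludge volume, so the lowest-turbidity jar is a starting point for the dosing rate rather than the final answer.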

CONCLUSIONS

Wastewater treatment in any plant normally takes a back seat as designers focus primarily on the high-end plant equipment and systems to ensure higher plant efficiency, reliability, and availability. Supporting this conclusion is the fact that reliable design inputs are obtained from vendor data only after major plant systems and equipment designs are in place and related data becomes available from equipment vendors. This paper can be helpful at the initial project phase in conceptualising a WTP design in terms of the treatment processes and chemicals needed, based on typical industry data and project environmental permits. The initial design can then be subsequently validated using actual equipment data and chemical vendors' recommendations.

The jar test simulates the coagulation and flocculation processes and is used to predict the functioning of a large-scale treatment operation.


[Figure 2. Dosing Scheme for a Typical Refinery Effluent Treatment Plant. Flow diagram; labels include: water separated from crude in refinery; sour water process storage tank; sour water stripper with antiscale dosing (0–50 gm/m3) and corrosion inhibitor dosing (1,500 ppm max., based on 5% chemical in water); acid gas (H2S) to flare; wastewater collection sump; dissolved air flotation with demulsifier dosing (15–30 ppmv) and polymer dosing (5–10 gm/m3); skimmed oil for further treatment; shop oil returned to refinery; liquid separated back to sump; media filters with polymer dosing (0.3–0.5 litre/m3) and biocide dosing (500 gm/m3); thickener; thickened sludge to sludge drying bed; cartridge filter; treated water storage tank; treated effluent for disposal.]



REFERENCES

[1] E.R. Alley, "Water Quality Control Handbook," 2nd Edition, McGraw-Hill, New York, NY/WEF Press, Alexandria, VA, 2007.

Kanchan Ganguly is a senior mechanical engineer working on OG&C projects in Bechtel's New Delhi Execution Unit. Since joining the company in 2005, he has worked on the Takreer and Scotford refinery upgrades, Texas Utilities standard power plant, and Onshore Gas Development Phase 3 projects, focusing primarily on engineering review of equipment and package systems and documents in both hydrocarbon and process/utility water areas. Kanchan has more than 23 years of experience in the industry. During this time, he has assumed responsibilities as manager and process engineer in various water treatment, process chemical, and fertilizer plants and has become familiar with their design, operation, and maintenance. Kanchan is a Chemical Engineering graduate of Calcutta University, India.

Asim De is a mechanical engineering supervisor and has worked on both Power and OG&C projects in Bechtel's New Delhi Execution Unit. He is currently working on the Yajva power generation project. During his 4 years with Bechtel, he has also worked on the Jamnagar Export Refinery captive power plant and Takreer FEED projects. Asim has more than 31 years of experience in the industry and has worked in conceptual and detailed design for all power plant systems and equipment. In addition, he is familiar with commissioning power and desalination plants. He has assumed responsibilities as lead engineer, assistant chief engineer, and general manager in different organizations involved in the power generation business. Asim is a Fellow of The Institute of Engineers (India). He is a Mechanical Engineering graduate of the Indian Institute of Technology, Kharagpur, a premier engineering institute; has an ME degree in Project Engineering from the Birla Institute of Technology, Pilani; and took a post-graduate executive course in Management at the Indian Institute of Management, Calcutta; all in India. He is a Six Sigma Yellow Belt.

ADDITIONAL READING

Additional information sources used to develop this paper include:

• G. Tchobanoglous, F. Burton, and Metcalf & Eddy, "Wastewater Engineering: Treatment, Disposal, Reuse," 3rd Edition, Tata McGraw-Hill, New Delhi, India, 1995.
• F.N. Kemmer, "The Nalco Water Handbook," 2nd Edition, McGraw-Hill Book Company, 1988.
• G. Degrémont, "Water Treatment Handbook," 7th Edition, Volume 1, Lavoisier, France, 2007.
• NALCO Chemical Company – Engineering Reference Manual: Technical Bulletins on Wastewater Treatment and Treatment Chemicals (various).
• Manufacturer's reference information about various chemicals (ACCEPTA Environmental Technology, Chemco Products Company, and Lenntech BV).
• Reference information about zeta potential (Microtec Co., Wikipedia, and Zeta-Meter, Inc.).




ELECTRICAL SYSTEM STUDIES FOR LARGE PROJECTS EXECUTED AT MULTIPLE ENGINEERING CENTRES

Rajesh Narayan Athiyarath
rnaraya1@bechtel.com

Issue Date: December 2009

Abstract—Electrical system studies are carried out to verify that major electrical equipment is adequately rated, determine the conditions for satisfactory and reliable operation, and highlight any operational restrictions required for safe operation. System studies are normally conducted on a selected set of study cases, and their results are used to determine the system behaviour under all operating conditions. The system studies for the Jamnagar Export Refinery Project (JERP) presented unique challenges because of the sheer size of the captive power generation (with a new 800 MW power plant operating in parallel with an existing 400 MW power plant), the Jamnagar plant's extensive power distribution network, and the engineering work distributed amongst various Bechtel engineering centres and non-Bechtel engineering contractors around the world. This paper presents an overview of system study execution on the complex JERP electrical network, along with a brief report on the various studies conducted as part of this project.

Keywords—analysis, electrical system studies, Electrical Transient Analysis Program (ETAP), Jamnagar Export Refinery Project (JERP)

INTRODUCTION

Electrical system studies are carried out to verify that major electrical equipment is adequately rated, determine the conditions for satisfactory and reliable operation, and highlight any operational restrictions required for safe operation. System studies are normally conducted on a selected set of study cases, and their results are used to determine the system behaviour under all operating conditions.

The system studies for the JERP presented unique challenges because of the sheer size of the captive power generation (with a new 800 MW power plant operating in parallel with an existing 400 MW power plant), the plant's extensive power distribution network, and the engineering work distributed amongst various Bechtel engineering centres and non-Bechtel engineering contractors around the world. The various units within the JERP were engineered at Bechtel engineering centres (London, Houston, Frederick, Toronto, and New Delhi), the Bechtel/Reliance Industries Limited joint venture (JV) office in Mumbai, and the sites of non-Bechtel engineering contractors. The core Electrical group based in Bechtel's London office (the London core group) was tasked with preparing a combined model of the electrical system and with conducting the system studies. For this project, it was difficult to select the cases to simulate and study because of the large number of possible operating configurations for such a complex industrial electrical network. The system studies themselves were also a challenge because so many study cases (particularly transient stability analysis studies) had to be evaluated.

This paper presents an overview of system study execution on the complex electrical network of the JERP, along with a brief report on the various studies conducted as part of this project.

© 2009 Bechtel Corporation. All rights reserved.

OVERVIEW OF THE JERP

Reliance Industries operates the Jamnagar Domestic Tariff Area (DTA) oil refinery and petrochemical complex located in Gujarat, India. The complex processes 650,000 barrels per stream day (650 kbpsd) of crude oil and produces liquefied petroleum gas (LPG), polypropylene, naphtha, gasoline, kerosene, diesel, sulphur, and numerous aromatic products, including paraxylene, orthoxylene, and benzene.

The original project, which Bechtel designed and constructed, was the world's largest grassroots single-stream refinery. The complex includes a captive power plant (CPP) designed to produce 400 MW of power (backed up by a 132 kV grid supply) to meet the refinery's power demands.

The JERP comprises a new export-oriented refinery located in a special economic zone (SEZ) adjacent to the DTA site. The project aims to almost double the capacity of the Jamnagar refinery to more than 1,200 kbpsd; add crude distillation, associated secondary conversion facilities, and an 800 MW CPP; and modify the existing refinery to ensure the efficient operation of both it and the new refinery. On completion of the JERP, the Jamnagar complex will be the world's largest refinery, surpassing Venezuela's 940 kbpsd Paraguana refining complex.

ABBREVIATIONS, ACRONYMS, AND TERMS
AVR – automatic voltage regulator
BSAP – Bechtel standard application program (a software application that Bechtel has determined to be suitable for use to support functional processes corporate-wide)
CPP – captive power plant
DTA – Domestic Tariff Area
DWI – discipline work instruction
EDMS – electrical distribution management system
EMS – energy management system
ETAP® – Electrical Transient Analysis Program (a BSAP)
FEED – front-end engineering and design
GTG – gas turbine generator
HVDC – high-voltage direct current
ICT – interconnecting transformer
IEC – International Electrotechnical Commission
IEEE – Institute of Electrical and Electronics Engineers
IHD – individual harmonic distortion
JERP – Jamnagar Export Refinery Project
JV – joint venture
LMS – load management system
LPG – liquefied petroleum gas
LV – low voltage
MRS – main receiving station
MV – medium voltage
OLTC – on-load tap changer
PC – personal computer
RST – refinery service transformer
SEZ – special economic zone
STG – steam turbine generator
THD – total harmonic distortion
VSD – variable-speed drive

ENGINEERING THE JERP

The JERP required approximately 6 million engineering jobhours within a short and challenging project schedule. Hence, project engineering was split up amongst the various Bechtel offices, headed by the London core group (Figure 1). The key task of conducting overall system studies on the JERP and DTA electrical networks was handed over to the London core group.

POWER GENERATION AND DISTRIBUTION

A simplified depiction of the JERP power generation and distribution system is portrayed in Figure 2.

JERP Power System
As the JERP power source, the CPP consists of six 125 MW, 14.5 kV gas turbine generators (GTGs), with space allocated for three future GTGs. The GTGs are connected to the 220 kV switchyard bus via their dedicated 14.5/231 kV, 161 MVA step-up transformers. Two 11 kV, 25 MW steam turbine generators (STGs) are connected to the switchboards in MRS-1 via 11/34.5 kV, 38 MVA step-up transformers. Eight 220/34.5 kV, 174 MVA refinery service transformers (RSTs) connected to the 220 kV switchyard feed the JERP plant substations through 33 kV switchboards in two main receiving stations (MRS-1 and MRS-2). Finally, a pair of 220/132 kV, 107 MVA autotransformers are provided as the interconnecting transformers (ICTs) between the JERP and DTA electrical systems.

[Figure 1. Project Execution Locations. Diagram of offices: Bechtel Frederick (DTA Expansion Group; DTA Revamp FEED; FCC Group); Bechtel London (FCC/VGO Units; Captive Power Plant); Bechtel New Delhi (Captive Power Plant); Bechtel Houston/Shanghai (CFP, Crude & Alkylation Units); Bantrel Toronto (Aromatics Unit; CNHT/KHT Units); Bechtel London core functions; Bechtel–Reliance JV Mumbai (Offsites & Utilities; CFP, Crude & Alkylation Units [balance]; FCC/VGO Units [balance]; DTA Revamp; Merox™, ATU, SWS, PRU Units); third-party and licensor offices (Coker: FW, Houston; Sulphur & TGTU: BVPI, Kansas City; Sulphur Granulation; Hydrogen: Linde, Munich; Acid Regeneration: MECS, St. Louis); Jamnagar Engineering Office (JEC); Jamnagar (Site).

Abbreviations used in Figure 1: ATU, amine treating unit; CFP, clean fuel plant; CNHT, cracked naphtha hydrotreater; CPP, captive power plant; DTA, Domestic Tariff Area; FCC, fluid catalytic cracker; FEED, front-end engineering and design; FW, Foster Wheeler; JEC, Jamnagar Engineering Office; KHT, kerosene hydrotreater; Merox™, mercaptan oxidation process; PRU, propylene recovery unit; SWS, sour water stripper; TGTU, tail gas treatment unit; VGO, vacuum gas oil.]

DTA Power System
The DTA CPP consists of nine 28 MW GTGs and six 25 MW STGs that feed the five 33 kV switchboards, from which power is further distributed to the DTA plant substations.

The JERP electrical system incorporates an energy management system (EMS) that comprises an electrical distribution management system (EDMS) to control and monitor the electrical network and a load management system (LMS) to carry out load shedding, if required, within the JERP and DTA electrical networks.

[Figure 2. JERP Power System Generation and Distribution. Simplified one-line diagram: captive power plant (six 125 MW GTGs) feeding 220 kV switchgear (1½ breaker scheme) with interconnection to the DTA; two 25 MW STGs; four sets of 33 kV main receiving switchgear; and a 6.6 kV/415 V distribution network for each unit.]

ELECTRICAL SYSTEM STUDIES

'System studies' is the generic term for a wide range of simulations conducted on a model of an electrical system under various operating conditions encountered or anticipated during operation of the network, and the results are used to predict the network's behaviour under actual operating conditions. System studies analyse the behaviour of the electrical network components under various steady-state, dynamic, and transient conditions.

System studies are conducted at different stages of a project. The results of system studies performed during the front-end engineering and design (FEED) and detailed engineering stages enable proper selection of equipment ratings, identification of the electrical system loading and operational modes for maximum reliability and safety, and selection of the control modes for major equipment. These early system studies can also assess the ability of the electrical network to meet present and future system energy demands. Further, the study results can help in the design of a reliable electrical system suitable for the project's present and future requirements. System studies conducted after the power system network is operational generally study the feasibility or effects of system expansion, check conformance with any changes in codes and standards, or analyse system behaviour to identify the underlying causes of a network disturbance or equipment failure.

In the case of the JERP, the electrical system is planned to operate in parallel with the existing DTA electrical system and the grid supply from the local electricity utility. The JERP electrical system also has to be adequate for the addition of future units and high-voltage direct current (HVDC) links to the local electricity utility supply. Hence, this combination of a large-scale greenfield project and a major expansion of an existing network becomes a special case for system studies. The sheer size of the JERP and DTA electrical networks (with a combined power generation of 1.2 GW), the extensive power distribution network within the JERP and DTA plants, and the crucial need to ensure reliability of the power supply under all operating conditions make it important to conduct reliable and accurate system studies.

Three key elements are at the heart of a proper system study:
• A dependable and versatile system study software program
• A reliable model of the electrical network
• Selection of studies to be conducted and study cases to be simulated

ELECTRICAL SYSTEM STUDY SOFTWARE PROGRAMS

System studies entail the analysis of the interactions amongst the various components of the electrical network to determine the power flows between elements and the voltage profile at the various buses in the network. Many mathematical computations are required to analyse even a small network, precluding the use of manual calculation techniques to conduct any but the most rudimentary system studies. These circumstances have led to an effort since the late 1920s to devise computational aids for network analysis.

From about 1929 to the 1960s, special analogue computers in the form of alternating current network analysers were used for system studies. These network analysers contained scaled-down versions of the network components, such as power sources, transmission lines, cables, and loads, that were interconnected using flexible cords to represent the system being modelled. Although limited in scope and complexity, the network analysers were used to study power flows and voltage profiles under steady-state and transient conditions.

The next stage in the evolution of system study software programs was the use, from the late 1940s, of digital computers to conduct system studies. These programs were initially limited in scope due to the programming methods used (punched-card calculators). However, the availability of large-scale digital computers from the mid-1950s gave a boost to the use of computer programs for system studies. Although these programs originally required mainframe computing power and specialised programming techniques, the growth in the computing power of desktop PCs and laptops has seen these programs become an essential tool for the electrical engineer. Current system study programs offer flexible and easy-to-use techniques for system modelling, analysis, and presentation.

One of the more commonly used system study software programs is Operation Technology, Inc.'s (OTI's) Electrical Transient Analysis Program (ETAP®), which has been qualified as a Bechtel standard application program (BSAP). The offline simulation modules of ETAP 6.0.0, the most current release at the time of project execution, were used to conduct the JERP power system studies.

MODEL OF THE JERP ELECTRICAL NETWORK

The various Bechtel and third-party engineering centres prepared models of the electrical networks for the individual JERP units, which were later integrated with the overall model. To speed the process of integrating them, it was necessary to ensure that all the engineering centres used uniform modelling principles to prepare the individual models. The London core group issued specific discipline work instructions (DWIs) to the engineering centres and held a series of conferences to explain the modelling principles to be followed to ensure uniformity. These work instructions covered key points such as model structure, key data required, division of responsibility for preparing and using the model, instructions for dealing with cases of incomplete/missing data related to the network or equipment required for the model, and use of library data (accompanied by a common library database to be used to populate the model).

The London core group integrated these various submodels into a composite model of the overall plant electrical network. The major items modelled were the GTGs/STGs along with their control systems (governors, exciters, and power system stabilisers), plant loads, and interconnecting power cables. The modelling of certain complex portions of the GTG control system required software such as Simulink®, a specialised program used to model and simulate dynamic control systems; where available, OTI constructed these models. In line with the specification requirement for the JERP, the engineering centres used International Electrotechnical Commission (IEC) standards as the basis for evaluating the results of all studies except harmonic analysis.

ELECTRICAL SYSTEM STUDIES AND STUDY CASES

A wide range of system studies can be conducted on electrical networks to study the behaviour of the system under steady-state conditions as well as conditions in which it is subjected to disturbances in normal operation, whether planned (e.g., step loading or load sharing amongst generators) or unplanned (e.g., electrical fault, generators tripping). Because it is not possible to analyse every expected operating condition, it is very important to select the study cases whose results can be used to predict the system behaviour under all operating conditions. To study system behaviour at the JERP under all expected operating conditions, the studies are usually conducted on the most onerous conditions expected during refinery operation. The following system studies were carried out to analyse the behaviour of the JERP and DTA electrical networks.

Load Flow Analysis
Load flow analysis is a steady-state analysis that calculates the active and reactive power flows through each element of the network and the voltage profile at the network's various buses. Some of the main parameters examined in a load flow analysis are the presence of overvoltage or undervoltage at any point in the electrical network, overloading of any network element, and very low system power factor. Load flow analyses help identify any abnormal system conditions during steady-state operation that can be harmful to the system in the long run. They also provide the initial basis for other detailed analyses, such as motor starting and transient stability, and it is to be noted that the results of the load flow analysis affect these other analyses. For example, an electrical system operating under steady-state conditions is more likely to satisfactorily survive a transient event, such as the step-loading or tripping of one of the operating generators, if its initial operating conditions are favourable (e.g., voltages within limits, sufficient margin in the loading of various network elements). A balanced load flow analysis is adequate because the vast majority of loads in the refinery are inherently balanced (e.g., three-phase motors).

Once the refinery is commissioned and fully operational, the electrical system is expected to operate in a stable condition. The London core group carried out load flow analyses under these three sets of conditions:
• Normal system configuration, i.e., with redundant power feeds to the various plant switchboards, simulating the normal operating condition of the electrical network
• Loss of redundant power feed, i.e., with single power feed to the various plant switchboards (This condition of single-ended operation can occur in the electrical network under a contingency like loss of plant transformers.)
• No-load conditions (This study case was selected to assist in identifying any dangerous overvoltage that may occur when the network is operating under no-load or lightly loaded conditions [e.g., plant startup conditions].)
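The core numerical task of a load flow study can be illustrated on a toy two-bus network: a slack bus held at 1.0 per unit feeds a load bus through a series impedance, and the load-bus voltage is found by fixed-point iteration. This is only a sketch of the Gauss-Seidel idea, not the JERP model or ETAP's solver; all per-unit values are invented for illustration.

```python
# Gauss-Seidel load flow on a two-bus network (per-unit values, invented).
# Slack bus: V1 = 1.0 pu. Load bus draws S = P + jQ through impedance Z.

def two_bus_load_flow(p_load, q_load, z_series, v_slack=1.0 + 0j,
                      tol=1e-9, max_iter=100):
    """Iterate V2 = V1 - Z * I, with I = conj(S) / conj(V2), until converged."""
    s_load = complex(p_load, q_load)
    v2 = v_slack  # flat start
    for _ in range(max_iter):
        i_load = (s_load / v2).conjugate()  # current drawn by the load
        v2_new = v_slack - z_series * i_load
        if abs(v2_new - v2) < tol:
            return v2_new
        v2 = v2_new
    raise RuntimeError("load flow did not converge")

# 0.8 pu active load at roughly 0.9 power factor through Z = 0.01 + j0.05 pu
v2 = two_bus_load_flow(0.8, 0.39, 0.01 + 0.05j)
print(f"|V2| = {abs(v2):.4f} pu")
```

The resulting voltage magnitude is the kind of figure checked against the overvoltage/undervoltage limits discussed above; a full study repeats this balance simultaneously at every bus in the network.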

The results of these analyses revealed some instances in which the bus voltages exceeded the acceptable limits. The London core group recommended that the tap settings of the upstream transformers associated with these switchboard buses be changed to bring the voltages within the specified limits. The core group also highlighted cases of potential overloading of transformers under loss of redundant power feed (the second case) for observation during actual plant operation.

Short Circuit Analysis
A short circuit condition imposes the most onerous short-time duty on the system electrical equipment. This fault condition arises as a result of insulation failure in the equipment or wrong operation of the equipment (e.g., closing onto an existing fault or closing circuit breakers when the associated earth switch is closed), leading to the flow of uncontrolled high currents and severely unbalanced conditions in the electrical system. The four main types of short circuits are:
• Three-phase short circuit with or without earthing (This is usually the most severe short circuit condition.)
• Line-to-earth (single-phase-to-earth) fault (In certain circumstances, the short circuit current for a line-to-earth fault can exceed the three-phase short circuit current.)
• Line-to-line (phase-to-phase) fault
• Double line-to-earth fault

The electrical equipment has to be rated for the short circuit level of the system, which basically requires all of the following conditions to be met:
• The electrical equipment must be able to withstand the short circuit current until the protective equipment (relays) detects the fault and it is cleared by opening circuit breakers (i.e., thermal withstand short circuit current). The IEC standards specify a standard withstand duration of 1 second or 3 seconds; the JERP used switchgear rated for a 1-second withstand time.
• The circuit breakers must be suitable to interrupt the flow of the short circuit current (i.e., breaking duty).
• The circuit breakers must be suitable to close onto an existing fault (i.e., making duty).

Additionally, the protective system of the network has to be set to enable reliable detection of any short circuit condition (minimum and maximum short circuit conditions).

The calculation of the short circuit current for these conditions is made more complex by the behaviour of the short circuit current immediately after the fault. Depending on the network characteristics, the behaviour of the generators in the network, and the exact instant of the fault, the short circuit current may contain significant amounts of transient alternating and direct current components, which decay to zero over time. It is very difficult to account for the effects of these phenomena through manual calculation methods. This is particularly true because the presence of a large direct current component in the short circuit current imposes a very stringent breaking duty on the circuit breakers, since a natural current zero may not be achieved.

The results of the short circuit analysis calculated the following components of the short circuit current at each bus:
• ip – peak current in the first cycle after the short circuit
• Idc – direct current component at the instant the circuit breaker opened
• Ib sym and Ib asym – symmetrical and asymmetrical root mean square currents at the instant the circuit breaker opened
• Ith – thermal withstand short circuit current for the 1-second rating

These results were cross-checked with the equipment ratings to verify that the equipment short-time ratings were suitable for the short circuit level of the system.

Stability Analysis
It is relevant to note the concept of stability as defined in standards such as Standard 399 of The Institute of Electrical and Electronics Engineers, Incorporated (IEEE®). [1] IEEE 399 states that 'a system (containing two or more synchronous machines) is stable if, when subjected to one or more bounded disturbances (less than infinite magnitude), under a specified set of conditions, the resulting system response(s) are bounded'. [2] System stability requirements can be generally categorised into steady-state stability, dynamic stability, and transient stability.

Steady-State Stability Analysis
Steady-state stability is the ability of the system to remain stable under slow changes in system loading. The steady-state stability study determines the maximum value of machine loading that is possible without losing synchronism as the system loading is increased gradually.
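The peak current ip listed above is driven by the direct current offset, which in turn depends on the network's R/X ratio. As an illustrative sketch (not the project's actual ETAP calculation), the IEC 60909 approximation derives a peak factor κ from R/X and applies it to the initial symmetrical short circuit current; the 40 kA and R/X = 0.1 figures below are invented example values.

```python
import math

def peak_short_circuit_current(i_k_sym_ka: float, r_over_x: float) -> float:
    """Peak current ip (kA) per the IEC 60909 approximation:
    ip = kappa * sqrt(2) * Ik'', with kappa = 1.02 + 0.98 * exp(-3 * R/X).
    """
    kappa = 1.02 + 0.98 * math.exp(-3.0 * r_over_x)
    return kappa * math.sqrt(2.0) * i_k_sym_ka

# Example: 40 kA initial symmetrical current in a highly inductive network (R/X = 0.1)
ip = peak_short_circuit_current(40.0, 0.1)
print(f"ip = {ip:.1f} kA")
```

A low R/X ratio (a highly inductive network, typical near large generators) drives κ towards its maximum of 2.0, which is why the first-cycle peak duty can approach 2.8 times the symmetrical current.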

and power system stabiliser to adjust the output of the machine and match the load requirement. the behaviour of different sections of the network may be different for the same transient event. Also. It analysed the behaviour of the power system in the pre-fault stage. and a phase angle of ‘∂ ’ between them is represented in Equation 1: [1] P= EM EG sin∂ X (1) The maximum power that can be transferred occurs when ‘∂ ’ = 90 degrees. Power Transfer Between Machines M—refer to Figure 3) with internal voltages of EG and EM. Hence. [1] Traditionally. whereby small. tripping of generators. generally within 1 second of the event). transient stability analysis focused on the ability of the system to remain in synchronism immediately after the occurrence of the ‘transient’ event (i. Also. the traditional transient stability analysis ignored the action of the machine governor. per Equation 2: P = max EM EG X (2) For particular values of EG and E M. Rather.g. to verify whether system stability is retained. Number 1 125 . short circuits. the behaviour of the electrical system was studied for all probable cases of load throwoff in which a substantial portion of the operating load was suddenly tripped. the state of the electrical system can be considered ‘dynamic’.. the machines lose synchronism with each other if the steady-state limit of Pmax is exceeded. during the fault. The dynamic stability analysis of any system is only practical through specialised computer programs such as ETAP. Dynamic Stability Analysis A ‘steady-state’ scenario never exists in actual operation. Figure 3. load. • Load throw-off study: A load throw-off condition can cause the machines to over speed. Hence. The stability analyses can be broadly classified into the following categories. random disturbances are bounded and damped to an acceptable limit within a reasonable time. 
They covered the operation of the JERP electrical network while in a standalone condition as well as in parallel operation with the DTA electrical network. or system characteristics (e.e. coupled with specialised computer programs such as ETAP. Transient and Extended Dynamic Stability Analysis • Fault withstand study: This study entailed simulation of single-phase and three-phase faults at various locations in the electrical network. random changes in the system load constantly occur.. however. and automatic voltage regulator (AVR) because they were slow-acting compared with the duration of the analysis. exciter. It has also been seen that different sections of an interconnected network may respond at different times to a transient event that sometimes may be outside the traditional ‘1 second’ window for transient analysis.G jX M EG EM the first ‘swing’ of the machines. the dividing line between dynamic stability analysis and transient stability analysis has been virtually eliminated. but today such an analysis can be performed because of the availability of practically unlimited computing power on desktop and laptop computers. A range of stability analyses was carried out on the JERP refinery system. respectively. This approach to transient stability analysis has been modified in recent times since the advent of governors. exciter. December 2009 • Volume 2. The steady-state stability study determines the maximum value of machine loading that is possible without losing synchronism as the system loading is increased gradually. and AVRs based on fast-acting control systems. the transient stability analysis needs to be carried out for a longer duration (preferably over a range of transient events having varying severities and durations). The system can be considered stable if the responses to these small. This kind of analysis was not possible in earlier years due to the high complexity of modelling and limited computing power. 
Temporary overvoltage conditions can also occur in the system. Hence. exciters. followed by actions of the generator governor. switching in large bulk loads) without a prolonged loss of synchronism. Stability analyses carried out on the JERP refinery system covered the operation of the JERP electrical network in standalone condition as well as in parallel operation with the DTA electrical network. Transient Stability Analysis Transient stability is the ability of the system to withstand sudden changes in generation. and after the fault was cleared by the system’s protective devices.
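The steady-state power-transfer relationship in Equations 1 and 2 can be illustrated numerically. The sketch below uses hypothetical per-unit values, not JERP data:

```python
import math

def power_transfer(eg, em, x, delta_deg):
    """Power transferred between two synchronous machines (Equation 1):
    P = (EG * EM / X) * sin(delta)."""
    return eg * em / x * math.sin(math.radians(delta_deg))

# Hypothetical per-unit values: EG = EM = 1.0 pu, X = 0.5 pu
p_30 = power_transfer(1.0, 1.0, 0.5, 30)   # operation at delta = 30 degrees
p_max = power_transfer(1.0, 1.0, 0.5, 90)  # steady-state limit, Equation 2
print(round(p_30, 3), round(p_max, 3))  # prints: 1.0 2.0
```

In this simple two-machine picture, attempting to transfer more than p_max corresponds to the loss of synchronism described above.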

Contingency Analysis
• Load sharing on tripping of JERP/DTA GTG: The behaviour of the power system was studied when one or more of the operating GTGs/STGs tripped, causing a loss of generation, which can cause a sudden increase in the load demand on the other operating GTGs/STGs. This study was carried out for various combinations of operating GTGs/STGs in the JERP and DTA electrical systems (i.e., various power flow scenarios between JERP and DTA systems). When system stability could not be achieved by load sharing amongst the remaining operating GTGs/STGs, load shedding was simulated to try to achieve a stable system.
• Load sharing on tripping of tie circuit breaker between JERP and DTA electrical systems: The behaviours of the JERP and DTA electrical systems were studied on disconnection of the JERP–DTA tie line. This study was carried out for various combinations of operating GTGs/STGs in the JERP and DTA electrical systems. When system stability could not be achieved by load sharing amongst the operating GTGs/STGs in each individual network, load shedding was simulated to try to achieve a stable system.

Operational Analysis
• Step-load addition study: A sudden addition of load on operating machines can cause loss of stability. A step-load addition scenario can occur in a variety of ways in an electrical system, the most probable being loss of one of the operating machines. The behaviour of the system was studied for all probable scenarios of step-load addition.

Motor-Starting Study

At the instant of starting, synchronous and induction motors draw a starting current that is several times the full-load current of the motor; this starting current is typically between 600% and 720% of the normal full-load current. This high current causes a voltage drop in the upstream electrical network, as well as in the motor feeder cable. Depending on the size of the motor being started and the generating capacity available, motor starting can impose a very high short-term demand on the operating generators.

The effects of this voltage drop include:
• The combined voltage drop in the supply network and the motor cable reduces the voltage available at the motor terminals during the starting period. Because the motor torque is directly proportional to the square of the applied voltage, excessive voltage drops can mean that insufficient torque is available to accelerate the motor in the face of the load torque requirement, leading to very long starting times or a failure to start.
• The voltage drop at the switchboard buses can affect the other operating loads, mainly in the form of nuisance tripping of other loads on the network (e.g., voltage-sensitive loads or contactor-fed loads where the control voltage for the contactor is derived from the switchgear bus).
• For the other operating loads, a reduction in the motor terminal voltages causes the current drawn by the motors to increase as they strive to produce the power the process demands of them. This condition exacerbates the voltage problem because the increased current gives rise to an increased voltage drop in the system. There can also be cases in which the reduction in the terminal voltage for the operating motors causes the motor-torque curve to shift downwards. This reduction in the motor torque can cause the running motors to stall.

Studying motor starting can help identify these voltage-drop-related problems at the design stage. Usually, the worst-case motor-starting scenario is the starting of the highest-rated motor (or the highest-rated standby motor) at each voltage level with the operating load of the plant as the standing load. However, other worst-case scenarios may require evaluation in certain situations:
• Motors with an unusually long supply cable circuit
• Motors fed from a 'weak' power supply (e.g., starting on emergency power supplied from a diesel generator set of limited rating)
• Simultaneous starting of a group of motors

In the event of an unfavourable outcome from the motor-starting study, various improvement measures are available, including:
• Specifying that motors be designed with a lower value of starting current
• Specifying lower impedance for the upstream transformer after verifying the suitability through a short circuit analysis
• Starting the largest motors in the network with a reduced standing load
• Using larger cable sizes for the motor feeder to improve the motor terminal voltage
• Providing assisted starting, which is particularly feasible for the larger high-voltage (HV)/medium-voltage (MV) motors, instead of direct on-line starting
• Using motor unit transformers to feed power to large MV motors, which ensures that the effect of the voltage drop on the rest of the electrical system is reduced
• Increasing upstream bus voltage temporarily (e.g., through on-load tap changers [OLTCs]), if required, before starting large motors

Various motor-starting scenarios were modelled for the JERP, and the results indicated that the motors could be started satisfactorily.

Transformer Energisation Studies

The inrush phenomenon in transformers can inflict a very severe, albeit short-term, effect on the voltage profile at the refinery's various switchboard buses. The inrush current taken by the transformers is due to the behaviour of the magnetic circuit. When a transformer is switched on, the magnetic flux immediately after energisation depends on the following factors that are essentially random:
• The point on the sine wave voltage waveform where the transformer is switched on, which decides the amount and direction of the flux requirement
• The amount and direction of the remnant flux, which depends on the point on the sine wave voltage waveform where the transformer was last switched off

The constant flux linkage theorem states that the magnetic flux in an inductive circuit cannot change suddenly; the magnetic flux immediately after energisation (t = 0+) should be equal to the magnetic flux immediately before energisation (t = 0–). As explained by this theorem, the magnetic flux after energisation retains a sinusoidal shape that is biased by the flux requirement at the point of energisation and the remnant flux. Depending on the design of the transformer, this condition can cause the flux requirement to be well above the knee-point voltage on the transformer magnetising curve, leading to very high excitation currents that may reach large multiples of the full-load current of the transformer. The inrush current decays substantially within a few cycles. Although modern protection systems are well-equipped with algorithms to distinguish the transformer inrush current from the short circuit, the inrush current still causes a severe voltage dip at the other switchboards in the network. This voltage dip can cause nuisance tripping of other network loads.

The London core group studied various probable transformer energisation scenarios (including group energisation of transformers) to confirm that the network voltages recover without tripping system operating loads. Because ETAP could not directly model transformer behaviour under inrush conditions, the impact of the transformer inrush current was simulated by switching a series of low power-factor loads in and out at intervals of 5 milliseconds. The load values were selected as exponentially decreasing to simulate the inrush current decay. To ensure accurate modelling, the inrush current data was based on transformer manufacturers' data supplemented by the measurements recorded during site testing and commissioning.

The results of the transformer energisation studies established the network conditions under which the JERP transformers can be safely energised. This finding was crucial because in certain scenarios, the JERP main transformers were to be energised from the DTA electrical system, and any disruption to the DTA operating loads could lead to tripping of the DTA refinery.

Harmonic Analysis

The amount of periodic waveform distortion present in the power supply is one of the most important criteria for measuring power quality. Periodic waveform distortion is characterised by the presence of harmonics and interharmonics in the power supply. Harmonics are sinusoidal voltages and currents with frequencies that are integral multiples of the fundamental frequency of the system. Interharmonics are sinusoidal voltages and currents with frequencies that are non-integral multiples of the fundamental frequency of the system. For the JERP:

f1 = fundamental frequency = 50 Hz

fharmonic = n × f1 (n = 2, 3, 4, …)    (3)

finterharmonic = m × f1 (m > 0 and non-integral)    (4)
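Equations 3 and 4 amount to a simple classification rule for measured spectral components. The sketch below assumes the JERP 50 Hz fundamental; the tolerance value is an arbitrary choice, not from the paper:

```python
F1 = 50.0  # JERP fundamental frequency, Hz

def classify(freq_hz, tol=1e-9):
    """Classify a spectral component per Equations 3 and 4: an integral
    multiple of f1 (n >= 2) is a harmonic; any other positive
    non-integral multiple is an interharmonic."""
    m = freq_hz / F1
    if m <= 0:
        raise ValueError("frequency must be positive")
    if abs(m - round(m)) < tol and round(m) >= 2:
        return "harmonic"
    if abs(m - 1.0) < tol:
        return "fundamental"
    return "interharmonic"

# 150 Hz = 3 x f1 -> harmonic; 275 Hz = 5.5 x f1 -> interharmonic
print(classify(150.0), classify(275.0), classify(50.0))
```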

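The IEEE 519 voltage-distortion limits referenced in this section can be used as a screening check on measured bus data. In the sketch below, the limit values are taken from IEEE 519 Table 11-1; the sample switchboard measurement is hypothetical:

```python
import math

def thd_limit_pct(bus_kv):
    """Total harmonic distortion (THD) limit, in percent, for a given
    rated bus voltage in kV, per IEEE 519 Table 11-1."""
    if bus_kv <= 69:
        return 5.0
    elif bus_kv <= 161:
        return 2.5
    return 1.5

def thd_pct(fundamental_v, harmonic_vs):
    """THD = sqrt(sum of squared harmonic magnitudes) / fundamental x 100."""
    return 100.0 * math.sqrt(sum(v * v for v in harmonic_vs)) / fundamental_v

# Hypothetical 6.6 kV switchboard: fundamental plus a few harmonic magnitudes (V)
measured = thd_pct(6600.0, [120.0, 80.0, 45.0])
print(f"THD = {measured:.2f}% (limit {thd_limit_pct(6.6)}%)")
```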
The presence of excessive harmonics can lead to premature aging of electrical insulation due to dielectric thermal or voltage stress in equipment such as motors, cables, and transformers. Other possible effects of harmonics include reduced power factors, incorrect operation of protection systems, interference with communication networks, and occurrence of series and parallel resonant conditions that can lead to excessive currents and voltages in the system.

Periodic waveform distortion is caused by non-linear loads, which are loads that do not draw a sinusoidal current when excited by a sinusoidal voltage. The non-linear loads act as sources of harmonic currents in the power system, which cause a voltage distortion at the various buses because of the harmonic voltage drops across the impedances of the network. The quantum of voltage distortion depends on the harmonic currents injected into the system and the impedance of the system (the voltage distortion in a weak system, characterised by a high system impedance, is higher). Hence, it is important to carry out a harmonic analysis wherever the non-linear load forms a significant portion of the total load.

The JERP electrical network includes a large number of harmonic-generating loads, mainly 22 kW and 37 kW low-voltage (LV) variable-speed drives (VSDs) that act as sources of harmonic currents. The London core group carried out a harmonic analysis of the JERP electrical network to verify that the voltage distortion at the network's various switchboards caused by these harmonic-generating loads is within the limits specified in Table 11-1 of IEEE 519 (Table 1).

Table 1. Harmonic Limits as Defined by IEEE 519

Rated Bus Voltage | Individual Harmonic Distortion (IHD), % | Total Harmonic Distortion (THD), %
69 kV and less | 3 | 5
Greater than 69 kV up to 161 kV | 1.5 | 2.5
161 kV and greater | 1 | 1.5

For shorter periods, during startups or unusual conditions, these limits may be exceeded by 50%.

All harmonic-generating process loads were modelled in the ETAP model used for harmonic analysis. The power sources in the JERP network (GTGs and STGs) were assumed to have no harmonic distortion. As a worst-case scenario, the harmonic analysis was carried out with the minimum generation configuration under normal operating conditions because this configuration corresponds to the maximum system impedance.

The results of the harmonic analysis highlighted the switchboards whose power quality needs to be monitored further during plant operation. The London core group recommended that any corrective action (such as adding harmonic filters) to reduce the harmonics at these JERP plant switchboards be undertaken after measuring the actual harmonic levels at the various 6.6 kV/415 V switchboards when the plant is operating.

CONCLUSIONS

The results of the system studies of the JERP electrical network verified the adequacy of the ratings for the system's major equipment. The results also helped determine the conditions for satisfactory and reliable system operation and highlighted any operational restrictions required for safe operation.

LESSONS LEARNT

Conducting electrical system studies on a complex project such as the JERP and working with execution centres and non-Bechtel engineering contractors located across the globe have highlighted three major areas, discussed below, where existing Bechtel project procedures can be improved or fine-tuned to increase operating efficiency.

Distributing Work
The work distribution amongst the execution centres, non-Bechtel engineering contractors, and the London core group for carrying out ETAP modelling must be clearly defined through proper DWIs. Amongst other things, the instructions should include the structure of the model, the extent of modelling required, the data required to be populated in the model, the methodology of populating the data, the common library to be used to populate the standard data in the model, the use of assumptions and approximations, and the tests that must be carried out to ensure that sections of the model meet all requirements before they are transferred to the London core group for integration into the overall model.

Handling Model Revisions
Proper work procedures for handling ETAP model revisions need to be furnished to the execution centres/non-Bechtel engineering contractors so that the London core group can integrate the revised models or revised sections of same into the overall model without causing rework or loss of data.

Identifying Study Cases
To increase engineering efficiency, it is essential to optimise the types of studies to be conducted on a project and the number of cases to be analysed for each study. At the same time, it is essential to ensure that the number and types of study cases allow the engineer to determine system behaviour under all operating conditions.

This opportunity is particularly valuable because the projects that Bechtel is bound to take up (in the role of engineering contractor or as a project management consultant or member of a project management team) are more likely to be of the scale of the JERP, and it is highly likely that the engineering work for such projects will be divided amongst various execution centres.
ACKNOWLEDGMENTS

The author wishes to express his gratitude to R. Hibbett (lead electrical engineer—JERP), D. Buckle (chief engineer), and David Hulme (project engineering manager) for their support and guidance during execution of the JERP system studies. The author also wishes to thank V. Venkateswar, B. Shanbhag, and M. Mujawar from Reliance Industries for their support and encouragement.

REFERENCES

[1] IEEE 399-1997, "IEEE Recommended Practice for Industrial and Commercial Power Systems Analysis," The Institute of Electrical and Electronics Engineers, Incorporated (IEEE®), 1998, access via http://standards.ieee.org/colorbooks/sampler/Brownbook.pdf

ADDITIONAL READING

Additional information sources used to develop this paper include:
• P.M. Anderson and A.A. Fouad, Power System Control and Stability, 2nd Edition, IEEE Press Series on Power Engineering, John Wiley & Sons, Inc., 2003.
• D.P. Kothari and I.J. Nagrath, Power System Engineering, 2nd Edition, Tata McGraw-Hill Publishing Company Ltd., 2008.
• Systems, Controls, Embedded Systems, Energy, and Machines (The Electrical Engineering Handbook, 3rd Edition), Richard C. Dorf, ed., CRC Press/Taylor & Francis Group, Boca Raton, FL, 2006.

BIOGRAPHY

Rajesh Narayan Athiyarath is a senior electrical engineer in Bechtel's OG&C Global Business Unit. He has 16 years of experience in engineering oil and gas, petrochemical, and GTG power plant projects worldwide. During his 3 years with Bechtel OG&C (London), Rajesh has contributed to system studies and relay coordination studies on the JERP and to FEED for the Ruwais refinery expansion project. He has also acted as the responsible engineer for the energy management system and load shedding system on the JERP, and he received a performance award for his work on the JERP system studies. Rajesh is a Six Sigma Yellow Belt, a member of the Institution of Engineering and Technology (MIET), and a chartered electrical engineer (CEng, UK). He holds a BE from Mumbai University, India.

TRADEMARKS

ETAP is a registered trademark of Operation Technology, Inc. IEEE is a registered trademark of The Institute of Electrical and Electronics Engineers, Incorporated. Merox is a trademark owned by UOP LLC, a Honeywell Company. Simulink is a registered trademark of The MathWorks, Inc.


POWER TECHNOLOGY PAPERS

Section photo: Prairie State Energy Campus. Exposed piling, which will support flue gas treatment equipment, catches light at sunset. The power block is in the background.

Options for Hybrid Solar and Conventional Fossil Plants

David Ugolini; Justin Zachary, PhD; and Joon Park

Issue Date: December 2009

Abstract—Renewable energy sources continue to add to the electricity supply as more countries worldwide mandate that a portion of new generation must be from renewable energy. In areas that receive high levels of sunlight, solar technology is a viable option. To help alleviate the capital cost, dispatchability, and availability challenges associated with solar energy, hybrid systems are being considered that integrate concentrating solar power (CSP) technology with conventional combined cycle (CC) or Rankine cycle power blocks. While briefly discussing Rankine cycle applications, this paper focuses primarily on the most widely considered hybrid approach: the integrated solar combined cycle (ISCC) power plant. The paper examines the design and cost issues associated with developing an ISCC plant using one of the three leading CSP technologies—solar trough, linear Fresnel lens, and solar tower.

Keywords—combined cycle (CC), concentrating solar power (CSP), concentrating solar thermal (CST), heat transfer fluid (HTF), integrated solar combined cycle (ISCC), linear Fresnel lens, renewable energy, solar tower, solar trough

INTRODUCTION

More and more countries are mandating that a portion of new energy be from renewable sources such as solar, wind, or biomass. However, compared with traditional power generation technologies, renewable energy faces challenges—primarily related to capital cost—that are only partially compensated for by lower expenditures for operation and maintenance (O&M) and fuel. Other challenges include dispatchability and the intermittent nature of some of these energy sources. These challenges can be overcome by using some form of storage. However, large-scale energy storage also has unresolved technical and cost issues.


Finally, hybrids that combine different forms of renewable energy are also possible, increasing the daily electricity supply. The focus of this paper is the ISCC power plant. For comparison purposes, integration options with Rankine cycle power blocks are also briefly discussed. In either case, the integration seeks to achieve efficient operation even though solar energy intensity varies according to time of day, weather, and season.

BACKGROUND

Concentrated sunlight has been used to perform tasks since ancient times. As early as 1866, sunlight was successfully harnessed to power a steam engine, the first known example of a concentrating-solar-powered mechanical device. Today, conventional CC plants achieve the highest thermal efficiency of any fossil-fuel-based power generation system. In addition, their emissions footprint, including CO2, is substantially lower than that of coal-fired plants. Properly integrating an additional heat source, such as concentrating solar power (CSP), can dramatically increase CC system efficiency.

David Ugolini

Justin Zachary, PhD

Joon Park

A viable alternative that helps to alleviate the challenges associated with renewable energy is a hybrid system that integrates renewable sources with combined cycle (CC) or Rankine cycle power blocks. One such hybrid system is the integrated solar combined cycle (ISCC), which uses concentrating solar thermal (CST) energy as the renewable source. In regions with reasonably good solar conditions, CST hybrids involving conventional coal-fired plants are also feasible. For these plants, where steam pressures and temperatures are higher than for ISCC plants, the specific solar conversion technology used dictates how solar is integrated into the plant.


© 2009 Bechtel Corporation. All rights reserved.


ABBREVIATIONS AND ACRONYMS

ACC   air-cooled condenser
CC    combined cycle
CSP   concentrating solar power
CST   concentrating solar thermal
HP    high pressure
HPEC  HP economizer
HPEV  HP evaporator
HPSH  HP superheater
HRSG  heat recovery steam generator
HTF   heat transfer fluid
IGCC  integrated gasification CC
IP    intermediate pressure
IPEC  IP economizer
IPEV  IP evaporator
IPSH  IP superheater
ISCC  integrated solar CC
LP    low pressure
LPEV  LP evaporator
LPSH  LP superheater
LTE   low-temperature economizer
NREL  (US) National Renewable Energy Laboratory
O&M   operation and maintenance
PG&E  Pacific Gas & Electric
RH    reheater
SAM   (NREL) Solar Advisor Model
SEGS  Solar Electric Generating Station

Compared with the cost of a steam turbine in a standalone solar power plant, the incremental cost of increasing a CC plant's steam turbine size is considerably less. At the same time, the annual electricity production resulting from CST energy is improved over that of a standalone solar power plant because the CC plant's steam turbine is already operating, avoiding time lost to daily startup. Moreover, during solar operation, steam produced by the solar heat source offsets the typical CC power loss resulting from higher ambient temperatures. Thus, ISCC is a winning combination for both CC and solar plants in terms of reduced capital cost and continuous power supply.

When considering an ISCC system, the following must be examined:
• Solar technology to be used and its impact on steam production
• Amount of solar energy to be integrated into the CC
• Optimal point in the steam cycle at which to inject solar-generated steam

EXISTING SOLAR THERMAL SYSTEMS AND THEIR IMPACT ON STEAM PRODUCTION

CSP systems require direct sunlight to function. Lenses or mirrors and a tracking device are used to concentrate sunlight. Each system consists of the following:
• Concentrator
• Receiver
• Storage or transportation system
• Power conversion device

Existing CSP technologies include:
• Solar trough
• Linear Fresnel lens
• Solar tower

Solar Trough
The solar trough is considered to be the most proven CSP technology. Since the 1980s, more than 350 MW of capacity has been developed at the Solar Electric Generating Station (SEGS) solar trough plants in California's Mojave Desert.

The solar trough is a cylindrical parabolic reflector consisting of 4- to 5-mm-thick (0.16- to 0.20-inch-thick), glass-silvered mirrors. (The mirrors may also be made of thin glass, plastic films, or polished metals.) It is designed to follow the sun's movement using a motorized device and to collect and concentrate solar energy and reflect it onto a linear focus. A specially coated metal receiver tube, enveloped by a glass tube, is located at the focal point of the parabolic mirror. The special coatings aim to maximize energy absorption and minimize heat loss. A conventional synthetic-oil-based heat transfer fluid (HTF) flows inside the tube and absorbs energy from the concentrated sunlight. The space between the receiver tube and the glass tube is kept under vacuum to reduce heat loss. Several receiver tubes are connected into a loop. A metal support structure, sufficiently rigid to resist the twisting effects of wind while maintaining optical accuracy, holds the receiver tubes in position. Figure 1 shows a solar trough installation.

Figure 1. Solar Trough Installation (Source: Solel)

Many loops are required to produce the heat necessary to bring large quantities of HTF to the maximum temperature allowable, which is around 395 ºC (745 ºF) because of HTF operational limitations. In locations with good solar radiation, about a 1.5- to 2.0-hectare (4- to 5-acre) solar field is needed to generate 1 MW of capacity.

Hot HTF goes into a steam generator—a heat exchanger where HTF heat is transmitted to water in the first section to convert the water into steam and then transmitted to steam in the second section to generate superheated steam. From this point onward, the power block converting steam into electricity consists of conventional components, including steam turbine, heat sink, feedwater heaters, and condensate and boiler feed pumps.

Advantages of solar trough technology include:
• Well understood, with proven track record
• Demonstrated on a relatively large scale
• May bring projects to execution faster than other competitive CSP technologies

Disadvantages of solar trough technology are related to:
• Maximum HTF temperature, which dictates relative cycle efficiency
• Complexity of an additional heat exchanger between the Rankine cycle working fluid and the solar-heated fluid

Linear Fresnel Lens
The linear Fresnel lens solar collector is a line-focus system similar to the solar trough. However, to concentrate sunlight, it uses an array of nearly flat reflectors—single-axis-tracking, flat mirrors—fixed in frames to steel structures on the ground. Several frames are connected to form a module, and modules form rows that can be up to 450 meters (492 yards) long. The receiver consists of one or more metal tubes, with an absorbent coating similar to that of trough technology, located at a predetermined height above the mirrors. Water or a water-and-steam mixture with a quality of around 0.7 flows inside the tubes and absorbs energy from the concentrated sunlight. At the ends of the rows, the water and steam are separated, and saturated steam is produced for either process heat or to generate electricity using a conventional Rankine cycle power block. Figure 2 shows a linear Fresnel lens installation.

Figure 2. Linear Fresnel Lens Installation (Source: Ausra, Inc.)

Advantages of linear Fresnel lens technology over solar trough technology include:
• Direct steam generation without using intermediate HTF
• Less stringent optical accuracy requirements
• Decreased field installation activities because the construction design is geared toward factory assembly
• Use of conventional "off-the-shelf" materials
• Less wind impact on structural design
• Possible improved steam cycle efficiency if the temperature can be increased up to 450 ºC (840 ºF), as some technology suppliers are pursuing
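The land-use rule of thumb cited for trough fields (about 1.5 to 2.0 hectares of solar field per MW in locations with good solar radiation) lends itself to a quick order-of-magnitude estimate. The 50 MW plant in this sketch is hypothetical:

```python
def trough_field_hectares(capacity_mw, ha_per_mw=1.75):
    """Estimate solar trough field area from plant capacity using the
    1.5-2.0 ha/MW rule of thumb cited in the text (midpoint by default)."""
    return capacity_mw * ha_per_mw

# Rough field-area range for a hypothetical 50 MW trough plant
low = trough_field_hectares(50, 1.5)
high = trough_field_hectares(50, 2.0)
print(f"50 MW plant: roughly {low:.0f} to {high:.0f} hectares")
```

Such an estimate is only a screening figure; actual field sizing depends on local direct solar radiation, collector technology, and plant design.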



Disadvantages with respect to solar trough technology are related to:
• Less mature, with only recent, relatively small-scale commercial developments
• Lower power cycle efficiency because of lower steam temperature
• Lower optical efficiency and increased heat losses because of no insulation around receiver tubes

Solar Tower

A solar tower is not a line-focus system. Rather, the system consists of a tall tower with a boiler on top that receives concentrated solar radiation from a field of heliostats, which are dual-axis-tracking mirrors. The heat transfer medium can be water, steam, liquid sodium, molten salt, or compressed air. However, in the more conventional arrangement, water is the working fluid. Figure 3 shows a conventional solar tower installation.

Figure 3. Solar Tower Installation

The main advantage of solar tower technology is the ability to provide high-temperature superheated steam. The water temperature is higher, close to 545 ºC (1,020 ºF), in the solar tower system than in the line-focus systems. In addition, the solar tower can be connected to molten salt storage, thus allowing the system to extend operating hours or increase capacity during periods when power is most valuable. On the downside, the design requires accurate aiming and control capabilities to maximize solar field heliostat efficiency and to avoid potential damage to the receiver on top of the tower.

Summary

CSP technology provides different options for introducing CST energy into a conventional fossil-fired plant. Table 1 summarizes the CSP technologies and their associated thermal outputs.

Table 1. CSP Technologies Summary

Technology            Working Fluid       Maximum Temperature, ºC (ºF)
Solar Trough          Synthetic Oil HTF   395 (745)
Linear Fresnel Lens   Steam               270 (520) (or higher)
Solar Tower           Steam               545 (1,020)

INTEGRATION OPTIONS WITH COMBINED CYCLE POWER PLANTS

ISCC Plants

ISCC plants have been under discussion for many years. Table 2 lists plants that are proposed, in development, or under construction.

Table 2. ISCC Plants

ISCC Project      Location          Plant Output, MWe   Solar Technology   Solar Contribution, MWe
Kureimat          Egypt             140                 Trough             20
Victorville       California (US)   563                 Trough             50
Palmdale          California (US)   555                 Trough             62
Ain Beni Mathar   Morocco           472                 Trough             20
Hassi R’Mel       Algeria           130                 Trough             25
Yazd              Iran              430                 Trough             67
Martin            Florida (US)      3,705               Trough             75
Agua Prieta       Mexico            480                 Trough             31

It is critical to remember that all power generated in the CC steam cycle is “free,” from a fuel perspective. That is, steam cycle power is generated from energy provided in the gas turbine exhaust gases, not by burning additional fuel. When solar energy is being integrated into the CC steam cycle, care must be exercised to not simply substitute fuel-free energy from solar power for fuel-free energy from exhaust gases. Rather, the goal is to maximize the use of both energy sources.

Questions To Be Addressed

How steam generated by a given solar technology is integrated depends on the steam conditions that the technology generates. When solar energy is integrated into the CC steam cycle, the following questions must be answered:
• What solar technology should be used?
• How much solar energy should be integrated?
• Where in the steam cycle is the best place to inject solar-generated steam?

There are no simple answers to these questions. Therefore, detailed technical and economic analyses must be performed to evaluate various MWth solar inputs to the CC, different solar technologies and associated steam conditions, and the levelized cost of electricity for the site-specific location under consideration.

Solar Technologies

Because solar technologies are evolving and improving, for discussion purposes they have been categorized based on fluid temperature capability:
• High temperature, >500 ºC (>930 ºF)
• Medium temperature, 400 ºC (750 ºF)
• Low temperature, 250 ºC to 300 ºC (480 ºF to 570 ºF)

Medium-temperature technology is discussed first, because it is the most proven technology. For discussion purposes, the solar technologies under consideration are to be integrated into a new 2 x 1 CC plant using F Class gas turbines; three-pressure, reheat, unfired heat recovery steam generators (HRSGs); a reheat steam turbine with throttle conditions of 131 bara/566 ºC (1,900 psia/1,050 ºF) and reheat temperature of 566 ºC (1,050 ºF); and an air-cooled condenser (ACC).

Medium-Temperature Solar Technology

The solar (parabolic) trough is the most common medium-temperature solar technology. Previous studies indicate that, for parabolic trough systems generating steam up to around 395 ºC (745 ºF), it is best to generate saturated high-pressure (HP) steam to mix with saturated steam generated in the HRSG HP drum. [1] Integrating HP saturated steam into the HRSG and sending heated feedwater from the HRSG is common in integrated gasification combined cycle (IGCC) plants. A contractor familiar with IGCC integration issues can easily manage ISCC integration issues. A schematic of this process is depicted in Figure 4.

Figure 4. Medium-Temperature Solar ISCC Technology
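For screening purposes, the solar share of each ISCC plant follows directly from the plant output and solar contribution figures in Table 2. The sketch below assumes the output/contribution pairings as read from Table 2; treat the specific pairings as illustrative:

```python
# Solar contribution as a fraction of total plant output for a few of the
# ISCC projects listed in Table 2. The (plant output MWe, solar MWe) pairs
# are as read from the table and should be treated as illustrative.

plants = {
    "Kureimat":    (140, 20),
    "Victorville": (563, 50),
    "Martin":      (3705, 75),
}

for name, (total_mwe, solar_mwe) in plants.items():
    print(f"{name}: {solar_mwe / total_mwe:.1%} solar")
```

The spread is instructive: a dedicated ISCC such as Kureimat runs a double-digit solar fraction, while solar steam added to a very large existing station is a small percentage of total output.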

The objective is to maximize solar efficiency by maximizing feedwater heating in the HRSG and minimizing feedwater heating in the solar field. To do so, it is important to take feedwater supply to the solar boiler from the proper location in the steam cycle. The most convenient place in the steam cycle from which to take feedwater is the HP feedwater pump discharge. On most modern CC systems, feedwater pumps take suction from the low-pressure (LP) drum. However, the typical LP drum pressure of approximately 5 bara (73 psia) in a three-pressure reheat system results in a feedwater temperature of only approximately 160 ºC (320 ºF) at pump discharge, which is too low for optimum results. For a parabolic trough plant similar to the SEGS plants, the HTF temperature leaving the solar boiler is approximately 290 ºC (550 ºF). Thus, allowing for a reasonable approach temperature, the feedwater temperature should be approximately 260 ºC (500 ºF). Therefore, it is beneficial instead to take feedwater after it has been further heated in the HRSG HP economizers. Doing this maximizes the gas turbine exhaust energy used to heat the feedwater and minimizes feedwater heating in the solar field, thereby minimizing the solar field size needed to produce a given amount of solar steam. This configuration results in the highest solar efficiency, even though the change in net output drops 11% when the solar field size, hence the amount of solar energy added, is kept constant while the feedwater temperature is increased from 160 ºC (320 ºF) to 260 ºC (500 ºF).

Conversely, if feedwater is taken from the HP feedwater pump discharge at 160 ºC (320 ºF) rather than after an HP economizer at 260 ºC (500 ºF), the solar field would have to be approximately 30% larger to generate enough solar energy to keep the amount of solar steam added to the HRSG the same as when using 260 ºC (500 ºF) feedwater. This represents a decrease in solar efficiency. These effects of varying the feedwater supply temperature are summarized as follows: with 96 MWth of net solar energy added, 160 ºC (320 ºF) feedwater yields approximately 173,000 kg/h (382,000 lb/hr) of solar steam, while 260 ºC (500 ºF) feedwater yields approximately 230,000 kg/h (507,000 lb/hr); matching the latter steam flow with 160 ºC (320 ºF) feedwater requires approximately 124 MWth of net solar energy.

Solar thermal input to an ISCC can also reduce gas turbine fuel consumption; however, reducing fuel flow in turn reduces gas turbine power and exhaust energy. For the same plant net output with 100 MWth solar energy input, plant fuel consumption would be reduced by approximately 8%.

High-Temperature Solar Technology

Solar tower systems can generate superheated steam—up to 545 ºC (1,020 ºF)—at high pressure. These conditions allow solar-generated superheated steam to be admitted directly into the HP steam line to the steam turbine. In addition, steam can be reheated in the solar tower like it is in the HRSG. Thus, there is minimal impact on the HRSG because solar steam superheating and reheating are accomplished in the solar boiler. Similar to other solar systems, taking feedwater supply from the optimum location in the steam cycle is important to maximize system efficiency. A schematic of this process is depicted in Figure 5. A high-temperature system could be used in medium- or low-temperature applications; however, it is doubtful that this would result in optimum application of solar tower technology.

Figure 5. High-Temperature Solar ISCC Technology

Low-Temperature Solar Technology

Most Fresnel lens systems fall into the low-temperature solar technology category. Although recent technology has been enhanced to reach higher temperatures, so that some Fresnel lens systems fall into the medium-temperature category, design pressure limitations prevent their use in developing HP saturated steam; integrating these systems would be more in line with a low-temperature system. These systems generate saturated steam at up to 270 ºC/55 bara (520 ºF/800 psia). This pressure is too low to allow integration into the steam cycle HP system. Therefore, in low-temperature systems, two options exist:
• Generate saturated steam at approximately 30 bara (435 psia) and admit it to the cold reheat line.
• Generate steam at approximately 5 bara (73 psia) and admit it to the LP steam admission line.

Similar to medium-temperature technology, taking feedwater supply from the optimum location in the steam cycle is important to maximize system efficiency. However, there is less flexibility in feedwater takeoff point selection because the takeoff temperature must be below the saturation temperature of the steam being generated. A schematic of this process is depicted in Figure 6.

Figure 6. Low-Temperature Solar ISCC Technology

Economic Considerations

To be able to select the appropriate solar technology for a given site, a detailed economic analysis must be performed to assess capital and O&M costs, performance data, and operating scenarios.
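The approximately 30% field-size penalty quoted in the feedwater discussion follows directly from the two solar-input values given there (96 MWth with 260 ºC feedwater versus 124 MWth with 160 ºC feedwater for the same solar steam flow); a one-line sketch makes the arithmetic explicit:

```python
# The feedwater-temperature effect from the text, expressed as arithmetic:
# producing the same solar steam flow takes ~96 MWth with 260 C feedwater
# but ~124 MWth with 160 C feedwater, i.e., roughly a 30% larger solar field.

def relative_field_increase(mwth_cold_fw, mwth_hot_fw):
    """Fractional solar-field-size increase when using the colder feedwater."""
    return mwth_cold_fw / mwth_hot_fw - 1.0

increase = relative_field_increase(124, 96)  # MWth values from the text
print(f"Field must be ~{increase:.0%} larger with 160 C feedwater")
```

The rounded MWth values give 29%, consistent with the "approximately 30%" stated in the text.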

To analyze a proposed plant configuration, performance characteristics must be defined, a conceptual design established, and a cycle performance model developed. Site data must also be examined to quantify solar facility energy contribution and to define CC performance characteristics. Hourly dry bulb temperature, relative humidity, and solar insolation data for various sites is available from the US National Renewable Energy Laboratory (NREL). This data can be used with software programs, such as the NREL Solar Advisor Model (SAM), to analyze a particular plant configuration. The representative graph shown in Figure 7 illustrates the results of an analysis of average hourly solar thermal energy production at a particular location versus time of day for January and August. Figure 8 shows performance characteristics for a 2 x 1 CC configuration designed to accept 100 MWth of solar energy input in the form of HP saturated steam.

Figure 7. Solar Thermal Energy Production vs. Time of Day

Figure 8. Net Output vs. Ambient Temperature for Various Solar Inputs

An advantage of solar energy is that it produces energy when most needed—during peak times of the day and the year. “Time-of-delivery” pricing, where energy payments vary with time of day, can greatly benefit a solar facility. For example, some PG&E power purchase agreements include time-of-delivery pricing that values energy produced during “super-peak” periods (from June through September between noon and 8 p.m., Monday through Friday) at rates almost double the rates at any other time. The pricing structure must be included in the economic analysis to assess the viability of any hybrid solar plant configuration.
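A time-of-delivery price rule of the kind described can be sketched as a simple lookup. The base price and the 2x super-peak multiplier below are illustrative assumptions, not actual tariff values:

```python
# Sketch of a time-of-delivery (TOD) price rule of the kind described in the
# text: "super-peak" hours (June-September, noon-8 p.m., Monday-Friday) are
# paid at roughly double the base rate. The base price and multiplier are
# illustrative assumptions, not actual PG&E tariff values.

def tod_price(month, hour, weekday, base=50.0, super_peak_multiplier=2.0):
    """Price in $/MWh; weekday: 0 = Monday ... 6 = Sunday."""
    in_super_peak = (6 <= month <= 9) and (12 <= hour < 20) and (weekday <= 4)
    return base * super_peak_multiplier if in_super_peak else base

print(tod_price(month=7, hour=14, weekday=1))  # July, 2 p.m., Tuesday -> 100.0
print(tod_price(month=1, hour=14, weekday=1))  # January, 2 p.m., Tuesday -> 50.0
```

Weighting the hourly solar production profile of Figure 7 by such a price curve is what makes the mid-day, summer-heavy output of a solar field disproportionately valuable in the economic analysis.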

INTEGRATION OPTIONS WITH RANKINE CYCLES

Many issues associated with integrating solar technology with CC plants apply to integrating solar technology with Rankine cycle plants. However, there are also differences between integrating solar technology with Rankine cycle plants and with CC plants. A major difference is that all electrical power produced in the Rankine cycle plant is generated by burning fuel. Therefore, it can be advantageous to use solar energy to displace fossil fuel energy: solar energy can be used to reduce boiler load to save fuel, and boiler efficiency typically increases slightly as boiler load is reduced. Integration options with Rankine cycle plants are discussed only briefly, since integration applications to date have focused primarily on ISCC.

Low-Temperature Solar Technology

Options for integrating low-temperature solar technology are limited to generating steam or heating feedwater to reduce turbine extraction steam to feedwater heaters.

Medium-Temperature Solar Technology

A typical subcritical Rankine cycle power plant has turbine throttle steam conditions of 166 bara (2,400 psia) and 538 ºC (1,000 ºF). Similar to its application in a CC plant, medium-temperature solar technology can be used to generate saturated or slightly superheated steam for injection upstream of boiler superheater sections. However, integrating solar steam into the boiler proper is a more complex proposition than in a CC plant because of the higher gas temperatures and the need to control fuel firing. Several options involving both water heating and steam generation in the solar field have been examined. [2] These options address using steam or heating feedwater to displace turbine extraction steam to feedwater heaters. Reducing or eliminating extraction steam to feedwater heaters appears to be the most practical application for medium-temperature solar integration because it avoids complex boiler integration issues. A schematic of this process is shown in Figure 9.

Figure 9. Medium-Temperature Solar Integration with Rankine Cycle Plant

High-Temperature Solar Technology

Solar tower systems can be used to generate superheated steam for injection into the turbine main steam line, minimizing integration with the Rankine plant boiler. The same amount of cold reheat steam can be extracted and reheated in the solar field. A schematic of this process is shown in Figure 10. Similar analyses must be performed to determine the best solar system for the specific plant site and design.

Figure 10. High-Temperature Solar Integration with Rankine Cycle Plant

CONTROLS AND TRANSIENT BEHAVIOR

A contractor experienced in IGCC or cogeneration plant design can easily manage the integration and control issues of hybrid plants integrating a solar power source. However, IGCC and cogeneration plants do not experience the solar-sourced steam supply variability associated with solar technology integration. When dealing with solar hybrid configurations, it is important to assess the impact of steam supply changes on the behavior of the conventional generation facility. Total system transient behavior, including solar steam source and power plant, should be modeled early in the plant design stage. Complex issues associated with proper transient representation by equipment and controls should be addressed using computer simulation programs. The goal is to create an integrated system capable of predicting steam temperature and pressure variations during steady-state and representative transient conditions.

CONCLUSIONS

Renewable energy sources continue to add to the electricity supply as more countries worldwide mandate that a portion of new generation must be from renewable energy. In areas that receive high levels of sunlight, solar technology is a viable option. The solar trough, linear Fresnel lens, and solar tower technologies most widely used to concentrate solar thermal power are evolving and improving. Solar trough is considered the most proven CSP technology and has been implemented in the SEGS plants in California, as well as in other areas of the world.

A carefully planned and executed hybrid plant, such as an ISCC, that integrates CST energy with existing fossil energy sources is a winning combination for both the solar field and the power plant, resulting in:
• Higher CC system efficiency
• Smaller CC plant carbon footprint
• Larger renewable energy portion of new generation
• Minimized effect of the intermittent nature of solar energy supply

It is expected that the number of ISCC plants will continue to grow worldwide. As this happens, it is likely that the installed solar field price will decrease through economies of scale and increased manufacturing and installation productivity. Regardless of the option selected to develop a hybrid solar and conventional fossil plant, the complete system should be optimized based on operational and cost considerations. Finally, determining the optimum solar field is a site-specific task that must consider the grid requirements and the operational profile of the steam cycle components at night or during periods when solar energy is not available.
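As a minimal illustration of the transient behavior such a simulation must capture, the sketch below applies a first-order lag (with an assumed time constant) to a step change in available solar steam, as a passing cloud would cause. All values are illustrative assumptions, not plant data:

```python
# Minimal first-order sketch of solar steam supply variability: a cloud
# passage steps the available solar steam down, and collector/boiler thermal
# inertia (time constant tau) smooths the flow actually delivered to the
# steam cycle. All values are illustrative assumptions.

def simulate_cloud_transient(tau_s=120.0, dt_s=5.0, t_end_s=900.0,
                             flow_clear=100.0, flow_cloud=40.0):
    """Return delivered solar steam flow (% of rated) at each time step."""
    flow, history = flow_clear, []
    for step in range(int(t_end_s / dt_s)):
        # cloud arrives at t = 60 s and persists for the rest of the run
        target = flow_cloud if step * dt_s >= 60.0 else flow_clear
        flow += (target - flow) * dt_s / tau_s  # explicit-Euler first-order lag
        history.append(flow)
    return history

trace = simulate_cloud_transient()
print(f"final delivered flow: {trace[-1]:.1f}%")
```

A real transient study would couple many such dynamic elements (receiver, drums, turbine, and controls) in a dedicated simulation tool, but even this toy model shows why steam temperature and pressure excursions lag the solar disturbance rather than tracking it instantly.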

[1] B. Kelly, U. Herrmann, and M.J. Hale, “Optimization Studies for Integrated Solar Combined Cycle Systems,” Proceedings of Solar Forum 2001, Solar Energy: The Power to Choose, Washington, DC, April 21–25, 2001.

[2] G. Morin, H. Lerchenmüller, M. Mertins, M. Ewert, M. Fruth, S. Bockamp, T. Griestop, and A. Häberle, “Plug-in Strategy for Market Introduction of Fresnel-Collectors,” 12th SolarPACES International Symposium, Oaxaca, Mexico, October 6–8, 2004.


Justin Zachary, PhD, assistant manager of technology for Bechtel Power Corporation, oversees the technical assessment of major equipment used in Bechtel’s power plants worldwide. He is engaged in a number of key activities, including evaluation of integrated gasification combined cycle power island technologies; participation in Bechtel’s CO2 capture and sequestration studies; and application of other advanced power generation technologies, including renewables. Justin was recently named a Bechtel Fellow in recognition of his leadership and development of Bechtel’s Performance Test Group and the key technical support he has provided as a widely respected international specialist in turbo machinery. Justin has more than 31 years of experience with electric power generation technologies, particularly those involving the thermal design and testing of gas and steam turbines. He has special expertise in gas turbine performance, combustion, and emissions for simple and combined cycle plants worldwide. Before coming to Bechtel, he designed, engineered, and tested steam and gas turbine machinery while employed with Siemens Power Corporation and General Electric Company. Drawing on his expertise as one of the foremost specialists in turbo machinery, he has authored more than 72 technical papers on this and related topics. He also owns patents in combustion control and advanced thermodynamic cycles. In addition to recently being named a Bechtel Fellow, Justin is an ASME Fellow and a member of a number of ASME Performance Test Code committees. Justin holds a PhD in Thermodynamics and Fluid Mechanics from Western University in Alberta, Canada. His MS in Thermal and Fluid Dynamics is from Tel-Aviv University, and his BS in Mechanical Engineering is from Technion — Israel Institute of Technology, Haifa, both in Israel. Joon Park is a financial analyst with Bechtel Enterprises Holdings, Inc. 
Since joining Bechtel, he has contributed to a variety of projects as a financial analyst; built financial models for power and civil projects; analyzed the economics of various fossil, renewable, and nuclear power technologies; and conducted US power market research. Prior to working at Bechtel, Joon was a system design engineer for combined cycle power plant projects, where his duties included preparing heat balance and cycle optimization studies and technically evaluating major equipment. He was also a mechanical engineer for pipeline, refinery, and petrochemical plant projects overseas. Joon holds an MBA from the University of Chicago Booth School of Business, Chicago, Illinois; an MS in Mechanical Engineering from Seoul National University, Korea; and a BS in Mechanical Engineering from Konkuk University, also in Seoul, Korea. He is a registered representative of the Financial Industry Regulatory Authority (FINRA).

David Ugolini is a senior principal engineer with more than 32 years of mechanical and cycle technology engineering experience on a variety of nuclear and fossilfueled power generation plants. He works in the Project Development Group as supervisor of the Cycle Performance Group and is responsible for developing conceptual designs and heat balances for Bechtel’s power projects worldwide. Dave also supervises efforts related to plant performance testing. Dave began his engineering career by joining Commonwealth Edison Company in 1977 as an engineer at the Zion nuclear power plant. He joined Bechtel in 1980 in the Los Angeles office as an engineer, working first on the San Onofre Nuclear Generating Station and then on the Skagit/Hanford nuclear project. Dave later transferred to the San Francisco office and worked as a mechanical engineer on several projects, including the Avon cogeneration project, the Carrisa Plains solar central receiver project, and two combined cycle cogeneration projects— Gilroy Foods and American 1 . In late 1989, Dave moved to the Gaithersburg, Maryland, office and became supervisor of the Fossil Technology Group’s Turbine Technology Group, where he directed activities related to developing technical specifications and bid evaluations for gas turbines, heat recovery steam generators, and steam turbines for combined cycle and Rankine cycle power plants. When this group was merged with the Cycle Performance Group, he became deputy supervisor and eventually supervisor. Dave is actively involved in ASME Performance Test Code committees PTC 52 (solar power plant testing) and PTC 6 (steam turbine testing). Dave received his BS in Thermal and Environmental Engineering from Southern Illinois University, Carbondale.





Issue Date: December 2009

Abstract—Managing the quality of building information modeling (BIM) is essential to ensuring the effective and efficient use of data for the engineering, procurement, and construction (EPC) process. For structural steel, BIM uses both graphical and non-graphical data. Previously, only geometric data such as size, shape, and orientation of a structural member could be viewed graphically. However, current BIM tools allow non-graphical information (such as material grade, coating requirements, and shipping and erection status) to be easily visualized and reviewed in a three-dimensional (3D) model through the use of color, transparency, or other representation. This ability to visualize and verify non-graphical data is vital because this data often significantly affects the cost and control of delivering a project. Thus, ensuring the accuracy and validity of the data provides confidence in the reliability of BIM as part of the EPC process. By using a database that is independent of the graphical model, along with associated validation rules, the data behind the structural graphics can be validated and corrected for downstream processes, reports, and other software applications.

Keywords—building information modeling (BIM); database; engineering, procurement, and construction (EPC); quality; structural steel; Tekla model
BACKGROUND

assessment of overseas fabricators’ connection design and detailing capabilities, which led to the realization that not all fabricators could satisfactorily offer the services expected. Seeing a need to change its traditional work process to position the company for the use of other countries’ steel and suppliers, Bechtel developed an in-house detailing capability and changed the design engineering deliverable. Specifically, Bechtel changed its 3D structural modeling philosophy from one in which the deliverable was a set of engineering drawings to one that allowed the engineer to produce a fully connected detailing model to be used directly for shop and erection drawing extraction. This change has increased the company’s flexibility when selecting a structural steel fabricator because it is no longer bound to the traditional single fabricator. Rather, now that Bechtel delivers completed shop drawings, it is able to engage multiple fabricators based both in the United States and elsewhere, as project needs dictate.


Martin Reifschneider

Kristin Santamont

Less than 10 years ago, Bechtel, like most engineering firms in the process and power generation industry, was using a structural steel design process in which the results of engineering analysis and design were presented in a set of structural framing drawings. Generally, these two-dimensional (2D) drawings were extracted from an engineering three-dimensional (3D) modeling tool; otherwise, they were manually drawn. Subsequently, these engineering drawings served as the contract documents with the structural steel fabricator, describing the detailed configuration of the structure. The fabricator’s scope typically included tracking and identifying the raw material supplied from the steel mill, arranging for the preparation of connection design calculations, creating a detailing model used to extract shop fabrication drawings, creating computer numerical control (CNC) data used to control the fabrication process, and fabricating and delivering the final assemblies to the project site. In the late 1990s, recognition of a forthcoming decline in the availability of US structural steel led Bechtel to evaluate overseas fabrication alternatives. This evaluation included an



The change in the nature of Bechtel’s design engineering deliverables has had several

© 2009 Bechtel Corporation. All rights reserved.


2D      two-dimensional
3D      three-dimensional
4D      four-dimensional
API     application programming interface
ASTM    ASTM International (originally American Society for Testing and Materials)
BIM     building information modeling
CIP     cast in place
CNC     computer numerical control
CSM     Central Steel Management—a Bechtel software system whose primary objective is to improve the quality of steel engineering in Bechtel projects
EPC     engineering, procurement, and construction
ETL     extraction, transformation, and loading

set of drawings. The most significant impact, however, is the ability the company has gained to manage more building information within the structural model. Working solely in the 3D model environment provides the ability to develop and manage procurement, fabrication, and erection data, which brings both cost and schedule advantages to Bechtel projects. The advantages of work process change have proven undeniable; yet, numerous challenges ensued at first. Initially, the project team needed training to work in a detailed building information modeling (BIM) environment. Though very experienced in developing and using a 3D plant model for collaboration among multiple disciplines, the team struggled with the regimen and the attention to minute detail that were necessary to deliver a fully connected structural steel model. It was no longer acceptable to model only the individual pieces in the correct location; instead, it was imperative to also ensure that all the associated data was correctly identified and added for each associated part or assembly. The steel modeling tool used in the new work process provides the flexibility to create any number of user-defined attributes (UDAs) for non-graphical information, which enhances the geometric data contained in a 3D model. To define relevant data for the UDAs, Bechtel’s structural engineering team first determined who the potential data users were, which existing processes they intended for the data to support, what data could best support those processes, and how it could do so. This data identification (see Table 1) and use determination was a gradual learning process for the team, as well as for Bechtel’s procurement and erection partners.




prelim mark   preliminary mark—a numeric identifier attached as a UDA to a model component
UDA           user-defined attribute

effects on the company’s work processes, the most problematic of which has been the paradigm change whereby its vendors and internal customers now receive a completely detailed 3D model rather than the traditional

Table 1. Model Data by Type
Data                     Type            Number of Attributes   Example Attributes
Model                    Graphical       18                     Height, width, length, position coordinates
Model                    Non-graphical   25                     Member prefix, class, material grade, weight
Various/identification   Non-graphical   23                     Identifications, bar code, shop drawing number, TEAMWorks1 identification
Various/status           Non-graphical   22                     On hold, fabrication start date, ship date, erection complete
Engineering              Non-graphical   41                     Originator, checker, loads
Procurement              Non-graphical   11                     Purchase order number, fabricator, detailer, vehicle, bundle number
Construction             Non-graphical   12                     Construction sequence, leave-out, fieldwork, laydown

1 TEAMWorks™ is Bechtel’s proprietary corporate software used to track equipment and material and to report quantities.



Structural Steel Data and the Design-to-Delivery Life Cycle

For Bechtel Power Corporation, an engineering, procurement, and construction (EPC) firm, data integration and quality are vital. From the purchase of raw material and the development of detailed fabrication information to delivery planning, material staging, and erection status, the design-to-delivery life cycle depends on reliable data. Over the course of many years, numerous data applications and reports that facilitate project performance have been developed. However, many of these applications require manual data input or manipulation of various electronic files imported from Bechtel's vendors. Having the ability to create and manage more data in the graphical structural model has led to less reliance on such manual input or data import. As with any database, though, its applications and reports wholly depend on the quality of the data. Examples of data in the steel design-to-delivery life cycle, and the users/processes for which the model data is used, are shown on the steel timeline in Figure 1.

[Figure 1. Structural Steel Data and Design-to-Delivery Life Cycle — a timeline running from start of design through steel modeling, model check, connection modeling, drawing extraction, release for fabrication, shipping, site delivery, shakeout, erection start, and erection complete, with the associated model data (profile name, material grade, class, part and assembly prefixes, construction sequence, release phase, shop drawing number, bar code, fabrication and delivery status dates, erection, bolting, and inspection status, etc.) attached at each stage. ABOM: advance bill of materials; CNC: computer numerical control; MTO: material take-off.]

DATA CUSTOMERS

Engineering
The Bechtel engineering organization makes extensive use of its own data. The immediate customers for the engineering design products are the internal procurement, construction, and project management organizations. The quantity of steel modeled and progress relative to the project schedule are regularly monitored. Modeling progress is evaluated by measuring the percentage complete of members originated, members checked, connections originated, and connections checked. To execute this type of monitoring, procedures were established to record the digital signatures of model originators and checkers in the model as a user-defined attribute (UDA).

To begin a quality management process, only a few data fields were selected as vital, and specific criteria were established for their control. As other UDAs were added to the BIM work process, the quality management process expanded to include many of them. Once it was determined what data was relevant to each end user, how the data would be merged into user processes, and what data format was desired, the next big challenge was to train a large and distributed workforce to properly enter the data into the model as prescribed. Even subtle variance of data entry limits its recognition by a database application. For example, if a data field is used to describe the member function as a "beam" and the expected entry is "B," then entries such as "b," "BM," or "Beam" will not be recognized. This challenge of subtle variation in data structure led the team to seek a better way to manage the model quality.

Fabricators
Early purchase of raw steel material from the mill or warehouse requires the identification of components to be purchased and their purchase definition (section profile, component length, and material grade). To track this raw material to the related final, fabricated, structural elements, a numeric identifier, commonly known as a preliminary mark (prelim mark), is attached as a UDA to the model component.
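The strict data-entry problem described above can be illustrated with a short sketch. The field name and the allowed codes below are hypothetical (the article does not publish the actual CSM vocabulary); the point is that the validator accepts only the exact prescribed entry, so near-misses such as "b", "BM", or "Beam" are flagged for correction rather than silently lost to downstream reports.

```python
# Sketch of strict-vocabulary validation for a model attribute.
# The allowed codes are illustrative, not Bechtel's actual list.
ALLOWED_FUNCTION_CODES = {"B", "C", "VB", "HB"}  # beam, column, vertical/horizontal brace

def validate_function_code(value: str) -> tuple:
    """Return (accepted, message). An exact match is required; no normalization."""
    if value in ALLOWED_FUNCTION_CODES:
        return True, "accepted"
    # Report near-misses so the modeler can correct them at the source.
    hint = value.strip().upper()
    if hint in ALLOWED_FUNCTION_CODES:
        return False, f"rejected: {value!r} is not the prescribed entry (did you mean {hint!r}?)"
    return False, f"rejected: {value!r} is not a recognized function code"

print(validate_function_code("B"))     # accepted
print(validate_function_code("Beam"))  # rejected
```

Rejecting rather than auto-correcting is a deliberate choice: it forces the model itself, not a downstream report, to be the corrected record.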

The purpose of the prelim mark is twofold: first, to group material purchases of like definition; and second, to ultimately associate the raw material purchased with the structural component for which it is intended. Each prelim mark is unique to the three components that constitute its definition. For example, Prelim Mark 10049 is defined as a W12x50 profile that is 4.75 meters long and fabricated from ASTM International (originally American Society for Testing and Materials) A992 material grade steel. To meet the second need, multiple components typically share a prelim mark.

Procurement
Structural steel raw material and its fabrication for large industrial facilities are typically purchased based on a unit rate (cost/ton of steel). A prescribed set of component definitions and estimated overall quantities are presented to the fabricator for pricing, and the fabricator provides its unit rate to reflect material costs and fabrication complexity for each type of component. A pay category number is assigned to each model assembly to match its unit rate definition (examples are provided in Table 2). The release of material for fabrication initiates a material tracking process that follows the material through fabrication, delivery, staging, erection, bolting, and inspection. An independent material tracking database (another component of project BIM) is used for this process.

Erectors
Structural steel erection planning determines the desired sequence of construction. The erector typically identifies a region of the structure to build first and what regions are to follow. The sizes of these regions and their order of erection are governed by site configuration, crane access, crane capacities, and other project-specific conditions. During early construction planning, the sequence of construction is identified by a construction sequence number, which is stored in the building model for later use in processes such as drawing extraction, material tracking, and staging.

Detailers (Shop Drawing Extractors)
Shop and erection drawings are released to the fabricator in a sequence that facilitates the delivery and erection plan. To aid in quick recognition of components on a shop drawing, the prelim mark UDA is included on the shop drawing bill of materials. In addition, the detailer-assigned shop drawing number combines several pieces of non-graphical information: it identifies the fabricator, detailer, component type (beam, column, brace, miscellaneous, etc.), construction sequence, and piece number. By sorting on the shop drawing number containing these identifying details, all components needed (within a sequence) during erection can be quickly identified in the document management system.

[Figure 2. Shop Drawing Numbering — an example shop drawing number, 25316-011-V1-A-SS01-117-B1021-001, combining the project number, facility code, unit number, fabricator ID, detailer ID, construction sequence number, submittal number, piece mark, and assembly prefix.]

Construction
Using the Bechtel-developed database tool TEAMWorks to track installation progress, erection performance can be monitored relative to the established project budget and schedule. Performance is measured by comparing reported and predicted progress using historical installation rates for different types of model components (e.g., structural, miscellaneous, modular). These components are identified in the model by an identifying code (cost code); each assembly in the structural model is pre-assigned a cost code based on its constituent makeup and arrangement. A similar combination of data is used to create bar codes. Data from this tracking database can also be returned to the 3D steel model to provide graphical status.

Project Management
Design and erection quantities, as well as fabrication and erection costs, are means of assessing project completion and cost. The non-graphical attributes associated with modeled components can reflect the status of design, fabrication, or erection. Regular tracking and reporting of these indicators, and comparison with estimated budgets and planned schedules, provide project management with continual early assessments of trends and progress. The model attributes provide real-time access to information related to project costs associated with downstream processes such as fabrication and erection. Upon pay category assignment, for example, the cost of fabrication can be computed immediately by applying the associated unit rates to the summarized quantities. Similar progress comparisons can be generated by recording fabrication status dates and erection completion status dates. Likewise, recording the engineers' checking of components in the model allows a report to be generated comparing components checked with total components scheduled for design release, thus providing an indication of model checking completeness.

[Table 2. Sample of Pay Category Definitions — pay category number, unit of measure (ton or ft²), and description. Recoverable entries include: Beam, rolled (W), <40 lb/ft; Column, 81–150 lb/ft; Column, >257 lb/ft, Q390B material; Horizontal brace, rolled (pipe, HSS round), 6 in. and smaller; Vertical brace, rolled (WT and L); Truss, Grade 65 (65 ksi) material, field-bolted; Truss, shop-assembled; Grating, serrated, galvanized, 1-1/2 in. x 3/16 in.; Metal decking, composite floor deck, 18 gauge, 3 in.; Shop assembly, walkway panel with grating shop-installed; Shop assembly, stair tower partially assembled with handrail loose.]

THE STEEL (STRUCTURES) MODEL DATA

The building model is an ideal tool to store information used to manage material procurement, fabrication, and erection processes. The structural steel pieces originally considered by the structural engineering team when it was developing the data needs and work processes were the components required for the large, complex boiler-support structures typical of coal-fired power plants. Such boiler-support structures are on the order of 61 meters (200 feet) by 65 meters (215 feet) in plan and 91 meters (300 feet) in height and consist of 10,000 metric tons (11,000 tons) to 12,000 metric tons (13,200 tons) of structural and miscellaneous steel. This quantity roughly equates to more than 10,000 individual structural framing assemblies plus more than twice as many miscellaneous steel assemblies. The complexity of these structures, and the work processes needed to design, fabricate, and erect them, served to define many of the data management needs. The same data and work processes exist for smaller and less complex structures as well.

Data Quality
The data is useful, however, only if it is reliable, consistent, and accurate. Data quality is determined by comparing stored data with expected values. The accuracy of the model data is difficult to manage because it can only be partially controlled through procedures for software input and is verified through manual checking of the model. Initially, guides were prepared to define the format and content expectations for data entered into the model, and procedures were developed for manual model checking. Each control has limitations on how effectively it can ensure that all pertinent data is entered accurately, and model data errors remained an obstacle to accurate data usage. To address this problem, Bechtel developed a model-independent database to shadow, or replicate, much of the graphical and non-graphical model data, storing and validating selected attributes.
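With every assembly carrying a pay category and the fabricator's unit rates on file, the cost roll-up described above is a join-and-sum over the model data. A sketch with invented category numbers, rates, and assemblies (the real rate schedule follows Table 2):

```python
# Hypothetical unit rates (cost per ton) keyed by pay category number.
unit_rates = {"2.01": 1850.0, "4.09": 2400.0}

assemblies = [
    {"mark": "B-101", "pay_category": "2.01", "weight_tons": 1.2},
    {"mark": "B-102", "pay_category": "2.01", "weight_tons": 0.8},
    {"mark": "HB-07", "pay_category": "4.09", "weight_tons": 0.5},
]

def fabrication_cost(assemblies, rates):
    """Summarize quantity per pay category and apply the unit rates."""
    quantities, total = {}, 0.0
    for a in assemblies:
        cat = a["pay_category"]
        quantities[cat] = quantities.get(cat, 0.0) + a["weight_tons"]
        total += a["weight_tons"] * rates[cat]
    return quantities, total

quantities, cost = fabrication_cost(assemblies, unit_rates)
# 2.0 tons at 1850/ton plus 0.5 tons at 2400/ton
```

Because the inputs are all model attributes, this estimate is available as soon as pay categories are assigned, well before fabricator invoices arrive.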

This shadow database is part of Bechtel's Central Steel Management (CSM) software system: an Oracle® database accessed by a Microsoft® Windows®-based user interface. Its purpose is to copy key attributes from the graphical model and then evaluate data accuracy and inform the modeler of any needed corrections. Currently, more than 100 independent model data format or content validations are performed in every extraction.

Data Extraction
A standard extraction, transformation, and loading (ETL) process is carried out to automatically collect data from the model with no user intervention. Extraction is done via an in-house program scheduled to run daily. An application programming interface (API) provided by the 3D modeling tool (Tekla® Structures) allows the program to traverse and read the entire model data. Once extracted, the data is transferred and loaded to stage tables hosted by an Oracle database (Figure 3). The stage tables provide a mirror copy of the data found in the 3D model. The ETL process is performed against one or several models; it is identical for every model and is managed by a customizable tool that allows the user to specify the set of models to run, the e-mail notice requirements, and the time and frequency of the ETL runs.

Data from the stage tables is validated against a set of acceptance rules, and new or modified data that passes all validation checks is loaded into production tables. The data stored in the CSM production tables is replaced by each ETL process; thus, the data always reflects the status of the model at the time of extraction. New data acquired is compared with what was previously stored, and all attributes for each transaction are recorded in the log tables as a new, modified, or deleted action. A set of log tables keeps a record of all changes made to production tables, and each ETL run is identified by a unique session number. Acceptance rules, as well as coded functions used to validate attribute content in the model, are specified in lookup tables, as required for a given project.

Data Quality Evaluation
Data validation criteria are categorized by type. A type constituting a condition in which the data is not appropriate is identified as an error; a type that identifies reference material is listed as ignored; a type that identifies a model change, but not necessarily invalid data, is listed as a warning; and, finally, a type that indicates that data is validated is identified as a success. Table 3 provides examples of graphical and non-graphical data validation criteria built into CSM.

[Table 3. Sample Data Validation — graphical and non-graphical part and assembly validations. Recoverable entries include: section profile not available for the project (ERROR); invalid material name (ERROR); invalid main part (ERROR); invalid class code (ERROR); invalid model check date (ERROR); invalid assembly identification for part (ERROR); weight zero or null (WARNING); invalid prelim mark, incompatible material (WARNING); reference material, not considered part of active model (IGNORED); part accepted in CSM database (SUCCESS).]

The model checking/data validation process is performed daily for active models. After each ETL process, automated model data quality status and summary reports are created and transmitted to the project team for action, and regular model quality feedback is sent directly to the modelers in a set of reports via e-mail. Feedback includes copies of the error (rejected data) report and the validated data report. The reports are provided in a format that permits direct selection of parts or assemblies in the model for data correction, facilitating prompt resolution. This regular modeling feedback resulted in a noticeable improvement in initial model quality over a short period of time.
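The acceptance-rule pass over the stage tables can be sketched as a classifier that tags each record ERROR, WARNING, IGNORED, or SUCCESS, loads only acceptable rows into the production tables, and logs each change as a new, modified, or rejected action. The rules and field names below are illustrative stand-ins for the 100-plus validations CSM actually runs:

```python
def classify(part, project_profiles):
    """Illustrative acceptance rules; returns one validation type per record."""
    if part.get("is_reference"):
        return "IGNORED"    # reference material, not part of the active model
    if part["profile"] not in project_profiles:
        return "ERROR"      # section profile not available for the project
    if not part.get("material"):
        return "ERROR"      # invalid (missing) material name
    if not part.get("weight"):
        return "WARNING"    # weight zero or null: suspicious, not necessarily invalid
    return "SUCCESS"

def load(stage_rows, production, log, session, project_profiles):
    """Load acceptable rows into production; record every transaction."""
    for row in stage_rows:
        verdict = classify(row, project_profiles)
        if verdict in ("SUCCESS", "WARNING"):
            action = "modified" if row["id"] in production else "new"
            production[row["id"]] = row
            log.append((session, row["id"], action, verdict))
        elif verdict == "ERROR":
            log.append((session, row["id"], "rejected", verdict))
    return production, log

profiles = {"W12x50"}
rows = [{"id": 1, "profile": "W12x50", "material": "A992", "weight": 500},
        {"id": 2, "profile": "XX9",    "material": "A992", "weight": 100}]
production, log = load(rows, {}, [], session=1, project_profiles=profiles)
```

Keeping the rejected rows out of production while still logging them is what lets the error report and the validated-data report be generated from the same pass.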

THE CSM SOFTWARE SYSTEM

With shadow data available outside the modeling environment, reporting and other applications are able to access the information without requiring a direct model interface. To ensure data protection, and thus provide assurance that the data indeed represents the model information, the CSM software system provides three levels of user access (user, administrator, and super user), each providing different degrees of functionality.

Tools and Procedures
The CSM user interface provides access to several tools and automated procedures. Administrative tools are used to insert, update, and delete records, actions that modify the many lookup tables. The lookup tables, used to validate attribute content in the model, fall into three categories: applicable to the enterprise (all models/all projects), applicable to all models on a given project, and applicable to an individual model only. Project model file locations and other related information needed to perform the ETL process are also stored in lookup tables accessed by a CSM remote server application, allowing a CSM super user to initiate individual or multiple model ETL processes on demand or to a fixed schedule. Included in the scheduler is the capability to e-mail specified automated reports to specific customers.

The CSM interface also provides utilities to assign prelim mark numbers, pay category numbers, cost codes, and TEAMWorks identification numbers—all data previously determined via manual spreadsheet calculations—at the appropriate time. Prelim mark numbers are assigned by grouping modeled components that have similar purchase definitions. A UDA model import file is created, and the prelim mark definition is recorded both on the modeled component and in a CSM table used to validate the prelim mark assignments in subsequent ETL processes.

[Figure 3. CSM Data Acquisition Process — model data is extracted as initiated by the scheduler and loaded into stage tables; lookup-table checks route data errors to rejected data reports and valid data into the production tables (prelim marks, pay categories, cost codes), which serve an external database interface. The CSM production tables reflect only valid data.]
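The three lookup-table scopes suggest a precedence order when the same rule key is defined at more than one level, with the most specific scope winning. The article does not state CSM's actual precedence, so the following is an assumption sketched for illustration:

```python
def resolve_rule(key, enterprise, project, model):
    """Most-specific-wins lookup across the three scopes (assumed precedence:
    individual model, then project, then enterprise)."""
    for scope in (model, project, enterprise):
        if key in scope:
            return scope[key]
    raise KeyError(f"no acceptance rule defined for {key!r}")

# Hypothetical rule keys and values.
enterprise = {"max_part_weight": 50.0, "require_model_check_date": True}
project    = {"max_part_weight": 35.0}   # tighter project-specific limit
model      = {}                          # no model-specific overrides

resolve_rule("max_part_weight", enterprise, project, model)           # project wins
resolve_rule("require_model_check_date", enterprise, project, model)  # enterprise default
```

A layered lookup like this lets one enterprise rule set serve every project while still allowing a single model to be tuned without side effects elsewhere.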

The prelim mark validation compares the purchase definition retained in CSM with the active model attributes.

Pay category assignment is performed using a set of rules that evaluate the model data, via a procedure whereby all model parts constituting each structural assembly are collected and the assembly main part attributes are evaluated against a series of criteria that define the pay category. During the assignment process, all assembly parts are validated to ensure that the correct part is defined as the main part, that a pay category definition matches the assembly modeled, and that only one pay category definition is valid for the assembly. The pay category number for any member or assembly is thus determined using model data alone, eliminating the subjectivity of manually assigning a unit price to a given steel assembly. These rules return a "yes" or "no" value or a discrete result and thus choose the category. For example, the following is the qualifying rule statement for Pay Category 4.09, as shown in Table 2:

(IS_HOR_BRACE='Y') AND ((IS_TUBE='Y') OR (IS_PIPE='Y')) AND (IS_ROLLED='Y') AND (D<=6")

The first part of the rule evaluates whether the assembly is a horizontal brace. If the answer is "yes," the next qualifier is evaluated to determine whether the profile is either a tube or a pipe. If "yes," the rule next determines whether the shape is rolled. If "yes," the final check is for the profile depth: if the largest depth in the profile is less than or equal to 6 inches, the assembly falls into Pay Category 4.09. If any one of the above answers is "no," then Pay Category 4.09 is not appropriate, and the process continues until the member data matches another pay category rule. Pay category numbers are determined before shop drawing creation, as the connected structural model is prepared. As a result, early computation of fabrication costs yields an accurate prediction of the fabricator invoices, which, in turn, minimizes disputes at contract closeout.

Because a structural design evolves, changes may occur after the original material order. To manage the use of the ordered material, a utility is used to evaluate and reassign purchased material that is no longer applicable to the component for which it was ordered. For example, if a design change makes it necessary to increase a framing member size from what was purchased, the original purchased member, which is no longer appropriate, is placed into surplus material status. Before any subsequent material purchase (which usually occurs sometime before each design release), the surplus material is evaluated for its applicability to any new framing member not previously purchased. The utility searches model data for all sections that meet the surplus material original purchase specification (section profile, material grade, and length limit). If a match is found, the prelim mark for the previously purchased material is reassigned to the new member. In this way, the tool helps mitigate the amount of surplus material left at the end of a project.

[Figure 4. Graphic Model Data Sharing — the project document database (released drawings) and the material and erection tracking database feed selected document and tracking data to the CSM database alongside the model data.]
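The qualifying rule for Pay Category 4.09 translates directly into a boolean predicate over the main-part attributes. The attribute names follow the rule statement quoted above; the full CSM rule engine of course covers every category, so this is only the single-rule case:

```python
def qualifies_4_09(part):
    """(IS_HOR_BRACE='Y') AND ((IS_TUBE='Y') OR (IS_PIPE='Y'))
       AND (IS_ROLLED='Y') AND (D <= 6 inches)"""
    return (part["IS_HOR_BRACE"] == "Y"
            and (part["IS_TUBE"] == "Y" or part["IS_PIPE"] == "Y")
            and part["IS_ROLLED"] == "Y"
            and part["D"] <= 6.0)

hss_brace = {"IS_HOR_BRACE": "Y", "IS_TUBE": "Y", "IS_PIPE": "N",
             "IS_ROLLED": "Y", "D": 4.0}
deep_pipe = {"IS_HOR_BRACE": "Y", "IS_TUBE": "N", "IS_PIPE": "Y",
             "IS_ROLLED": "Y", "D": 8.0}
qualifies_4_09(hss_brace)  # qualifies
qualifies_4_09(deep_pipe)  # depth exceeds 6 in., so evaluation moves to the next rule
```

In the full engine, an assembly failing this predicate would simply be tested against the next category's predicate until exactly one matches.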

Upon satisfactory rule evaluation, a model UDA import file is created with the pay category assignments and then imported back into the model. Subsequent ETL processes validate model assemblies against the previously assigned pay categories. Other pay category rules include checks on attributes such as assembly weight, material grade, coating system, and assembly type.

TEAMWorks identification numbers are assigned using another utility that combines several model attributes to create a unique identifier, much like the shop drawing number. This number, required for material tracking, is assigned and loaded back into the model on the release of a shop drawing to the fabricator. CSM uses the project document database to identify which drawings are released for fabrication (see Figure 4). For export to the TEAMWorks database application, CSM assembles a table containing a list of each TEAMWorks identification number and other relevant model data, such as assembly weight, shop drawing number, and construction sequence.

CSM Reports
In addition to the data quality reporting described above, numerous other reports are available for checking model data quality and completeness. Each model ETL process also initiates a model status report. This report summarizes quantities by structural category (structural, miscellaneous, and construction aids) and summarizes the model completion status for the structural category (model checked, connection originated, connection material, and connection checked). This and other selected progress and quality reports are delivered to an e-mail list. Each report is created using a report template and is displayed via a viewer interface that allows the user to view summarized data as well as print the report in numerous desired formats.

A significant benefit of collecting all model data in an external database is the ability to prepare and standardize reports without the need for direct access to each model. This is significant, considering that each power plant project typically consists of a minimum of 8 to 12 models, and the number of project models generally doubles when a copy of each one is created for its release to the drawing extraction team. Capturing all model data in a central database provides the added capability to collect, analyze, and compare data across multiple models. This capability reduces the effort associated with collecting and evaluating the entire project status and gives project management the ability to analyze structural steel data even if they are not familiar with working in the modeling environment.

[Figure 5. Multiple Model Tracking and Comparison — the engineering model and the drawing model are each extracted via ETL into their own production tables, from which a differences report is generated.]
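The composite identifiers used throughout these interfaces (shop drawing numbers, TEAMWorks IDs, bar codes) all follow the same pattern: concatenate a fixed, ordered set of attribute values with a delimiter so the result sorts and filters predictably. A sketch using the example number shown in Figure 2; the field ordering here is a plausible reading of that figure, not a documented specification:

```python
def composite_id(record, fields, sep="-"):
    """Build a sortable, filterable identifier from an ordered set of attributes."""
    return sep.join(str(record[f]) for f in fields)

assembly = {"project": "25316", "unit": "011", "fabricator": "V1",
            "sequence": "SS01", "piece_mark": "B1021"}
# Hypothetical field order, loosely modeled on the Figure 2 numbering scheme.
SHOP_DWG_FIELDS = ("project", "unit", "fabricator", "sequence", "piece_mark")
composite_id(assembly, SHOP_DWG_FIELDS)  # '25316-011-V1-SS01-B1021'
```

Because the construction sequence sits inside the identifier, a plain prefix filter in the document management system pulls every component a crew needs for one erection region.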

For structural steel, the CSM database monitors two models for every portion of the structure—one identified as the engineering model and the other identified as the drawing model (see Figure 5). The need for two models arises from tight project schedules and the often different locations of the engineering and detailing teams. The engineering model is controlled by the design team, and the drawing model is controlled by the detailer or drawing extraction team; the models differ by ownership and by the data added by the detailer. The multiple models complicate management of new model information, because it is common for the engineering team to continue to need access to the engineering model to make additions and/or changes as final equipment information becomes available, even before shop drawing extraction. These modifications often occur in parallel with the shop drawing extraction process and thus are tracked by a separate release phase—another UDA. The detailer then incorporates the new release phase into the drawing model to maintain consistency between the two models. Because the CSM database acquires model data from both the engineering and the drawing models, a set of reports is provided to quickly and concisely compare data between them, thus validating common data and highlighting any differences.

Interfaces with Other Process Control Tools
Within the Bechtel EPC organization, several material and erection tracking databases exist to manage all procured and constructed commodities, including structural steel. Each database uses a unique identifier for each record; for structural steel, the identifier comprises several model attributes, much like the shop drawing number. The CSM application prepares and records the unique identifiers as part of the model, thus facilitating the linking of data tracked in the external databases with the graphical model components. This linking then offers the project the ability to view other non-graphical attributes in a graphical manner using the model.

FUTURE DEVELOPMENT

Development plans for the structural steel BIM application include integration of the four-dimensional (4D), or schedule-tied, functionality offered by most modeling applications with external tools to enhance erection tracking and progress monitoring, as well as direct links to fabricator process monitoring applications.

The development of a BIM application for cast-in-place (CIP) reinforced concrete construction work processes is imperative as the power industry proceeds into the design and construction of the next generation of nuclear plants. Managing the congestion of reinforcing steel, penetrations, anchorages, and embedments in CIP concrete demands the use of 3D modeling. Additionally, the ability to tie material information such as heat numbers from the mill and CIP-concrete-related data (placement breaks, rebar bundles, rebar heats, embedded item tags, placement weather conditions, admixtures used, cylinder break results, etc.) directly to a model would greatly enhance construction work packaging, progress monitoring, and configuration control.

CONCLUSIONS

A high-quality BIM application and process are essential to the effective and efficient use of data throughout the EPC process. From the purchase of structural steel raw material to the development of detailed fabrication information to delivery planning, material staging, and erection status, the integration of data and reliance on its quality are vital. Using a BIM database synchronized with the graphical model enables data to be evaluated, validated, and corrected, thus improving and ensuring its quality. Furthermore, by providing access to the data outside of the modeling environment, a BIM database allows standard processes and tools to be used to analyze and report reliable data to other essential project processes.

ACKNOWLEDGMENTS

The authors would like to acknowledge Peter Carrato, PhD, principal engineer and Bechtel Fellow, for his assistance and contributions to this paper.

TRADEMARKS

TEAMWorks is a trademark of Bechtel Corporation. Oracle is a registered trademark of Oracle Corporation and/or its affiliates. Microsoft and Windows are registered trademarks of Microsoft Corporation in the United States and/or other countries. Tekla is either a registered trademark or a trademark of Tekla Corporation in the European Union, the United States, and other countries.
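The engineering-versus-drawing-model comparison reports reduce to a keyed diff of the two production-table snapshots: attributes present in both models are checked for equality, and any part present in only one model is flagged. A minimal sketch with invented part records:

```python
def compare_models(engineering, drawing, attrs):
    """Diff two {part_id: attributes} snapshots on the selected attributes."""
    report = {"only_engineering": sorted(engineering.keys() - drawing.keys()),
              "only_drawing": sorted(drawing.keys() - engineering.keys()),
              "mismatched": []}
    for pid in sorted(engineering.keys() & drawing.keys()):
        diffs = {a: (engineering[pid].get(a), drawing[pid].get(a))
                 for a in attrs if engineering[pid].get(a) != drawing[pid].get(a)}
        if diffs:
            report["mismatched"].append((pid, diffs))
    return report

eng = {"B1": {"profile": "W12x50", "grade": "A992"},
       "B2": {"profile": "W8x31",  "grade": "A992"}}
dwg = {"B1": {"profile": "W12x50", "grade": "A572"},   # detailer changed the grade
       "B3": {"profile": "W8x31",  "grade": "A992"}}
compare_models(eng, dwg, ("profile", "grade"))
```

The mismatched entries drive the differences report; the "only" lists surface release-phase lag between the two teams.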

BIOGRAPHIES

Martin Reifschneider, a Bechtel engineering manager, has more than 31 years of experience in civil/structural engineering and work process automation. Currently serving as manager of the Central Steel Engineering Group, he is responsible for structural steel design work processes. Earlier, Martin was the civil/structural/architectural chief engineer for the Power Global Business Unit for 5 years; in this role, his responsibilities included technical quality, staffing, and personnel development of nearly 200 civil/structural engineers and architects. He was project engineer for a large fossil fuel project and also served in leadership positions on combined cycle and nuclear projects. Martin has published and presented papers on building information modeling and on the performance of steel embedments in concrete. He is a Six Sigma Yellow Belt and a licensed Professional Engineer in Michigan and Wisconsin. Martin has an MS and a BS in Civil Engineering from the University of Michigan, Ann Arbor.

Kristin Santamont is the responsible civil engineer for the Ivanpah Solar Electric Generating Facility. Since joining Bechtel in 2004, she has worked on multiple power projects, from design through construction, with most of her experience focused on steel work processes. Kristin has performed detailed structural steel design and detailed connection design, worked extensively in the 3D building model environment, and recently helped manage and execute a 16,000-ton steel purchase order for the Sammis air quality control system project. She was also part of the Central Steel Engineering Group, collaborating to further improve the Bechtel steel work process as it relates to building information modeling and the Power EPC business. Kristin has an MS in Structural Engineering and a BS in Civil Engineering, both from the University of Illinois at Urbana-Champaign, and is a licensed Professional Civil Engineer in California.


NUCLEAR UPRATES ADD CRITICAL CAPACITY

Originally Issued: May 2009
Updated: December 2009

Eugene W. Thomas
ewthomas@bechtel.com

Abstract—Over the past 20 years, nuclear plant uprates have added substantial capacity. Power uprates alone have added more than 5,600 MW since 1998—the equivalent of five new nuclear plants. This paper focuses on one power uprate category—extended power uprate (EPU). EPUs can increase a nuclear plant's output by as much as 20% but usually involve significant plant modifications. An EPU offers the economic benefit of increasing power output while avoiding long lead times for constructing new nuclear plants. However, regardless of the benefits, an EPU is a major undertaking that requires a significant commitment of resources. The success of an EPU rests on the quality of the management team and its ability to develop an effective implementation plan and to schedule work efficiently. Moreover, combining an EPU with a maintenance upgrade and/or license extension allows costs to be shared among programs. Owners of at least 60 nuclear units are expected to seek approval for EPUs in the near future. Of these, approximately 50 are pressurized water reactors (PWRs). While most of the early EPUs were performed on boiling water reactors (BWRs), the present interest in PWR upgrades is primarily due to advances in technology and improvements in available fuel.

Keywords—analytical margin, boiling water reactor (BWR), design margin, extended power uprate (EPU), License Amendment Request (LAR), margin management program, Nuclear Regulatory Commission (NRC), operating margin, pressurized water reactor (PWR), stretch power, uprate

© 2009 Bechtel Corporation. All rights reserved.

INTRODUCTION

New-generation nuclear plants may be having trouble getting started in the United States, but that does not mean that US nuclear capacity additions are at a standstill. In fact, the US's 104 operating nuclear units have added substantial new capacity in the form of reactor and plant uprates over the past 20 years, including more than 5,600 MW since 1998.

The World Association of Nuclear Operators (WANO) released the power industry's 2008 report card in late March 2009, proclaiming that safety and operating performance remained "Top Notch" with the year's average capability factor of 91.1%, continuing to mark nuclear power as the most reliable source of electricity in the US. Many observers have come to expect nothing less, since this is the ninth consecutive year that the US fleet's capability factor—a measure of a plant's online time—has exceeded 90%.

Not content to merely improve and maintain these outstanding operating statistics, the industry also embarked years ago on a path of upgrading US plants to produce additional power. The US Nuclear Regulatory Commission (NRC), which is responsible for regulating all commercial nuclear power plants in the United States, classifies power uprates into the following three categories: measurement uncertainty recapture, stretch power, and extended power.

Measurement Uncertainty Recapture Power Uprates

Measurement uncertainty recaptures (MURs) entail improvements to feedwater mass flow measurement technology, through use of ultrasonic flow metering, to significantly reduce the uncertainty in core calorimetric computations. The NRC has updated regulations that now permit licensing using an uncertainty in the safety analysis allowance consistent with that determined in these improved calculations. Lowering the uncertainty can result in uprates of up to 2%. However, MUR activity remains sporadic. Only Calvert Cliffs Units 1 and 2 have received the NRC's approval for a 1.4% reactor thermal uprate.
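The calorimetric bookkeeping behind an MUR can be sketched in a few lines. The flow, enthalpy, and uncertainty figures below are illustrative placeholders, not data for any actual plant; the point is only that a tighter flow measurement lets the plant operate closer to the already-analyzed limit.

```python
# Illustrative sketch (not plant data): how reduced feedwater flow
# uncertainty recovers licensable thermal power margin in an MUR.

def core_thermal_power_mw(m_dot_kg_s, h_steam_kj_kg, h_feed_kj_kg):
    """Core calorimetric power: feedwater mass flow times enthalpy rise."""
    return m_dot_kg_s * (h_steam_kj_kg - h_feed_kj_kg) / 1000.0  # MWt

def mur_uprate_mw(licensed_mwt, old_uncertainty, new_uncertainty):
    """Power recoverable when flow-measurement uncertainty is reduced.

    The safety analysis already bounds licensed power plus the old
    uncertainty allowance; a tighter measurement frees the difference.
    """
    return licensed_mwt * (old_uncertainty - new_uncertainty)

# Hypothetical round numbers for illustration only
p = core_thermal_power_mw(1500.0, 2770.0, 990.0)   # calorimetric power
gain = mur_uprate_mw(2700.0, 0.02, 0.004)          # 2% -> 0.4% uncertainty
print(round(p), round(gain))  # → 2670 43  (about a 1.6% uprate)
```

Ultrasonic flow metering is what makes the second argument drop from roughly 2% toward a fraction of a percent, which is why MURs top out near the 2% figure cited above.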

ABBREVIATIONS, ACRONYMS, AND TERMS

ACRS   Advisory Committee on Reactor Safety
BOP    balance of plant
BWR    boiling water reactor
EPRI   Electric Power Research Institute
EPU    extended power uprate
FAC    flow accelerated corrosion
HP     high pressure
INPO   Institute of Nuclear Power Operations
LAR    Licensing Amendment Report
LP     low pressure
MUR    measurement uncertainty recapture
NRC    (US) Nuclear Regulatory Commission
NSSS   nuclear steam supply system
PWR    pressurized water reactor
SPU    stretch power uprate
SSCs   systems, structures, and components
WANO   World Association of Nuclear Operators

Table 1. EPUs Approved by the US NRC

Plant            NSSS  NRC Approval  Uprate, %  Added MWt  Approximate MWe Added
Monticello       BWR   9/16/98       6.3        105        35
Hatch 1          BWR   10/22/98      8          205        68
Hatch 2          BWR   10/22/98      8          205        68
Duane Arnold     BWR   11/6/01       15.3       248        83
Dresden 2        BWR   12/21/01      17         430        143
Dresden 3        BWR   12/21/01      17         430        143
Quad Cities 1    BWR   12/21/01      17.8       446        149
Quad Cities 2    BWR   12/21/01      17.8       446        149
Clinton          BWR   4/5/02        20         579        193
ANO-2            PWR   4/24/02       7.5        211        70
Brunswick 2      BWR   5/31/02       15         365        122
Brunswick 1      BWR   5/31/03      15         365        122
Waterford 3      PWR   4/15/05       8          275        92
Vermont Yankee   BWR   3/2/06        20         319        106
Ginna            PWR   7/11/06       16.8       255        85
Beaver Valley 1  PWR   7/19/06       8          211        70
Beaver Valley 2  PWR   7/19/06       8          211        70
Susquehanna 1    BWR   1/30/08       13         463        154
Susquehanna 2    BWR   1/30/08       13         463        154
Hope Creek       BWR   5/14/08       15         501        167
Total                                           6,733      2,224

MWt = megawatts of reactor thermal output
MWe = megawatts of electrical output
(Source: NRC)
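As a quick consistency check on the rows of Table 1, the MWe column tracks the MWt column at roughly one-third, in line with typical light-water-plant thermal efficiency of about 33%. The sketch below simply verifies that relationship over the tabulated values.

```python
# Row-by-row check of Table 1: added MWe should be roughly one-third
# of added MWt (typical ~33% thermal efficiency). Values transcribed
# from the table.
added_mwt = [105, 205, 205, 248, 430, 430, 446, 446, 579,
             211, 365, 365, 275, 319, 255, 211, 211, 463, 463, 501]
added_mwe = [35, 68, 68, 83, 143, 143, 149, 149, 193,
             70, 122, 122, 92, 106, 85, 70, 70, 154, 154, 167]

assert sum(added_mwt) == 6733  # matches the table's MWt total

for mwt, mwe in zip(added_mwt, added_mwe):
    eta = mwe / mwt
    assert 0.32 < eta < 0.345, (mwt, mwe)  # each row near 33% efficiency
```

This is only a plausibility check on the transcription, not an engineering calculation; actual net efficiency varies by plant and turbine configuration.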

Stretch Power Uprates

A stretch power uprate (SPU) typically increases the original licensed power level by up to about 7%, usually by taking advantage of conservative measures built into the plant that previously were not included in design and licensing activities. Stretch uprate modifications concentrate on procedures and equipment setpoints. SPUs involve, at most, modest equipment replacement and little or no change to either the nuclear steam supply system (NSSS) or turbine by limiting pressure increases (2% to 3%) to allow sufficient mass flow margin in the high-pressure (HP) turbine.

Extended Power Uprates

An extended power uprate (EPU) increases the original licensed thermal power output by up to 20% but requires significant plant modifications, making the uprate capability plant-dependent. Table 1 provides a list of NRC-approved EPUs. Table 2 identifies EPU applications under review by the NRC. The focus of this paper is to review EPU requirements and provide estimates of the power added to the nuclear inventory by past, current, and future EPUs.

Table 2. EPU Applications Under NRC Review

Plant           Uprate, %  Added MWt  NRC Approval Expected
Browns Ferry 1  15         494        To Be Determined
Browns Ferry 2  15         494        To Be Determined
Browns Ferry 3  15         494        To Be Determined
Monticello      12.9       229        December 2009
Point Beach 1*  17         260        To Be Determined
Point Beach 2*  17         260        To Be Determined
Total                      2,231

MWt = megawatts of reactor thermal output
* These applications were undergoing NRC acceptance review as of May 2009.
(Source: NRC)

EARLY EPU ACTIVITY

Most of the early EPUs were performed on boiling water reactor (BWR) plants. Fifteen BWRs have been approved to date, and nine were approved before the first pressurized water reactor (PWR) EPUs received approval from the NRC. A total of five PWRs received approval. Four of them requested rather modest increases. The exception is Constellation Energy's Robert E. Ginna nuclear power plant (Figure 1), which received NRC approval for a 16.8% power uprate increase in July 2006. Located along the south shore of Lake Ontario in Ontario, New York, Ginna is one of the oldest nuclear power plants still in operation in the US, having begun operation in 1970. The plant is a single-unit Westinghouse two-loop PWR. The original steam generators were replaced in 1996, enabling an almost 17% EPU to be approved by the NRC 20 years later.

Figure 1. R.E. Ginna Nuclear Power Plant — Approved for 16.8% Power Uprate (Source: NRC)

Three of the applications under review are for 15% EPUs for the three Tennessee Valley Authority Browns Ferry nuclear power plant units (Figure 2). The NRC operating licenses for Units 1, 2, and 3 were renewed in May 2006, which allows continued operation of the units until 2033, 2034, and 2036, respectively.

Figure 2. Browns Ferry Nuclear Power Plant Units 1, 2, and 3 — Anticipating Approval for 15% EPUs (Source: TVA)
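A side note on Table 2: dividing each unit's added MWt by its uprate fraction gives the implied pre-uprate licensed thermal power, which is a convenient sanity check on the tabulated figures. The three rows below are taken from the table; the derived values are illustrative arithmetic, not quoted licensing data.

```python
# Each Table 2 row implies the unit's pre-uprate licensed thermal power:
# original MWt ≈ added MWt / uprate fraction (row values from the table).
rows = {
    "Browns Ferry 1": (494, 0.15),
    "Monticello":     (229, 0.129),
    "Point Beach 1":  (260, 0.17),
}
implied = {plant: round(added / frac) for plant, (added, frac) in rows.items()}
# → {'Browns Ferry 1': 3293, 'Monticello': 1775, 'Point Beach 1': 1529}
```

The same division applied to Table 1 recovers each approved plant's original thermal rating, which is how the percentage and MWt columns stay mutually consistent.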

MANY ECONOMIC ADVANTAGES

The economic incentive of an EPU is to increase power output at competitive costs while avoiding the long lead times for constructing new generation. An EPU can be brought into operation in about one-half the time required to license and build a new plant. Therefore, a utility does not have to wait as long to reap the benefits of an EPU. Combining an EPU with a maintenance upgrade and/or a license extension allows some of the cost and cost recovery to be shared among programs. Many plants have been operating for 30 years or more and require major equipment replacement. These and other plants either have already completed, or are contemplating, license renewal. Integrating the total project minimizes future outage risks because upgraded/modified equipment would be used that had already considered the new life-extension requirements, and can reduce utility costs as well.

Information provided in a June 2008 Nuclear Energy Institute Seminar, based on a small number of plants currently involved in EPU programs, indicates that the capital cost of this incremental power ranges from about 15% to 50% on a cost-per-kilowatt basis, compared with the cost of a new nuclear plant. Of course, these costs are very site dependent and tend to increase on a per-kilowatt basis with larger uprates, so it is difficult to generalize other than to suggest that, on a capital-cost-per-kilowatt basis, EPU uprates are very competitive with new power plant construction.

Although not necessarily an economic advantage, the added power from uprated units is also effective in reducing greenhouse gas emissions for the entire utility fleet of plants in a timely fashion. Other intangible benefits are also associated with an EPU.

PWR UPGRADES EMERGE

The plants whose owners have already contacted the NRC about an EPU and are expected to file an application by 2013 are listed in Table 3, but the real number of prospective uprates is far higher. At least 60 nuclear units are most likely candidates for an EPU program in the near future, of which about 50 are PWRs. Taken together, these likely EPU upgrades would provide added capacity equivalent to that of 6 to 12 new plants.

Table 3. Expected Applications for EPUs

Year   Number of Plants  MWt    Approximate MWe
2010   6                 2,274  758
2011   8                 3,173  1,058
2012   1                 522    174
2013   2                 870    290
Total  17                6,839  2,280

MWt = megawatts of reactor thermal output
MWe = megawatts of electrical output
(Source: NRC)

The increased interest by PWR owners has been prompted, in part, by advances in technology and, perhaps more importantly, by improvements in available fuels. Over the past two decades, fuels have become available with slightly higher enrichments, improved cladding, low noncondensable gas releases, and improvements in burnable poisons. Additionally, improved manufacturing processes have resulted in better process control, which leads to less statistical variation in design margin. With less design margin required, an output uprate is possible. These fuels also have better structural stiffness that makes them less vulnerable to vibration and fretting. In some cases, engineers have found ways to safely place more fuel in existing reactor vessels. Also, improvement to neutron fluence through the use of low-leakage cores has provided additional margin in NSSS components and helps to accommodate higher power levels over the long term.

All of the aforementioned improvements lead to greater thermal power output. The added thermal power output can be achieved with little or no increase in output steam pressures (based on redesign and replacement of the HP turbine rotor) by increasing steam mass flow. In some cases, an increase of 2% to 3% pressure has been adopted for increased fuel margins.
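The 15% to 50% cost-per-kilowatt range can be made concrete with a back-of-envelope sketch. The new-build dollar figure below is an assumed placeholder for illustration only; the ratio bounds are the ones cited above.

```python
# Back-of-envelope EPU economics. The new-build $/kW value is an assumed
# placeholder, not a figure from the seminar; the 0.15-0.50 ratio is the
# range quoted in the text.
new_build_cost_per_kw = 5000.0          # assumed $/kW for new construction
epu_ratio_low, epu_ratio_high = 0.15, 0.50

uprate_mwe = 150.0                      # e.g., a 15% EPU on a 1,000 MWe unit
kw_added = uprate_mwe * 1000.0

cost_low = kw_added * new_build_cost_per_kw * epu_ratio_low
cost_high = kw_added * new_build_cost_per_kw * epu_ratio_high
# → roughly $0.1 billion to $0.4 billion under these assumptions
```

Because the added kilowatts arrive in about half the time a new plant would take, the avoided carrying cost further favors the uprate even at the top of the range.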

MORE THAN REPLACING PARTS

This does not mean that an EPU can be completed by merely making a few simple plant modifications. Increased thermal output requires greater thermal input into many of the plant systems and components. For each EPU, detailed studies are required for each plant. It is difficult to generalize about the perfect plan to complete an optimum EPU for a given plant. Differences in initial regulatory approaches, past responses to regulatory issues, and previous modifications and equipment changeouts to maintain plant operation all combine to make the EPU program for each plant unique. Even side-by-side "identical plants" frequently require separate plans to accomplish an equivalent EPU. However, some general trends have been observed. The Electric Power Research Institute (EPRI) maintains a lessons-learned database that identifies issues observed and resolved in previous power uprates and serves as an excellent information base for future uprates.

Industry experience with power uprates to date has shown that the installed capacity of emergency core coolant systems is nearly always sufficient without modification. Depending on the existing margins, auxiliary feedwater systems and emergency service water systems may require modification. In many cases, reactor coolant pressure boundary modifications have not been a major concern. However, design duty for overpressure protection and required relief capacity in the reactor coolant pressure boundary from normal operating and transient design conditions typically increase with increased power. This may require that the primary- and/or secondary-side safety valves and safety-relief valves be modified.

Major balance-of-plant (BOP) upgrades have been the focus of most EPUs. Depending on the magnitude of the uprate and the condition of the turbines, it may be necessary to replace, repower, or modify the low-pressure (LP) and/or HP portion of the turbine. In most cases, an HP turbine retrofit (at least) is required, because the uprate can be analyzed and performed at constant pressure and because throttle margin can be achieved through the retrofit without an attendant increase in operating reactor pressure.

Increased steam and feedwater mass flow often require that piping be replaced to accommodate greater mass flow or to counteract the effect of FAC. Increased flow accentuates flow accelerated corrosion (FAC) in pipes and other components, potentially reducing required margins through lowered material properties and adding burden on pumps. Feedwater, condensate, and heater drain pumps often have to be replaced or modified. Components such as feedwater heaters, moisture separator reheaters, and heat exchangers are frequently replaced with larger units, and the condenser is either replaced or retubed. Replacement components are generally larger and heavier, which means that structures supporting these components are challenged and frequently have to be strengthened by modifying the building structure and other foundations. Increased mass flow also has the potential to raise flow-induced vibration levels in systems and components to unacceptable levels or change the frequency of the exciting forces, causing vibration where it previously did not exist.

The turbine, main generator, main power transformer, and power train pumps often have to be replaced or modified. Major modifications to the generator and stator (rewinding) are expected. This may also require increased cooling for the generator, bearings, and seals, and increases the demands of isophase bus duct cooling. Transformers may need to be replaced with larger units. The design must also consider any increased demand for demineralized water. Plants with closed-loop cooling may also have to consider cooling tower upgrades, and plants with open cycles need to evaluate thermal effects from the condenser outfall.

MANAGING MARGINS

An EPU is a major undertaking for an operating plant that requires the combined expertise of the plant staff, NSSS contractors, turbine contractors, and nuclear EPC contractors. An initial but important step is to establish a margin management program (if the plant does not already have one) to ensure that adequate margins are available in systems, structures, and components (SSCs). Developing or updating the margin management program may be done in parallel with other EPU preparation steps. Several "margins" are of interest in a margin management program. The Institute of Nuclear Power Operations (INPO) identifies three different nuclear plant design margins: operating, design, and analytical (Figure 3).

Figure 3. Managing Margins (Source: INPO)
(The figure shows a ladder from the range of normal operations up through the operating limit, the analyzed design limit, and the ultimate capacity, with the operating, design, and analytical margins between those levels.)

Operating margin is the difference between the operating limit and the range of normal operation. The operating limit is analogous to design values in engineering terms. It accounts for, and envelops, all the potential operating conditions of the plant. Increased thermal output from an EPU imposes further demands on the operating limit. Design codes and licensing criteria include a certain margin, or safety factor, which addresses uncertainties in design, fabrication durability, reliability, and other issues. The difference between the analyzed design limit and the operating limit is this conservatism, which INPO calls the design margin. And finally, beyond the design limit lies the analytical margin.

The margin management program has two basic parts. One is analytical: ensuring that the design documents are current, correct, and consistent with the plant design features. The second part is more complex in that it requires a systemic assessment of the current condition of the physical plant through engineering walkdowns and reviews of condition reports and other operational data. Normal aging and plant operation—which require constant attention by owners—can decrease each of these margins. Even systems or components not directly affected by the power increases may not function as efficiently as intended following an EPU. For all of these reasons, the margin management program becomes an important tool in performing an EPU.

Assuming that the necessary assessments have been performed and a decision made to consider an EPU, the next step is to conduct a feasibility study. An integrated team consisting of the owner's plant staff, the NSSS supplier, an experienced architectural/engineering firm, and the turbine generator supplier should perform the feasibility study. This approach minimizes interface issues among the aforementioned parties in relation to current operating experience at the nuclear plant and the NSSS, BOP, and turbine generator equipment. Initial evaluations are conducted to identify the potential power increases available through modifications of the NSSS, the nuclear systems, the turbine and cooling system, and the BOP. The turbine generator is also evaluated to determine modifications required to meet the proposed uprated power needs. A thorough review of EPRI's generic lessons-learned database is also important for identifying potential future issues. A cost-benefit analysis is included in, or prepared in parallel with, the feasibility study. Most utilities are finding that, compared with other available alternatives, it is cost-effective to implement the greatest amount of added power possible from the EPU, provided that other outside factors demonstrate that the need exists. Typically, the greater the uprate, the greater the cost of the last kilowatt added.

Next, the potentially affected nuclear and BOP systems and components are evaluated to determine the pinchpoints—those items that have suffered margin erosion due to preexisting factors or would suffer erosion due to the EPU modifications. The next phase of the feasibility study is to identify modifications necessary to meet the EPU's requirements and ensure that the modifications reestablish required margins.
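The margin ladder of Figure 3 amounts to simple differences between four levels. The sketch below expresses that arithmetic; the numeric levels are invented for illustration (expressed as percent of rated power) and are not from INPO or any plant.

```python
# The INPO margin ladder of Figure 3 as simple differences.
# Numeric levels are invented for illustration (percent of rated power).
ultimate_capacity     = 120.0
analyzed_design_limit = 112.0
operating_limit       = 104.0
top_of_normal_ops     = 100.0

analytical_margin = ultimate_capacity - analyzed_design_limit   # 8.0
design_margin     = analyzed_design_limit - operating_limit     # 8.0
operating_margin  = operating_limit - top_of_normal_ops         # 4.0

# An EPU raises the top of normal operation, consuming operating margin
# unless reanalysis or modification restores it.
epu_uplift = 3.0
remaining = operating_margin - epu_uplift
assert remaining >= 0.0  # the uprate must fit within available margin
```

The final assertion is the whole point of the margin management program: every proposed uplift is checked against the eroded, as-found margins, not the original design-basis values.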

PERFORMING THE UPRATES

Based on the feasibility study, the owner decides on the final upgrades/modifications required to meet the EPU goals. In some cases, margin can be restored solely through more sophisticated analysis. In other cases, hardware changes or plant modifications are required. With this final list, equipment specifications are prepared and purchase orders are placed for long-lead-time components. Typical components in this category include:

• HP and LP turbines (replacement)
• Main and auxiliary generators (upgrades)
• Transformers (replacement)
• Feedwater heaters (replacement)
• Pumps and motors (feedwater, condensate, heater drains, component cooling water)
• Spent fuel pool cooling heat exchangers
• Main steam reheaters
• Condenser and/or cooling tower (upgrades)
• Water treatment system (upgrades)

Next, a more detailed evaluation is performed that supports a Licensing Amendment Report (LAR) for NRC review and approval. The LAR incorporates the completed analytical results along with additional detailed evaluations of SSCs directly or indirectly affected. The LAR requirements are provided in NRC document RS-001, Review Standard for Extended Power Uprates.

Modifications are typically prepared in the form of design change packages. Some EPU programs require more than 50 major design change packages. These packages include detailed design documents and a step-by-step process for field implementation. Engineers, procurement staff, and construction experts work hand-in-hand to provide the design details to maintain the configuration management and design control process required by the utility, all under the purview of a strict quality assurance program. Equally important, they verify that the modification can be completed in a safe and efficient manner that results in a quality product. Care must be taken to ensure that no damage occurs to adjacent equipment outside the boundaries of the modification, often in very confined quarters. Implementing the modifications requires extremely focused management planning and execution processes that are more complex than those of typical maintenance outage activities.

THE NRC'S REVIEW OF THE PACKAGE

The process for amending commercial nuclear power plant licenses and technical specifications for power uprates is the same as the process used for other license amendments. This process is governed by 10 CFR 50.4, 10 CFR 50.90, 10 CFR 50.91, and 10 CFR 50.92. EPU requests are submitted to the NRC as an LAR. After a licensee submits an application to change the power level at which it operates the plant, the NRC notifies the public by issuing a public notice in the Federal Register stating that it is considering the application. Members of the public have 30 days to comment on the licensee's request and 60 days to request a hearing.

The NRC thoroughly reviews the application, including the cost-benefit analysis. NRC commitments to review the LAR include a 12-month review cycle for acceptance of the EPU submittal. After the NRC accepts the owner's application, additional information is expected to be requested through the NRC Request for Additional Information process. The LAR for an EPU is extensive, involves evaluating virtually every aspect of a plant's operating experience, and exposes the licensee to questions about the current licensing basis. The licensee has to manage the risk that its LAR may bring attention to unique features that may be reevaluated by the NRC and be subject to public hearings. In addition, all EPU submittals require an Advisory Committee on Reactor Safety (ACRS) meeting.

After the NRC and ACRS complete their reviews and consider and address any public comments and requests for hearings related to the application, the NRC issues its findings in a safety evaluation report and either approves or denies the power uprate request. A notice regarding the NRC's decision is then placed in the Federal Register. As before, members of the public have 30 days to comment and 60 days to request a hearing.
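The public-participation windows described above lend themselves to simple date arithmetic. The submittal and notice dates below are invented, not taken from any actual docket; only the 30-day, 60-day, and 12-month durations come from the text.

```python
from datetime import date, timedelta

# Illustrative tracking of the review windows described above; the
# submittal date and notice lag are invented, not from an actual docket.
submittal = date(2009, 6, 1)
notice = submittal + timedelta(days=14)          # Federal Register notice

comment_deadline = notice + timedelta(days=30)   # public comments close
hearing_deadline = notice + timedelta(days=60)   # hearing requests close
review_target = submittal + timedelta(days=365)  # 12-month review cycle

assert comment_deadline < hearing_deadline < review_target
```

A licensee's project schedule typically pegs outage planning and long-lead procurement against the review target, with contingency for Requests for Additional Information along the way.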

INPO guidance suggests that each design package be completed and approved by designated plant personnel 1 year in advance of the planned outage. To minimize plant downtime, actual hardware implementation is generally performed over two or more refueling outages. Because major portions of the plant, particularly the BOP, are subject to major rework, work in an operating plant introduces a whole new set of complexities.

The scheduling effort is a critical component in controlling implementation costs. Minute details are included in the schedule, identifying construction installation activities that occur during each outage shift. Each work package is integrated with all other packages and with unrelated but required outage activities so that the needed cranes, access to space, and critical tools and other resources are available for all required tasks. It is also critical that the necessary trained human resources be available on a 24/7 basis during all implementation outages.

Outages are performed during off-peak periods, when electrical demand is low. During each plant outage, the utility must purchase replacement power. Based on weather conditions and on other units that the utility may own (for example, less-efficient coal plants or gas turbines), replacement power may not be needed or the utility may operate less-efficient assets (and purchase replacement power). To keep the costs of replacement power as low as possible, it is important to keep outage time to a minimum, and utilities typically work toward this goal. Therefore, outage execution plays a major role in the overall success of the EPU. A detailed power ascension testing and monitoring program needs to be developed early in the process so it can be implemented following outage completion, ensuring that each system and component performs its intended functions.

CONCLUSIONS

Nuclear power is currently the most reliable source of power in the US. As the demand for nuclear power increases, it can be advantageous for plant owners to perform uprates to increase capacity. As described in this paper, an EPU can achieve up to 20% additional output but is much more involved than an MUR or SPU. While EPUs represent the most challenging power uprate due to their wide range of design and licensing impacts and need for plant modifications, they offer competitive costs and more payback in terms of MWe, compared with new construction. The time required to bring an EPU on line is about half that needed to license and build a new plant. Therefore, it is easy to understand why an EPU is an attractive prospect for a utility. The success of an EPU program relies heavily on the quality of the management team and its ability to develop an effective integrated implementation plan, to schedule the work effectively, and to provide controls to ensure that those schedules are carried out.

The original version of this paper was first published on May 1, 2009, in POWER—an Internet magazine and the official publication of the Electric Power Conference & Exhibition (http://www.powermag.com/issues/features/Nuclear-Uprates-Add-Critical-Capacity_1860.html). The paper has been edited and reformatted to conform to the style of the Bechtel Technology Journal.

BIOGRAPHY

Eugene W. Thomas has been with Bechtel for nearly 40 years, involved primarily in commercial nuclear power. After an initial stint with Boeing Company as a structural dynamicist, Gene joined Bechtel as one of the early seismic specialists. A large portion of his career was spent as a technical staff supervisor; in this role, he provided direct technical support to more than 35 nuclear power plants. His interests soon broadened to encompass engineering supervision, and he successfully led design efforts on several projects. Earlier, Gene was the chief engineer for the civil/structural/architectural and plant design disciplines. For the past 8 years, he has been an engineering manager for nuclear projects in the Frederick, Maryland, office. He is a Six Sigma Champion.

Gene has served on both ASCE and ASME code committees and on various nuclear industry task forces. He was one of the earliest recipients of a Bechtel Technical Grant and was privileged to serve as a Bechtel representative on a special US Government Presidential Task Force to develop and promote use of high-strength, corrosion-free steel as part of a larger government effort to support US manufacturing, expand research and development, and improve the US infrastructure. Gene has written several technical papers and contributed to the Handbook on Structural Concrete, published by McGraw-Hill.

Gene has an MS in Mechanical Engineering and a BS in Civil Engineering, both from Drexel Institute of Technology (now Drexel University), Philadelphia, Pennsylvania.

The service-oriented architecture. This success has been achieved by standardization in data model and work flows. security. productivity. and Web deployment methodologies for spatial data are being developed in conjunction with Bechtel’s larger interoperability efforts in data integration and encapsulation under International Organization for Standardization (ISO) Standard 15926. © 2009 Bechtel Corporation. Oracle Spatial. the issues of data interoperability. and GIS automation continue to be central to the GIS implementation and deployment strategies being adopted for the company. implement. The need for an effective and affordable interoperability solution that fits the current execution model drove Bechtel to develop. It was based on a proprietary middleware platform that used ISO 15926 reference data at the “dictionary compliance” level. This implementation. and within the global engineering environment of a company such as Bechtel. Such interoperable deployment strategies for spatial data are being developed in conjunction with Bechtel’s larger interoperability efforts in data integration and encapsulation under International Organization for Standardization (ISO) Standard 15926. is now used by most of Bechtel’s business lines and projects worldwide. This generalized international standard for industrial automation systems and integration can be adapted to any business domain. With the GIS technical discipline and spatial data being relatively new centralized resources within the company. The first implementation of ISO 15926 at Bechtel occurred in early 2000. 24/7. geographic information system (GIS). [1] However. GIS. Robin Benjamins rxbenjam@bechtel. interbusiness-domain execution model. enterprise architecture that provides reliability. information technologies that support the engineering work processes began moving from technology-centric to standards-centric solutions. McLane tjmclane@bechtel. 
INTEROPERABLE DEPLOYMENT STRATEGIES FOR ENTERPRISE SPATIAL DATA IN A GLOBAL ENGINEERING ENVIRONMENT

Tracy J. McLane; Yongmin Yan, PhD (yyan1@bechtel.com); and Robin Benjamins

Issue Date: December 2009

Abstract—The implementation of a Bechtel enterprise geographic information system (GIS) has greatly facilitated the sharing and utilization of vast amounts of diverse geospatial data to support complex engineering projects worldwide. The interoperability of mapping, imagery, and other related geospatial data is vital to this support. This paper describes interoperability strategies implemented at Bechtel that facilitate the sharing and integration of GIS content, maximizing the use of standardized technologies, data model standardization, and GIS automation that streamlines work processes and assists with data discovery and access.

Keywords—ANSI INCITS 353, enterprise desktop, enterprise GIS, ISO 15926, ISO 19115, metadata, spatial data standards

ABBREVIATIONS, ACRONYMS, AND TERMS

3D: three-dimensional
ADF: application development framework
ANSI: American National Standards Institute
BecGIS: Bechtel's enterprise GIS
BIM: building information modeling
BSAP: Bechtel standard application program
CAD: computer-aided design
DEM: digital elevation model
ECM: enterprise content management
ESRI: Environmental Systems Research Institute, Inc.
FGDC: Federal Geographic Data Committee
G&HES: (Bechtel) Geotechnical and Hydraulic Engineering Services
GBU: (Bechtel) global business unit
GIS: geographic information system(s)
INCITS: InterNational Committee for Information Technology Standards
interoperability: the automatic interpretation of technical information as it is exchanged between two systems
IS&T: (Bechtel) Information Systems and Technology
ISO: International Organization for Standardization
ISO 15926: the ISO standard for Integration of Life-Cycle Data for Process Plants including Oil and Gas Production Facilities
ISO 19115: the ISO standard for Geographic Information—Metadata
JV: joint venture
LIM: life-cycle information management
MAPX: mean areal precipitation
MR: material requisition
NEXRAD: next-generation radar
NWS: National Weather Service
OGC: Open Geospatial Consortium
P&ID: piping and instrumentation diagram
PSN: project services network
RDL: reference data library
SDSFIE: Spatial Data Standard for Facilities, Infrastructure, and the Environment
TWG: technical working group

INTRODUCTION

The success of geographic information systems (GIS) in increasing efficiency, accuracy, communication, and collaboration within any business or organization has been documented. Even so, managing and deploying spatial information across any large organization can be a daunting task. Given the great variety of computer-aided design (CAD), GIS, and other data sources and formats involved in supporting Bechtel projects in its five global business units, such as fossil power, nuclear power, mining and metals, and civil, it can be even more challenging.

BACKGROUND

As Bechtel shifted from a home-office-based to a global, work-shared engineering environment, spatial data became an essential component of the overall engineering and business information that is critical to Bechtel's interoperability strategy. To address this challenge, Bechtel developed, verified, and deployed an approach based on ISO 15926, Integration of Life-Cycle Data for Process Plants including Oil and Gas Production Facilities, the so-called DataBroker solution. The implementation of an enterprise GIS has greatly facilitated spatial data sharing and utilization, and ISO 15926 establishes a fundamental element of Bechtel's information management strategy (see Figure 1). Currently, a new initiative is underway to upgrade the existing DataBroker platform to one fully compliant with ISO 15926, including Web Ontology Language and the Semantic Web. This new implementation will facilitate the ubiquitous interoperation of information among diverse systems and across different disciplines and business lines.

MANAGING DIVERSE SPATIAL INFORMATION

Data originating from government agencies, clients, subcontractors, GIS vendors, and different disciplines within the company comes in a variety of formats, ranging from hand drawings, e-mails, text files, and spreadsheets to database files, Environmental Systems Research Institute, Inc. (ESRI®) shapefiles and geodatabases, AutoDesk® AutoCAD® and Bentley® MicroStation® computer-aided design (CAD) drawings, Pitney Bowes MapInfo Professional® files, and Google™ Earth files. Making such vast and diverse data easily accessible and reusable for projects is a great challenge. To meet it, Bechtel's GIS department developed standardized GIS desktop procedures and then diligently used them to catalog, georeference, verify, and load these datasets into a central Oracle® Spatial database conforming to a standardized GIS data model.
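The catalog-and-load workflow described above lends itself to simple automation: incoming files can be classified by format before they are georeferenced, verified, and loaded. The sketch below is a minimal, hypothetical illustration of that dispatch step; the format table and the `catalog_dataset` helper are invented for this example and are not Bechtel's actual desktop procedures.

```python
# Hypothetical sketch: classify an incoming geospatial source by file
# extension so the right conversion/loading procedure can be applied.
from pathlib import Path

# Illustrative format table (not an exhaustive or official list).
HANDLERS = {
    ".shp": "ESRI shapefile",
    ".dwg": "AutoCAD drawing",
    ".dgn": "MicroStation drawing",
    ".tab": "MapInfo table",
    ".xls": "spreadsheet",
}

def catalog_dataset(path):
    """Record a source file and its detected format so it can later be
    georeferenced, verified, and loaded into the central database."""
    kind = HANDLERS.get(Path(path).suffix.lower(), "unknown")
    return {"source": path, "format": kind, "status": "cataloged"}

record = catalog_dataset("site_plan.dwg")
```

Unknown formats fall through to a generic "unknown" bucket so they can be reviewed manually rather than loaded blindly.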

The underlying data model used by Bechtel in its Oracle Spatial database environment is the Spatial Data Standard for Facilities, Infrastructure, and the Environment (SDSFIE), a GIS data model first developed for the military but whose usage has spread throughout many other federal, state, and local GIS organizations. It is published by the American National Standards Institute (ANSI) as ANSI INCITS 353 and is listed in the Federal Enterprise Architecture Geospatial Profile. [2] The SDSFIE data model is distributed with a number of tools that have allowed the Bechtel GIS department to generate standard GIS architectures and automate portions of the data management workflow.

Figure 1. Engineering Information Management Strategy

Standardized Data Model and Workflows
The implementation of the standardized data model within the BecGIS has facilitated the development of standard processes to capture, manage, and deploy spatial information. While the SDSFIE data model captures 90% of all GIS datasets that Bechtel may need to support its work, it has sometimes been necessary to add new GIS layers or fields to meet specific business needs. Controlling the addition of new data architecture through the SDSFIE tools ensures that all structures are reproducible and documented.

Benefits of a standardized data model include:
• "Living, breathing" data dictionary
• Architecture reproducibility
• Facilitated use of GIS automation

Adoption of a standardized data model for the BecGIS environment has also facilitated the development of GIS automation, which has helped implement numerous process improvements for data-intensive workflows around the company.

Interoperable Approach to Spatial Data Storage
Because Bechtel uses a variety of software products to manage and maintain data, it has been essential that the GIS department take an interoperable approach to the storage of spatial data within the BecGIS. To ensure that a variety of client software could access the spatial information, the decision was made to use Oracle Spatial, along with the ESRI ArcGIS® Server environment, thereby creating a Bechtel enterprise GIS, known as BecGIS. Examples of client software are ESRI ArcGIS Desktop; AutoDesk AutoCAD; Pitney Bowes MapInfo Professional; Bentley Map™ and gINT®; and Mentum® Planet®. In Figure 2, the solid green lines represent clients that can directly access spatial data within the BecGIS, while the dashed red lines represent Bechtel custom solutions to data interoperability. Where interoperable vendor solutions and plug-ins have not been available, the GIS department has developed some of its own solutions to feed and extract data for specialty modeling software packages.
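Controlling additions to the data architecture implies checking any proposed layer against the standard before it enters the database. The following is a minimal sketch of that idea; the required-field list and function are invented for illustration and are not the actual SDSFIE schema or tooling.

```python
# Hypothetical sketch: gate a proposed GIS layer on a standardized set of
# required fields, so every addition to the database stays reproducible
# and documented. The field names are illustrative only.
REQUIRED_FIELDS = {"feature_id", "geometry", "coord_system", "metadata_link"}

def validate_layer(name, fields):
    """Report whether a proposed layer carries the standard's required fields."""
    missing = REQUIRED_FIELDS - set(fields)
    return {"layer": name, "compliant": not missing, "missing": sorted(missing)}

# A proposed layer missing its metadata link would be flagged:
result = validate_layer("proposed_pipeline_layer",
                        {"feature_id", "geometry", "coord_system"})
```

A report like this can be produced automatically for each requested architecture change before it is approved.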

Figure 2. Data Interoperability of Spatial Data

(Additional abbreviations appearing in Figure 2: CADD: computer-aided design and drafting; COTS: commercial off-the-shelf; LIDAR: light detection and ranging; NED: National Elevation Dataset; NLCD: National Land Cover Data; USDA: US Department of Agriculture; USGS: US Geological Survey.)

Metadata Management
Each geospatial layer, as well as its feature-level geometry, in the BecGIS is documented by metadata created to record information such as the currentness, data source, scale, pedigree, publication or revision date, purpose, and access and use restrictions of the dataset. To facilitate the creation of metadata that is compliant with both the Federal Geographic Data Committee (FGDC) standard (FGDC-STD-001-1998) and the ISO Geographic information—Metadata standard (ISO 19115), a custom metadata management tool was developed. This tool eliminates duplicated data entries by transferring metadata directly from SDSFIE metadata tables to metadata stored on a spatial view within the BecGIS database.

Spatial views have also been a means by which Bechtel has tied into other Oracle database environments outside of the BecGIS. This means dynamically tying information to spatial features from its source, rather than making a redundant, out-of-date copy, while also providing a means to make the data more user-friendly.

ENTERPRISE GIS ARCHITECTURE

A well-implemented enterprise GIS is characterized by reliability, scalability, and high security. All three of these issues have been of key importance to the implementation of the system developed at Bechtel.

Reliability
An enterprise system should be reliable, have few downtimes, and be resilient to disasters such as data corruption and hardware failure. Routine database backup and redundant backup servers should be used. For a global engineering firm that operates 24 hours a day, system reliability is critical. In addition, the use of a development/acceptance environment for spatial data development and testing ensures the quality of information before it is released into a production environment for the BecGIS user community.

Scalability
An enterprise system should be able to respond to an increased user base without sacrificing performance. As new projects and new users are constantly added to the system, there is a need to scale it as demand picks up. This can be accomplished by careful system design. High performance can be achieved by both database tuning and application optimization. A multitier system architecture is much more scalable than a single-tier system: depending on where bottlenecks tend to occur, more hardware capacity can be added to the database, the Web, or the system's application tier. In a GIS Web mapping environment, the map engine is often the bottleneck that requires constant monitoring and performance tuning.

Security
User access is controlled by a multitier security strategy implemented through operating system and database authentication, access control, schemas, and encryption.

Network-Level Security
The first tier where the security of any GIS is implemented is at the network level. Access privileges to the computer network and underlying server architecture should be the first defense in protecting a valuable company resource. The entire BecGIS resides within Bechtel's intranet environment, which is secured against outside Internet access.
• Firewalls: The system is secured within the company's BecWeb firewalled intranet environment, preventing external access to this valuable company resource.

Database-Level Security
The second tier in securing an enterprise-level GIS is implemented within the database. The use of traditional database instances, schemas, and roles should be the starting place for any GIS organization. At Bechtel, project data and public data are segregated into separate database instances. Bechtel global business unit (GBU)-based roles cover multiple schemas, and each schema has administrator, editor, and viewer roles. System viewers cannot access the underlying spatial database structure and can access information only through spatial views; the additional use of spatial views allows GIS personnel to fine-tune spatial data access at the feature and column level as well. Web services add another layer of security on top of the database, and this is further strengthened by application-level access control. A conceptual diagram of database security implementation is provided in Figure 3.
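The single-entry principle behind the metadata management tool can be sketched as a mapping from one authoritative record onto the fields stored with a spatial view, so the same values are never typed twice. All field names below are illustrative assumptions, not the actual SDSFIE or FGDC table layouts.

```python
# Hypothetical sketch: propagate one authoritative metadata record onto
# the metadata kept with a spatial view, eliminating duplicate data entry.
# Field names are invented for illustration.
SOURCE_RECORD = {
    "title": "Surface Contours",
    "source": "Project survey, 2009",
    "scale": "1:2400",
    "revision_date": "2009-06-30",
    "use_restrictions": "Internal use only",
}

def build_view_metadata(src):
    """Map the shared record onto the layout a spatial view's metadata uses."""
    return {
        "title": src["title"],
        "currentness": src["revision_date"],
        "data_source": src["source"],
        "scale": src["scale"],
        "access_and_use": src["use_restrictions"],
    }

view_meta = build_view_metadata(SOURCE_RECORD)
```

Because the view-level record is derived rather than retyped, an update to the authoritative record can be pushed to every dependent view in one pass.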

Figure 3. Database Security

Database-level security is implemented through the following mechanisms:
• Database instances: Separate database instances secure public versus project-specific data layers.
• Schemas: Individual schemas separate data by project, geography, or data source (e.g., project data is stored in individual project schemas).
• Role-based user control: Roles like administrator, editor, and viewer were created for each schema and GBU. Administrators have the privilege to create objects and grant permissions to other users. Editors have permission to load and edit data, while viewers have read-only access to the data through spatial views.
• Spatial views: Spatial views allow GIS personnel to fine-tune spatial data access by hiding the underlying spatial tables and exposing only selected rows and columns to the end users. They also allow information from other databases to be dynamically tied to BecGIS features.
• Data encryption: Critical information is stored as encrypted data and decrypted on the fly to prevent unauthorized access.

Desktop/Application-Level Security
The third tier through which to address the security of an enterprise GIS is at the desktop and/or Web application level. Bechtel's GIS department has taken a very innovative approach to implementing application-level security by establishing database user groups. The partitions within the database to which a user has access privileges are determined by the group of which the user is a member. Functionalities available to the user are also tied to the user logon.
• Web map services: Security is set by Web map service levels that limit access to certain users.
• Desktop application level authentication and access control: User logons are checked against an access control table to see which schema the user has permission to access.
• Web application level authentication and access control: Access is restricted at the Web server level and also within Web applications (particularly those that provide spatial content management functionality).
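The desktop-level authentication described above reduces, in essence, to looking a logon up in an access control table that lists schema permissions and role. A minimal sketch, with invented users, schemas, and roles (this is not the actual BecGIS access table or API):

```python
# Hypothetical sketch of application-level access control: the access
# control table maps each logon to the schemas it may reach and the role
# it holds there. All names below are invented for illustration.
ACCESS_TABLE = {
    "jdoe":   {"SCHEMA_A": "editor", "SCHEMA_B": "viewer"},
    "asmith": {"SCHEMA_B": "admin"},
}

def permitted_role(user, schema):
    """Return the user's role on a schema, or None if the user has no access."""
    return ACCESS_TABLE.get(user, {}).get(schema)

def can_edit(user, schema):
    """Editors and administrators may load and edit data; viewers may not."""
    return permitted_role(user, schema) in ("editor", "admin")
```

An application would consult these checks at logon to decide which schemas to expose and which functions to enable.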

ENTERPRISE GIS DEPLOYMENT STRATEGIES

Deployment of a customized GIS client interface has helped Bechtel provide GIS users with a common experience within their GIS software while controlling the release of new software versions and service packs. Access to custom BecGIS utilities and data has helped users increase productivity and streamline data-intensive workflows.

Application Development Framework
GIS automation is a key means of standardizing GIS analysis processes and data access to an enterprise-level system. At Bechtel, GIS automation is built on top of a Web-service-oriented, component-based, multitier application development framework (ADF). The ADF provides a foundation for enterprise-level GIS application development: it exposes functionalities and business logic as Web services or objects while hiding the complexity of security handling and database interactions from the developers. The data access component, which is used by the Web services, handles database connections and access. Web services are consumed by both desktop and Web applications, and specific interactions with the Web services are packaged as data providers to be used by desktop applications. The ADF greatly simplifies access to data without compromising security, and Web map services are essential in disseminating information across the company. Figure 4 shows the general ADF architecture of the BecGIS, and Figure 5 shows how different components are deployed.

Figure 4. Application Development Framework

Figure 5. Deployment Plan
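The multitier separation can be illustrated with a thin client-side data provider that consumes a service facade; only the service layer touches the data store, and clients never hold database code. The class and method names here are hypothetical stand-ins, not the actual ADF API.

```python
# Hypothetical sketch of the multitier idea: clients depend on a data
# provider; the provider depends on a service; only the service knows
# about the underlying store. Names are invented for illustration.
class SpatialService:
    """Stands in for a Web service that wraps data access and security."""
    def __init__(self, store):
        self._store = store            # connection handling hidden here

    def query(self, user, layer):
        if layer not in self._store:
            raise KeyError(layer)
        return list(self._store[layer])  # clients never touch the store

class DataProvider:
    """Client-side component: consumes the service, holds no database code."""
    def __init__(self, service):
        self._service = service

    def features(self, user, layer):
        return self._service.query(user, layer)

provider = DataProvider(SpatialService({"wells": [(1, 2), (3, 4)]}))
```

Because the client depends only on the provider's interface, the service or data access component can be revised without recompiling and redeploying client-side components.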

This multitier architecture eliminates duplicated code, simplifies programming, and is much more scalable. It enables security to be implemented at different levels, and changes can be made to specific components without affecting other components; for example, changes to the data access component or Web services do not necessarily invoke the recompilation and redeployment of client-side components, which further reduces the system's complexity. BecGIS users are authenticated based on their logons, and data access is controlled by an access control table that specifies which users have access to which database schemas.

Custom GIS Tools
Bechtel has developed custom GIS tools on top of the ADF and deployed them as logical groups of tools in GIS extensions. These tools streamline data processing and analysis, which can save substantial processing time and costs, and can provide interoperability between the GIS and mathematical and/or specialty modeling software packages.

One particular GIS extension developed at Bechtel translates data between the BecGIS and the US Army Corps of Engineers hydrologic modeling software HEC-HMR52, which is used to model storm events. One such custom GIS tool was used to derive radar-based mean areal precipitation (MAPX) data series by watershed sub-basins from 11 years of next-generation radar (NEXRAD) Stage III hourly precipitation data provided by the National Weather Service (NWS). The processing was completed in a day; done manually, the whole process could have easily taken weeks or months.

Another example is the BecGIS Terrain Profile Tool, which automates the generation of compass rose diagrams at a user-specified distance from a source location. The tool has the capability to use a digital elevation model (DEM) or other surface raster data source to generate three-dimensional (3D) information. It then allows the results to be output to a Microsoft® Excel® spreadsheet as individual profile charts.

Spatial Content Management and Other GIS Automation Interfaces
Finding the desired spatial data from hundreds of spatial views across dozens of schemas and multiple database instances can be a daunting task without some type of automation. Locating one of hundreds of maps created (and revised many times) in the past by a GIS organization is equally challenging. To facilitate easy discovery of and access to enterprise spatial data and map products, Bechtel developed a spatial data search and retrieval tool called the BecGIS Spatial Data Tool. Using project, theme, and place keyword searches, this tool can easily locate and retrieve GIS layers and maps housed within the enterprise system. Search results are organized in tree views; succinct metadata information is shown as tooltips, and full-length metadata can be retrieved easily. For consistency, both Bechtel and SDSFIE symbology can be selected by users to standardize their visualization of the spatial information. The BecGIS Spatial Data Tool makes the BecGIS accessible to the casual user and greatly increases the system's usability: users can quickly gain access to valuable data without having full-blown desktop GIS software and substantial GIS training. Without such an easy-to-use tool, a sophisticated system can be a useless and wasted resource.

Web Application Solutions
Web map services are essential tools for disseminating information across the company. For Bechtel, one such quickly deployed Web application provided surveyors for a rail expansion project access to survey request forms, aerial photos, survey control points, railway station locations, rail segments, operating areas, and other information through a Web mapping environment. For another rail project, a Web application was quickly set up to share terrain information among project staff from different offices across the globe.

Development of a GIS Knowledge Bank
Of course, no enterprise GIS can be considered a complete success without the capability to communicate with and train the end users. As part of the Bechtel GIS department's role as a corporate center of excellence, it has developed a GIS technical discipline home page on the BecWeb, where employees can access useful GIS resources. These resources include avenues for GIS training, both internal (through Bechtel University) and external. In addition, the GIS department sponsors informal GIS technical sessions at lunchtime and holds monthly GIS technical working group (TWG) teleconferences via Microsoft Live Meeting to share and discuss current GIS-related technical issues. The GIS department has also developed a body of literature called GIS Desktop Procedures.
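At its core, deriving a MAPX value from gridded radar estimates is, for each sub-basin and time step, an area-weighted average of the radar cells that intersect the basin. The sketch below shows that single step with invented cell values; the real workflow also handles 11 years of hourly grids and the cell-to-basin overlap geometry, which are omitted here.

```python
# Hypothetical sketch of the core MAPX computation: the mean areal
# precipitation over a sub-basin for one time step is the area-weighted
# average of the intersecting radar cells. Values are invented.
def mean_areal_precip(cells):
    """cells: list of (precip_mm, overlap_area_km2) pairs for one sub-basin."""
    total_area = sum(area for _, area in cells)
    if total_area == 0:
        return 0.0
    return sum(p * area for p, area in cells) / total_area

# Three radar cells overlapping a sub-basin during one hourly time step:
mapx = mean_areal_precip([(2.0, 10.0), (4.0, 30.0), (0.0, 10.0)])
```

Repeating this over every sub-basin and hourly grid yields the precipitation time series used as input to the storm modeling software.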

These procedures document standardized processes and "best practices" for performing a variety of GIS-related tasks. These documented workflows have been assembled to provide GIS users with step-by-step examples of how to do anything from modeling subsurface bathymetry data to validating cut-and-fill calculations for engineering design.

CONCLUSIONS

BecGIS, Bechtel's enterprise GIS approach, has been successfully implemented and is currently employed in a variety of Bechtel's businesses. Activities BecGIS currently supports include nuclear power combined license applications; geotechnical investigations at construction, power, and mining and metals sites; large-scale market analyses for telecommunications clients; and pipeline routing in the oil and gas industry. This range of uses demonstrates that an interoperable approach to enterprise GIS design and deployment through standardized data models, workflow processes, and automation tools can significantly improve the effective retrieval and usability of spatial data in a global work environment.

Support from upper management and subject matter experts, as well as close partnership with IS&T, has contributed to the success of centralized GIS efforts within the company over the past few years. The ongoing effort to integrate use of BecGIS into the daily workflow continues to be championed by the G&HES functional manager, Stew Taylor. Today's economic challenges make it even more important for a company such as Bechtel to embrace the opportunity to integrate GIS efficiencies and innovations into its current work processes. Communicating the benefits and uses of a GIS will continue to be a primary goal of the Bechtel GIS department if the BecGIS is going to be a fully realized technology for the company. Thus, ongoing efforts to provide training, expand Bechtel's GIS Knowledge Bank, and develop Web-based GIS applications are the key to furthering awareness and use of this valuable corporate resource.

ACKNOWLEDGMENTS

The Bechtel GIS department would like to thank Bechtel upper management for its support in the creation of a corporate-level GIS, and the many subject matter experts around the company with whom the GIS department has had the opportunity to work since its formation. Management's recognition and support of a GIS white paper developed in the fall of 2006 [3] that made a case for a centralized GIS resource for the company helped realize the formal creation of the GIS technical discipline within the Geotechnical and Hydraulic Engineering Services (G&HES) organization in March 2007. This support, along with a continued close partnership with Bechtel's Information Systems and Technology (IS&T) department, contributed to the success of centralized GIS efforts.

TRADEMARKS

AutoDesk and AutoCAD are registered trademarks of AutoDesk, Inc. Bentley, gINT, and MicroStation are registered trademarks, and Bentley Map is a trademark, of Bentley Systems, Incorporated, or one of its direct or indirect wholly owned subsidiaries. ESRI and ArcGIS are registered trademarks of ESRI in the United States, the European Union, or certain other jurisdictions. Google is a trademark of Google Inc. MapInfo Professional is a registered trademark of Pitney Bowes Business Insight, a division of Pitney Bowes Software and/or its affiliates. Mentum Planet is a registered trademark owned by Mentum S.A. Microsoft and Excel are registered trademarks of Microsoft Corporation in the United States and/or other countries. Oracle is a registered trademark of Oracle Corporation and/or its affiliates in the USA and/or other countries.

REFERENCES

[1] C. Thomas and M. Ospina, "Measuring Up: The Business Case for GIS," ESRI Press, Redlands, California, 2004 (access via …cfm?fuseaction=display&websiteID=79&moduleID=0).
[2] "Federal Enterprise Architecture Geospatial Profile Version 1.1," Federal Chief Information Officers Council and Federal Geographic Data Committee, Architecture and Infrastructure Committee, January 27, 2006, access via http://colab.cim3.net (…ProfileDocument/FEA_Geospatial_Profile_v1_1.pdf).
[3] S. Taylor and T. McLane, "The Case for a Centralized Geographic Information System (GIS) Organization for Bechtel," unpublished white paper, 2006.

BIOGRAPHIES

Tracy J. McLane is Bechtel's corporate GIS manager. Tracy has worked in the GIS industry for more than 16 years. Her GIS experience includes positions in the public and private sector, as well as experience with the Tennessee Valley Authority and the US Department of Energy's Savannah River Site. During her 11 years with Bechtel, she has provided GIS support for each of the company's global business units. She is responsible for developing GIS as a new corporate Technical Center of Excellence and has implemented an enterprise GIS database and a centralized GIS Knowledge Bank for the Bechtel GIS user community. As a member of Bechtel's Geotechnical and Hydraulic Engineering Services Group, she also serves as the GIS technical discipline lead for the company. Tracy holds an MSc in Geography from the University of Tennessee, Knoxville, and a BA in International Business from Eckerd College in Saint Petersburg, Florida.

Yongmin Yan, PhD, is currently a GIS automation specialist for Bechtel Corporation. Yongmin has more than 15 years of experience in GIS automation and application development. He is responsible for creating GIS application development standards and procedures and for leading GIS automation tasks. He has provided GIS support related to nuclear licensing applications for Bechtel's Power GBU, as well as GIS support for the Mining & Metals, Communications, and Civil GBUs. Yongmin holds a PhD and an MA in City and Regional Planning from the University of Pennsylvania, as well as an MS in Environmental Planning and Management and a BS in Physical Geography, both from Peking University, China.

Robin Benjamins is Bechtel's engineering automation manager. In this role, he is responsible for developing and implementing the strategy for Bechtel's Central Engineering & Technology and Information Systems & Technology Groups. His 32 years of experience in the engineering, procurement, and construction industry includes 16 years providing technology solutions to internal and external customers. Prior to joining Bechtel in 1990, Robin worked for other leading EPC firms, including Fluor Corporation, Brown & Root, and Ralph M. Parsons. He is an accomplished technologist with proven expertise in managing the engineering application portfolio, combined with technological acumen, and has established an invaluable expertise in business processes, interoperability, and data integration solutions. Robin led Bechtel's effort to create a standard, global interoperability solution, incorporating ISO Standard 15926 methodologies, that is now used by the company worldwide.

Robin is the project manager for two key industry collaboration projects focused on implementing ISO 15926. Both projects are joint POSC Caesar Association and FIATECH projects. In this context, Robin is a board member of the POSC Caesar Association, which initiated ISO 15926 and is a global, nonprofit member organization promoting the development of open specifications to be used as standards for enabling the interoperability of data, software, and related matters. FIATECH, an industry consortium of leading capital project industry owners, engineering construction contractors, and technology suppliers, was created in 2000 and is a separately funded initiative of the Construction Industry Institute at The University of Texas at Austin.

Systems & Infrastructure Technology Papers

177  Site Characterization Philosophy and Liquefaction Evaluation of Aged Sands
     Michael R. Lewis; Ignacio Arango, PhD; and Michael D. McHood

193  Investigation of Erosion from High-Level Waste Slurries on the Hanford Waste Treatment and Immobilization Plant
     Ivan G. Papp and Garth M. Duncan

205  Evaluation of Plant Throughput for a Chemical Weapons Destruction Facility
     Christine Statton; August D. Benz; Craig A. Myler, PhD; Wilson Tang; and Paul Dent

Hanford Waste Treatment and Immobilization Plant
Communications Specialist Jenna Coddington looks at the maze of pipes being installed in the Chiller Compressor Plant. The WTP, under construction at the former nuclear production site in Hanford, Washington, will use a process called vitrification to transform some 200 million liters of radioactive and chemical waste into glass so that it can be safely stored. The plant will be the largest of its type.


SITE CHARACTERIZATION PHILOSOPHY AND LIQUEFACTION EVALUATION OF AGED SANDS

Michael R. Lewis (mlewis@bechtel.com); Ignacio Arango, PhD (iarango@bechtel.com); and Michael D. McHood (mxmchood@bechtel.com)

Originally Issued: March 2008
Updated: December 2009

Abstract—This paper describes site characterization using the cone penetration test (CPT) and recognition of aging as a factor affecting soil properties. It addresses recommendations regarding the liquefaction assessment of soils in the context of reassessing the soils at the Savannah River Site (SRS). The subsurface soils at the SRS are of Eocene and Miocene age. Because the age of these deposits has a marked effect on their cyclic resistance, a field investigation and laboratory testing program was devised to measure and account for this effect. The paper introduces a general subsurface exploration approach developed by the authors, employing the observational method principles suggested by R.B. Peck and others. This approach consists of "phasing" the investigation, recognizing that the final program will evolve through continuous review. The authors found that borehole spacing and exploration cost recommendations proposed by G.F. Sowers are reasonable for developing an investigation program. The first part of this paper addresses exploration and the use of the CPT, while the second part addresses aging of soils and the role it plays in the dynamic strength of soils. The paper shows that not only does aging play a major role in cyclic resistance, but it should also be accounted for in liquefaction potential assessments for soils older than Holocene age.

Keywords—aging, characterization, cone penetration test (CPT), cost, cyclic shear strength, exploration, geology, liquefaction, liquefaction resistance, risk, soil, standard penetration test (SPT), uncertainty

© 2009 Bechtel Corporation. All rights reserved.

INTRODUCTION

The contributions made by Dr. John H. Schmertmann, P.E. (Professor Emeritus, Department of Civil and Coastal Engineering, University of Florida), to the geotechnical engineering profession span over 50 years. His papers, presentations, research reports, and technical discussions published in the ASCE geotechnical journals, at ASCE conferences, at international conferences, in ASTM special technical publications, and in public agency research reports cover many aspects of geotechnical engineering and have dealt with numerous aspects of soil mechanics important to practicing engineers. In particular, they relate the application of laboratory and field testing to the strength and compressibility characterization of in situ soils. He published guidelines for the interpretation of cone penetration tests (CPTs) and standard penetration tests (SPTs) as early as 1970 (Schmertmann [1]). His ideas about the potential of these two tests improved in the subsequent years through lessons learned from additional research and case histories. Although the SPT is giving way to many other in situ tests, the CPT and SPT still constitute two of the most important tools for geotechnical site characterization.

For the ASCE's 25th Terzaghi Lecture in 1989, Professor Schmertmann chose the important topic of aging as it affects soil properties (Schmertmann [2]). In his lecture, he elaborated on the impact of aging on soil compressibility, stress-strain characteristics, static and cyclic strength, and other properties, based on numerous laboratory test results and observations compiled from well-documented case histories.

Pioneered by Dr. Schmertmann, these geotechnical engineering methods are practiced by Bechtel in general and at the Savannah River Site (SRS) in South Carolina in particular.

ABBREVIATIONS, ACRONYMS, AND TERMS

BSRI    Bechtel Savannah River Incorporated
CPT     cone penetration test
CRR     cyclic resistance ratio
DMT     dilatometer test
DOE     (US) Department of Energy
FC      fines content
FVST    field vane shear test
LNG     liquefied natural gas
MPa     megapascal
NP      non-plastic
PMT     pressure meter test
SC      clayey sand
SCPTu   seismic piezocone penetration test
SM      silty sand
SP      poorly graded sand
SPT     standard penetration test
SRS     Savannah River Site
TEC     total estimated cost
tsf     ton per square foot
UCB     University of California at Berkeley
USCS    Unified Soil Classification System

BACKGROUND

The Savannah River Site (SRS) is located along the Savannah River in the upper portion of the Atlantic Coastal Plain of South Carolina, approximately 160 km (100 miles) upstream of Savannah, Georgia (Figure 1). The SRS occupies about 830 km2 (320 mi2) and is owned by the Department of Energy (DOE). Since its inception in the early 1950s, the SRS has been an integral part of the United States' defense establishment. As a result, several critical facilities have been, and will continue to be, constructed and operated at the SRS. By their nature, these facilities demand the very best in design and construction and all of the trappings that follow nuclear and defense-related projects, all in an effort to ensure safety during construction, operation, and eventual decommissioning. From a geologic and geotechnical standpoint, the SRS presents a number of interesting challenges. We discuss two of those challenges in this paper: site characterization and how we use the CPT.

Figure 1. Savannah River Site and Surrounding Region

SITE CHARACTERIZATION

For geotechnical engineers and geologists, site characterization is the most important aspect of the work. In fact, without an accurate depiction of the subsurface conditions and the geology of a site, subsequent analyses are guesswork. In recent times, however, it appears that this activity has been receiving less and less attention, or at least it may be taken for granted. We are not sure of the reasons, but we believe that one aspect is the ever increasing reliance on modeling, parametric analyses, and statistical inference. While these activities are important and clearly play an integral role in site characterization, they are no substitute for carefully planned and executed subsurface exploration programs; they should go hand in hand.

What is site characterization? According to Gould [3], site characterization is a term used "…to describe a site by a statement of its characteristics." Sowers [4] describes it as: "…a program of site investigation that will identify the significant underground conditions and define the variability as far as practical." More recently, Baecher and Christian [5] describe site characterization as "a plan of action for obtaining information on site geology and for obtaining estimates of parameters to be used in modeling…."

From our perspective, site characterization is the determination of subsurface conditions by:

• Understanding local/regional geology (through site visits, geologic mapping and interpretation, and aerial photo interpretations)
• Performing appropriate geophysical surveys, borings, sampling, in situ testing (SPT, CPT, pressure meter test [PMT], dilatometer test [DMT], field vane shear test [FVST], etc.), and groundwater and piezometric observations
• Completing appropriate laboratory testing and engineering analyses and modeling
• Reviewing and interpreting the performance of nearby facilities
• Applying individual and collective experience, including site-specific (local) knowledge and general professional judgment

Additionally, it is necessary to understand the geology; the physical, mechanical, and dynamic properties of the affected strata; the groundwater conditions; and the performance of existing facilities.

The objective of site characterization is to better predict the performance of the proposed facility. To meet this objective, the characterization program needs to be well planned and communicated to the project team and the customer and must include two key components. First, the quality of the characterization data must be assessed continuously, at any stage of the investigation program: Are the data adequate and accurate? Second, the data need to be interpreted and analyzed on a near real-time basis: What answers are suggested by the data? Do they make sense, and is additional information needed? As geotechnical professionals, we clearly endorse project management principles and the need to manage the effort; however, we also recognize the need for flexible investigation programs that take into account actual site-specific conditions and the uncertainty inherent in all site investigations.

All exploration programs have some uncertainty attached to the results; it is unavoidable. In other words, the question is one of how to keep uncertainty to a minimum given such constraints as budget and schedule. Risk and uncertainty cannot be alleviated, but they can be managed. In this case, risk can be in terms of money and time to complete a program and/or technical risk if a program is not fully implemented or if it is cut short. Therefore, geoscience professionals are obligated to communicate risk and uncertainty (common to every program) early to the decision makers, and it is incumbent upon the geoscience professional to ensure that this philosophy is clearly understood by the decision makers on the project and the client. A discussion about risk and uncertainty, however, is far beyond the scope of this paper.

Historically, budgets generally are given to perform a scope of work, albeit ill-defined at the beginning of a project. Although heavily scrutinized, the initial budget is normally not an issue on most major and critical projects. Rather, it is a combination of schedule (having enough time to complete the initial program and evaluate the results) and/or revisions to the program based on actual conditions encountered (scope changes) that presents the greatest challenge to investigation programs. Changes to the original scope, even though they may be fully warranted, are difficult to get approved; our experience indicates that most nongeotechnical professionals attempt to limit such changes, not because the cost is not justified, but simply to "manage" the project, and the scope is invariably reduced to cut cost. This so-called "low-cost/high-speed mentality" may be fine on some or even most projects, but it can also be a recipe for disaster. This can and does present challenges with regard to budget and schedule considerations. Therefore, our experience is that an effective site exploration program must be flexible and continually reviewed and adjusted in real time as it proceeds; as is discussed in the next section, this is precisely where tools such as the CPT are invaluable.

Level of Effort

In developing an exploration program, the level of effort must be weighed against cost and time. For example, a routine boring to 50 meters (164 feet) depth with split-spoon sampling every 1.5 meters (5 feet) costs about three times as much as, and takes four times longer than, a seismic piezocone penetration test (SCPTu) to the same depth. (Note: It is our opinion that except for the most routine projects, if the CPT is to be used, it should be the SCPTu rather than the conventional CPT.) Thus, much more stratigraphic detail can be obtained using CPT technology as a first choice over conventional drilling and sampling methods, and for less time and money. We do not advocate the abandonment of traditional borings as a technique for subsurface exploration. Rather, CPT technology should be combined with drilling and sampling (and other in situ testing) for a highly effective exploration program.

Published guidelines, based on experience, as well as results from familiar case histories, can be used to establish the level of effort. Sowers [4] reports that for "an adequate investigation (including laboratory testing and geotechnical engineering)" the cost ranges from 0.5% to 1% of the TEC; for critical facilities or facilities with unusual site or subsurface conditions, however, the cost could increase. A range of site investigation costs as a function of TEC was reported by Sara [6] for tunnels, dams, bridges, roads, and buildings, spanning roughly 0.3% to 2% of the TEC depending on facility type. Littlejohn et al. [7] report that for building projects in the UK, the expenditure for site investigations ranged from 0.05% to 0.2% of the TEC; however, they also report that the perception of the respective clients was that the site investigation, coupled with scope growth, cost on average five times more. We're not sure how to interpret this disparity, other than as an apparent lack of communication.

Figure 2 shows the results of the number of borings/CPTs by facility area and hazard category for projects with which we have been involved. The hazard category is somewhat subjective and is not based on any hard and fast criteria; rather, it is more qualitative. The results show considerable scatter in terms of hazard, but there is a distinct trend in terms of size: the larger the facility, the more exploration. These results are not unlike the suggestions of Sowers [4] in the size range of about 1,000 m2 to 100,000 m2 (about 11,000 ft2 to 1,000,000 ft2) for dams/dikes, multistory buildings, and manufacturing plants.

Figure 2. Penetrations per Square Meter for Various Projects by Hazard Category (number of penetrations, SPT or CPT, versus facility area, m2, for low, moderate, and high hazard categories, with trendline)

Figure 3 depicts the geotechnical cost in relation to the total estimated cost (TEC) of a particular project. The projects shown are those with which we have been involved or that are found in the literature and for which reasonable cost information is available. They are categorized by focus on transportation, power, nuclear fuel handling, and liquefied natural gas (LNG). The projects shown have an average geotechnical expenditure of approximately 0.6% for the range of TEC shown. While the scatter is significant, trends are still obvious: the higher the estimated cost, the lower the geotechnical effort on a percentage basis. The large differences result mostly from actual site conditions and the geologic variability associated with the transportation projects in particular, which traverse great distances and involve widely varying geologic and site conditions. The results are not unlike other published cost information.
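The percentage guidelines above can be turned into a quick planning envelope. The sketch below is illustrative only; the function name and default percentages are ours, drawn from the Sowers-style range cited, and actual budgets should reflect site conditions:

```python
def investigation_budget_range(tec, low_pct=0.5, high_pct=1.0):
    """Rough site-investigation budget envelope as a percentage of the
    total estimated cost (TEC). The 0.5%-1% default follows the Sowers
    guideline cited in the text; critical facilities or unusual site
    conditions warrant higher percentages."""
    return tec * low_pct / 100.0, tec * high_pct / 100.0

# Hypothetical $50M-TEC facility:
low, high = investigation_budget_range(50_000_000)
print(f"${low:,.0f} to ${high:,.0f}")  # $250,000 to $500,000
```

A planning check of this kind is a sanity test on a baseline program, not a substitute for the site-specific judgment the text emphasizes.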

These results are not meant as recommendations; rather, actual site conditions should dictate what is ultimately carried out. We suggest that information such as that given in Figures 2 and 3 be developed and used in the planning stages of a project to "educate" the decision makers and clients about the level of effort required and to provide a sanity check on the baseline program established. In this way, project managers and clients are included in the decision-making process and are a part of the risk and uncertainty discussions. There are no building codes or other hard-and-fast criteria that dictate the ultimate level of effort; there are only guidelines. Unfortunately, too often, subsurface investigations are thought of as commodities that can be purchased off the shelf, and characterization programs (including the reporting) are recycled with a cut-and-paste mentality. Although this approach may suffice in some instances, it is a slippery slope that should really be avoided.

Figure 3. Geotechnical Cost as a Function of Total Project Estimated Cost (geotechnical effort, %, versus total effort, $M, for transportation, power, and nuclear fuel handling and LNG projects)

Attributes of a Good Characterization Program

So, what constitutes a good characterization program? First, each site and facility is unique; thus, all characterization programs are unique, designed to fit the project and the unique conditions inherent in every project with which geoscience professionals become involved. For a characterization program to be as successful as possible, it must be tailored to the specific project under consideration and must be sufficiently flexible to adapt to changing conditions as they are encountered. There are, however, particular attributes to every well-planned and well-executed characterization program above and beyond the level of effort already discussed. These attributes are discussed next.

A general approach that we have developed over the years consists of "phasing" the investigation, employing the observational method principles suggested by Peck [8]. From our experience, a successful site characterization program is done in five basic phases: (1) reconnaissance, (2) proposal or preliminary design, (3) detailed design, (4) construction, and (5) post-construction monitoring. Each phase has a specific purpose and can vary considerably, given the specific project conditions.

The reconnaissance phase is generally done for planning purposes and feasibility studies. The effort generally entails researching the site and surrounding area by reviewing historical reports, topographic maps, geologic maps, regulatory documents, soil surveys, and aerial photographs, among others, along with field visits and performance surveys of existing structures.

The proposal or preliminary design phase may include only the reconnaissance phase, but it could also include a limited field exploration with widely spaced borings, CPTs, and geophysical tests. It can include some laboratory testing and simplified analyses for conceptual design and/or cost estimating purposes.

The detailed design phase is where the bulk of the characterization program is performed. It includes detailed field exploration, such as sample borings (SPT and undisturbed sample borings), CPT, DMT, FVST, and PMT soundings, and geophysics. Representative samples of the subsurface materials are taken and sent to a laboratory for testing. Testing generally includes index tests and tests for static and dynamic strength and compressibility. Depending on the size of the project and the complexity of the subsurface, this phase may be subdivided into additional phases. In our experience, for critical projects (critical can be defined in terms of safety or monetary expenditure), a phased approach for the detailed design phase is highly recommended. For example, the initial phase might include CPT soundings to determine site stratigraphy, in addition to the more routine SPT borings, followed by borings and "pinpoint sampling" of targeted strata for further evaluation and laboratory testing. A second phase would then target specific horizons for undisturbed samples for laboratory testing. This tends to focus the effort on those strata that have the greatest potential effect on the facility. It also adds needed flexibility to the program and allows "pinpoint" sampling of specific horizons rather than sampling at preselected depths.

Phases 1, 2, and 3 should be carried out on every project. The inclusion of Phases 4 and 5 (the construction and post-construction monitoring phases, respectively) depends on the success of the initial program and on any scope changes or any unknown subsurface conditions encountered during construction. The level of effort required for each phase may vary considerably based on the type and size of the project and the complexity of the subsurface.

On any project, simple or complex, large or small, work is still done against a schedule and budget. Unfortunately, once the program has been agreed to and initiated, cost and schedule tend to be managed at the expense of gathering needed data, and any deviation from either causes concern, even though the deviation may be valid given the subsurface conditions that dictated the change. The key point is that whatever the program entails, it needs to be flexible, and the geoscience professional must be able and allowed to adapt the program to the conditions encountered. Without the flexibility to adjust locations, depths, sample types, and the type of exploration to meet the conditions encountered, the characterization program is doomed. Communication with the project team and/or owner, and in particular the project manager, is therefore crucial and is critical to the success of a site characterization program. This also requires full-time oversight and direction of the program by qualified geotechnical engineers and geologists dedicated to the effort who will continue to follow through on the project as it moves from the investigation phase into the design and, later, monitoring phases, which is required if the exploration program is to be successful.

Use of the CPT

Following the trend observed in the industry in general, and within Bechtel, the use of CPT technology at the SRS has increased progressively since the late 1980s in an effort to meet the aforementioned objectives on a project-by-project basis. This evolution has resulted in a basic exploration philosophy for critical facilities: Use the CPT early and often in a project. Several particularly important advantages of CPT technology have been recognized and, in many cases, used to further enhance the quantity and quality of geotechnical exploration that we have performed:

• More exploratory penetrations due to lower cost and less field time compared to traditional drilled borings (CPTs at the SRS are about one-half to one-third the cost and take about one-fifth the time of an SPT boring of equal depth)
• Higher vertical resolution due to nearly continuous measurements, allowing for superior stratigraphic interpretation and detection of layers of special interest, including very thin, loose, or compressible layers, which can be used to determine target intervals for further adjacent sampling and subsequent laboratory testing
• Highly repeatable measurements within similar material types or layers because of standard and automatic testing and data acquisition methods

• Multiple measured parameters (including tip stress, sleeve stress, pore pressure, and shear wave velocity) for resolving material characteristics

A word of caution, however: verification and calibration of site-specific correlations of engineering parameters determined with CPT parameters is highly recommended, since correlations shown in the literature do not fit all conditions. At the SRS, the CPT is used primarily to establish stratigraphy, identify any anomalous strata (soft or compressible soil), and acquire a preliminary estimation of specific engineering soil properties for design.

AGED SOILS AT THE SRS

In the shallow subsurface beneath the SRS, Eocene and Miocene age (35 to 50 million years old) sediments of the Altamaha, Tobacco Road, and Upper Dry Branch Formations are composed primarily of laminated, gap-graded, clayey sands (SCs) deposited under alternating marginal marine and fluvial conditions. The clay, chiefly kaolinite and illite, binds the sand grains and appears to have been formed by in situ weathering. The CPT tip resistances of this material range from less than 1 MPa (10 tsf) to about 15 MPa (157 tsf), and the CPT friction ratios (sleeve resistance divided by tip resistance) range from less than 1% to over 10%. Corresponding SPT N-values range from less than 10 blows to over 20 per 0.3 meter (1 foot).

Because of the relatively low penetration values and the relatively high seismic exposure (proximity to Charleston, South Carolina), studies of the liquefaction vulnerability using the empirical liquefaction chart suggested by Seed et al. [9] indicate that the site is potentially vulnerable to seismic liquefaction. It is well known, however, that this empirical chart is based on observations of the performance of Holocene deposits. Since the soils at the SRS are geologically much older than Holocene age, the question logically arose regarding whether the empirical chart was appropriate for the liquefaction evaluation at the SRS. In addition, there were no paleoliquefaction events (case histories) to draw upon at the SRS or in the vicinity. To resolve the concern, two tasks were completed: an extensive field and laboratory geotechnical investigation at the site and a review of available opinions and data in the technical literature on the liquefaction vulnerability of geologically old sand deposits.

Geotechnical Investigations at the SRS

For the formations of interest (Tobacco Road and Dry Branch), the decision was made to perform a detailed geotechnical exploration program, including field testing, undisturbed sampling, and dynamic testing of carefully sampled soil specimens in the laboratory. Site-specific sampling and laboratory testing were completed for two facilities at the SRS. The program was developed and implemented by Bechtel Savannah River Incorporated (BSRI), and the dynamic laboratory testing was carried out at the University of California at Berkeley (UCB) laboratory (BSRI [10, 11]).

In the first, a series of 17 stress-controlled, isotropically consolidated, undrained cyclic triaxial tests was performed for Site A. Samples were obtained by a fixed-piston sampler using controlled techniques and sampling procedures well established at the SRS. The samples were obtained adjacent to (within 1.5 to 3 meters [5 to 10 feet] of) SPT boreholes at locations exhibiting low N-values. SPT energy measurements were obtained and used later to correct the field N-values to N60. Measurements, including X-ray photography, were performed on each sample tube prior to packing and transporting and after being received at the UCB laboratory. The laboratory testing included index testing; the determination of dynamic strength (including initial stiffness) and of volumetric strain after liquefaction; and an evaluation of the influence of confining pressure, leading to site-specific recommendations for Kσ, a factor that normalizes the cyclic resistance of a soil to an overburden pressure equal to one atmosphere. The samples tested to develop the SRS curve were classified as SC soils (Unified Soil Classification System [USCS]) and had plastic fines with contents ranging from about 9% to 29%, with an average of about 17%. All of the test results were correlated back to a sample-specific (N1)60, where (N1)60 is the SPT penetration resistance normalized to one atmosphere and a 60% energy level. Although the fines content (FC) of the samples varied, the overall assessment resulted in three data points relating (N1)60 to CRR, and a single "cyclic resistance ratio (CRR) design curve" for these soils was established.

For the second site-specific sampling and laboratory testing program, CPT soundings were pushed adjacent to the Site A borings described above to correlate CPT (qt)1 with the cyclic resistance values obtained in the laboratory, where (qt)1 is the CPT tip resistance normalized to one atmosphere. To aid in the evaluation, 18 additional high quality, fixed-piston samples

were obtained at Site B from boreholes adjacent to (within 1.5 to 3 meters [5 to 10 feet] of) 18 CPT soundings. These samples were also sent to the UCB laboratory for dynamic testing (stress-controlled, isotropically consolidated, undrained cyclic triaxial tests). The samples tested were SC soils and had plastic FCs ranging from about 16% to 34%, with an average of about 24%. The laboratory results were evaluated in the same manner described above, except that the CPT (qt)1 was used instead of (N1)60. Table 1 summarizes the relevant data for the combined data set. Note that although 35 cyclic triaxial tests were performed, only 12 data points are shown; this is due to grouping like material in terms of FC and (qt)1.

Table 1. SRS Data Summary (for each of the 12 data points: site, A or B; geologic formation, UTR, LTR, TR3, TR3/TR1, or DB1/3; USCS classification; fines content, %; LL, %; PI, %; dry density γd, kN/m3; relative density Dr, %; (qt)1, MPa; CRRf; and (Vs)1, mps)

Notes: * Two of the three samples for Data Point 5 were NP. The estimate of relative density (Dr) is based on Tatsuoka et al. [15] (given in Ishihara and Yoshimine [16]); (Vs)1 is the normalized shear wave velocity, with normalization per Andrus and Stokoe [14]. Abbreviations: CRRf = field-corrected cyclic resistance ratio; DB1, DB3 = Dry Branch 1, 3; LL = liquid limit; LTR = Lower Tobacco Road; NP = non-plastic; PI = plasticity index; (qt)1 = normalized cone tip resistance; TR1, TR3 = Tobacco Road 1, 3; UTR = Upper Tobacco Road; γd = dry density.

Prior to the reevaluation (discussed subsequently), a suite of curves based on plastic FC was established, assuming the suite of curves was more or less parallel. That relationship (developed in 1994–1995) has recently undergone a reevaluation to take into account newer information (since 1995), including results from Youd et al. [12] and Idriss and Boulanger [13]. The reevaluation included a review of all the SRS data but centered, in particular, on the shape of the clean (≤5% fines) curve at low penetration resistances. For example, both Youd et al. [12] and Idriss and Boulanger [13] show the clean sand (≤5% fines) curve becoming flatter at low penetration resistances and intersecting the ordinate at a CRR value of 0.05, rather than following the trends of Seed et al.'s empirical chart [9] (i.e., the clean sand curve passing through the origin of coordinates).

For the reevaluation of the SRS data, we relied more on the Idriss and Boulanger [13] relationship because (1) the Idriss and Boulanger clean curve "fits" our site-specific data more closely than the Youd et al. [12] curves, and (2) as expected, at higher penetration resistances, the Idriss and Boulanger curve for Holocene soils results in a more conservative estimate of CRR. Thus, we adopted the Idriss and Boulanger relationship for the CPT to "construct" the revised SRS-specific relationships.
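The overburden and energy normalizations used throughout ((N1)60 to one atmosphere and a 60% energy level; (qt)1 to one atmosphere) can be sketched as follows. The specific correction forms below, a Liao-Whitman-style CN with a cap and a stress exponent of 0.5 for the tip resistance, are common published choices assumed here for illustration; the paper does not state which relations were used at the SRS:

```python
import math

PA_KPA = 101.325  # one atmosphere, in kPa

def n1_60(n_field, energy_ratio_pct, sigma_v_eff_kpa):
    """SPT blow count corrected to a 60% energy level and normalized to
    one atmosphere of overburden. CN here is the common Liao-Whitman
    form sqrt(Pa/sigma'v), capped at 1.7 (an assumed choice)."""
    n60 = n_field * energy_ratio_pct / 60.0
    cn = min(math.sqrt(PA_KPA / sigma_v_eff_kpa), 1.7)
    return cn * n60

def qt1(qt_mpa, sigma_v_eff_kpa, n=0.5):
    """CPT tip resistance normalized to one atmosphere; the stress
    exponent n = 0.5 is a typical assumption for sand-like soils."""
    return qt_mpa * (PA_KPA / sigma_v_eff_kpa) ** n

# At an effective overburden of exactly one atmosphere, the
# normalizations leave the energy-corrected values unchanged:
print(round(n1_60(12, 75, 101.325), 1))  # 15.0
print(round(qt1(5.0, 101.325), 2))       # 5.0
```

The same pattern (measured value times an overburden factor referenced to one atmosphere) is what Kσ undoes on the resistance side, so that laboratory and field cyclic resistances are compared at a common stress level.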

Table 2. Summary of CRR Design Curve Factors (for each FC curve, the contributing data points from Table 1 are: 10%, Data Points 3 and 5; 15%, Data Points 4 and 11; 20%, Data Points 2, 9, 10, and 12; 25%, Data Points 1, 8, and 9; 30%, Data Points 1 and 7; the table lists each point's actual FC, (qt)1, CRRf, CRRI/B, the ratio CRRf/CRRI/B, and the ratio selected for the FC group)

Notes: Data Point 6 from Table 1 is not in Table 2; we believe this data point to be somewhat anomalous, and it has not been included in the evaluation because it is not consistent with the results of the entire data set. CRRI/B refers to the CRR using the Idriss and Boulanger [13] clean curve for CPT.

For the revised SRS "aged" clean sand relationship, we assumed the shape of the revised clean sand curve to be similar to that of the Idriss and Boulanger [13] clean sand curve, which is, in turn, similar to the shape given in Youd et al. [12] at low penetration resistances. To develop the SRS "aged" clean curve, we used the low end of the strength gain factor range (1.3) proposed by Lewis et al. [17] for clean sands (discussed below). Thus, for the clean "aged" curve, we applied a factor of 1.3 over all penetration resistances to the Idriss and Boulanger [13] clean curve to derive the CRR corresponding to the "aged" clean SRS curve. The y intercept was therefore 1.3 times 0.05 (the revised ordinate for the clean curve per Youd et al. [12]), or 0.065.

Using a constant factor for a given FC independent of the penetration resistance and adopting the shape of the Idriss and Boulanger [13] Holocene clean sand curve, we developed the remainder of the SRS CRR curves for various FCs simply by applying the ratio of the site-specific data (CRRf) to the adopted Holocene clean sand curve (CRRI/B) of Idriss and Boulanger over all penetration resistances. Using a constant factor to increase the curve across penetration resistances is consistent with the work of Polito [18] and Polito and Martin [19] for FCs below about 40%.

For example, the 10% FC curve used Data Points 3 and 5 from Table 1. Data Point 3 has a normalized tip stress ((qt)1) of 2.3 MPa (24 tsf), and Data Point 5 has a (qt)1 of 5.0 MPa (50 tsf). The ratios of the CRRs at the corresponding (qt)1 values to those from the Idriss and Boulanger [13] clean curve are 1.64 for Data Point 3 and 1.45 for Data Point 5. The resulting SRS 10% FC curve would thus be 1.6 times higher than the Idriss and Boulanger [13] clean curve, considering both data points. In the same way, curves can be constructed for FCs of 15%, 20%, 25%, and 30%. Table 2 and Figure 4 summarize the evaluation results for each FC curve (Figure 4 also shows the BSRI [11] relationship).

Figure 4. CRR vs CPT (qt)1 (test data with % fines; curves for ≤5%, 10%, 15%, 20%, 25%, and 30% FC; WSRC [11] and current revision shown; M = 7.5, σ'vo = 1 tsf)
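The curve construction described above amounts to scaling a Holocene clean-sand CRR relation by a constant, FC-dependent factor. A minimal sketch follows; the clean-sand function is the commonly published Idriss-Boulanger CPT form (taking the dimensionless qc1N, i.e., tip resistance over atmospheric pressure), the 1.3 "aged" clean factor is from the text, and the remaining ratios in the dictionary are illustrative placeholders, not the values tabulated in Table 2:

```python
import math

def crr_ib_clean(qc1n):
    """Holocene clean-sand CRR (M = 7.5) from normalized CPT tip
    resistance, in the commonly published Idriss-Boulanger form
    (the CRR_I/B referred to in the text)."""
    return math.exp(qc1n / 540.0 + (qc1n / 67.0) ** 2
                    - (qc1n / 80.0) ** 3 + (qc1n / 114.0) ** 4 - 3.0)

# Constant CRRf/CRR_I/B ratios applied over all penetration resistances.
# Only the 1.3 clean "aged" factor comes from the text; the fines-content
# entries here are hypothetical placeholders for illustration.
AGED_RATIOS = {0: 1.3, 10: 1.6, 20: 2.1, 30: 2.6}

def crr_srs_aged(qc1n, fc_group=0):
    """Revised SRS 'aged' curve: scale the Holocene clean curve by the
    constant factor selected for the given fines-content group."""
    return AGED_RATIOS[fc_group] * crr_ib_clean(qc1n)

print(round(crr_ib_clean(100.0), 3))     # ~0.143 for the clean curve
print(round(crr_srs_aged(100.0, 0), 3))  # ~0.185 for the "aged" clean curve
```

Because the factor is constant, the aged curves inherit the shape of the Holocene curve exactly, which is the design choice the text attributes to Polito [18] and Polito and Martin [19] for FCs below about 40%.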

Table 2 and Figure 4 summarize the evaluation results for each FC curve (Figure 4 also shows the BSRI [11] relationship). The data show that, compared to the Idriss and Boulanger [13] Holocene clean curve for CPTs, the increase in dynamic strength ranges from 1.3 to 1.4 for FCs ranging from 10% to 30%.

REVIEW OF DATA ON THE PERFORMANCE OF AGED SOIL DEPOSITS

Several investigators have addressed the issue of soil aging; among them are Youd and Hoose [20], Seed [21], Skempton [22], Kulhawy and Mayne [23], and Martin and Clough [24, 25]. The results of some of the more significant findings are summarized below.

Schmertmann [2, 26] discusses the mechanical aging of soils and considers the increase in penetration resistance blow count (N-value) a reflection of the increase in resistance to deformation. Seed [21] considers the cyclic resistance of laboratory-prepared samples and of hydraulic fills of different ages (up to about 3,000 years) and concludes that the data indicate “the possibility of increases in cyclic mobility resistance on the order of 75% over the stress ratios causing cyclic pore pressure ratios of 100% in freshly deposited laboratory samples.”

Skempton [22] discusses the evidence for increase in the deformation resistance of sand, due to long periods of sustained pressure in older deposits, with the increased duration of sustained loading. He finds that the ratio between the normalized SPT (N1)60 blow count and the square of the relative density (Dr2), (N1)60/Dr2, varies with the period of sustained loading for several fine and fine to medium sand deposits of known geologic age. Skempton reports that strength gains (increases in (N1)60/Dr2 relative to those predicted for samples in the laboratory) of about 14% and 57% were reported for normally consolidated sands at 10 years and >100 years, respectively, after deposition.

Kulhawy and Mayne [23] compile the values of the same parameter as Skempton [22], including some of the same data evaluated by Skempton. They conclude that the parameter (N1)60/Dr2 is influenced by particle size, overconsolidation, and aging. Although they acknowledge that some of the data may be “imprecise,” a conservative relationship (CA = 1.2 + 0.05 log (t/100)) is developed to account for aging through the parameter (N1)60/Dr2. We consider this relationship to be a lower bound of potential strength gain with time. (Note: The 1.3 factor was applied above for the reevaluated SRS “aged” clean curve.)

Lewis et al. [17] review published data compiled from the 1886 Charleston, South Carolina, earthquake. Relic liquefaction features have been investigated by many along the eastern seaboard (e.g., Talwani and Cox [29], Obermeier et al. [30], and Amick et al. [32]); Lewis et al. [17] reviewed data collected by Martin and Clough [24, 25] and Dickenson et al. [31]. The studies included borings, piezocone probes, velocity profiles, test pits, and trenches. Features were found primarily in the sands and silty sands of two ancient ridges dating back 130,000 to 230,000 years and located 10 miles inland. The beach processes led to sands and silty soils being concentrated in the highest portions of the beach ridges. The sands were described as clean, with FC less than 5% (14 locations) and 10% (5 locations); the sands in the remaining sites were described as poorly graded sand–silty sand (SP-SM) material.

No quantitative data are available regarding the magnitude of the 1886 event or of associated peak ground accelerations. The earthquake’s moment magnitude has since been estimated at between about 7 and 7.5, and independent studies carried out by several investigators estimate the epicentral acceleration at somewhere between about 0.3 and 1.0 g. Lewis et al. [17] conclude that a reasonable range of acceleration is between 0.3 and 0.5 g. For the evaluation of these data, Lewis et al. [17] calculated the induced cyclic shear stresses at the depths of interest for each deposit. An upper boundary was established that separates the maximum cyclic stress ratios tolerated by the soil with no liquefaction from those sites that experienced limited to widespread liquefaction, and a lower boundary was established showing the minimum stress ratios required to cause liquefaction at those sites that experienced marginal liquefaction and liquefaction. The strength gain of the boundary relative to the clean sands curve in the empirical chart by Seed et al. [9] was found to be about 2; using a lower bound acceleration of 0.3 g resulted in computed strength gains of 1.6 to 3.

Arango and Migues [27] performed investigations after the occurrence of the January 17, 1994, Northridge, California, earthquake. The area selected for the study was within the Gillibrand Quarry site in the Tapo Canyon, in nearby Simi Valley, north of Los Angeles. Area acceleration levels exceeded 0.5 g, resulting in the failure of a small water-retaining dam in the quarry; an old deposit of sand at the site, however, showed no signs of liquefaction. This sand has been estimated to be approximately 1 million years old. It is relatively uniform, fine quartz sand (SP) with less than 5% NP fines, averaging 4%, and grain size tests showed that the sands at the sites are non-plastic (NP). In its current state, the sand is lightly cemented, such that it can support vertical faces when dry but is weak enough to crush between one’s fingers with the slightest pressure. Although this deposit is now exposed in outcrops, it was previously buried by as much as 460 meters (1,510 feet) of overlying soil, and microscopic examination reveals a high degree of quartz grain overgrowth—evidence of age and burial. A total of 18 stress-controlled cyclic triaxial tests to classify and determine the static and dynamic strengths of the sand were carried out at the Geotechnical Laboratory at the UCB. Based on the results of this laboratory testing, the increase in dynamic strength ranges from 1.6 to 2; the range of field CRRs was estimated to vary between 0.80 and 1.50 for Holocene-age sands from Seed et al. [9]. Adopting a predicted induced cyclic stress ratio equal to 0.37, induced by a magnitude 7.5 earthquake, the strength gain in this case was calculated to be about 3.

Leon et al. [28] investigated the effect of age at four sites in the South Carolina coastal plain. The four sites (Ten Mile Hill Site A, Ten Mile Hill Site B, Sampit, and Gapway) range in age from 546 years to 450,000 years. The field exploration program used drilled and augered boreholes, CPT soundings, test pits, and undisturbed block sampling techniques. The parameters reviewed were SPT N-values, CPT (qc)1, and normalized shear wave velocity (Vs)1. The FCs from samples at all of the sites ranged from 0% to 9%. The results of their evaluation indicate that these coastal plain soils had increased resistance to liquefaction by a factor ranging from 1.3 to 2.6 (compared to the Youd et al. [12] relationships for Holocene soils); the specific factors and ages reported for each of the four sites are included in the data plotted in Figure 5.

Figure 5. Strength Gain with Age (strength gain factor vs. age in years; adapted from Skempton [22], with data from Seed [21], Skempton [22], Kulhawy and Mayne [23], Lewis et al. [17], Arango and Migues [27], Leon et al. [28], and the current evaluation, plus a generalized upper trend and geologic epochs from the Holocene through the Eocene; labels next to symbols indicate maximum fines content)
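The aging trends reviewed above can be evaluated directly. The lower-bound coefficients are the Kulhawy and Mayne [23] relationship quoted earlier; the upper-bound coefficients follow the same functional form and are the values discussed in the conclusions. Both should be read as trend lines through scattered field data, not as precise predictors:

```python
import math

def strength_gain_lower(t_years):
    """Kulhawy and Mayne [23] conservative trend: C_A = 1.2 + 0.05 log10(t/100)."""
    return 1.2 + 0.05 * math.log10(t_years / 100.0)

def strength_gain_upper(t_years):
    """Upper-bound trend of the same functional form (coefficients per the
    generalized upper trend discussed in the text)."""
    return 1.92 + 0.23 * math.log10(t_years / 100.0)

for age in (1e2, 1e4, 1e6, 1e8):
    lo, hi = strength_gain_lower(age), strength_gain_upper(age)
    print(f"{age:.0e} yr: C_A = {lo:.2f} to {hi:.2f}")
```

Note that at t = 10,000 years the lower-bound factor is numerically 1.3, the same value adopted for the reevaluated SRS “aged” clean curve.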

TEST RESULTS, EVALUATION, AND CONCLUSIONS

The CPT has enhanced the capability to perform subsurface exploration within Bechtel and at the SRS. It affords superior stratigraphic definition through continuous or near-continuous data, excellent repeatability and data reliability, and time and cost savings. With these attributes, using the CPT early in a project adds flexibility to the site characterization program and allows greater site coverage with a given budget in a shorter period of time. Initial program development following the suggestions of Sowers [4] provides a reasonable starting point, using the CPT results in a program that allows maximum flexibility. Properly verified site-specific correlations between CPT parameters and laboratory testing add a dimension that can be very powerful when assessing site conditions and performing design-related activities.

While there will always be a need for soil borings and laboratory testing, the amount should decrease with increased use of the CPT. Knowledge about the subsurface conditions will increase; the result is more high-quality data, including “pinpoint” sampling and testing of targeted strata, and greater confidence in the validity of the investigations. This has been commonly recognized as far back as 1978 (Schmertmann [33]): “Although engineers with much CPT experience in a local area sometimes conduct site investigations without actual sampling, in general one must obtain appropriate samples for the proper interpretation of CPT data.” In either case, prior CPT data can greatly reduce sampling requirements. The key component in any exploration program is still effective communication with decision makers at all levels; full-time geotechnical oversight enhances and facilitates the needed communication and allows quick and early decisions to be made.

Cyclic resistance data about the behavior of soils under dynamic loading summarized in the widely accepted empirical chart by Seed et al. [9] are limited to the relatively (geologically speaking) young soil of Holocene age. Lacking information about the performance of the sands at the site, the need arose to define the cyclic resistance of older sands, and it was necessary to carry out the field and laboratory test programs and also the literature review. In terms of aging, the technical community widely recognizes that the geotechnical properties of sand deposits are influenced by their age and that strength does increase with the passing of time.

Figure 5 compares the predicted strength gain from the SRS studies reported above and the historical data reviewed. Note that for the SRS reevaluated data, the strength gain is relative to the clean CPT curve from Idriss and Boulanger [13]; using the Seed et al. [9] relationship results in strength gains of approximately 10% to 20% less. The results, summarized in Figure 5, show the consistency between the results of the SRS investigation and the field data from South Carolina and Southern California; the results are also compatible with the extrapolated trends suggested by Seed [21], Skempton [22], and Kulhawy and Mayne [23]. It is interesting to note that the trends shown on Figure 5 relating strength gain with age using the work based on SPT N-values (Skempton [22] and Kulhawy and Mayne [23]) are at the low end of the data shown, particularly for data older than about 1,000,000 years, as they show other data with higher (N1)60/Dr2 for ages up to 10^8 years. This may be an indication that the SPT N-value is a poor indicator of strength gain with time for very old deposits. Furthermore, it appears that the Kulhawy and Mayne relationship can be used as a lower bound for the data shown (the trend of Kulhawy and Mayne is an acknowledged conservative trend), and a similar relationship (using the same functional form suggested by Kulhawy and Mayne) can be used for an upper bound trend (CA = 1.92 + 0.23 log (t/100)).

The case histories reviewed in this paper confirm the observations of Professor Schmertmann—namely, that age does play a major role in the strength of soil deposits and cannot, therefore, be ignored.

ACKNOWLEDGMENTS

Through the years, Professor Schmertmann has been involved with many Bechtel projects, including several at the SRS. It has been through this direct interaction that we have arrived at the methodologies and conclusions described herein. We owe a great deal to Professor Schmertmann for his wisdom and insight and his fundamental knowledge of soil mechanics. We also acknowledge the contributions of Laura Bagwell and Rucker Williams (former Bechtel employees and still at SRS) for their patience and much-needed assistance in preparing this manuscript.

4–5. II.W. Polito and J.19. icevirtuallibrary. see http://www. R. “Advantages and Limitations of the Observational Method in Applied Soil Mechanics. Géotechnique. No. Seed. G. ASCE. Arango.” Journal of Geotechnical and Geoenvironmental Engineering. 1994. Idriss. 1990.L. “The Influence of SPT Procedures in Soil Liquefaction Resistance Evaluations. and T. I.” in Seismic Hazards in the Soil Deposits in Urban Areas report.M.” Journal of Geotechnical Engineering.M. Chichester. Idriss and R. Virginia Polytechnic Institute and State University. 19. pp. Shibuya. ASCE. No. Lewis. Kimball. “In-Tank Precipitation Facility (ITP) and H-Tank Farm (HTF) Geotechnical Report (U). WWWdisplay. December 10. December 1985. 32– docs/CPT%20for%20Predicting%20 Settlement%20in%20sands. pp. Englewood Cliffs. 1015–1025.H.1969. “The Mechanical Aging of Soils.asce. VA. Christian. Boulanger. Gould.vt. Ishihara and M. 817–833. Peck. servlet/GetPDFServlet?filetype=pdf&id= JGGEFK000127000005000408000001&idtype= cvips&prog=normal. (BSRI). pp. 127. 171–187. Aiken. 1993. L. 408–415. Seed. Y. “Method of Evaluating Liquefaction Potential and its Evaluation. Inc. pp.1680/icien. http://ascelibrary. Vol.F.” PhD Thesis. May 1994.aip. R.aip. WSRC-TR-95-0057. G.S. [4] [5] [6] [7] [8] [9] [10] Bechtel Savannah River. I. FL. Brazil.REFERENCES [1] J. Jr. W. Standard_handbook_for_solid_and_ hazardous_waste_facility_assessments. Foz do Iguassu. access via http://cedb. 1425–1445. Sato. http://ascelibrary. Moriwaki. and R. 102.R. Harder. ASCE National Capitol Section.C.K.pdf. “Standard Handbook for Solid and Hazardous Waste Facility Assessments. December 2009 • Volume 2.html.engr. Geotechnical Investigation (U).M. R. Vol. Ministry of Education of Japan. “Evaluation of Settlement in Sand Deposits Following Liquefaction During Earthquakes. Mellors. 1288–1330. pp. Yoshimine.” Seminar on Site Characterization. Tokimatsu. and S. “Without Site Investigation Ground is a Hazard. Sara. Ishihara. Koester. cgi/WWWdisplay. 
“Problems of Site Characterization.” Prentice Hall. J.insitusoil.B. West Sussex. PDFs/2004/Idriss_Boulanger_3rd_ICEGE. 1979. J. 75–109 (in Japanese) Reliability-Statistics-Geotechnical-EngineeringGregory/dp/0471498335#noop. Polito. and T. Andrus and K. Issue 2.R.” John Wiley & Sons. Issue 5. K. II.D. Andrus. S. [15] F. Mitchell.P. pp. [14] R. 1.pdf. Harder.F. “Liquefaction Resistance of Soils: Summary Report from the 1996 NCEER and 1998 NCEER/ NSF Workshops on Evaluation of Liquefaction Resistance of dp/0024138703. January 7–9. R. Chung. access via http://www. SC.” Journal of Geotechnical and Geoenvironmental Engineering.asce. T.” Proceedings of the 11th Pan-American Conference on Soil Mechanics and Geotechnical Engineering. WSRC-RP-93-606.1680/geot. 2004. access via http://www. 2003. Vol.F. June 1969.P. ASCE. (BSRI).” Journal of the Soil Mechanics and Foundation Division. Arango. 11. Stokoe.B. May 1970. Schmertmann. May 2001. SM3. 1999.K.pdf. August 8–12.” Proceedings of the Joint 11th International Conference on Soil Dynamics and Earthquake Engineering (ICSDEE) and the 3rd International Conference on Earthquake Geotechnical Engineering (ICEGE). 1995. J. see http://www.K. G.L. and K.P. K. Vol.H.jiban. [2] [3] [12] 4th edition.H. WWWdisplay. Zhou.” Lewis Publishers. SC.” Ninth Rankine Lecture. Boca Raton. GetPDFServlet?filetype=pdf&id=JGGEFK0001 27000010000817000001&idtype=cvips&prog=n ormal. I. No. Martin. NJ. H. W. GetPDFServlet?filetype=pdf&id=JGGEFK0001260 00011001015000001&idtype=cvips&prog=normal. 10. G. content/article/10.D.cgi?9104271. No. November 2000. Martin. P173–188. 9. Ross. 96. access via http://cedb. Castro.P.H.cgi?8503442. http://ascelibrary. J. Vol.D.2. 127.aip. Baecher and J. Schmertmann. Vol. Liao. http://cee.N.R.or. [19] C. Vol. ASCE. see http://www. see http://www. M.lib. CA. 1011–1043. J. Blacksburg. Dobry.ucdavis. [16] K. see http://openlibrary. Robertson. Stokoe. No. “Savannah River Site. 
“Effects of Nonplastic Fines on the Liquefaction Resistance of Sands.asce. S. Littlejohn. Tatsuoka. Berkeley. “Static Cone To Compute Static Settlement Over Sand.1994.” Journal of Geotechnical Engineering. Geotechnical Engineering Committee. Issue 10. Issue 2. http://www.B. [18] C. England. Marcuson.S.cgi?0105841. pp.issmge.B. 1999. M.” Soils and theses/available/etd-122299-125729/ unrestricted/Dissertation. Japan. Cole.T. Number 1 189 . ASCE. Sowers. “Semi-Empirical Procedures for Evaluating Liquefaction Potential During Earthquakes. Youd. [11] Bechtel Savannah River. [17] M.” Proceedings of the ICE – Civil Engineering. 126. pp.W. Power.” Report No.E. pp. Tokyo. Replacement Tritium G. Hynes. P. pp. October 2001. “Introductory Soil Mechanics and Foundations: Geotechnical Engineering. 72–78. “Reliability and Statistics in Geotechnical Engineering. 821–829. pp. 117.” Report No. Inc. M. 12. and access via http://cedb.aspx?refid=176. March 1992. “Liquefaction Resistance of Soils From Shear-Wave Velocity. [13] I. Aiken. L. “Liquefaction Resistance of Old Sand Deposits. web/ http://scholar. September 1991. Vol.171. “The Effects of Non-Plastic and Plastic Fines on the Liquefaction of Sandy Soils.” Journal of Geotechnical and Geoenvironmental Engineering.

pp. 1. 36. D.” Géotechnique. [23] F. R. access via http://cedb. [28] E. WWWdisplay. [33] J. Schmertmann. DC. Jacobson. 379–381. Hallbick. hydroelectric. “Paleoseismic Evidence for Recurrence of Earthquakes Near Charleston.S. [22] A. [26] J. As a member of the 190 Bechtel Technology Journal . FHWA–TS-78-209.1986.” for the “Sobre Envejecimiento de Suelos” symposium. Nuclear Regulatory Commission.W. J. R. see http://nisee.W. pipelines.” Journal of Geotechnical Engineering.[20] T. South Carolina.” this paper was published in March 2008 in From Research to Practice in Geotechnical Engineering (ASCE Geotechnical Special Publication [GSP] No. June 1988. Skempton. Bechtel Corporation.W.berkeley. hotels. Gassman. The paper has been edited and reformatted to conform to the style of the Bechtel Technology Journal. 180)— a volume honoring Dr.H. Issue 3. Cannon. Final Report 1493-6. Weems. Particle Size. John H.” EPRI EL-6800. Seed.E. Gohn.” Report on Research.W. Charleston. “Seismic Parameters From Liquefaction Evidence. “Liquefaction Susceptibility and Geologic Setting in Dynamics of Soil and Soil Structures. biblio. 1993. New Delhi. 197.” Proceedings of the Sixth World Conference on Earthquake Engineering. GetPDFServlet?filetype=pdf&id= JGGEFK000132000003000363000001&idtype= cvips&prog=normal. 1996. Maurath. 313. [27] I. Arango and R. National Conference on Earthquake Engineering.R. Mayne. Vol. Clough. S.N. “Manual on Estimating Soil Properties for Foundation Design.osti. airports. p. August 1994. Billington. Electric Power Research Institute (EPRI).” submitted to the U.cgi?5014380. “Holocene and Late Pleistocene Earthquake-Induced Sand Blows in Coastal South Carolina.L.C.1680/geot.jsp?osti_id=6151611. access via http://www. Hoose. Washington. San geotechnical/EL-6800. September 1986. U. access via http://www. 425–447.” Journal of Geotechnical and Geoenvironmental Engineering. Clough. theme parks. Youd and S. Mexico. Geological Survey. 
“Paleoliquefaction Features Along the Atlantic Seaboard. 1978.E. “Guidelines for Cone Penetration Test Performance and Design.cgi? article/10. Issue 3. 2. National Science Foundation Grant No.asce. He has received three Bechtel Outstanding Technical Paper awards. G.” Science. bridges. July 1985. Department of Civil and Coastal Engineering. 363–377. DC.E. [29] P.icevirtuallibrary.” H. 8..aip. Martin and G. Migues. pp. D. “Evaluation of the Engineering Properties of Sand Deposits Associated With Liquefaction Sites in the Charleston. No. Kulhawy and P. he has been involved with nearly every type of project that Bechtel has completed—from fossil and nuclear power to LNG. South Carolina Vicinity. January 1977.3. Mike has authored or co-authored more than 25 technical papers and has written or co-written several hundred internal reports for Bechtel.S. Vol. and P. ASCE. Dickenson. Schmertmann. 2. “Soil Liquefaction and Cyclic Mobility of Evaluation for Level Ground During Earthquakes. No. [25] J. 229. abstract/229/4711/379. mass transit and tunnels. and smelters—and even a palace for the Sultan of Brunei.R. pp. SC. 4711. http://www. Clough. 1990. Vol. pp. [32] D. It is reprinted with permission from the American Society of Civil Engineers (ASCE).L. Powars. 2189–2194. p. Vol. He started work at Bechtel immediately after college as a field soils engineer on the WMATA Metro subway project in Washington. 1986. SC Area: A Report of First-Year Findings. ASCE. No. and G.S. Vol. Under the title. and H. Palo “Investigation of the Seismic Liquefaction of Old Sand Deposits. Martin. Bolton Seed Memorial Symposium.vulcanhammer. Markewick. August 1990. 201–255. “Geotechnical Setting for Liquefaction Events in the Charleston. February 1979.W. Washington. 1345–1361. 132. DC. Lewis is Bechtel’s corporate geotech n ical engineering lead and heads the Geotechnical Engineering Technical Working Group. 
“Site Characterization Philosophy and Liquefaction Evaluation of Aged Sands— A Savannah River Site and Bechtel Perspective. Mexico City. October 1990. Federal Highway Kemppinen. [31] S. [21] H. CMS-94-16169. BIOGRAPHIES Michael R. [30] S. Aging and Overconsolidation. Amick. Mike is an ASCE Fellow and a member of both the International Society of Soil Mechanics and Geotechnical Engineering and the American Nuclear Society working committee on seismic instrumentation at nuclear power facilities. mines. http://ascelibrary.F.pdf. Vol.W.E. P. Talwani and J. Gelinas. [24] J. “Accounting for Soil Aging When Assessing Liquefaction Potential. During his 35 years with the company. “Standard Penetration Test Procedures and the Effects in Sands of Overburden Pressure. pp. 105. 120.H.” Journal of the Geotechnical Engineering Division.S.” Proceedings of the Third U. Vol.B. CA. “Update on the Mechanical Aging of Soils. Schmertmann. Relative Density. CA. US Department of Transportation.” Report NUREG/CR-5613.sciencemag. and H. pp. G.” Report No. F. access via http://cedb. R. University of Florida. March 2006. 3. The Mexican Society of Soil Mechanics. Professor Emeritus.425. Martin and G. access via http://www.S. railroads.R.36. Talwani. Cox. Leon. E.H. Moore. India. Vol.B. Offices of Research and Development.

with particular emphasis on site response and liquefaction analyses. He originally joined Bechtel’s San Francisco office as chief geotechnical engineer. Ignacio Arango. both in Civil Engineering. Ignacio has been retained as a corporate geotechnical engineering consultant on matters related to geotechnical and geotechnical earthquake engineering. He received two technical research grants from Bechtel and two technical research grants from the National Science Foundation. and for Woodward Clyde Consultants. for which he prepared books containing the material presented and provided them to all seminar participants. an MS from the Massachusetts Institute of Technology. Ignacio is currently a member of the ASCE. and the corporate manager of geotechnical engineering. as well as a book chapter on earth dams in Design of Small Dams (published by McGrawHill). Washington State. He is a licensed Professional Engineer in California. Cambridge. Mike received his MS and BS. Ignacio began his engineering career with Woodward-Clyde-Sherad and Associates. California. Ignacio has received three Civil Engineering degrees—a PhD from the University of California at Berkeley. Mike has lectured at various universities and at several local. which is focused on seismic issues related to the nuclear renaissance. Ignacio has made numerous technical presentations at conferences in multiple countries throughout Europe. Chile. McHood is a senior geotechnical engineer in Bechtel’s Geotechnical and Hydraulic Engineering Services Group. He is a licensed Professional Engineer in South Carolina. he has given weeklong seminars in Colombia. Number 1 191 . In addition. he was the principal author of the NEI white paper regarding shear wave velocity measurements in compacted backfill.Nuclear Energy Institute Seismic Task Force Team. including the annual Sowers Symposium at The Georgia Institute of Technology. and Argentina. 
Mike has co-authored five technical papers—three related to liquefaction and two related to earthquake ground response. California. Utah. Florida. and later worked for a civil/geotechnical engineering practice in Colombia. and his undergraduate degree from the Universidad Nacional de Colombia. He is a licensed Professional Engineer in Maryland. retired in 2003 after 18 years with Bechtel. He is a member of the ASCE. state. and Illinois. the Earthquake Engineering Research Institute. and national ASCE meetings. and the Americas. with recent assignments involving work on nuclear power plants as well. from Brigham Young University in Provo. Mike holds a BS in Civil Engineering from the University of Illinois. where he was a Bechtel Fellow. for Shannon and Wilson. a position that involved him in all projects requiring geotechnical input. December 2009 • Volume 2. Most of his career has been spent at the nuclear facilities at the DOE Savannah River Site. and the International Society of Civil Engineers and is an honorary member of the Sociedad Colombiana de Ingenieros and the Sociedad de Ingenieros Estructurales del Ecuador. Medellín. Ignacio has authored or co-authored 62 technical papers published in several journals and conference proceedings. Asia. He has more than 17 years of experience in this field. PhD. a principal vice president. Michael D.


EVALUATION OF PLANT THROUGHPUT FOR A CHEMICAL WEAPONS DESTRUCTION FACILITY

Issue Date: December 2009

Paul Dent (pmdent@bechtel.com), Christine Statton (cstatton@bechtel.com), Wilson Tang, Craig A. Myler, PhD (cmyler@bechtel.com), and August D. Benz (adbenz@bechtel.com)

Abstract—The Pueblo Chemical Agent-Destruction Pilot Plant (PCAPP) is being designed and built to safely and efficiently destroy the stockpile of chemical weapons stored at the US Army Pueblo Chemical Depot (PCD). The facility must destroy more than 2,600 tons of mustard chemical warfare agent. The PCAPP design uses chemical neutralization followed by biotreatment to destroy the agent. The project team decided early in the design phase to prepare a discrete event model of the destruction process to conduct “what-if” analyses for design decisions and predict the plant’s overall operating schedule. The project also funded early developmental testing for key first-of-a-kind (FOAK) equipment and performed associated throughput, reliability, availability, and maintainability (TRAM) evaluations. The model also allows for a stochastic analysis of plant operations, including processing rates for equipment, preventive maintenance, routine plant downtime, random and expected equipment failure rates, and repair durations. The model and TRAM evaluations have proven invaluable in conducting “what-if” as well as throughput and availability analyses, evaluating system and equipment redundancy, selecting unit operating scenarios, performing plant life-cycle cost evaluations, and planning for maintenance and spare parts needs. This paper describes the development and use of the model, the application of TRAM evaluation data during equipment design and testing, and suggestions for future application to chemical process plants.

Keywords—Bechtel Pueblo Team (BPT), blister agent, chemical agent destruction, chemical process, chemical weapon demilitarization, first-of-a-kind (FOAK) equipment, operations schedule, pilot test, Pueblo Chemical Agent-Destruction Pilot Plant (PCAPP), throughput, throughput and availability analysis (TAA), throughput model, and throughput, reliability, availability, and maintainability (TRAM)

© 2009 Bechtel Corporation. All rights reserved.

INTRODUCTION

The Pueblo Chemical Agent-Destruction Pilot Plant (PCAPP) is being built in Pueblo, Colorado, under the direction of the US Army Element, Assembled Chemical Weapons Alternatives (ACWA) program. Its purpose is to safely and efficiently destroy the stockpile of chemical weapons stored at the Pueblo Chemical Depot (PCD). ACWA selected the Bechtel Pueblo Team (BPT), an integrated contractor team consisting of Bechtel National, Inc. (BNI); Parsons; Battelle Memorial Institute; and URS Corporation (formerly Washington Demilitarization Company), to design, construct, systemize, pilot test, operate, and close the PCAPP facility.

The Pueblo stockpile totals approximately 2,600 tons of chemical agent and consists of two chemical agent types—HD (distilled mustard) and HT (a mixture of HD and T agents). These blister agents are stored in three different caliber munition types: 155 mm projectiles, 105 mm projectiles, and 4.2 in. mortars. A cutaway view of a typical mortar and a 105 mm projectile is shown in Figure 1.

Figure 1. Cutaway of a Typical Mortar and 105 mm Projectile

ABBREVIATIONS, ACRONYMS, AND TERMS

ANR: agent neutralization reactor
APB: agent processing building
ACWA: US Army Element, Assembled Chemical Weapons Alternatives
BC: brine concentrator
BNI: Bechtel National, Inc.
BPT: Bechtel Pueblo Team
BTA: biotreatment area
CAM: cavity access machine
E/C: evaporator/crystallizer
ERB: enhanced reconfiguration building
FOAK: first of a kind
HD: distilled mustard (chemical agent)
HT: mixture of HD and T chemical agents
ICB: immobilized cell bioreactor
MTU: munitions treatment unit
MWS: munitions washout system
PCAPP: Pueblo Chemical Agent-Destruction Pilot Plant
PCD: (US Army) Pueblo Chemical Depot
PMD: projectile/mortar disassembly
RR: reconfiguration room
TAA: throughput and availability analysis
TRAM: throughput, reliability, availability, and maintainability

The PCAPP will destroy projectiles and mortars containing the vesicant chemical agents HD and T. First, the munitions are unpacked, disassembled, and demilitarized in the enhanced reconfiguration building (ERB). Uncontaminated dunnage (pallets and boxes) and energetics (bursters and propellants) are shipped off site for disposal at commercial waste treatment facilities. Next, the munition bodies are deformed, drained, and rinsed in the automated cavity access machines (CAMs) and thermally treated. Chemical neutralization of the agent followed by biotreatment is performed in two major process areas: the agent processing building (APB) and the biotreatment area (BTA).

The facility layout and a physical representation of the three major process areas—ERB, APB, and BTA—are shown in Figure 2. The key treatment steps within these three major process areas are illustrated in Figure 3. It should be noted that the flow diagram does not show ancillary equipment and operations such as interim storage, treatment of leaking or reject munitions, off-gas treatment, utilities, or other ancillary functions.

Figure 2. PCAPP Site Layout Showing Three Major Process Areas (Enhanced Reconfiguration Building [ERB], Agent Processing Building [APB], and Biotreatment Area [BTA])

During PCAPP facility design development, it was apparent that the project’s life-cycle cost would depend highly on the facility’s operations schedule. Design features and operating parameters that shorten overall operations schedules tend to result in lower project life-cycle cost.

The PCAPP is divided into three process areas: the ERB (explosives separation), the APB (chemical agent destruction), and the BTA (liquid effluent treatment).

Figure 3. Key Treatment Steps Within Major Process Areas (stockpile munitions are unpacked and given baseline reconfiguration in the ERB; fuses and bursters and uncontaminated energetics and dunnage go to off-site disposal; debursted munitions pass through the PMD machines, and drained munition bodies pass through the MWS and MTUs, with clean munition bodies recycled; agent and wash water are neutralized in the ANRs and treated in the ICBs, with water recovery and disposal of salts and clean solids)

To aid in evaluating potential design changes and their effects on schedule and cost, a detailed throughput and availability analysis (TAA) model was developed. Using the TAA model, several alternative treatment schemes and configurations were evaluated to:

• Select the appropriate number of parallel processing “lines”
• Determine utility and support services requirements
• Evaluate operating labor requirements
• Select the munitions campaign sequence

The analysis indicated that concurrent operation of the APB and the ERB (as opposed to an early enhanced munitions reconfiguration) would lead to the lowest life-cycle cost unless increased capital or operating costs for additional treatment equipment proved to be excessive, so the project selected this approach as the basis for final plant design. The TAA model has been updated and expanded as the detailed facility design has been completed, and it is currently used to evaluate the impacts of various funding scenarios, confirm operations schedules, and support life-cycle cost estimate updates for the project.

DEVELOPMENT AND USE OF THE TAA MODEL

iGrafx Analytical Tool

The TAA model was developed using a discrete event model created in Corel® iGrafx® Process™ 2005 for Six Sigma (v.10) software. The iGrafx analytical tool allows discrete-event modeling that reflects process behavior for each of the key treatment steps in the PCAPP, including the effects of resource availability (e.g., munitions feed, utilities supply, and work schedule) and equipment capacity and availability (e.g., design and expected throughput and preventive maintenance).

To model the PCAPP facility, a detailed, interlinked, sequential flow diagram of key activities within each treatment step was prepared. Each treatment step on the flow diagram has an assigned behavior or activity, which may include batching, delay, subprocess, resource assignment, splits of attributes, or decisions. The munitions—referred to as "transactions"—flow between each treatment step as the model runs. The flow of the munitions between treatment steps is controlled using attributes (e.g., agent and munition type, munition packaging), functions, and logic expressions in iGrafx. The facility is modeled on a first-in/first-out approach.

Because an individual munition may contain several subcomponents that must be processed and tracked (for example, propellants, fuses and bursters, and explosives), the model must reflect the interlinking activities necessary to process the subcomponents. The model must also account for subactivities within each treatment step (for the ERB: unpacking, leak check, fuse removal, burster removal, and repack). To reduce model complexity, the details of campaign changes and the effects of external influences (for example, weather, loss of outside utility supply, work performance, and security) are evaluated independently of the TAA model.

Key steps of the iGrafx model, and an overview of how the steps are interlinked, are shown in Figures 4, 5, and 6.

Figure 4. Enhanced Reconfiguration Building in iGrafx (reconfiguration rooms (RRs) and projectile/mortar disassembly (PMD) machines)
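The transaction flow described above—munitions treated as "transactions" moving first-in/first-out through capacity-limited treatment steps—can be pictured with a minimal discrete-event sketch. The Python fragment below is a conceptual stand-in for the iGrafx model, not a reproduction of it; the step names and per-round processing times are invented placeholders.

```python
# Minimal sketch of "transactions" (munitions) flowing first-in/first-out
# through sequential, capacity-limited treatment steps. Step names and
# per-round times are invented placeholders, not PCAPP design data.
STEPS = [("unpack", 5.0), ("disassembly", 8.0)]  # (name, minutes per round)

def simulate(num_rounds):
    free_at = [0.0] * len(STEPS)   # time each step finishes its current round
    ready = [0.0] * num_rounds     # time each round is ready for its next step
    for step, (_name, minutes) in enumerate(STEPS):
        for r in range(num_rounds):          # FIFO: rounds keep their order
            start = max(ready[r], free_at[step])
            ready[r] = start + minutes       # round leaves this step
            free_at[step] = ready[r]         # step is busy until then
    return ready[-1]                         # completion time of the last round

print(simulate(10))  # minutes to push 10 rounds through both steps
```

In this toy line the slower step (8 minutes per round) paces the schedule, which is the same bottleneck behavior the TAA model exposes at full plant scale.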

Figure 5. Agent Processing Building in iGrafx (munitions washout systems (MWSs), munitions treatment units (MTUs), and agent neutralization reactors (ANRs))

For each specific activity, input to the TAA model includes:
• Duration of the process activity (minutes per round or per batch)
• Capacity of the process activity (gallons, pounds, or number of munitions per hour or per batch)
• Capacity of buffer storage and tankage to allow for accumulation of material between upstream or downstream batch operations
• Expected maintenance/failure/repair frequency and duration for the activity or specific elements of the activity (time interval between events and event duration)

The above inputs are developed for each activity considering past experience with the same or similar equipment or are based on engineering judgment of equipment performance under expected operating conditions. These judgment inputs are critical to generating a realistic operations model. Given this criticality, for newly developed and first-of-a-kind (FOAK) equipment items, the inputs are based on the results of equipment shop testing and input from a throughput, reliability, availability, and maintainability (TRAM) evaluation team (including the equipment designer/fabricator and representatives of the operations and maintenance team, among others).

For each activity input, a probability occurrence set is also provided so that the model can simulate overall facility operation on a random, Monte Carlo basis. The iGrafx model also includes necessary decision and/or logic steps to define how the material is batched through each process step and how material is distributed between parallel equipment trains.

Figure 6. Biotreatment Area in iGrafx (immobilized cell bioreactors (ICBs), brine concentrators (BCs), and evaporator/crystallizers (E/Cs))
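The Monte Carlo treatment of activity inputs can be sketched briefly. This is illustrative only; the nominal time, failure probability, and repair range below are invented numbers, not PCAPP inputs.

```python
import random

# Sample one activity's effective batch time from its input set: a nominal
# duration plus a probabilistic failure/repair event. Numbers are invented.
def sample_batch_time(rng):
    minutes = 45.0                       # nominal processing time per batch
    if rng.random() < 0.05:              # 5% of batches fail ...
        minutes += rng.uniform(30, 120)  # ... and incur repair/reprocess time
    return minutes

rng = random.Random(1)                   # fixed seed for a repeatable run
times = [sample_batch_time(rng) for _ in range(10_000)]
print(sum(times) / len(times))           # average effective batch time
```

Averaged over many samples, the effective time settles near 45 + 0.05 × 75 ≈ 48.8 minutes; a single simulation run, by contrast, experiences a fresh random draw at every batch.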

Decision and/or logic steps are used to track how often a process step enters maintenance or failure mode, when the process step is available to receive the next transaction (i.e., when an agent hydrolysis reactor is available to process the next batch of agent and is not busy reprocessing a failed batch), and when equipment is not available to process munitions or agent due to limiting conditions upstream or downstream of that step.

Using the TAA Model
The primary use of the TAA model has been to determine and verify the most probable life-cycle operating case based on a given set of input data. The TAA model was used early in the design phase to select plant configuration and to estimate throughput and reliability requirements. Its use identified the equipment that had the greatest impact on throughput and allowed the project to focus technical risk reduction efforts (particularly early equipment demonstration and testing) on those areas with the greatest impact on overall facility cost and operations schedule.

Once the flow sequence model is defined and specific input data are entered into the model, the software will run to virtually "destroy" the entire stockpile on a discrete-event basis. As the software runs, more than 5 million attributes are created and tracked. The PCAPP TAA model includes more than 25 decision points, 15 counters at each key location, and 125 process steps. The software takes more than 10 minutes to run the entire munition stockpile through the model for a single simulation run. Each simulation run is generated using a different random number so that no two simulation runs produce the same results. Results from 30 or more different simulation runs are generated and averaged to obtain a probable overall schedule and to evaluate best-case and worst-case schedules.

The results from model runs can be summarized in many standard output formats as well as in customized outputs such as:
• Overall time required for processing all munition bodies and hydrolysate
• Equipment utilization statistics
• Equipment out-of-service statistics
• Tracking and monitoring of buffer storage quantities, processing times, peak utility requirements, and processed quantities at given time intervals

An example of a customized graph created in the model to track the quantity of munitions stored in a buffer storage area over a single simulation run is provided as Figure 7.

Figure 7. Custom Buffer Storage Quantities (number of munition bodies in the APB munition body storage building (MWS surge), by munition type (105 mm, 155 mm, and 4.2 in.), versus time in days)

Analysis of results for differing input data can shed light on potential design improvements, such as enhanced equipment reliability, use of multiple parallel trains or added surge buffer capacity, operations pinch points due to equipment capacity limits, and possible changes in weekly work schedules. The probable impact on operating schedule based on any or all of this variable input data can be evaluated and used to support final design decisions and operating strategies and in predicting funding needs for the project. In summary, the TAA model is a detailed mathematical simulation of the physical PCAPP facility that can be readily modified or manipulated to evaluate alternative designs and operating scenarios.
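The replication scheme described above (30 or more runs, each seeded differently, summarized into probable, best-case, and worst-case schedules) can be sketched as follows. The campaign model inside is a deliberately trivial stand-in with invented numbers, not the PCAPP TAA model.

```python
import random
import statistics

# Stand-in stochastic schedule model: 100 batches, each 0.5 weeks nominal
# plus a random delay. All numbers are invented placeholders.
def campaign_weeks(seed):
    rng = random.Random(seed)            # each replication gets its own seed
    return sum(0.5 + rng.expovariate(1 / 0.05) for _ in range(100))

runs = [campaign_weeks(seed) for seed in range(30)]    # 30 independent runs
print(statistics.mean(runs))             # probable overall schedule
print(min(runs), max(runs))              # best-case and worst-case schedules
```

Because every replication uses a different seed, no two runs produce the same result, and the spread between the minimum and maximum brackets the schedule risk.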

The TAA model has proven invaluable for "what-if" analyses by trending the impact that changing different process variables has on the operations schedule. Series of runs were performed to determine the sensitivity of the operations schedule to changes in mechanical equipment availability as well as to other factors that could affect production rates, such as buffer capacity size and throughput rates. Table 1 shows the results from a "what-if" analysis of a series of runs made using reduced equipment availabilities for two key PCAPP systems—the munitions washout system (MWS) and the agent neutralization reactor (ANR) system. The analysis examined how each system independently affects the overall operations schedule assuming the same reduction in mechanical availability. Based on the results of the different runs, it can be concluded that reduced availability in the MWS had a much greater impact on the operations schedule than reduced availability in the ANR system.

Table 1. "What-If" Analysis with Reduced Mechanical Availability

System | Change in Mechanical Availability Compared to Base Case | Key System Availability, %* | Additional Processing Time, weeks
MWS | N/A (Base Case) | 86, 76, 72 | N/A
MWS | –10% | 84, 74, 64 | 22
MWS | –20% | 66, 62, 52 | 51
ANR | N/A (Base Case) | 89 | N/A
ANR | –10% | 79 | 2
ANR | –20% | 69 | 12

* The MWS is retooled for each munition type between campaigns. MWS availability differs, depending on which munition type is processed during a campaign. The ANR system has a single availability because no changes to the system are required for the different munitions.

THROUGHPUT, AVAILABILITY, RELIABILITY, AND MAINTAINABILITY EVALUATIONS

TRAM evaluations involve a disciplined collection and examination of relevant throughput, reliability, availability, and maintenance data for special, newly developed FOAK equipment designs that have not been fully demonstrated on an industrial scale. The TRAM analysis is used to determine the extent of operational testing that should be done on plant-ready equipment to confidently predict the probable throughput, reliability, availability, and maintenance performance of that equipment.

As mentioned previously in this paper, during the early design phase, the TAA model was coordinated with the engineering design development of the process flow diagrams, material and energy balances, and vendor data, using historical operations data from other operating chemical destruction plants and from industrial experience with similar equipment, plus results from an early prototype equipment testing effort. At this time, the TAA model is being further updated with the results from the detailed equipment testing program for plant-ready, FOAK equipment being supplied by vendors. The TRAM program results are essential input for updating the TAA model to increase the credibility of its operations schedule and life-cycle cost estimates.

To aid in understanding the roles played by TRAM evaluation, FOAK testing, and the TAA model in optimizing plant operations, it is helpful to consider a specific example relating to the PCAPP project's MWS. The MWS is one of the key treatment steps at PCAPP and utilizes an entirely new agent access and removal process for projectiles and mortars. For projectiles, the bottom of the munition body is cut off with a wheeled cutter, followed by gravity draining of the agent and then washout of the round using high-pressure water. For mortars, the empty burster well is crushed into the munition body, followed by draining and washout. The MWS also uses a commercial robot for munitions handling and transfer between several washout stations in each MWS process line.

The FOAK test program at PCAPP has been planned based on a TRAM evaluation. Based on this evaluation, the forthcoming FOAK testing of the MWS at the designer/fabricator's shop will be carried out in a test program equivalent to over 6 weeks of continuous, 24/7 operations on simulated projectile or mortar rounds (both pressurized and non-pressurized). Testing in this manner will confirm design capacity, define preventive and corrective maintenance frequency and durations, and evaluate failure mechanisms for
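The qualitative pattern in Table 1—a drop in availability hurting far more at a heavily utilized step than at one with slack—can be mimicked with a toy calculation. The utilization figures and the stretch formula below are illustrative assumptions, not the TAA model's logic or the Table 1 inputs.

```python
# Toy "what-if": schedule stretch when a step's mechanical availability
# drops by `avail_drop` (fraction). Utilizations and formula are invented.
def extra_weeks(base_weeks, utilization, avail_drop):
    stretched = base_weeks * (1 + utilization * avail_drop / (1 - avail_drop))
    return stretched - base_weeks

# A bottleneck step (MWS-like) versus a lightly loaded step (ANR-like),
# both losing 20% mechanical availability over a 104-week base schedule:
print(round(extra_weeks(104, 0.95, 0.20), 1))  # heavily utilized step
print(round(extra_weeks(104, 0.30, 0.20), 1))  # step with spare capacity
```

The heavily utilized step loses roughly three times as many weeks under the same availability reduction, mirroring the MWS-versus-ANR contrast in Table 1.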

MWS components, and define spare parts and assembly needs. If failure modes are shown to exist during testing, then redesign or upgrade of components will be considered. Future PCAPP operating conditions will be simulated to the extent possible to make the testing representative. Further, plant-ready MWS support equipment from other suppliers is also being shipped to the designer/fabricator's facility to allow integrated system testing. This extensive FOAK test program will provide the data needed to develop a much more realistic operations schedule and cost for PCAPP. After testing is completed, the TRAM assessment will be updated and inputs to the TAA model will be revised to reflect increased or reduced downtime.

Besides the previously described MWS, two other key FOAK treatment steps at PCAPP (refer to the simplified block diagram of Figure 3) were the subject of detailed TRAM evaluations:
• Munitions Disassembly—This operation involves an automated enhanced reconfiguration of munitions to remove explosive components. It includes nose closure (fuse or lifting lug) removal, booster charge and booster cup removal, and burster removal. These operations are similar to operations performed by the projectile/mortar disassembly (PMD) machine used in US baseline demilitarization facilities. However, the PMD elements have been upgraded to use a commercial robot for munitions handling and transfer between disassembly steps, and all hydraulics have been replaced by electric drives.
• Munitions Treatment Unit—The munitions treatment unit (MTU) is a muffle oven designed to thermally treat washed munition bodies to destroy any residual agent before releasing the metal bodies for offsite recycling. This continuous belt, resistance-heated oven has been modified from a proven commercial design.

More detail on the above three FOAK treatment steps for PCAPP can be found elsewhere. [1]

The remaining four key treatment steps at PCAPP, shown in Figure 3, are as follows:
• Unpack and Baseline Reconfiguration—These two primarily manual operations have been practiced at various Army depots numerous times.
• Agent Neutralization Reactor—Agent neutralization of mustard was fully demonstrated at the Aberdeen Chemical Destruction Facility.
• Immobilized Cell Bioreactor—Immobilized cell bioreactor (ICB) treatment is widely practiced on commercial and domestic waste streams and is well demonstrated.
• Water Recovery—Water recovery using evaporation and crystallization is also widely practiced on commercial waste streams similar to the ICB effluent and is well demonstrated.

Because the four steps listed above have been previously demonstrated at industrial scale, they are considered lower-risk processes that do not require FOAK testing and full TRAM analysis. TRAM data on these key treatment steps is available based on past industrial experience.

CONCLUSIONS AND RECOMMENDATIONS

For a complex processing plant such as PCAPP, which involves several sequential batch and continuous treatment steps, it is extremely useful to perform a detailed TAA to ensure that the facility scope, operations schedule, and full life-cycle cost have a sound, credible basis. The TAA model should be prepared early in the design process, when modeling case studies can readily influence treatment step scope and capacity. Initial modeling is usually based on rough estimates of capacity, buffer storage, and availability for individual steps, but these estimates are later refined as the design progresses and plant configuration is optimized. The model should be updated as key design and operations decisions are made and maintained "evergreen."

The TAA model should be used freely to investigate alternative treatment configurations such as parallel treatment trains, spare equipment, buffer storage, redundancy, increased or decreased equipment sizes, and other activities and functions. Operations and maintenance issues should be considered during TAA modeling, including realistic estimates for startup/shakedown and initial testing, detailed campaign change modifications, repair or replacement work in toxic areas, and shutdown and closure plans. Further, if the processing plant uses equipment and systems that are new designs, a realistic TAA model should be based on a disciplined approach to developing and demonstrating relevant TRAM information for FOAK undemonstrated equipment and systems within the plant.
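One way the FOAK test results feed back into the TAA model is as revised availability inputs. The rollup below uses the standard steady-state availability identity; the test hours and failure counts are invented illustrations, not MWS results.

```python
# Roll up test observations into the availability figure a TAA-style model
# consumes: steady-state availability = MTBF / (MTBF + MTTR).
def availability(mtbf_hours, mttr_hours):
    return mtbf_hours / (mtbf_hours + mttr_hours)

# Invented example: a 6-week (1,008 h) continuous 24/7 test that logged
# 9 failures and 36 h of total repair time.
run_hours, failures, repair_hours = 1008.0, 9, 36.0
mtbf = (run_hours - repair_hours) / failures   # mean time between failures
mttr = repair_hours / failures                 # mean time to repair
print(round(availability(mtbf, mttr), 3))      # fraction of time on line
```

A run of this length gives enough failure events to estimate MTBF and MTTR with some confidence, which is the point of testing for weeks rather than hours.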

Although it is possible to model every step and substep of the facility in great detail, it is best to avoid making the model overly complicated. Key to the successful implementation of a discrete event TAA model is keeping the code as simple as possible to fit the required application, allowing it to be modified easily and run quickly. The TAA model matrix and simulation results should be verified by independent calculation, and results should be clearly communicated and documented for discussion and buy-in by key stakeholders (plant operations, regulatory agencies, owner representatives, etc.).

This paper expands on a PowerPoint presentation entitled "Evaluation of Plant Throughput for a Chemical Weapons Destruction Facility Using Discrete Event Modeling," which was presented at the 12th International Chemical Weapons Demilitarisation Conference (CWD 2009), held May 18–21, 2009, in Stratford-upon-Avon, Warwickshire, UK.

TRADEMARKS
Corel, iGrafx, and iGrafx Process are trademarks or registered trademarks of Corel Corporation and/or its subsidiaries in Canada, the United States, and/or other countries.

REFERENCES
[1] C. Myler and A. Benz, "Munitions Processing for the Pueblo Chemical Agent Destruction Pilot Plant—Equipment Throughput, Reliability, Availability and Maintainability (TRAM) and Equipment Testing," presented at the 10th International Chemical Weapons Demilitarisation Conference (CWD 2007), Brussels, Belgium, May 14–18, 2007 (http://www. … cwd/2007/pres/craig-myler-pres.pdf; … pres/BenzA-01.pdf).

BIOGRAPHIES
Christine Statton is a senior mechanical/chemical engineer on the PCAPP project. Christine has more than 5 years of experience in design and operations of chemical agent destruction facilities. Since joining Bechtel in 2001, she has worked for three Bechtel National, Inc., projects: the Waste Treatment and Immobilization Plant in Richland, Washington; the Aberdeen Chemical Agent Destruction Facility (ABCDF) in Aberdeen, Maryland; and, since late 2006, the PCAPP project. While on the ABCDF project, she worked as a process engineer supporting plant operations. Since then, Christine has held increasingly responsible positions, including senior resident engineer of utility systems and engineering supervisor of the Process and Mechanical Engineering Groups. One of her first tasks on the PCAPP project was to update an existing iGrafx model to simulate plant operations to help forecast their duration to support a life-cycle cost estimate. She is an Engineer-in-Training in Washington. Christine holds a BS in Chemical Engineering from Pennsylvania State University and is an active member of the Society of Women Engineers and the American Institute of Chemical Engineers.

August D. Benz is a Bechtel Fellow and principal technologist with Bechtel National, Inc. His remarkable 50-year Bechtel career encompasses experience in chemical process design, research and development, feasibility studies, project engineering, engineering management, process plant startup and operations, and project management. His fields of experience include organic and inorganic chemicals, petrochemicals, polymers, fertilizers, natural gas, petroleum refining, nuclear, waste treatment, chemical warfare agent destruction, environmental remediation and regulatory compliance, and alternative energy systems. During the past 15 years, Gus has performed conceptual studies and detailed design and operation reviews for the US Army's chemical weapons destruction facilities in Aberdeen, Maryland; Pueblo, Colorado; and Lexington, Kentucky. Earlier, he was project manager for Phase I planning studies for the Russian chemical weapons facility in Shchuch'ye for the US Defense Threat Reduction Agency. Gus has written more than 30 technical papers, more than half of which have been presented in major public forums. In addition, he holds six patents related to chemical process engineering and project execution. He is a member of the American Institute of Chemical Engineers. Gus holds a BS in Chemical Engineering from Oregon State University and is a licensed Professional Engineer in California.

Craig A. Myler, PhD, has more than 25 years of experience in the treatment and disposal of chemical agents and munitions. As chief engineer for chemical and nuclear engineering for Bechtel National, Inc., he has overall responsibility for this discipline and manages the efforts of a large and diverse group of chemical, environmental, and safety engineering specialists. His project oversight responsibilities range from chemical demilitarization facilities to high-level radioactive waste disposal projects. Craig has developed methods for data analysis and reporting for the US Army's chemical demilitarization program and holds two patents related to chemical agent treatment and protection. He has taught chemistry and engineering at the United States Military Academy and at the University of […], Baltimore campus. He holds a BS in Chemistry from the Virginia Military Institute in Lexington.

Craig received a PhD and an MS in Chemical Engineering from the University of Pittsburgh, Pennsylvania. Craig is a senior member of the American Institute of Chemical Engineers and a member of the American Nuclear Society and the Society of American Military Engineers.

Paul Dent is FOAK equipment manager for the Bechtel Pueblo Team executing the PCAPP project. At PCAPP, he leads the development, testing, and integration of FOAK equipment. Paul joined Bechtel in 1998 with more than 20 years of experience in operations and project management for commercial hazardous waste and railroad companies. He has more than 30 years of experience in planning, constructing, and operating industrial facilities in the transportation and government sectors. His prior accomplishments include development of a corporate project management program and a nationwide network of facilities for Chemical Waste Management, Inc.; turnaround of an unprofitable industrial recycling and transportation operation; development of a hazardous waste master plan for the country of Thailand; and construction and operation of many railroad facilities. Before joining the Bechtel Pueblo Team, Paul was county manager on Bechtel National, Inc.'s FEMA project, which installed over 36,000 temporary homes for those displaced by Hurricane Katrina. He also worked in key positions for operations, startup, and EPC management of the ABCDF. Paul has a BSE in Civil Engineering from Princeton University and is a licensed Professional Engineer in Wisconsin.

Wilson Tang is a senior process engineer for the Waste Treatment and Immobilization Plant in Richland, Washington. He is also currently assisting on the PCAPP project, where he previously supervised the process design team that designed the equipment to achieve the desired plant throughput. Wilson has 35 years of experience in process engineering, including chemical agent destruction, petroleum refining, nuclear waste disposal, nuclear fuels reprocessing, hazardous waste handling and disposal, waste minimization and pollution prevention, and inorganic chemicals. Before joining Bechtel, he was a process engineer for Food Machinery Corporation in California. Wilson co-authored "Environmental Assessment for the Demonstration of Uranium-Atomic Vapor Laser Isotope Separation (U-AVLIS) at Lawrence Livermore National Laboratory," Report No. DOE/EA-0447, May 1991, and is a Six Sigma Yellow Belt. Wilson holds a BS in Chemical Engineering from the University of California, Berkeley.


INVESTIGATION OF EROSION FROM HIGH-LEVEL WASTE SLURRIES AT THE HANFORD WASTE TREATMENT AND IMMOBILIZATION PLANT

Issue Date: December 2009

Garth M. Duncan gduncan@bechtel.com
Ivan G. Papp igpapp@bechtel.com

Abstract—The Waste Treatment and Immobilization Plant (WTP) is being constructed at the US Department of Energy (DOE) Hanford Site in Washington State to treat and immobilize approximately 216 million liters (57 million gallons) of high-level radioactive waste. Of the 216 million liters, some 42 million liters (11 million gallons) of sludge could erode mixing vessels when mixed by pulse-jet mixer (PJM) devices that direct high-velocity jets against the vessel walls. Because the vessels are not designed to be replaced and are in nonaccessible locations, the erosion mechanisms and rates must be well understood and accounted for in the design so that the vessels perform safely and reliably over the plant's 40-year design life. The literature contains little information about erosion under PJM mixing conditions, and earlier evaluations involved considerable interpretation and adjustment of existing data as the basis for assumptions used in earlier predictions. Accordingly, the WTP project undertook an erosion testing program to collect data under prototypic PJM operation and waste characteristic conditions. Evaluation of the test results indicates that all WTP vessels have adequate erosion wear resistance. This paper describes the testing program process and results, including determining the waste characteristics to be tested, developing the simulant, establishing the test variables to be examined, and conducting the testing.

Keywords—erosion, Hanford, pulse-jet mixer (PJM), radioactive waste, slurry, stainless steel, ULTIMET, wear resistance

© 2009 Bechtel Corporation. All rights reserved.

INTRODUCTION

The Hanford Site in Washington State is partially bordered by the Columbia River and contains roughly 216 million liters (57 million gallons) of mixed waste (classified by the state as radioactive dangerous waste) from Cold War plutonium production. Many of the site's 177 underground tanks are closed or in the process of being emptied. To facilitate the disposition of the waste, the US Department of Energy (DOE) is constructing the world's largest radioactive waste processing facility, the Waste Treatment and Immobilization Plant (WTP). In this facility, the waste will be immobilized in a glass matrix and contained in stainless steel canisters for safe and permanent disposal.

Three main facilities constitute the WTP. The pretreatment (PT) facility is designed to chemically treat, separate, and concentrate the waste received directly from the underground tanks. The high-level waste (HLW) vitrification facility will use melters to immobilize the high-level portion (insoluble solids and higher percentage of radioactive constituents) of the waste in glass. The low-activity waste (LAW) facility will similarly vitrify the low-activity waste (soluble salts).

Of the 216 million liters, about 42 million liters (11 million gallons) of insoluble sludge could erode mixing vessels as a result of the mixing action the sludge undergoes in preparation for its vitrification. Both the PT and HLW facilities contain vessels in which the waste will be mixed by pulse-jet mixers (PJMs)—36 vessels in the PT and 4 in the HLW. A PJM is a long cylinder with a tapered nozzle, located in a process vessel that is pressurized to expel its waste into a larger process vessel to cause mixing. During process operations, the PJM operates pneumatically in continuously alternating fill and discharge modes, with both modes facilitating mixing. PJMs are typically arrayed around the periphery of the process vessel, and their size and number are a function of the vessel and its contents; the number of PJMs per vessel ranges from 1 to 12.

2 to 0. [2] and Karabelas [3] (based on mineral–slurry wear data reported by Karabelas and by Aiming et al.5 for the slurry concentration term. mean particle size. Accordingly. The project used PREVIOUS EROSION WORK TAKEN INTO ACCOUNT V ery limited data was available on WTP slurry waste conditions before Bechtel performed the tests described in this paper. Evaluation of the testing data demonstrated that no adjustment to the established design of the vessels was necessary for wear resistance. The team also reviewed other work performed by Wang and Stack [5] and by Mishra et al. particle size distribution) to develop a simulant that replicated waste conditions. The discharge point of a PJM is nominally a distance of 1. average particle hardness. and slurry concentration. and impingement angle. Across the range of conditions studied. Because the vessels are not designed to be replaced and are located in high-radiation areas of the facility.8 for concentration. Thus. and 0. jet velocity. and approximately 0. Two respected subject matter experts reviewed the WTP erosion prediction methodology and recommended that. Parametric relationships used to compare slurry conditions were based on work published by Gupta et al. WTP vessels whose contents are mixed by PJMs are made of stainless steel (grade 304L or 316L). testing should be performed to provide greater assurance that the estimates were valid. erosion mechanisms and rates must be well understood and accounted for in the design so that the vessels perform safely and reliably over the plant’s 40-year design life. Enderlin and Elmore [1] had performed some testing and investigated a zeolite water slurry in a hydroxide solution as applicable to the DOE West Valley vitrification plant. the main concern is the PJM’s jet mixing action of solids that contributes to erosion of the vessel walls. Testing was performed at one-quarter scale to the actual design. 
so earlier evaluations involved considerable interpretation and adjustment of published experimental data. 2 for particle size.ABBREVIATIONS. The literature contains little information about erosion under PJM mixing conditions. For the WTP.5 times the nozzle diameter from the vessel wall. AND TERMS ASME conc.3 for the particle size term. slurry concentration. DEI DOE HLW American Society of Mechanical Engineers concentration Dominion Engineering. on average. 206 Bechtel Technology Journal . The results reported by Karabelas and by Gupta (under the specific conditions investigated) produced exponents in the range of 2 to 3 for the jet velocity term of the equation. The PJM jets typically have 10-centimeter (4-inch) diameter nozzles that discharge at a rate of 8 to 17 meters per second (26 to 56 feet per second). The project also performed numerous sensitivity analyses to assess the effect of different operating and waste input variables. a 10-centimeter PJM nozzle would be located approximately 15 centimeters (6 inches) from the vessel wall. and concentration terms so that erosion rates could be adjusted for different operating and feed conditions. [6]. of 3 for velocity. continuous flow. ACRONYMS. 4]..g. The project used mass loss data to develop calculation exponents for velocity. 0. US Department of Energy high-level waste inside diameter low-activity waste pulse-jet mixer pretreatment Waste Treatment and Immobilization Plant Very limited data was available on WTP slurry waste conditions before Bechtel performed the tests described in this paper. From this previous work. ID LAW PJM PT WTP recently published Hanford tank farm waste chemical and physical characterization (e. while it appeared to be appropriate and should yield reasonable results. Bechtel developed the exponential relationship of erosion (scar depth) to each of the main factors affecting erosion―jet velocity. to demonstrate design margin and robustness. the work Bechtel performed produced exponents. 
mean particle size. The predicted erosion rate for each affected vessel was evaluated against the available erosion design allowance for that vessel. size. the Bechtel project team undertook an erosion testing program to collect erosion rate data under prototypic PJM operation and waste characteristic conditions. and testing variables included pulsed vs. [4]) to determine erosion allowances for WTP waste slurries [3. Inc.
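These exponential relationships let a measured reference erosion rate be adjusted to other operating points. A minimal sketch in Python (the function name and example figures are illustrative, not from the paper; the default exponents are the averages reported above):

```python
def erosion_ratio(v_ratio, d_ratio, c_ratio, n=3.0, p=2.0, q=0.8):
    """Relative change in erosion (scar depth) when jet velocity, mean
    particle size, and slurry concentration each change by the given
    ratio; n, p, and q are the test-derived average exponents."""
    return (v_ratio ** n) * (d_ratio ** p) * (c_ratio ** q)

# Example: raising jet velocity from 12 to 14 m/s, all else unchanged
factor = erosion_ratio(14 / 12, 1.0, 1.0)  # ~1.59
```

A roughly 17% velocity increase raises predicted wear by about 59%, illustrating why the velocity term, with the largest exponent, dominates the sensitivity.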

WASTE PROPERTIES DETERMINATION

The testing program used an updated assessment of Hanford tank farm waste chemical and physical characterization (e.g., mean particle size, particle size distribution) published in Reference [7], also known as the 153 report, as the primary basis for assigning values to the erosion-important constituents of the waste. An additional resource was a detailed evaluation that the project performed of 518 feed delivery batches contained in the Tank Farm Contractor Operation and Utilization Plan. [8] That evaluation is documented in WTP Waste Feed Analysis and Definition. [9]

Important waste characteristics are particle size, hardness, density, and morphology. In addition to the particle properties affecting erosion noted above, waste properties affecting erosion include solids concentration, waste chemistry, pH, and liquid viscosity. Based on this input, nominal erosive feed characteristics were established as mean particle size of 24 microns, average specific gravity of 2.6, and average Mohs hardness of 3.4.1 The waste particle characteristics are consistent with those used to evaluate critical slurry flow velocity to ensure that lines within the WTP will not plug, and with assessments of the PJM mixing capabilities.

These characteristics are included as part of the WTP waste feed acceptance criteria contained in ICD-19 – Interface Control Document for Waste Feed. [10] The ability of the tank farm contractor to meet the acceptance criteria was assessed and confirmed in detail in Reference [11]. As a final control, the project measured the waste characteristics, including the erosive characteristics, for each feed delivery batch, to enable a running prediction of vessel erosion to be maintained and, if necessary, to take corrective actions.

SIMULANT DEVELOPMENT

Bechtel also prepared a testing simulant that replicated waste feed conditions by comparison to the weighted mean particle size, average particle hardness, and slurry concentration. The project prepared five simulant compositions to provide a realistic but bounding set of properties for testing. The simulants were based on weighted mean particle sizes that included 24, 39, and 54 microns. Concentrations included 150, 250, and 350 grams per liter. Particle hardness was averaged to achieve 3.6 and 4.4 on the Mohs scale. The hardness values were calculated averages based on vendor-supplied information on the primary particle constituents used to make up the slurries. Because hardness of the simulant primary particles was not deemed to be a significant contributor to the calculation method, the term was not used as a factor in the method. As a secondary investigation, the team also evaluated hardness as a sensitivity parameter.

The simulant comprised only those significant constituents found in the actual waste, at their nominal sizes. The base simulant consisted of 15 components, including various aluminum compounds (boehmite and gibbsite), zeolite, and ferrite. Smaller amounts of other components were included to achieve the required hardness and density properties. Other morphology-related aspects (size, hardness, and crystalline geometry) were accounted for by using the same components found in the real waste, in the size range found in the waste.

While the 153 report indicated that larger particles were agglomerates that broke apart upon mixing or pumping, this feature was not included in the final simulant; the mean particle size was based on primary particle sizes, which is conservative. Particle size reduction was noted in shakedown testing, and every 24 hours the simulant was replaced and the average particle size restored to match plant vessel turnover conditions.

Consideration was also given to the corrosive potential of the liquid fraction of the slurry against the stainless steel. Corrosion of the stainless steel measured in all of the testing described below was shown to be negligible when compared to the erosion. Thus, the erosion rate predictions for the WTP vessels were shown to not be accentuated by the effect of corrosion.

TEST VARIABLES

Test variables included pulsed vs. continuous flow, jet velocity, mean particle size, slurry concentration, and impingement angle. The matrix varied the jet velocity, mean particle size, and slurry concentration to determine the exponential relationships of these parameters as inputs to the calculation method described below. Bechtel designed the test matrix to gather the minimum information required to adequately predict the erosion over 40 years of WTP operation (Table 1).

1 US equivalents have not been provided for these and other metric values used in or resulting from the testing because test parameters and terms were developed based on the metric system.

Table 1. Erosion Testing Matrix

Test Run     | Particle Distribution, microns | Solids Concentration, g/L | Average Hardness, Mohs | Jet Velocity, m/s | Jet Angle | Replenish | Flow Pattern
Run 1, Jet 1 | 24 | 350 | 3.6 | 12 | 90º | Yes | Continuous
Run 1, Jet 2 | 24 | 350 | 3.6 | 12 | 90º | Yes | Pulsed
Run 2, Jet 1 | 54 | 350 | 3.6 | 12 | 90º | Yes | Continuous
Run 2, Jet 2 | 54 | 350 | 3.6 | 14 | 90º | Yes | Continuous
Run 3, Jet 1 | 24 | 250 | 3.6 | 12 | 90º | Yes | Continuous
Run 3, Jet 2 | 24 | 250 | 3.6 | 14 | 90º | Yes | Continuous
Run 4, Jet 1 | 24 | 150 | 3.6 | 12 | 90º | Yes | Continuous
Run 4, Jet 2 | 24 | 150 | 3.6 | 14 | 90º | Yes | Continuous
Run 5, Jet 1 | 39 | 350 | 3.6 | 12 | 90º | Yes | Continuous
Run 5, Jet 2 | 39 | 350 | 3.6 | 14 | 90º | Yes | Continuous
Run 6, Jet 1 | 24 | 350 | 3.6 |  8 | 90º | Yes | Continuous
Run 6, Jet 2 | 24 | 350 | 3.6 | 17 | 90º | Yes | Continuous
Run 7, Jet 1 | 24 | 350 | 3.6 | 12 | 65º | Yes | Continuous
Run 7, Jet 2 | 24 | 350 | 3.6 | 12 | 90º | Yes | Continuous
Run 8, Jet 1 | 24 | 350 | 4.4 | 12 | 90º | Yes | Continuous
Run 8, Jet 2 | 24 | 350 | 4.4 | 12 | 90º | Yes | Continuous

Hold periods were included between runs, with the simulant replenished per hold.

CONDUCT OF TESTING

Dominion Engineering, Inc. (DEI), conducted the testing at its facilities in Reston, Virginia. Test measurements required for use as direct input to WTP design calculations were taken in accordance with American Society of Mechanical Engineers (ASME) NQA-1 nuclear quality standards. DEI also took several commercial grade measurements to supplement the understanding of the prediction of erosion behavior in the WTP vessels.

Testing used circular, 20-centimeter (8-inch) diameter, grade 316L stainless steel test coupons exposed to conditions geometrically similar to those of the WTP vessels. The exceptions were Run 7 Jet 2 and Run 8 Jet 2, which used test coupons made from the ULTIMET® cobalt-based alloy to evaluate it as a potential weld overlay to the vessel's stainless steel wear plate as an added erosion barrier. The erosion measurements taken were mass loss and scar depth; the latter is the primary measurement of importance at the WTP.

Another secondary investigation looked at the effect of pulsed versus continuous flow of the PJM jet stream. This investigation considered the concept that an intermittent jet could result in more, less, or an equivalent scar depth of the vessel wall. Run 1 showed little difference between pulsed and continuous erosion rates, so the remaining testing was done with a continuous flow rate.

DEI made four consecutive 24-hour runs―replacing the simulant after each run, as noted earlier―to provide a total of 96 hours of continuous wear. Figure 1 shows the test fixture used, and Figure 2 shows the primary one-quarter-scale test apparatus.

Figure 1. Test Fixture for Erosion Testing. Callouts: Upper Flange (interfaces with 10-inch 150# slip-on flanges in tank lid); 3-inch, Sch. 40 Inlet Pipe; 3-inch to 1-inch Concentric Reducer; 1-inch Socket Weld Flange (facilitates removal and inspection of 1-inch pipe nozzle); 1-inch, Sch. 40 Pipe Nozzle (ID: 1.049 ±0.005 inch; Entry Length: 6 x ID; Off-set: 1.5 x ID); Pivoting Bracket (allows impingement angles other than 90º); ASME SA240 Type 316L SS Test Specimen.

Figure 2. Test Rig with Two Independent Recirculation Loops

The volume of the test vessel was approximately 3,785 liters (1,000 gallons). Two independent recirculation loops were used; each had a rotary lobe pump and a coriolis flow meter.

TEST RESULTS, EVALUATION, AND CONCLUSIONS

Examination of the test coupons revealed a donut-shaped depression: the area directly below the centerline of the jet had less erosion than a ring around the center (see Figure 3). No visual wear was noticeable on the exposed face of the coupons other than a polished appearance. Stainless steel test coupons were exposed to direct flow simulating actual WTP plant conditions; Figure 4 shows a stainless steel test coupon after 96 hours of exposure to jet wear with waste simulant. Micrometer readings were used to measure the scar depth, and the incremental scar depth at successive 24-hour runs was relatively constant.

Figure 3. Post-Test Wear Pattern. Callouts: Nozzle; Direction of Flow; Impingement Surface.

Figure 4. Stainless Steel Test Coupon After 96 Hours of Erosion

Mass loss data was used to develop exponents for jet velocity, mean particle size, and slurry concentration so that erosion rates could be adjusted for different operating and feed conditions. Testing results are shown in Table 2.

Table 2. Erosion Testing Program Results

Test Run     | PJM Velocity, m/s | Mean Particle Size, microns | Angle, radian | Slurry Concentration, g/L | Hardness, Mohs | Coupon
Run 1        | 12   | 24 | 1.57 | 350 | 3.6 | SS
Run 2, Jet 1 | 12   | 54 | 1.57 | 350 | 3.6 | SS
Run 2, Jet 2 | 14   | 54 | 1.57 | 350 | 3.6 | SS
Run 3, Jet 1 | 12   | 24 | 1.57 | 250 | 3.6 | SS
Run 3, Jet 2 | 14   | 24 | 1.57 | 250 | 3.6 | SS
Run 4, Jet 1 | 12   | 24 | 1.57 | 150 | 3.6 | SS
Run 4, Jet 2 | 14   | 24 | 1.57 | 150 | 3.6 | SS
Run 5, Jet 1 | 12   | 39 | 1.57 | 350 | 3.6 | SS
Run 5, Jet 2 | 14   | 39 | 1.57 | 350 | 3.6 | SS
Run 6, Jet 1 | 8    | 24 | 1.57 | 350 | 3.6 | SS
Run 6, Jet 2 | 16.5 | 24 | 1.57 | 350 | 3.6 | SS
Run 7, Jet 1 | 12   | 24 | 1.13 | 350 | 3.6 | SS
Run 7, Jet 2 | 12   | 24 | 1.57 | 350 | 3.6 | ULTIMET
Run 8, Jet 1 | 12   | 24 | 1.57 | 350 | 4.4 | SS
Run 8, Jet 2 | 12   | 24 | 1.57 | 350 | 4.4 | ULTIMET

Measured coupon scar depths ranged from 0.01016 to 0.05842 mm; two coupons exhibited irregular scars. Mass loss was also recorded for each coupon and, together with the scar depths, provided the data for the exponent development.

The test results were used in Equation 1 to predict the expected erosion rate over 40 years of operation. The test results provided exponents for jet velocity, weighted mean particle size, and slurry concentration. Exponents were developed over the range of operating parameters of the PJMs, which range in discharge velocity from 8 to 17 meters per second (26 to 56 feet per second). Particle sizes range from submicron to about 300 microns, with a weighted mean of 24 microns. Slurry concentration at the WTP ranges from almost no solids content to a maximum of 20%.

The equation is as follows:

  Ew = Ewref (Va/Vref)^n (Pa/Pref)^p [(1−I)(G/Cref)^q + I(H/Cref)^q] (F)(E)(D)(Ia)(Sc)    (1)

Where:
  Ew    = scar depth at end of design life (m)
  Ewref = scar depth of reference case (m)
  Va    = velocity of jet, actual (m/s)
  Vref  = velocity of jet from reference case (m/s)
  Pa    = particle weighted mean diameter, actual (m)
  Pref  = particle weighted mean diameter from reference case (m)
  I     = fraction of time for maximum solids loading
  G     = normal solids concentration (wt%)
  Cref  = reference case concentration (wt%)
  H     = maximum solids concentration (wt%)
  F     = vessel usage factor (fraction of time)
  E     = PJM duty factor (fraction of time)
  D     = design life (years)
  Ia    = factor for impingement angle
  Sc    = scale factor (1/4 to full scale)

In Equation 1, test data is used as the reference case for scar depth. The plant operating conditions and actual waste properties are then provided as input to the other parameters in the equation, and from the calculation a scar depth is estimated for a given period of facility operation (typically the 40-year design life of the WTP). The project used computational fluid dynamics to scale the one-quarter-scale erosion rates to full scale.

The predicted erosion rate for each PJM-mixed WTP vessel was evaluated against the available erosion design allowance for that vessel. The project also performed numerous sensitivity analyses to assess the effects of different operating and waste input variables, to demonstrate margin and robustness in the design.

TRADEMARKS

ULTIMET is a registered trademark owned by Haynes International, Inc.
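Equation 1 can be exercised numerically. The sketch below transcribes it directly; the parameter names are illustrative, and the default exponents n, p, and q are the test-derived averages (3, 2, and 0.8):

```python
def scar_depth(ew_ref, va, v_ref, pa, p_ref, i_frac, g, h, c_ref,
               f_usage, e_duty, d_life, ia, sc, n=3.0, p=2.0, q=0.8):
    """Equation 1: predicted scar depth at end of design life (m).

    ew_ref        scar depth of the reference (test) case, m
    va, v_ref     actual / reference jet velocity, m/s
    pa, p_ref     actual / reference particle weighted mean diameter, m
    i_frac        fraction of time at maximum solids loading
    g, h, c_ref   normal / maximum / reference solids concentration, wt%
    f_usage       vessel usage factor (fraction of time)
    e_duty        PJM duty factor (fraction of time)
    d_life        design life, years
    ia            factor for impingement angle
    sc            scale factor (1/4 scale to full scale)
    """
    conc = (1 - i_frac) * (g / c_ref) ** q + i_frac * (h / c_ref) ** q
    return (ew_ref * (va / v_ref) ** n * (pa / p_ref) ** p * conc
            * f_usage * e_duty * d_life * ia * sc)

# With every ratio and factor equal to 1, the prediction reproduces the
# reference scar depth; doubling jet velocity alone scales it by 2**3 = 8.
```

The predicted depth is then compared against the vessel's available erosion design allowance, as described above.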

REFERENCES

[1] C.W. Enderlin and M.R. Elmore, Letter Report for First Erosion/Corrosion Test Conducted for West Valley Process Support, Pacific Northwest National Laboratory, Richland, WA, 1997.

[2] R. Gupta, S.N. Singh, and V. Sehadri, “Prediction of Uneven Wear in a Slurry Pipeline on the Basis of Measurements in a Pot Tester,” Wear, Vol. 184, No. 2, May 1995, pp. 169–178, access via http://www.ingentaconnect.com (00431648/1995/00000184/00000002/art06566).

[3] A.J. Karabelas, “An Experimental Study of Pipe Erosion by Turbulent Slurry Flow,” Proceedings of Hydrotransport-5, the Fifth International Conference on the Hydraulic Transport of Solids in Pipes, Hanover, Germany, May 8–11, 1978, pp. E2-15 – E2-24.

[4] F. Aiming et al., “An Investigation of the Corrosive Wear of Stainless Steels in Aqueous Slurries,” Wear, Vol. 193, No. 1, April 1996, pp. 73–77, access via http://www.ingentaconnect.com (00431648/1996/00000193/00000001/art06684).

[5] H.W. Wang and M.M. Stack, “The Erosive Wear of Mild and Stainless Steels Under Controlled Corrosion in Alkaline Slurries Containing Alumina Particles,” Journal of Materials Science, Vol. 35, November 2000, pp. 5263–5273, http://www.springerlink.com/content/l5464x5324747560/.

[6] R. Mishra, S.N. Singh, and V. Seshadri, “Study of Wear Characteristics and Solids Distribution in Constant Area and Erosion-Resistant Long-Radius Pipe Bends for the Flow of the Multisized Particulate Slurries,” Wear, Vol. 217, No. 2, May 1998, pp. 297–306, access via http://www.ingentaconnect.com (00431648/1998/00000217/00000002/art00147).

[7] B.E. Wells et al., “Estimate of Hanford Waste Insoluble Solid Particle Size and Density Distribution,” WTP-RPT-153, Rev. 0, Battelle-Pacific Northwest Division, Richland, WA, 2007, available from http://www.pnl.gov (documents/WTP-RPT-153.pdf).

[8] R.A. Kirkbride et al., “Tank Farm Contractor Operation and Utilization Plan,” HNF-SD-WM-SP-012, Rev. 6, CH2M Hill Hanford Group, Inc., Richland, WA, May 2007.

[9] M. Hall and R. Gimpel, “WTP Waste Feed Analysis and Definition – EFRT M4 Final Report,” 24590-WTP-RPT-PE-07-001, Rev. 1, Bechtel National, Inc., Richland, WA, January 2007.

[10] “ICD-19 – Interface Control Document for Waste Feed,” 24590-WTP-ICD-MG-01-019, Rev. 4, Bechtel National, Inc., Richland, WA, September 18, 2008.

[11] M. Knight, “Technical and Risk Evaluation of Proposed ICD-19,” 24590-WTP-ES-PET-01-001, Rev. 0, Bechtel National, Inc., Richland, WA, February 2007.

BIOGRAPHIES

Garth M. Duncan is deputy manager of the Process Engineering and Technology Department at the WTP project. He has 35 years of experience in radioactive waste cleanup and nuclear plant design and operations and has spent the past 11 years developing the WTP process design. Principal responsibilities include management of the WTP process design, including the flowsheet and associated mass and energy balance; resolution and closure of External Flowsheet Review Team issues; and liaison with the Department of Energy on these and related issues. Previously at the WTP, Garth was deputy manager for mechanical and process engineering and the project engineering manager for the LAW vitrification facility. Prior to working at the WTP, he was engineering manager for deactivation of the N-Reactor on the Hanford Site. During his time at Hanford, he has worked on many projects, including the Fast Flux Test Facility (FFTF), the Plutonium/Uranium Extraction (PUREX) Facility, and the 200 East Area Effluent Treatment Facility (ETF). Earlier assignments involved design and operating plant services for commercial nuclear generating stations, including Grand Gulf, Susquehanna, and Limerick. Prior to joining Bechtel, Garth was a nuclear propulsion-qualified division officer in the US Navy. Garth has a BS in Mechanical Engineering from the University of Southern California.

Ivan G. Papp has worked for Bechtel for 8 years and is currently the deputy manager for mechanical and process engineering at the WTP project. Ivan has more than 20 years of process engineering experience in the nuclear industry, including operations and design. Prior to working at Hanford, he worked briefly as a process engineer for Unocal Corporation at Unocal's refinery in Los Angeles, California. He is a member of the American Nuclear Society. Ivan has a BSc in Chemical and Materials Engineering from California State Polytechnic University, Pomona, where he was a member of Omega Chi Epsilon, the National Honor Society for Chemical Engineering, and is a licensed Professional Engineer in California.

Technical Notes
Abbreviated Technology Papers

TECHNOLOGY PAPERS

215  Effective Corrective Actions for Errors Related to Human-System Interfaces in Nuclear Power Plant Control Rooms
     Jo-Ling J. Chang; Huafei Liao, PhD

219  Estimating the Pressure Drop of Fluids Across Reducer Tees
     Krishnan Palaniappan; Vipul Khosla

Yucca Mountain Management and Operations: Alpine mining machines are used in the Exploratory Studies Facility to excavate alcoves and niches for scientific testing.


EFFECTIVE CORRECTIVE ACTIONS FOR ERRORS RELATED TO HUMAN-SYSTEM INTERFACES IN NUCLEAR POWER PLANT CONTROL ROOMS

Issue Date: December 2009

Jo-Ling J. Chang  jjchang@bechtel.com
Huafei Liao, PhD  hliao@bechtel.com

Abstract—A Bechtel-sponsored industry-wide study presents guidelines for correcting errors related to human-system interfaces (HSIs) in nuclear power plant (NPP) control rooms. Our team developed a complete list of MCR HSI-related error causal factors by reviewing past plant events and over one hundred academic publications on high-reliability industries (aerospace, medical, etc.). A total of 138 licensed operators evaluated each factor, and their responses were analyzed to provide both the suggested corrective actions and the relative importance of each error type. The results of the study also include guidance on training, resource allocation, error prediction, and effective decision making.

Keywords—corrective action, factor analysis, human error, human performance, human-system interface (HSI), main control room (MCR), nuclear power plant (NPP)

© 2009 Bechtel Corporation. All rights reserved.

BACKGROUND

On Wednesday, March 28, 1979, the control room operators at Three Mile Island Generating Station were unable to promptly identify a stuck-open pilot-operated relief valve in the primary system. This error led to a partial core meltdown and the biggest nuclear incident in the United States. As a result of this incident, all operating plants underwent a reevaluation process to identify and correct potential human factor errors in the main control rooms (MCRs).

While many human-system interface (HSI)-related issues were addressed throughout this industry-wide reevaluation process, a review the authors performed of recent plant events from the Institute of Nuclear Power Operations (INPO) database showed that HSI continues to be a significant contributor to control room errors. Immediate corrective action tends to be confined to merely trending HSI-related errors instead of taking steps for extensive investigation. We suspect that the underlying cause is the lack of formal guidelines, because plants had already undergone thorough control room human factor evaluations decades ago.

What types of errors may be corrected by operator training? Are procedural updates the most effective means of error prevention? Will an increase in management oversight improve or hinder operator performance? When is a design change the most appropriate corrective action? These are the questions we attempted to address when we commenced this study.

METHODOLOGY

Eighteen commercially operated nuclear power plants (NPPs) participated in this Bechtel-sponsored study. A total of 138 licensed operators from the 18 NPPs participated by evaluating a list of common error causal factors and sharing their personal knowledge and experience as well as examples of “near miss” situations. The collective operator opinions were quantitatively analyzed, using multivariate statistical methods and chi-square tests, to arrive at guidelines containing suggested corrective actions for each potential type of HSI error, along with human error prevention guidance.

Factor Analysis

Factor analysis takes a complete list of causal factors and reduces it to a small number of representative categories. This is done through the statistical process of varimax rotation using participant responses. The bare-bones categories help to reduce trending efforts and identify the types of causes experienced operators believe to be the most important contributors to MCR errors.
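The varimax rotation step can be sketched as follows. This is an illustrative NumPy implementation of Kaiser's criterion, not the study's actual statistics tooling (which the paper does not identify):

```python
import numpy as np

def varimax(loadings, gamma=1.0, max_iter=100, tol=1e-6):
    """Orthogonally rotate a (variables x factors) loading matrix so
    each causal factor loads strongly on as few categories as possible
    (Kaiser's varimax criterion, solved via repeated SVDs)."""
    L = np.asarray(loadings, dtype=float)
    n, k = L.shape
    R = np.eye(k)          # accumulated rotation
    crit = 0.0
    for _ in range(max_iter):
        Lr = L @ R
        u, s, vt = np.linalg.svd(
            L.T @ (Lr ** 3 - (gamma / n) * Lr @ np.diag((Lr ** 2).sum(axis=0))))
        R = u @ vt
        if s.sum() < crit * (1 + tol):
            break
        crit = s.sum()
    return L @ R
```

Because the rotation is orthogonal, each variable's communality (its row sum of squared loadings) is unchanged; only the interpretability of the factor columns improves.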

ABBREVIATIONS, ACRONYMS, AND TERMS

HSI   human-system interface
INPO  Institute of Nuclear Power Operations
MCR   main control room
NPP   nuclear power plant

Table 1 depicts the final representative category structure. Errors that do not belong in any of the five categories listed in the table are considered to hold less significance.

Table 1. Simplified Error Causal-Factor Structure

Category (Level of Importance) | Error Causal Factor
1 | Operations Uncertainties—Errors caused by doubts regarding the information presented on the job, such as indications that do not accurately reflect plant conditions, controls that provide no confirmation after state change, and inconsistency in labels
2 | Limited Capabilities—Events that could have been avoided if additional information had been available, such as trending or alarms/indications with levels of severity
3 | Misoperations—Errors, or “slips,” that result from lapses of attention while monitoring displays or that occur while implementing intended plans
4 | Equipment Control—Errors that are usually due to overreliance on equipment
5 | Design Issues—Errors caused by the design of the equipment, such as controller spacing and function allocation

Operator Decision versus Operator Action

Each error causal factor pertains to operator decision, operator action, or both. Decision represents cognitive errors related to operator knowledge and judgment. Action, on the other hand, represents operator physical errors. The industry currently uses a similar two-part model: human performance errors (or regular errors) and technical human performance errors. [1] We used chi-square tests to determine if collective operator experience shows a tendency toward cognitive or physical errors for each type of error.

Study Participant Comments

Over 30% of the study participants provided additional comments. These were reviewed and, with the exception of non-HSI issues, all comments fell into the complete list of causal factors that we initially created. The non-HSI issues include operator vigilance, communications, procedural errors, time pressure, distractions in the field, inadequate staffing, collaboration, and training.

Approximately one-third of the comments are related to equipment labels. As a group, operators believe that all labels should be consistent throughout the plant; each label should also be easily distinguished from others. Specific attention should be paid to nomenclature and font size. Many operators also expressed a lack of trust in equipment; since there are no predictions of equipment behaviors, it is often difficult to plan for contingencies.

DISCUSSION

The results of this study provide NPPs with the basis for a formal corrective action guideline. When an MCR error occurs, the Decision-Action Model (Table 2) may be consulted for the most effective resolution, and the Simplified Error Causal-Factor Structure (Table 1) may be used to determine the implementation time frame based on level of importance. The suggested corrective actions presented in Table 2 use results from another study conducted by Chen-Wing/Davey [1], as modified to be used in the guidelines.
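The chi-square check described under “Operator Decision versus Operator Action” can be sketched with a one-degree-of-freedom goodness-of-fit test. The counts below are hypothetical, not study data:

```python
def decision_action_tendency(cognitive, physical, critical=3.841):
    """Test whether operators' attributions for one error type depart
    from an even cognitive/physical split; 3.841 is the chi-square
    critical value for 1 degree of freedom at the 0.05 level.
    Returns (statistic, significant)."""
    expected = (cognitive + physical) / 2
    stat = ((cognitive - expected) ** 2 + (physical - expected) ** 2) / expected
    return stat, stat > critical

# Hypothetical tallies: 90 operators call an error type cognitive, 48 physical
stat, significant = decision_action_tendency(90, 48)  # stat ≈ 12.78 → significant
```

A significant result indicates the error type leans clearly toward decision (cognitive) or action (physical), which in turn points to the corresponding cell of the Decision-Action Model.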

http://portal. indications. This discrepancy may be explained by the difference between perception and reality. Davey.” Proceedings of the Symposium on Human Interface 2009 on Human Interface and the Management of Information. pp. associated items (such as visual/ audio coordination and over-indication) were not ranked high based on the survey Search/2000-1. Safety. Number 1 217 .1610750&coll=GUIDE&dl=GUIDE& CFID=62697787&CFTOKEN=13180906. This phenomenon poses problems for the equipment designer. It should be noted that although most operator comments are related to the organization of information.C. San Diego. and L. http://www. Chang. While many modification projects at NPPs involve members of the operations department during the design phase.htm). Part II (held as part of HCI International 2009). Liao. 2009. and pre-job briefings Suggested Corrective Action: N/A Correct Decision + Incorrect Action • Control panel visually crowded • Controls too close together • Controls too far apart Incorrect Decision + Incorrect Action • Non-intuitive control • No alarm noting abnormal conditions and/or failures • Time limit to operation • Incorrect function allocation—manual actions designed to be automated Suggested Corrective Action: Modify control room and extend human factor reevaluation to the condition Suggested Corrective Action: Provide additional operator training. Decision–Action Model Correct Decision + Correct Action (No incident) Incorrect Decision + Correct Action • Unreliable indication • No feedback • Insufficient plant information • Display challenges • No trending • Poor color/sound coordination • Boolean indication • Equipment being operated incorrectly • Safety features being defeated • Equipment allowing failures • Over-reliance on equipment Suggested Corrective Action: Improve operations procedures.” Second Workshop on Human Error. Seattle. July 19– download/HEW98_Design_for_HE. REFERENCES [1] S. “Designing to Avoid Human Error Consequences. 
Operators as a group identified unreliable indication as the highest failure cause. It should be noted that although most operator comments are related to the organization of information, associated items (such as visual/audio coordination and over-indication) were not ranked high based on the survey results. This discrepancy may be explained by the difference between perception and reality. When confronted with an abstract question, an operator may respond that lack of information is the biggest HSI problem in the MCR. But if asked to present examples, the same operator may list labeling problems, which is part of information organization. There is also agreement that lack of information—such as lack of alarms, indications, and feedback—leads to human performance errors.

This phenomenon poses problems for the equipment designer. While many modification projects at NPPs involve members of the operations department during the design phase, the end users' perceptions of HSI problems may be very different from those they face when the equipment is installed. It is, therefore, vital to present operators with some form of equipment simulation during the design process to obtain insights into the real challenges.

REFERENCES

[1] S.N.L. Chen-Wing and E.C. Davey, “Designing to Avoid Human Error Consequences,” Second Workshop on Human Error, Safety, and System Development (HESSD ’98), Session 5, Seattle, WA, April 1–2, 1998, http://www.crew-ss.com/download/HEW98_Design_for_HE.pdf (or access via http://www.crew-ss.com).

ADDITIONAL READING

Additional information sources used to develop this paper include:

• E. Davey, “Criteria for Operator Review of Workplace Changes,” 21st Annual Canadian Nuclear Society Conference, Toronto, Ontario, Canada, June 11–14, 2000, access via http://www.cns-snc.ca (CNS_2000_Review_Criteria.pdf, Search/2000-1.htm).

• J.J. Chang, H. Liao, and L. Zeng, “Human-System Interface (HSI) Challenges in Nuclear Power Plant Control Rooms,” Proceedings of the Symposium on Human Interface 2009 on Human Interface and the Management of Information, Part II (held as part of HCI International 2009, the 13th International Conference on Human-Computer Interaction), San Diego, CA, July 19–24, 2009, pp. 729–737, access via http://portal.acm.org/citation.cfm?id=1610664.1610750&coll=GUIDE&dl=GUIDE&CFID=62697787&CFTOKEN=13180906.

1. index.junis. 97–108. the 13th International Conference on Human-Computer Interaction. “Human-System Interface (HSI) Challenges in Nuclear Power Plant Control Rooms. Harry brings his technical expertise to bear not only in his Bechtel assignment. in July 2009. Janet holds an MSE from Purdue University. West Lafayette. and his Master’s and Bachelor’s degrees with a concentration in Control Theories and Control Engineering from Tsinghua University.apa. Huafei (Harry) Liao. Norman. Janet was an instrumentation and controls engineer with American Electric Power at the D. Beijing.pdf. and she is currently working on several technical papers on the topic. 9–22. • N. Monta. he worked on the North Anna and Edwardsport IGCC projects. http://facta. working on the 760 MW Trimble Unit 2 pulverized-coal-fired power plant. • D. Natio. “Categorization of Action Slips. 154. 2. pp. The Scientific Journal Facta Universitatis: Working and Living Environmental Protection. January 1981. China. 00295493/1995/00000154/00000002/ walep2000-02. Previously. K. Indiana. “An Intelligent Human-Machine System Based on an Ecological Interface Design. 88. For over a year. She is a member of Women in Nuclear and of North American Young Generation in Nuclear. Indiana.A. in Ann Arbor. pp. PhD. No.C. Vol. BIOGRAPHIES Jo-Ling (Janet) Chang is a senior control systems engineer in Bechtel’s Nuclear Operating Plant Services business line. and a BSE in Electrical Engineering from the University of Michigan. Cook nuclear plant. in San Diego. Grozdanović.• M. Her most recent. 1. No. Vol. Janet is a licensed Professional Engineer in Michigan. Vol. J. No. Janet’s area of interest is human factors. Makino.displayRecord&uid= 1981-06709-001. and M.” Psychological Review. March 1995.cfm?fa=search. “Methodology for Research of Human Factors in Control and Managing Centers of Automated Systems. California. she has led and supported a variety of design modification upgrade projects for Southern Nuclear Company. 

ESTIMATING THE PRESSURE DROP OF FLUIDS ACROSS REDUCER TEES

Krishnan Palaniappan — kpalania@bechtel.com
Vipul Khosla — vkhosla@bechtel.com

Issue Date: December 2009

Abstract—Accurate estimates of the pressure drop across piping system reducer tees are critical to line sizing and can have a significant impact on overall project safety and cost. While simplistic calculations can estimate pressure drop when the area and/or flow ratios are very high (close to unity) or very low (close to zero), accurately estimating intermediate ratio values often involves approximations. This paper summarizes the findings of a study comparing reducer tee pressure drop estimates obtained from different calculation methods, such as the K method, Miller's method, and the Truckenbrodt method, and examines how the pressure drop values compare.

Keywords—entrance loss, K method, pressure drop, reducer tee

BACKGROUND

Accurate estimates of the pressure drop across reducer tees in piping systems are critical to line sizing—especially for low-suction-pressure compressors, pressure-reducing valve (PRV) laterals, and flare networks—and can have a significant impact on overall project safety and cost.

PROBLEM STATEMENT

Inaccurately estimating the pressure drop across reducer tees can potentially affect safety and cost. However, estimating the pressure drop in a reducer tee is difficult because significantly less experimental data is available for this type of tee than for standard tees; published reducer-tee investigations seem to be limited to area ratios greater than 0.1. This results in a scenario where, in many instances, large chemical plants with thousands of tee fittings are designed so that the pressure drops for the four types of reducer tee installation are approximated to the branch tee pressure drop, irrespective of the flow pattern.

STUDY SIGNIFICANCE

Based on the type of reducer tee installation, four different flow patterns are possible, as shown in Figure 1. At a reducer tee, the flow split ratio is not always the same as the area ratio and, in addition to the direction changes, the pressure changes caused by acceleration or deceleration also gain significant importance. While simplistic calculations can estimate pressure drop when the area and/or flow ratios are close to unity or close to zero, accurately estimating intermediate ratio values often involves approximations. This paper summarizes a study of reducer tee pressure drop estimates obtained from different calculation methods and examines how the pressure drop values compare.

Figure 1. Reducer Tee Flow Patterns
[Figure: four reducer tee configurations, Type 1 through Type 4, grouped as splitting flows and combining flows.]

© 2009 Bechtel Corporation. All rights reserved.
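All of the estimating approaches compared in this paper express a fitting loss as a dimensionless coefficient K applied to a velocity head, ΔP = KρV²/2. As a minimal sketch of that relationship (the K value and the water properties below are assumed for illustration, not taken from the study):

```python
# Sketch of the loss-coefficient (K) method underlying the estimating
# approaches: delta_P = K * rho * v^2 / 2. The K value and fluid properties
# below are illustrative assumptions, not results from this study.
import math

def velocity(q_m3_per_hr: float, d_m: float) -> float:
    """Mean velocity (m/s) for flow q (m^3/hr) in a pipe of inside diameter d (m)."""
    area = math.pi * d_m ** 2 / 4.0
    return (q_m3_per_hr / 3600.0) / area

def k_method_dp(k: float, rho_kg_m3: float, v_m_s: float) -> float:
    """Pressure drop (Pa) across a fitting with loss coefficient k."""
    return k * rho_kg_m3 * v_m_s ** 2 / 2.0

# Example: 1,500 m^3/hr of water (rho ~ 992 kg/m^3 at 40 degC, assumed)
# in a 36-inch (0.9144 m) run, with an arbitrary illustrative K of 1.0.
v_run = velocity(1500.0, 0.9144)        # ~0.63 m/s
dp_pa = k_method_dp(1.0, 992.0, v_run)  # pressure drop for one velocity head
```

The methods examined differ only in how K is obtained, whether from Crane's simplified sums, Truckenbrodt's correlation, or Miller's chart.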

ABBREVIATIONS, ACRONYMS, AND TERMS

K — an empirically derived local pressure loss coefficient that accounts for losses encountered from pipe fittings such as reducer tees
NB — nominal bore
PRV — pressure-reducing valve
PSV — pressure safety valve
Reynolds number — a dimensionless number that expresses the ratio of inertial forces to viscous forces, thereby quantifying the relative importance of these two types of forces for given flow conditions

METHODOLOGY

Commonly used industry methods to calculate reducer tee pressure drop were critically examined. Oka and Ito [1] summarized the available literature as empirical correlations based on theoretical equations for estimating loss coefficients in a reducer tee. Miller [2] published a chart for estimating K values graphically. Simplistic assumptions derived from methods proposed by Crane [3], along with simplifying assumptions for complex plant design, are still used to quickly estimate loss coefficients.

Most modern grassroots plant designs provide for hydraulic design margins so that simplifying assumptions do not interfere with the system design intent. However, there could be instances where approximations are unacceptable—such as when an engineer performs adequacy checks of safety valve laterals, pumps, or compressors. In these instances, a correct pressure drop estimate is critical in determining whether the system passes or fails.

A study was undertaken to provide engineers a guideline on the various methods that can be used to reliably estimate the Type 1 flow pattern pressure drop. This flow pattern was selected because it was found to be of interest in several safety valve inlet line calculations. The four evaluated methods—two simplistic assumption methods based on Crane, a theoretical equation proposed by Truckenbrodt (as summarized by Oka and Ito), and a graphical method of estimating K values from Miller's chart—are summarized as follows:

• Use a sum of standard tee pressure drop and a sudden contraction using Crane's single-K method.
• Use a sum of standard tee pressure drop and an entrance loss using Crane's single-K method.
• Use the K value predicted by an equation proposed by Truckenbrodt, including the correction factor proposed by Oka and Ito based on experimental verification for small area ratios.
• Use the K value read graphically from Miller's chart, including correction factors proposed by Miller for systems where the Reynolds number of any branch of a tee is below 200,000.

Using these four methods, calculations were performed for area ratios ranging from 0.05 to 0.9 and for flow ratios ranging from 0.1 to 1.0. To evaluate the different area ratios, a reducer tee fitting with a straight run size of 36 inches (0.91 meter) was considered while varying the branch size from 8 inches (0.20 meter) to 34 inches (0.86 meter). The quantity of water flowing—1,500 m³/hr (396,258 gph) of pure water at 40 °C (104 °F) and 590 kPaa (85.6 psia)—was selected so that the highest system velocity was always below the erosion velocity up to an area ratio of 0.1 and the flow always remained in the turbulent zone. These parameters are illustrated in Figure 2.

Figure 2. Reducer Tee Evaluation Parameters
Flow ratio = Q2/Q1; Q1 = 1,500 m³/hr of water @ 40 °C and 590 kPaa; Q2 varies from 0.1 × Q1 to 1.0 × Q1.
Area ratio = A2/A1 = (D2/D1)²; D1 = D3 = 36 in. (fixed); D2 varies from 8 in. to 34 in.
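The study's parameter matrix (36-inch run, 8- to 34-inch branch, 1,500 m³/hr of 40 °C water) can be reproduced in a few lines. The water density and viscosity used here are assumed textbook values, so the Reynolds numbers are indicative only:

```python
# Reproduces the study's area-ratio range and spot-checks branch Reynolds
# numbers. Water properties (RHO, MU) are assumed values for ~40 degC water,
# not data from the paper.
import math

IN_TO_M = 0.0254             # exact inch-to-metre conversion
RHO, MU = 992.0, 6.5e-4      # assumed density (kg/m^3) and viscosity (Pa.s)
Q1 = 1500.0 / 3600.0         # total inlet flow, m^3/s

def reynolds(q_m3_s: float, d_m: float) -> float:
    """Pipe Reynolds number rho*v*d/mu for flow q through diameter d."""
    v = q_m3_s / (math.pi * d_m ** 2 / 4.0)
    return RHO * v * d_m / MU

# Area ratio = (D2/D1)^2 for the smallest and largest branches studied
ar_min = (8 / 36) ** 2       # ~0.05
ar_max = (34 / 36) ** 2      # ~0.89

# Branch Reynolds numbers at the lowest flow ratio (Q2 = 0.1 * Q1)
re_small_branch = reynolds(0.1 * Q1, 8 * IN_TO_M)    # smallest branch
re_large_branch = reynolds(0.1 * Q1, 34 * IN_TO_M)   # largest branch
```

Even at the lowest flow ratio, the branch flow stays well into the turbulent regime; with these assumed properties, however, the Reynolds number in the largest branch falls below the 200,000 threshold at which Miller's correction factors apply, which illustrates why those corrections were included in the evaluation.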

Table 1. Results of Pressure Drop Calculations in kPa
[Table: for each area ratio examined (0.05, 0.12, 0.30, 0.51, 0.69, and 0.89) and each of the four methods (standard tee + sudden contraction, standard tee + entrance loss, Truckenbrodt correlation, and Miller's chart), the calculated pressure drop is tabulated at flow ratios from 0.1 to 1.0. Note: Values in bold indicate the highest values for a particular flow ratio/area ratio combination.]

A total of 440 calculations were performed to evaluate the pressure drop across a reducer tee for various area and flow ratios. The inlet flow was kept constant at 1,500 m³/hr (396,258 gph), while the flow ratio through the branch was varied from 0.1 to 1.0 for each area ratio examined. Table 1 summarizes the results. The impact of the selected method and the variations in results are best illustrated in the following actual project example.

ACTUAL PROJECT EXAMPLE

At the gas inlet to a gas processing facility, three 6Q8 safety valves operating together are required to handle the blocked outlet relief case, considering the huge volumes of liquid and gas that need to be handled in this facility. Although these safety valves are set to protect at a relatively high pressure of 84 barg (1,218 psig), the pressure safety valves (PSVs) are remotely pilot operated because of high pressure drops in the inlet piping. Where safety valves have low set pressures, pressure drop limitations on the inlet and outlet lines are more stringent and can have a serious impact on the design.

To calculate the area requirement, it is necessary to accurately estimate the pressure drop between the pilot line takeoff and the PSV inlet flange. When the pressure drop on a reducer tee in the inlet line was calculated using the four estimating methods, varying numbers ranging from 0.8 bar (11.6 psi) to 1.1 bar (15.9 psi) were obtained. For a single fitting, these variances could be serious enough to change the orifice designation because of insufficient installed area, or they could pose a potential safety concern if overlooked.
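The method-to-method spread reported in this example (0.8 bar to 1.1 bar for the same fitting) can be put in perspective with a quick check; the bar-to-psi factor is a standard conversion, and the percentage is derived here rather than quoted from the paper:

```python
# Spread check on the reported range of estimates for a single reducer tee.
BAR_TO_PSI = 14.5038          # standard conversion, 1 bar in psi

dp_low, dp_high = 0.8, 1.1    # bar, reported range across the four methods

# Relative spread between the least and most conservative estimates
spread_pct = (dp_high - dp_low) / dp_low * 100.0   # ~37.5%

# Cross-check of the psi figures quoted in the text
dp_low_psi = dp_low * BAR_TO_PSI                   # ~11.6 psi
dp_high_psi = dp_high * BAR_TO_PSI                 # ~15.95 psi
```

A nearly 40% difference in the calculated inlet-line loss, applied against a fixed allowable, is easily enough to flip a pass/fail adequacy check.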

Table 2. K Values To Find Branch Flow Pressure Drop for Larger Pipe Sizes, Calculated Using Truckenbrodt Method for Reducer Tees with Type 1 Flow
[Table: K values for flow through the reducing branch, tabulated by percentage of flow through the branch and by percentage of area on the reducing branch of the tee (5% to 100%). K values rise steeply as the branch area percentage decreases.]

CONCLUSIONS

When it is necessary to accurately evaluate the pressure drop in tees, the fitting's flow pattern must be considered. The calculation or approximation method used can significantly affect the system design, depending on the fitting's flow and area ratios.

For area ratios higher than 0.125, Truckenbrodt's correlation, including the correction factor proposed by Oka and Ito, yields pressure drop values that best match the experimental values at all flow ratios. Miller's chart is commonly used in this region. Using simplifying assumptions based on sudden contraction or entrance loss results in a conservative, higher pressure drop.

For small area ratios (0.125 or below), Miller's chart cannot be accurately read. The extension of Truckenbrodt's correlation with the correction factors proposed by Oka and Ito into this range yields higher K values; as a result, the pressure drops predicted are higher than those from the other methods. In this range, simplifying assumptions based on sudden contraction or entrance loss generally yield low pressure drops and are not recommended.

The Truckenbrodt correlation with the correction factor proposed by Oka and Ito is therefore recommended for area ratios lower than 0.125, when K values are more than 6, or when a single method with a conservative estimate of pressure drop is to be used across all area ratios and flow ratios. The use of the K values shown in Table 2 is recommended for these cases.

REFERENCES

[1] K. Oka and H. Ito, "Energy Losses at Tees with Large Area Ratios," Journal of Fluids Engineering, Transactions of the ASME, Vol. 127, Issue 1, January 2005, pp. 110–116, access via http://www.… (jsp?KEY=JFEGA4&Volume=127).

[2] D.S. Miller, "Internal Flow Systems," 2nd Edition, Miller Innovations, 2008, pp. 302–318, see http://www.internalflow.com.

[3] Crane Company, "Flow of Fluids through Valves, Fittings, and Pipe," Technical Paper No. 410M, New York, NY, 1982, access via http://www.scribd.com (…Through-valves-Pipes-and-Fittings).

BIOGRAPHIES

Krishnan Palaniappan, a senior process/systems engineer, joined Bechtel in 2005. A technical specialist in syngas facilities, he has over 15 years of industry experience working on the design of petrochemical, refining, and gas processing units and revamps. Krishnan has a BTech in Chemical Engineering from the National Institute of Technology, Tiruchirappalli, Tamil Nadu, India.

Vipul Khosla, a process/systems engineer, joined Bechtel in 2007 as a graduate engineer and has worked on refinery and LNG projects, including Motiva crude expansion, Angola LNG, and Takreer refinery. He is a Six Sigma Yellow Belt. Vipul received a BE in Chemical Engineering from Panjab University, Chandigarh, India.
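The recommendations in the conclusions reduce to a simple decision rule. The sketch below paraphrases them; the 0.125 area-ratio and K > 6 thresholds come from the paper's conclusions, but the function itself is illustrative, not project code:

```python
# Illustrative decision helper paraphrasing the paper's method recommendations
# for Type 1 reducer tee flow. Thresholds (0.125 area ratio, K > 6) are from
# the conclusions; the wording of the returned strings is this sketch's own.
from typing import Optional

def recommended_method(area_ratio: float,
                       k_value: Optional[float] = None,
                       single_conservative: bool = False) -> str:
    """Suggest a pressure drop estimating method for a Type 1 reducer tee."""
    truckenbrodt = ("Truckenbrodt correlation with Oka-Ito correction "
                    "(Table 2 K values)")
    if single_conservative:
        # One conservative method across all area and flow ratios
        return truckenbrodt
    if area_ratio < 0.125 or (k_value is not None and k_value > 6):
        # Miller's chart cannot be read accurately here, and sudden
        # contraction / entrance loss assumptions read low
        return truckenbrodt
    # Larger area ratios: Truckenbrodt + Oka-Ito best matches experiment;
    # Miller's chart is also commonly used in this region
    return ("Truckenbrodt correlation with Oka-Ito correction "
            "(Miller's chart also usable)")

# Example: an 8-inch branch on a 36-inch run has an area ratio of ~0.05
assert "Table 2" in recommended_method((8 / 36) ** 2)
```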
