Electrical Systems: A Guide for Facility Managers
Facilities Management Library
Qayoumi, Mohammad H.
The Fairmont Press
Electric power production, Electric power distribution.
A Guide For Facility Managers
By Mohammad Qayoumi, PhD, PE
One Penn Plaza, 10th Floor New York, NY 10119
Library of Congress Cataloging-in-Publication Data
Qayoumi, Mohammad, 1951-
Electrical systems: a guide for facility managers / by Mohammad Qayoumi.
Includes bibliographical references and index.
© 1996 by UpWord Publishing, Inc. All rights reserved. No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopy, recording or any information storage or retrieval system, without permission in writing from the publisher.
Published by UpWord Publishing, Inc.
One Penn Plaza, 10th Floor
New York, NY 10119
Printed in the United States of America
10 9 8 7 6 5 4 3 2 1
The information contained in this book has been obtained from sources that are believed reliable. Damages arising from errors or omissions, or as a result of use or misuse of the data or information in this book, are not the responsibility of the publisher, authors, editors or printers of this work. This work is published for professionals seeking information about the subjects contained herein. It is not the intent of this work to provide professional services such as engineering or consulting. If these services are sought, they should be rendered by properly trained, registered, regulated and insured professionals.
Dedicated to the memory of my father, Abdul Q. Qayoumi,
whose unquenchable thirst for education was a source of inspiration
Mohammad ("Mo") Qayoumi, PhD, PE is Vice Chancellor for Administrative Services at the University of Missouri-Rolla. A licensed Professional Engineer and Certified Management Accountant, Dr. Qayoumi has nearly 20 years of experience in facilities management.
Prior to joining the University of Missouri-Rolla, Dr. Qayoumi served as both a facilities administrator and an adjunct faculty member at the University of Cincinnati and San Jose State University for 17 years. He has published more than 50 articles, a book titled Electrical Distribution & Maintenance and several chapters in various other books. In addition, he has made presentations at numerous conferences around the country and internationally. This year, Dr. Qayoumi will serve as an examiner for the Missouri Quality Award Program.
Dr. Qayoumi holds a bachelor's degree in electrical engineering from American University, MS degrees in nuclear engineering and computer engineering, a PhD in electrical engineering and an MBA in finance from the University of Cincinnati.
There are many colleagues and friends whose encouragement and support have been instrumental in writing this book. I would like to express special gratitude to the following people:
My valued colleague and friend, Don Kassing, Vice President for Administrative Services at San Jose State University, was always encouraging and supportive of my professional efforts. I benefited greatly from his valuable advice throughout the years that I worked for him.
I owe a lot of gratitude to John T. Park, Chancellor, University of Missouri-Rolla, for his support and recognition of my professional efforts.
My editor, Craig DiLouie, was a great help, giving me valuable advice on the writing of the book as well as taking the time to organize the material into a logical order. It was a pleasure to work with him.
I would also like to express my gratitude to the many product vendors who helped me obtain the illustrations and the latest information on various equipment.
Finally, the principal debt owed is to my wife Naija, whose unselfish support, dedication and encouragement made the writing of this book possible.
Chapter 1 - Electrical System Design And Management
System Capacity And Replacement
When Should The Electrical System Be Replaced?
Consequences Of Wide Voltage Variations
Combating Voltage Variation
Voltage Phase Imbalances
Increasing Power Factor
Reliability In A Series System
Reliability In A Parallel System
System Reliability Over Time
Sizing Circuits For Motors In A Distribution Panel
Sizing Wiring For Lighting Systems
Sizing Wiring For Other Resistive Circuits
Selecting Circuit Breakers
Needs Assessment In Two Steps
The Critical Path Method
Bidding The Project
Life Cycle Costing
Chapter 2 - Power Generation
Fossil Fuel-Based Generators
Operating Multiple Generators In Parallel
Applications Of Onsite Generators
Peak Shaving To Save Costs
Cogeneration To Save Costs
Chapter 3 - Power Distribution
Utility Distribution Systems
On-Site Power Distribution
Elements Of Transformers
Considerations For Operating Transformers
Low-Voltage Circuit Interrupters
Substations And Switchgear
Chapter 4 - Wiring And Cabling For Power And Communications
Wires In Parallel
Wire Splices And Terminations
Rigid Galvanized Conduit
Intermediate Metal Conduit
Electrical Metallic Tubing
Rigid Aluminum Conduit
Rigid PVC Conduit
Flexible Metal Conduit
Liquid-Tight Flexible Conduit
Termination And Splicing
Transmission Path Establishment
Channel Access And Allocation
Twisted Pair Cables
Fiber Optic Cable
Chapter 5 - Power Quality
Power Quality Problems
Sources Of Harmonics
Effects Of Harmonics
Methods Of Reducing Harmonic Effect
Sags And Swells
Power Conditioners: Tools To Reduce Power Disturbances
Power Quality Measurement
Power Disturbance Measurement
1. What Is The Source Of The Dirty Power?
2. What Specific Type Of Problem (i.e., Harmonic, Transient, Noise, Voltage Sag) Is The Main Concern?
3. What Is The Threshold Of Equipment Susceptibility?
4. Does The Interaction Between The Power Source And The Equipment Mitigate Or Worsen The Problem?
Chapter 6 - Short Circuits, Electrical Failures And Emergency Power
Sources Of Short Circuit Current
Types Of Faults
Coordination Of Protection Devices
Causes Of Electrical Failures
Protective Devices: Surge Arrestors
Emergency Preparedness And Standby Power Systems
Determining Standby Power Requirements
Determining The Appropriate System
Maintenance Can Prevent Short Circuits
Minimizing Impact Of Short Circuits
Emergency Preparedness And Standby Power
Chapter 7 - Rate Structures And Power Industry Trends
How Utilities Charge For Electricity
Regulation Of Utilities
Trends In The Electric Power Industry
Demand Side Management (DSM)
Deregulation Of The Electric Industry
Improvement In Power Quality: Perhaps Guaranteed
Facility Managers Will Be More Savvy Power Buyers
Appendix I - Fundamentals Of Electricity
Units Of Measurement
Electrical Properties Of Material
Resistance And Temperature
Resistance In Parallel Circuits
Single-Phase And Three-Phase Systems
AC Voltage And Current
Electricity And Magnetism
Right Hand Rule
Appendix II - Bibliography Of Sources
Electrical Systems: A Guide For Facility Managers is designed to help facility managers, electrical engineers, maintenance executives and other decision-makers fully understand and optimize their electrical systems. These systems include all equipment and designs from onsite power generation to power distribution and system monitoring and protection.
Electrical Systems is written on two levels. Because power generation and distribution is such a complex field, highly technical information could not be avoided. However, every effort has been made to make this information fully understandable and usable in application. The other level is the management level. Several major chapters address specific management aspects: purchasing energy from the utility, power quality, premises wiring, providing emergency power and system protection, and safety, upgrades and maintenance. In addition, each chapter contains a "Management Aspects" section that boils down the chapter's contents into easily referenced, usable material for the management level. In this way, Electrical Systems is unique in that it targets the entire team that would be involved in decision-making during the design or upgrade of an electrical system.
These are the reasons why Electrical Systems was developed: to bridge the gap between engineer and facility manager, and to help both work together to fully optimize their building's power generation and distribution systems.
Electricity plays an important role in our lives today. From its early beginnings nearly a century ago, the electrical industry has grown at a remarkably high rate, to the point that many of us take it for granted. Just as oxygen is needed to sustain living species, electricity is needed to sustain the endless array of electrical devices that support our current lifestyles. In addition, electricity has some unique characteristics that set it apart from other forms of energy: it cannot be stored economically in any appreciable quantity, and at first glance it appears to be a dormant resource. This creates a unique set of challenges.
First, the generation and distribution of electricity requires real-time control. In other words, to maintain a steady-state system, the production and consumption of electricity must be the same at every instant of time. There is no mechanism to buffer the balance between the two.
Second, it is hard to get a tangible appreciation for the enormous amounts of energy that electrical networks possess. This is because, under normal conditions, the energy flows through the conductors with such ease, speed and elegance that our senses cannot detect it. Usually, we become believers in the vast energy potential of electrical networks only after a damaging result or fatality.
Finally, the inner workings of electricity appear to be a mystery to many individuals. Unlike mechanical systems, for which most individuals have a sensory feel for how they operate, this is not the case with electrical systems. In short, electricity is a resource that plays a critical role in many aspects of our lives, yet our understanding of it falls short.
For facilities managers, electricity is both a challenge and an opportunity. As soon as an office experiences a power interruption, the first question asked of the facilities department is when power will be restored. This indicates that most building occupants are interested in how soon service is restored rather than in what might have caused the problem. Because of the complex nature of electricity, pinpointing the root cause of a failure can be a difficult problem to tackle. Unfortunately, many facilities managers take the easy way out: they respond to the initial failure and, once service is restored, abandon any further action.
A more prudent approach is to identify the root causes of failures so that future problems can be averted. This requires a better knowledge of electrical systems and the way individual circuit elements interact. Unfortunately, this knowledge base has shrunk in the past two decades. With the advent of more employment opportunities in solid-state electronics, many electrical engineering students pursue degrees in computer-related fields rather than power engineering. Consequently, much of the knowledge at the applied level currently resides with the electrical utilities and some of the manufacturers.
Most of the available books fall into two categories. One is written strictly for technicians, giving them step-by-step instructions without much insight into the principles behind their work. The other is written for individuals pursuing graduate research in power engineering. Suitable references that cover the middle ground, especially the level of knowledge required of facility managers, are therefore few. This book intends to meet that particular need.
In Electrical Systems: A Guide For Facility Managers, many important and complex concepts are explained in a non-technical, qualitative manner. The primary audience of the book is facility managers who have minimal or no technical background. Every effort has been made to minimize the number of formulae in the text and to give the reader a general qualitative feel for the concepts. In addition to the traditional topics in electrical distribution, we will also explore maintenance, electrical safety, wiring and cabling, upgrades and renovations, working with technical people, power quality, how utilities charge for service, management aspects and the deregulation of the power industry. Power quality problems alone are costing American industry hundreds of millions of dollars a year. The deregulation of electrical generation is
creating tumultuous times for the regulated utilities, and potential opportunities for power purchasers.
Finally, on a personal note, it was a joy for me to share some of my expertise with you, and I hope this book will meet your needs and expectations.
Let us begin.
Electrical System Design And Management
Electricity plays a central role in our lifestyle, standard of living and economic prowess. Many wonders of science and technology have a very close connection with electrical energy. In this chapter, a number of managerial and design-oriented topics that facility managers must face are addressed in detail including system capacity and replacement, efficiency, reliability, maintenance, power factor and project management.
System Capacity and Replacement
An electrical system should not only be sized to serve the designated load, but should also have adequate spare capacity to carry anticipated future load. For example, the switchgear assembly should have a spare cubicle for additional breakers, the transformers and cables should have adequate margin for additional loads, there should be spare conduits for future wire runs, and there should be extra space in the distribution panels for additional circuits. Spare capacity is an important parameter for facility managers because it can drastically reduce the cost of future incremental expansions: during the original construction project, increasing the system capacity adds only a small cost to the overall project, while such additions will be much more expensive later on.
For example, when installing conduits, the additional material cost between a 1-inch conduit and a 1-1/2-inch conduit is less than 10 percent, and the installation cost is practically the same, yet the larger conduit can accommodate several additional wires. Now in
Some of the management and design concepts discussed in this and future chapters are of a technical nature and use technical jargon as a matter of necessity. For explanations of these concepts and all other fundamentals of electricity, see Appendix I.
the future when an additional circuit is needed, with the larger conduit, we can utilize the spare capacity. On the other hand, if there is no spare capacity, installing a new conduit for the additional circuit will not only be much more expensive, but it will take a longer time and could impact the operation as well. Another important point to mention is that unlike mechanical systems where oversizing can result in large losses, in electrical systems oversizing can only increase losses by a small amount, and in many cases can actually reduce losses. For instance, using a larger size conductor not only provides spare capacity for future loads, but also results in lower line losses and a better system regulation.
When Should The Electrical System Be Replaced?
There are a number of parameters that can trigger the need to replace the facility's current distribution system. These should be viewed as indicators where further analysis and evaluations for replacing a distribution system are initiated:
Age Of The Equipment - Since there are few moving parts in an electrical distribution system, physical wear and tear is in most cases negligible. Usually, the age of the system per se is not a serious problem.
When the insulation protecting the switchgear ages until it becomes brittle and flakes off, however, a short circuit or electrical failure will result. Therefore, if the equipment is 25-30 years old, it should be considered a candidate for replacement.
In many cases, a more limiting factor in this regard is equipment obsolescence, where the original manufacturers of the equipment may no longer support a particular line of products due to changes in technology, business divestiture or lack of market share. Under these circumstances, securing spare parts will become a major hassle for many facility managers. This issue can become critical and may lead to significant downtime and service interruptions. For instance, practically all manufacturers of air-magnetic circuit
breakers and oil switches for voltages ranging from 2400V to 13.2 kV stopped producing these product lines more than a decade ago. That is one of the major reasons why facility managers have modernized their substation equipment.
System Capacity - If load demand exceeds the current capacity of the power distribution system, it is a good opportunity to investigate various options in meeting the additional load. Augmenting the system with additional feeders and leaving the existing system essentially untouched is a simple, low-cost alternative. However, without investigating the age and condition of the current system, this approach may turn out to be myopic. It does not make much sense to make additions to the current system now, only to replace it within a few years. In the long run, it may be less costly to replace some or all of the existing system as part of system expansion.
High Voltage Drop - According to the National Electrical Code® (NEC®), voltage drop should be limited to two percent for feeder circuits and one percent for branch circuits. Lower voltage can cause undesirable characteristics for power systems such as much lower illumination for incandescent lights, malfunction of electronic devices, and lower torque and higher current for electric motors, which heats up the motors and reduces their useful life.
The root cause of higher voltage drop can be a load increase in the system or a maintenance problem. The first thing to do is see if there are any loose connections in the system. Loose connections will increase resistance in the conductor path and result in a higher voltage drop. This problem can easily be pinpointed with the use of an infrared detector. This simple test is recommended before starting any plan to upgrade the distribution system.
If there is no maintenance problem, the next step is to see if the load in the circuit has increased, which in turn has increased the line voltage drop. If so, there are three primary options: add another circuit in parallel to the existing circuit, segregate a portion of the load onto a new circuit, or replace the existing cable with a larger one.
Changing The Voltage - When the load increases in any electrical system, it is recommended to evaluate the possibility of going to higher incoming and distribution voltages. Higher voltages improve the system efficiency and reduce voltage drop. Moreover,
the system capacity is increased greatly. When the system voltage is doubled, the power-carrying capacity of the same size conductor is increased by a factor of four. For instance, if a distribution system is upgraded from 208V to 480V, the power-carrying capacity is increased by a factor of 5.3. Similarly, when a 110V lighting system is upgraded to a 277V system, the same size wire can carry more than five times as many lamps.
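The voltage-upgrade arithmetic above can be sketched in a few lines. This is an illustrative calculation only; the voltages are the ones cited in the text:

```python
def capacity_gain(v_old, v_new):
    """Rule from the text: for the same conductor and the same percent
    loss, power-carrying capacity scales with the voltage ratio squared."""
    return (v_new / v_old) ** 2

# Doubling the system voltage quadruples the capacity.
print(capacity_gain(120, 240))             # 4.0
# Upgrading a 208V distribution system to 480V: ~5.3x capacity.
print(round(capacity_gain(208, 480), 1))   # 5.3
# 110V lighting upgraded to 277V: more than five times as many lamps.
print(round(capacity_gain(110, 277), 1))
```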
System Reliability - If the system experiences higher than normal failures and the frequency of failures is increasing, it may be time to examine the condition of the existing distribution equipment and evaluate the need for its replacement. First, investigate whether the lower reliability is due to a lack of adequate maintenance. If, instead, the major cause of low reliability is the equipment itself, replacement of the current distribution system is warranted.
Safety Concerns - If the equipment has deteriorated to a point where it may cause danger to people or property, then it definitely is a candidate for replacement. Obvious warning signs include lack of adequate safety devices, exposed high-voltage surfaces, dilapidated insulation, cracked and leaky terminations, etc. Since electricity is an unforgiving force, no chances should be taken.
Lack of adequate care that results in serious injury or fatality not only holds owners and operators of the distribution system liable, but in many situations facility managers can be held criminally liable as well.
System Automation - Automation is proliferating in almost all industries, and it has two direct impacts: power requirements increase, and power failures become more costly than before (downtime of automation systems results in bigger losses). Since automation projects are capital-intensive, upgrading or replacing the distribution system may not add a high percentage to the cost of an automation project, and the return on such an investment will be high due to averting or reducing potential failures and capacity shortages.
High Maintenance Cost - Because of age, equipment condition, poor design and/or ambient condition, the maintenance cost of the power distribution can be excessive. To get a complete picture, maintenance costs can be compared to prior periods or
benchmarked with other entities. When the repair hours or cost increases beyond a certain threshold, it makes sense to evaluate the replacement of the distribution system. A modernization plan will be the most cost-efficient alternative given all of the direct and indirect costs of unscheduled downtime.
For any system, we would ideally like to utilize all of the power that is put into the system. In other words, the output should be the same as the input. In reality, however, all systems have losses, and electrical systems are no exception. This implies that the output power will always be less than the input power, with the losses dissipated as heat and noise. By the law of conservation of energy, the difference between the input and output power is equal to the system losses. Efficiency is equal to the ratio of output power to input power:

Efficiency = Output Power / Input Power
As can be seen from the above formula, since input power is always larger than or equal to output power, the maximum value of efficiency is 1. This is also intuitively obvious: an efficiency above 100 percent would imply a violation of the law of conservation of energy. Although electrical systems have higher efficiencies than mechanical systems, in large electrical systems the losses add up and can be significant.
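As a minimal sketch of the efficiency ratio (the 97 kW transformer figure is an assumption for the example, not from the text):

```python
def efficiency(p_out_watts, p_in_watts):
    """Efficiency = output power / input power; always <= 1,
    since losses make the output smaller than the input."""
    if p_out_watts > p_in_watts:
        raise ValueError("output cannot exceed input")
    return p_out_watts / p_in_watts

# A hypothetical transformer delivering 97 kW from a 100 kW input:
eta = efficiency(97_000, 100_000)
losses = 100_000 - 97_000           # dissipated as heat and noise
print(eta, losses)                  # 0.97 3000
```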
The most obvious reason for improving efficiency is economic. With today's cost of energy, efficiency issues cannot be ignored. In fact, the significance of power losses goes far beyond the obvious economic considerations: the losses in electrical systems convert to thermal energy and dissipate as heat, elevating the temperature of system components. Because heat is a primary cause of electrical equipment failure, the higher temperatures shorten the life of electrical equipment. As a rule of thumb, a 10°C increase in insulation temperature lowers the life expectancy of that equipment by half (see Appendix I).
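The halving rule lends itself to a one-line sketch; the 20,000-hour base life below is an assumed figure, not a value from the text:

```python
def insulation_life(base_life_hours, temp_rise_c):
    """Each 10 degree C rise in insulation temperature halves
    the expected life (the rule of thumb quoted above)."""
    return base_life_hours * 0.5 ** (temp_rise_c / 10.0)

# Insulation rated for 20,000 hours, run 20 degrees C hotter:
print(insulation_life(20_000, 20))   # 5000.0
```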
To address this problem, auxiliary cooling and ventilation devices
will be needed to lower the operating temperature of the electrical equipment. The addition of this equipment increases installation as well as ongoing operating costs. It should also be kept in mind that the auxiliary equipment generates its own share of heat and ironically contributes to raising the overall temperature. So the losses, and the solutions to abate them, end up having a cascading effect.
Another problem with high system losses is the adverse impact on system capacity. When system efficiency is low, the available output capacity is lower, which necessitates increasing the rating of system components by one or more sizes, at a higher installation cost. Since electrical equipment is available only in standard sizes, choosing the next size up makes it likely that a component will be lightly loaded. Most system components, with the exception of cable, are less efficient at light load than near full load. Consequently, the efficiency of the overall system will be adversely affected by going to larger components.
As we can see from the above discussion, low efficiency has many adverse impacts on power systems. This is why it is important for facility managers to pay close attention to the efficiency of electrical systems; doing so can avert increased installation and operating costs, shortened equipment life, and equipment failure. Sometimes, choosing higher-efficiency components carries a higher first cost, which can be recovered through operating savings. To provide the analytical tools to evaluate such options quantitatively, a few basic financial concepts are briefly discussed later in this chapter.
Regulation is defined as the percentage difference between the no-load and full-load voltage. Distribution components such as cables, transformers, etc. have some resistance. At no load, when the current is zero, the line voltage will be at its highest level. When the power circuit is loaded and the current increases, the system losses also increase, which results in a lower terminal voltage delivered to the load. In other words, the more load that is connected to the system, the higher the voltage drop will be.
Usually, distribution systems are designed to limit the total voltage drop to less than 10 percent. One obvious suggestion is to
step up the no-load voltage enough to compensate for the drop caused by the full load, but unfortunately both higher and lower voltages will cause problems.
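In formula terms, percent regulation is commonly computed from the no-load and full-load voltages. A small sketch, with hypothetical 480V/460V meter readings:

```python
def percent_regulation(v_no_load, v_full_load):
    """Percent regulation = (V_no_load - V_full_load) / V_full_load * 100."""
    return (v_no_load - v_full_load) / v_full_load * 100.0

# A feeder measuring 480V with no load and 460V at full load:
print(round(percent_regulation(480, 460), 1))   # 4.3
```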
Consequences Of Wide Voltage Variations
Voltage regulation is an important factor for facility managers because the performance and life expectancy of equipment can be negatively affected.
For an electric motor serving a given load, the line current varies inversely with the voltage. Because motor torque varies with the square of the voltage, a 10 percent drop in voltage translates into a 19 percent drop in starting and running torque as well as maximum overload capacity. The lower torque could impair the motor's ability to serve the desired load. Moreover, the higher winding temperature caused by the increased current will significantly reduce the life expectancy of the motor. On the other hand, a 10 percent increase in voltage will increase the motor torque and overload capacity, but can also result in severe winding overheating, which shortens the motor's useful life.
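The 19 percent figure follows from torque varying with the square of the applied voltage; a quick check:

```python
def torque_change_pct(voltage_change_pct):
    """Motor torque is proportional to voltage squared, so a
    fractional voltage change scales torque by (1 + dv)^2."""
    ratio = (1.0 + voltage_change_pct / 100.0) ** 2
    return (ratio - 1.0) * 100.0

print(round(torque_change_pct(-10)))   # -19: a 10% voltage drop
print(round(torque_change_pct(10)))    # +21: a 10% voltage rise
```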
Voltage variations have similar effects on many other devices. For example, for a 120V incandescent lamp, a 10-volt variation increases its light output by 34 percent, but will reduce its useful life by two-thirds. Similarly, a 10V drop reduces the light output by 25 percent and its useful life will be extended by a factor of 3. For a fluorescent lamp, both the lower and higher voltage outside the normal operating level will reduce efficiency and shorten useful life. High voltage causes overheating of the ballast, premature blackening of the lamp ends, and early lamp failure. Low voltages cause difficulty in starting, which will shorten life. For high-intensity discharge (HID) lamps such as high pressure sodium, metal halide and mercury lamps, a 10 percent drop in the line voltage will result in a 30 percent drop in light output. Low voltage conditions require repeated lamp starting, which in turn will reduce useful life. In addition, higher voltage will raise the arc temperature which could damage the glass enclosure.
Combating Voltage Variation
As one can see from the above discussion, voltage regulation is an important parameter and proper care is needed. There are a number of parameters that can affect regulation: system efficiency, power factor and large load variation. If the system efficiency is improved by lowering the distribution system losses, the voltage
variation between full- and no-load will be reduced. Improving the system's power factor will reduce system current for the same level of real power, which will in turn lower line losses and improve voltage regulation.
Changing Transformer Taps - Seasonal load variation can be counteracted by changing the transformer taps. For example, if electrical consumption is significantly higher in the summer, line voltage can be increased in the summer by changing the transformer taps. The small boost in voltage will compensate the higher voltage drop caused by the additional load. However, it is important to change the transformer taps at the end of the summer, because otherwise the line voltage will be higher than the normal operating voltage. For more sensitive electronic equipment which requires tighter voltage regulation, a dedicated regulator is recommended.
Voltage Phase Imbalances
In addition to regulation, another voltage variation issue is phase voltage imbalance. In a three-phase, four-wire system serving both three-phase and single-phase loads, if the single-phase load is not balanced among the phases, the result is phase voltage imbalance. In a balanced three-phase system, the voltage between any two phases is identical; in an unbalanced system, this is no longer the case. If a motor is connected to an unbalanced voltage source, the motor efficiency will drop and the rotor temperature will rise.
Many electronic devices such as computers are affected if the voltage imbalance is more than 2 percent.
The best way to address the problem of voltage imbalances is to segregate the three-phase and single-phase loads and feed them via separate transformers. If this is not practical, try to balance the single-phase load. This means divide the single-phase loads into three equal groups to the degree possible and connect each group to a different phase.
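The grouping step can be sketched as a simple greedy assignment. This heuristic and the kW figures are illustrative assumptions, not a procedure from the text:

```python
def balance_single_phase_loads(loads_kw):
    """Greedy heuristic: place each single-phase load, largest
    first, on the phase currently carrying the smallest total."""
    phases = {"A": [], "B": [], "C": []}
    for load in sorted(loads_kw, reverse=True):
        lightest = min(phases, key=lambda p: sum(phases[p]))
        phases[lightest].append(load)
    return phases

groups = balance_single_phase_loads([5, 3, 3, 2, 2, 1, 1, 1])
print({p: sum(ls) for p, ls in groups.items()})   # each phase totals 6
```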
For AC power, the voltage and current may or may not be in phase. In other words, the voltage will not necessarily increase and decrease simultaneously with the current; the voltage can lead or lag the current (see Figure 1-1).

Figure 1-1. Relationship between real and apparent power.

This phase differential causes a difference between apparent and real power, with the result being the power factor (the ratio of real to apparent power), which is expressed as a decimal or a percentage.
It is important for electrical distribution systems to have a high power factor (0.90 or greater) for several reasons:
1. A high power factor reduces distribution losses. For example, if the power factor of a 100 horsepower, 208V motor is increased from 0.85 to 1 the line losses will drop by 35 percent.
2. A high power factor will help in stabilizing system voltage, improving regulation (see above discussion on benefits of good regulation).
3. A high power factor eliminates costly utility penalty charges. The utility is likely to assess a penalty on the electricity user if the user's facility employs low power factor equipment because the utility will have to employ extra equipment to handle the load.
4. A low power factor decreases system capacity and system efficiency. Often, low power factor equipment requires double the wiring, which poses a much higher cost to install and operate.
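Power factor itself is just the ratio of real to apparent power. A minimal sketch, with a hypothetical 85 kW / 100 kVA load:

```python
import math

def power_factor(real_kw, apparent_kva):
    """Power factor = real power / apparent power (0 to 1)."""
    return real_kw / apparent_kva

pf = power_factor(85, 100)            # 0.85 -- below the 0.90 target
angle = math.degrees(math.acos(pf))   # phase angle between V and I
print(pf, round(angle, 1))
```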
Increasing Power Factor
The power factor of a distribution system is commonly increased by adding capacitors to the power system. If adequate capacitors are added, the effective system power factor will approach a value of 1 (or 100 percent). The question, then, is how to determine the amount of capacitance needed to increase the power factor from its current level to the desired level; tables of power factor combinations are provided by capacitor manufacturers. When adding capacitors, care should be taken not to overcompensate the power factor, for two reasons: adding more capacitors than needed is a waste of money, and, more importantly, overcompensation results in localized voltages higher than the system voltage, which can damage equipment.
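The manufacturers' tables embody a standard formula; a sketch of the underlying calculation, with illustrative numbers (100 kW load raised from 0.80 to 0.95 power factor):

```python
import math

def correction_kvar(real_kw, pf_now, pf_target):
    """kVAR of capacitance needed to raise power factor:
    Q = P * (tan(acos(pf_now)) - tan(acos(pf_target)))."""
    return real_kw * (math.tan(math.acos(pf_now))
                      - math.tan(math.acos(pf_target)))

# Raising a 100 kW load from 0.80 to 0.95 power factor
# calls for roughly 42 kVAR of capacitors.
print(round(correction_kvar(100, 0.80, 0.95), 1))
```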
It is essential for facility managers to realize that after capacitors are installed, proper care can ensure long life. Otherwise, the life of a capacitor can be shortened by overheating, overvoltage or physical damage. So periodic inspection of capacitors must include checking the ambient temperature, ventilation, line voltage and capacitor fuses. If capacitors must be disconnected, it is important to discharge them by connecting a heavy duty 50 kilo-ohm resistor between terminals and ground. This is an important safety matter because discharging capacitors by short-circuiting them can result in sudden release of large amounts of energy that may lead to personal injury.
System reliability refers to the probability that a system is available for use. System reliability is therefore a stochastic rather than a deterministic quantity. To better appreciate the related concepts, let us look at the fundamentals of statistics and probability.
The most popular continuous distribution is the normal distribution. Although the distribution is an idealization, it is often a reasonable approximation of a real situation. The graphical representation of the normal distribution is shown in Figure 1-2 as a bell-shaped curve. As we can see from the graph, this distribution is symmetric and unimodal: the mean, median and mode are all equal. Although the value drops rapidly as one moves away from the average in either direction, it approaches zero only asymptotically.
The normal distribution curve has another useful characteristic relating the probability of occurrence to the deviation from the average: 68.26 percent of occurrences will fall within ±1 standard deviation of the mean, and similarly the probabilities for ±2 and ±3 standard deviations are 95.46 percent and 99.73 percent. For most practical real-life problems, ±3 standard deviations is viewed as the system capability.
Example: Based on an empirical study of the life expectancy of a certain brand of light bulb, data was gathered tabulating hours of useful life (X) against the number of occurrences (N) for a sample of 190 bulbs. (The original data table is not reproduced here.)
Assuming that the life expectancy of the light bulb is normally distributed, what can be said about this brand of light bulbs?
Solution: First find the average life of the light bulb, the occurrence-weighted mean ΣNX/ΣN, which works out to 1,295 hours.
Next find the standard deviation by tabulating, for each value of X, its deviation from the mean, the square of that deviation, and N times that square, then summing the last column:
Variance = 6,264,750/190 = 32,972
The standard deviation is the square root of the variance, or 182.
(Figure 1-2. Normal probability curve.)
From the above results, we can infer that the average useful life of the bulb is 1,295 hours. There is 68.26 percent probability that a bulb will have a useful life between 1,113 and 1,477 hours. Similarly, there is a 95.46 percent probability that a bulb will have a useful life between 931 and 1,659 hours, and there is a 99.73 percent probability that a bulb will have a useful life between 749 and 1,841 hours.
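The grouped-data calculation in the example is easy to automate. The following is a minimal Python sketch; the data table below is hypothetical (the book's actual figures are not reproduced), chosen only to give a similar sample of 190 bulbs:

```python
import math

# Hypothetical grouped life-test data: (hours of useful life, occurrences).
# The book's actual table is not reproduced here.
data = [(900, 10), (1100, 40), (1300, 90), (1500, 40), (1700, 10)]

n = sum(count for _, count in data)                            # sample size
mean = sum(x * count for x, count in data) / n                 # weighted mean
variance = sum(count * (x - mean) ** 2 for x, count in data) / n
std_dev = math.sqrt(variance)

print(round(mean), round(std_dev))  # 1300 184 for this hypothetical data
# About 68.26% of bulbs should fail within one standard deviation of the mean:
print(round(mean - std_dev), round(mean + std_dev))
```

The same three lines of arithmetic reproduce the ±1, ±2 and ±3 standard deviation intervals used in the example above.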
From this example, one can see the strength of this methodology in determining when to expect most of the lamp failures, and hence the most economical time to do group relamping or to replace burned-out lamps as needed.
Additionally, the above methodology can be used to determine the probability of failure of any element in the power system. This example was a simple case of looking at the failure rate of one discrete component. Now let us investigate when we have two or more discrete elements connected in series or in parallel.
Reliability In A Series System
For a series system, the total reliability of the system is equal to the product of the individual reliabilities. For example, if the reliabilities of the cable, the transformer and the circuit breaker are 0.9, 0.92 and 0.96, the overall system reliability is:
R = 0.9 × 0.92 × 0.96 = 0.795
which proves the point that a chain is only as strong as its weakest link.
Reliability In A Parallel System
For a parallel system, the total system reliability is equal to one minus the product of the individual unreliabilities. For example, if the reliabilities of three cables connected in parallel are 0.9, 0.92 and 0.96, the overall system reliability is:
R = 1 − (1 − 0.9)(1 − 0.92)(1 − 0.96) = 1 − 0.00032 = 0.99968
which shows that the overall system reliability will be better than that of the best individual element in the system. Actual electrical distribution systems, however, consist of many parts, some connected in series and some in parallel. Given the reliability of the individual components, the overall system reliability can be calculated by analyzing the system as a combination of series and parallel models.
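The two rules can be expressed as short functions. A minimal Python sketch, using the cable/transformer/breaker figures from the examples above:

```python
from functools import reduce

def series_reliability(reliabilities):
    """A series system works only if every element works:
    total reliability is the product of the individual reliabilities."""
    return reduce(lambda a, b: a * b, reliabilities)

def parallel_reliability(reliabilities):
    """A parallel system fails only if every element fails:
    total reliability is one minus the product of unreliabilities."""
    unreliability = reduce(lambda a, b: a * b, (1 - r for r in reliabilities))
    return 1 - unreliability

parts = [0.9, 0.92, 0.96]
print(round(series_reliability(parts), 3))    # 0.795
print(round(parallel_reliability(parts), 5))  # 0.99968
```

For a mixed network, one collapses each purely series or purely parallel group with the matching function and repeats until a single number remains.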
System Reliability Over Time
When an individual equipment item or system is put in operation and run until it fails, statistically one can identify three specific periods, as shown in the ''bath tub curve'' diagram in Figure 1-3. These three periods have different causation patterns for failure.
The first part is referred to as the burn-in period. The failures are usually due to a design flaw or lack of effective quality control during manufacturing or field installation of the equipment.
The second part is referred to as the normal operating period,
where the failure rate is relatively low and constant. The failure is predominantly random, but can be positively influenced by good operating and maintenance procedures. However, a major reduction in failure rate is only possible in a redesign situation.
The third part is the wear-out failure period. The failure is due to equipment insulation fatigue, embrittlement and breakage. A reduction in the failure rate requires a good preventive maintenance program and replacement of critical parts.
As we can see from the diagram in Figure 1-3, the failure rate increases over time as we would intuitively expect. This is a qualitative view. However, as facility managers we are interested in the length of time that equipment or systems will operate without failure. In other words, for repairable equipment we are interested in "time between failures" (TBF) and for non-repairable equipment we are interested in "time to failure."
To calculate this, normally the second period of the bathtub curve (the normal operating period) is evaluated. As mentioned earlier, the failure rates in this area are constant. The distribution of TBF will be exponential as shown in Figure 1-3.
By definition, the average or mean time between failure (MTBF) for an exponential graph is the inverse of the failure rate. Using MTBF, one can determine the probability of survival over time.
Example: Assume that the MTBF for an incandescent bulb is 4,000 hours. What is the chance that the bulb will still be operating after the first 100, 1,000 and 10,000 hours?
R = exp(−T/MTBF)
where:
R = chance of survival (reliability)
T = specified time period of fault-free operation
MTBF = mean time between failures, the reciprocal of the failure rate
So R(100) = exp(−100/4,000) = 0.975, R(1,000) = exp(−0.25) = 0.778 and R(10,000) = exp(−2.5) = 0.082.
(Figure: Graphical representation of reliability versus time.)
This means that there is a 97.5 percent chance that the bulbs will be working after the first 100 hours, a 77.8 percent chance they will be working after 1,000 hours, and an 8.2 percent chance after 10,000 hours.
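The survival calculation is a one-liner once MTBF is known. A short Python sketch of R = exp(−T/MTBF), using the 4,000-hour bulb example:

```python
import math

def survival_probability(hours: float, mtbf: float) -> float:
    """Chance of fault-free operation for `hours`, given a constant
    failure rate: R = exp(-T / MTBF). Valid for the flat middle
    region of the bathtub curve."""
    return math.exp(-hours / mtbf)

MTBF = 4000  # hours, from the incandescent-bulb example
for t in (100, 1000, 10000):
    print(t, round(survival_probability(t, MTBF), 3))
```

Note that after one full MTBF (4,000 hours) the survival probability is only exp(−1), about 37 percent, which illustrates why MTBF must not be read as a guaranteed service life.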
Some people have difficulty with the physical significance of MTBF. MTBF refers to an average time between failures and assumes the equipment can be returned to operation after each failure. Since it is an average parameter for a large sample, the failure behavior of an individual equipment item could be quite different. For the above analysis, the failure rate was assumed to be constant, which fits most actual situations. MTBF is not the same as operating life, service life or any other index that determines overhaul time. The scope of reliability studies consists of:
Collection and evaluation of component failure data.
Determining the reliability standards.
Development of mathematical models and solutions.
Verification and evaluation of results.
The most difficult element of a reliability study is data collection, because incomplete or contaminated data can lead to erroneous results. In addition to component failures, another element that affects system reliability is related to human error.
Human error can be divided into three types: random, systematic, and sporadic.
Random error results from the random variability of a system. It is influenced by a variety of factors such as personnel selection, capability, training and type of supervision.
Systematic error is due to systematic variability such as organizational structure, feedback processes to employees, tools and equipment available to the staff.
Sporadic errors are like anecdotal information based on occasional episodes. They are hard to predict and difficult to control.
Maintainability is the combination of design characteristics of a system that will permit and facilitate the accomplishment of maintenance tasks under normal conditions by personnel of average skills. Maintainability is concerned with bringing a product to its useful condition, and is a function of how equipment is designed, how it is installed, the amount of clearance around the equipment, and the level of self-diagnostics that the system has.
Maintainability is directly related to reliability, because effective maintenance will positively impact equipment life expectancy or system reliability. Maintainability can be broken into two categoriespreventive maintenance and corrective maintenance.
Preventive maintenance refers to the periodic replacement of parts and check up to preclude failure.
Corrective maintenance involves action after a problem has been detected. Two parameters measure the degree of maintainability: mean time to detection (MTD), the average time needed to diagnose a fault, and mean time to repair (MTR), the average time needed to fix the fault.
Availability of an equipment item or a system is the probability that the equipment or system performs satisfactorily under normal operating conditions. It therefore takes into account the downtime as well as the operating time. Availability is the ratio of operating time to operating time plus downtime.
(Figure 1-4. Relationship between reliability and availability.)
The motivation behind studying these concepts is to find ways to improve equipment and system availability. To better appreciate the concept of availability, let us examine the interrelationship of the concepts introduced in this section. As we can see from Figure 1-4, system availability can be increased by increasing MTBF and by reducing MTD and MTR. There are a number of different approaches to determining system availability. The common ones are discussed below:
Network Method - This approach is based on the assumption that every system is a collection of individual components that are connected in series and parallel. Therefore, system reliability can be calculated in this manner. Afterwards, by knowing the system MTD and MTR, the availability can be determined. This is a simple and straightforward method. One of the limitations of this approach is the assumption that component failures are always independent. In other words, component interactions cannot be taken into account.
Fault Tree Analysis - The fault tree analysis is related to the network method. It is a systematic approach to system failure events, subsystem failure events and component failure events which cause them.
The fault tree is constructed as a logic tree where the arcs represent system, subsystem or component failure events, and the vertices represent logic operations that relate the failure events with their inputs and outputs. A failure that originates from a single event will be at the root of the tree. It will be the main failure. The next level will represent the causes and the result of that failure. The same process will be repeated to subsequent levels.
Usually, the aim of the fault tree is to analyze and determine the probability of the root event. In regards to calculating availability, it will be the same as the network method.
Simulation Method - The two methods discussed above, although powerful, are not very practical for actual systems. To overcome this limitation, computer simulation modeling is used. Here, after observing the system over time, the system reliability can be estimated.
Simulation is treated like a series of real experiments, whose events are made to occur at times determined by the random processes based on predetermined probability distributions. One of the common approaches is the Monte Carlo method, which is easy to set up. The main shortcoming, however, is the large number of experiments required in most cases. So the computer time can be excessive if the system includes many independent states. But the simulation approach is still the most viable option.
Today, there are many simulation software packages available for PC applications.
Improving System Availability - There are several general approaches to improving system availability. It can be accomplished by improving system reliability, increasing MTBF and reducing MTD and MTR.
System reliability is increased first by choosing reliable system components, and component reliability is sustained by a good preventive maintenance program. The system topology also plays an important role: of two systems with comparable components, one arranged primarily in series and the other primarily in parallel, the parallel system will be the more reliable, all other things being equal.
Another factor that can improve system reliability is having a standby power source.
Finally, the MTD and MTR can be decreased by having good system documentation, qualified personnel, effective training, etc.
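The levers discussed above (MTBF up, MTD and MTR down) can be sketched numerically. A minimal Python illustration, assuming the common steady-state formulation A = MTBF / (MTBF + MTD + MTR); the feeder figures are hypothetical:

```python
def availability(mtbf: float, mtd: float, mtr: float) -> float:
    """Steady-state availability: uptime over uptime plus downtime.

    Downtime per failure is modeled as the time to detect the fault
    (MTD) plus the time to repair it (MTR).
    """
    return mtbf / (mtbf + mtd + mtr)

# Hypothetical feeder: fails every 8,000 operating hours on average,
# takes 2 hours to diagnose and 6 hours to repair.
base = availability(8000, 2, 6)
print(round(base, 4))  # 0.999

# Halving the repair time raises availability; so does doubling MTBF.
print(availability(8000, 2, 3) > base)   # True
print(availability(16000, 2, 6) > base)  # True
```

The comparison lines show why documentation, training and standby equipment, which attack MTD and MTR, can be as effective as reliability improvements that attack MTBF.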
Preventive maintenance (PM) is the anticipation and correction of equipment failure before it occurs. The underlying principle behind PM is to extend the useful life of equipment and to minimize failure and service interruption.
Frequency Of Preventive Maintenance - The frequency of PM on electrical systems is always a subject of controversy, because too frequent PM is not only a waste of resources but can also adversely impact the life expectancy of certain electrical components. There are a number of factors that generally affect the frequency of PM, including ambient heat, dust, humidity, vibrations and corrosive operating conditions.
To develop PM frequency, use the manufacturer's recommendations as a guideline. This needs to be adjusted based on prior experience records, operator's knowledge and the critical role of the equipment. For example, a main substation will need much more attention than a transformer for an individual building.
Coordination With Available Resources - An important element of developing a PM program is coordination with the resources available. Failure to do so is one of the reasons that many PM programs are not successful. Many institutions develop elaborate plans without any regard to staff availability.
For example, suppose 2,000 man-hours of PM have been identified annually, while in reality the department cannot provide more than 1,000 hours. In such a situation, as the work orders are given to the maintenance staff on a monthly or weekly basis, a large number are returned to the work control center undone. As a result, the PM backlog continuously increases, and the maintenance staff loses faith in the PM program within a relatively short period of time.
Moreover, since the little PM that is completed is not done in a systematic manner, the system could potentially have increased failures which will further reduce the time available for PM.
Relationship To System Reliability - Another problem with many PM programs is their lack of consideration of system reliability.
PM programs are often developed in isolation based on individual PM recommendations without any regard to system reliability.
To overcome these problems, especially in the face of continuing resources shrinkage, the PM program must be based on and derived from the level of desired system reliability and its related cost. A variation of the approach is to examine the current level of staffing and how many man-hours can be allocated to PM. Given that number, prioritize the PM tasks based on system reliability and choose the tasks totaling the available number of hours.
Creativity - As we can see, developing a good PM program is as much an art as it is a science. Creativity and innovation can significantly impact the program. The PM program should be viewed as a dynamic entity and not a static one. In other words, one should constantly experiment with frequency of tasks to determine their impact on system reliability.
For example, one might reduce the frequency of subsequent PM by 10 percent and see if system reliability is impacted negatively. If not, continue this process until the negative impact outweighs the PM time saved. This will be a good way of optimizing the program and free up maintenance personnel for other tasks.
Shutdown Plans - Finally, since most PM for electrical systems necessitate de-energizing the system, an important part of implementing a PM program requires adequate preplanning and developing a good shutdown strategy. Key elements include giving adequate advance notification to areas that will be affected by power shutdown, determining parts and tools that will be needed, and ensuring that adequate staffing is on hand when the PM is to be done.
Maintenance Procedures - A good PM program should include careful inspection of individual components for cleanliness, tightness, discoloration and hot spots. Other PM tasks include adjusting and lubricating moving parts, and testing equipment.
The major source of electrical failure is insulation breakdown. In fact, more than 80 percent of electrical failures are insulation-related. Usually, the presence of moisture, dust, oil and grease, combined with poor air circulation, can cause cracking, treeing, corona and eventually flashover. Other factors that negatively impact insulation dielectric value include excessive heat, vibration, overvoltage and aging.
Based on these facts, proper care of insulation is a central element of performing PM tasks for electrical systems. It is important to use the right type of solvents for cleaning insulators, as improper solvents can damage the dielectric value of insulators.
One way to find out if equipment insulation needs attention is to scan for accumulations of dust, dirt and/or moisture, either visually or with an insulation test. Other indications include a 10-15°C temperature rise and the presence of hot spots. A simple way to check temperature rise periodically is with infrared detectors. The main advantage of surveying electrical equipment temperatures with infrared is that no equipment shutdown is required. It is also a non-contact approach, which is safer.
Inspection Record-Keeping - Keeping good records of inspections can give better insight into potential problems that might not otherwise be obvious. Certain forms of deterioration are so slow that they are difficult to detect in a single inspection, but combined with prior inspections, a particular trend of potential failure might become evident. For example, a single measurement of a cable's insulation value might appear satisfactory, but comparison with prior tests might show that the value has dropped a certain percentage at each test; it then becomes easy to extrapolate an approximate failure time for the cable.
Shutdowns - The success of an effective PM program is based on good planning. When deciding to have a shutdown, try to pick a time that will constitute minimal impact for the operation. PM should be scheduled during relatively slow periods. The shutdown might only occur in the evenings or weekends, for example. Determining when a shutdown can occur is one of the first steps to PM planning.
After management has agreed upon a date, notify all parties involved and seek input from the occupants as to the level of emergency power needed during the shutdown. If it is possible, plan the shutdown at a time when the weather is moderate. That reduces the need for further contingency planning.
After determining when the shutdown can occur, assemble the manpower, supplies and equipment needed to perform the required tasks, such as cleaning agents, hand tools, cleaning cloths, test equipment, vacuum cleaners, walkie-talkies, etc.
Preparing a detailed written procedure for the shutdown is another important step. The reason for a written procedure is to assure that no steps are forgotten during the actual shutdown. The
procedure should be fully understood by all members of the PM team, and a dry run is strongly recommended.
Before performing PM on electrical equipment, secure all power-sensitive equipment. For example, make sure that all sensitive electronic equipment has been shut down and that the elevators have been brought to the ground floor and turned off.
The next step in the procedure is shutting off the main secondary breakers. The reason for this step is two-fold: by shutting down the secondary power, the emergency generators should automatically start and energize the critical loads; and by shutting off the secondary main breakers, the current across the primary breakers drops close to zero, so the primary switches can then be opened without a large current arcing across the switch blades. When the primary switches are opened, it is highly recommended to verify that the switches have been de-energized.
It is a good practice to check the voltage before starting the PM procedures, because even if the primary breaker is turned off there might be an additional path for the flow of electricity into the system. In addition, because of system capacitance (electric charge stored in the system), there can be a significant amount of energy that needs to be drained to ground. Failure to drain this energy can result in electric shock if anybody comes into contact with the system's exposed elements. Also, after turning off the primary system, the switches should be locked out so that nobody can turn them on until the work is completed. If more than one crew is working, it is a good idea to have every crew put on its own set of locks. This way, the power cannot be turned on until everybody has finished the work. This is especially important when crews are working simultaneously in various dispersed areas.
After de-energizing the system and opening the equipment and panels, take out the main fuses, if any, and ground all three phases. At this time, observe and record any abnormalities in the equipment. In other words, inspect the parts for hot spots, discoloration, loose connections, dirt accumulations, presence of white powder residues, etc. Recording such abnormalities can facilitate future system analysis in determining the root cause of problems.
Turning The Power Back On - At this point, the specific equipment PM tasks can be completed. When the work is completed, and we are ready to turn the power back on, the procedure shown below is recommended:
1. Account for all personnel who were working on the system. This way, we can be sure that nobody is still working on the system.
2. Inspect all tools to make sure nothing has been inadvertently forgotten inside the switchgear cubicles. That is one of the reasons why individuals working on equipment are asked not to have loose items in their pockets when performing maintenance functions. This way, one can ensure that no foreign objects will mistakenly drop between live parts of the switchgear.
3. Remove the grounding wires and install the fuses. Afterwards, the switchgear panels can be installed.
4. Energize the primary system and check the potential on all three phases. If everything is normal, the secondary main breakers can be re-engaged. Make sure that the breakers are turned on one at a time; otherwise, if many equipment items start all at once, a new power demand peak might be established, which could increase the facility's electric bill.
These steps give us a systematic framework to perform the required tasks in a safe and efficient manner. Due to the unforgiving nature of electricity, safety must be given the utmost level of importance. Promoting a safety-conscious attitude can minimize potential accidents and mishaps.
A good PM program can significantly reduce the likelihood of system failures, but it cannot eliminate failures and interruptions. The power company does not guarantee continuous service either. For certain equipment, power interruptions can be quite costly, so in addition to a good PM program, emergency standby power options should be evaluated.
To reduce future maintenance problems, start with good design. The key to good design is optimizing a number of sometimes apparently conflicting parameters such as cost, flexibility, safety, reliability and functionality. This means looking at every design situation with an open mind and trying to search for the innovative solutions that many times will not be obvious. As one can see, this will require more effort and time, but in the long run, it should yield an excellent return in reliability, avoided downtime and lower costs.
Involve the in-house maintenance staff in the early design stages. While design engineers usually try to incorporate good practices in electrical systems, few designers have hands-on experience in maintaining electrical distribution systems. Despite their best intentions and care, they might still miss certain elements that will be more obvious to the facility manager and the in-house maintenance staff.
Contemporary management theories have successfully demonstrated that every body of knowledge consists of two different types of information: codified information and tacit knowledge. Codified information is the explicit, clear-cut information that can be easily extracted from a given set of data. This is the kind of information that is easy to acquire. By contrast, tacit knowledge deals with more implicit, fuzzy information that is hard to quantify and understand unless one has experienced the particular situation. The people who are close to the particular task understand it best. Therefore, in our situation, combining the experience of the facility manager (also armed with manufacturers' literature and the NEC) and maintenance staff with the expertise of design engineers should yield the best results in the design of the system.
The remainder of this section offers some simple procedures for electrical system design that will provide a helpful, if general, pencil-and-paper analysis. The reader may want to review other chapters of this book as well as Appendix I, where we discuss fundamentals of electricity, to fully appreciate the procedures.
Sizing Circuits For Motors In A Distribution Panel
1. Make sure that all motors to be connected have the same voltage rating. Segregate the single-phase from the three-phase units.
2. Add the full-load currents for all three-phase motors. If the current is not stated on the nameplate, it can be calculated from the rated voltage and horsepower using the formula:
I = (HP × 736) / (1.732 × V)
where:
I = current in amperes
HP = motor horsepower (multiplied by 0.736 to convert it to kilowatts, i.e. 736 watts per horsepower)
V = line voltage (1.732 is the square root of 3, for three-phase circuits)
3. If there are single-phase motors, find the motor currents using the above formula without the 1.732 multiplier in the denominator. (Notice that the line voltage for single-phase and three-phase systems will be different.)
4. Divide the single-phase motors into three equal groups where each group can be connected to a separate phase. This is done to minimize the current in the neutral wire as well as optimize the size of the incoming cable. In practice, it may not be possible to exactly divide the motor load into three equal groups, but make them as equal as possible. Take the largest current of the three as the single-phase current that will be added with the three-phase currents in Step 5.
5. Add the total current from the three-phase with the largest single-phase current.
6. To allow spare capacity for future growth, multiply the total current by 1.5 (anticipated 50 percent future growth). Since conductors need to have 25 percent additional capacity above the full load, multiply the current value of the largest motor by (0.25 × 1.5) and add it to the last figure. This will give us the total system current.
7. Using the wire tables in the National Electrical Code book, determine the wire size and conduit size.
8. Check the voltage drop to make sure it is below 2 percent. If it is higher, choose a larger wire size and repeat the calculation until the voltage drop falls below 2 percent.
9. Make sure the first disconnect switch is not more than 50 ft. from, and within eyesight of, the motor.
10. For a single motor, the trip current for the branch circuit fuse should be 25-40 percent higher than the overcurrent protection limit of the motor breaker. The idea behind this is to make sure that a proper level of selectivity is maintained. In other words, in the event of a short circuit, the motor overload will trip before the branch circuit trips and causes a larger blackout.
11. For a system with several motors, the trip level of the branch circuit is set at the trip current for the largest motor plus the full load current of the remaining motors.
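The current-summing portion of the procedure above can be sketched in code. The following is a minimal Python illustration of steps 2, 5 and 6 for three-phase motors only, using the text's simplified current formula (which ignores motor efficiency and power factor); the motor sizes and the 480V system in the example are hypothetical:

```python
import math

def motor_current_3ph(hp: float, volts: float) -> float:
    """Full-load current of a three-phase motor per the text's formula:
    I = (HP x 736) / (1.732 x V), with 0.736 kW per horsepower."""
    return (hp * 736.0) / (math.sqrt(3) * volts)

def panel_design_current(three_ph_hps, volts, growth=0.5):
    """Steps 5-6: sum the motor currents, add spare capacity for growth,
    then add the 25% conductor margin on the largest motor."""
    currents = [motor_current_3ph(hp, volts) for hp in three_ph_hps]
    total = sum(currents) * (1 + growth)             # 50% spare for future growth
    total += max(currents) * (0.25 * (1 + growth))   # 25% margin on largest motor
    return total

# Hypothetical panel: 10, 15 and 25 hp motors on a 480 V system
print(round(panel_design_current([10, 15, 25], 480), 1))  # about 74.7 A
```

The resulting design current would then be taken to the NEC wire tables (step 7) and checked for voltage drop (step 8); single-phase motors would first be balanced across the phases as described in steps 3 and 4.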
Sizing Wiring For Lighting Systems
For lighting systems and other resistive loads, the calculation is similar. However, the differences are worth mentioning, and to reduce confusion we will discuss each element separately. In calculating the wire size for lighting, the following procedure is used:
1. Lighting circuits are always single-phase circuits, so determine the total current using the formula:
I = (kW × 1000) / V
where:
I = current in amperes
kW = total kilowatt lighting load
V = system voltage (in the United States it is either 110V or 277V)
2. The feeder should be sized only to full-load current, so there is no need to design for spare capacity.
3. Divide the branch circuits into three equal groups so the load is distributed evenly among all three phases, then determine the feeder capacity.
4. To size the branch circuit wires, multiply the branch circuit current by 1.25.
5. After getting the wire size from the National Electrical Code tables, calculate the voltage drop for the feeder cable as well as the branch circuit wires.
6. The voltage drop for the branch circuit must not exceed 1 percent, and the drop for the feeder circuit must be less than 2 percent. If any of the voltage drops do not meet these tolerances, go to a larger wire size until the voltage criteria are met.
7. Determine the conduit size from the National Electrical Code table.
8. The trip value for the branch circuit and the feeder should be chosen at a value that is not larger than the current-carrying capability of the branch circuit wire or feeder cable respectively.
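The arithmetic in steps 1 and 4 can be sketched as follows. A minimal Python illustration; the 12 kW load and 277V system are hypothetical examples, and the NEC table lookups of steps 5-8 are not reproduced:

```python
def lighting_current(total_kw: float, volts: float) -> float:
    """Total single-phase lighting current: I = (kW x 1000) / V."""
    return total_kw * 1000.0 / volts

def branch_wire_current(branch_current: float) -> float:
    """Branch-circuit wires are sized at 125% of the branch current (step 4)."""
    return branch_current * 1.25

# Hypothetical example: 12 kW of lighting at 277 V, split across three phases
total = lighting_current(12.0, 277.0)
per_phase = total / 3.0
print(round(total, 1), round(branch_wire_current(per_phase), 1))
```

The per-phase wire current would then be matched against the NEC ampacity tables, and the voltage-drop limits of step 6 checked for both the branch wires and the feeder.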
Sizing Wiring For Other Resistive Circuits
For other resistive circuits use this procedure:
1. Calculate the full load current and add 50 percent to allow for future growth.
2. Divide the single-phase loads into three equal groups to distribute the load.
3. To calculate the wire size for the branch circuit, multiply the branch circuit current by 1.25.
4. Look up the wire sizes for the branch circuit and the feeder from National Electrical Code tables.
5. Check the voltage drop and make sure it is lower than 3 percent. Otherwise, increase the wire size until the voltage drop is less than 3 percent.
6. Look up the National Electrical Code tables for conduit size.
7. The trip value of the branch circuit and the feeder should not be larger than the current-carrying capability of the branch circuit wire or the feeder cable respectively.
Selecting Circuit Breakers
There are a number of valuable tips that can assist in choosing the appropriate circuit breaker.
Voltage, Ampacity And Trip-Setting - The circuit breaker voltage rating must be the same as the system voltage or higher. For example, in a 208V/110V system one can use a 208V/110V or 480V/277V circuit breaker, but for a 480V/277V system one must not use a 208V/110V circuit breaker. Since the standard line frequency in the United States is 60 Hertz (Hz), the circuit breaker should also be rated for 60 Hz (in Europe, it is 50 Hz). The ampacity of the circuit breaker should be the same as or larger than the full system load. The trip-setting range of the unit must include the required trip level. Moreover, the interrupting capacity of the circuit breaker must be the same as or larger than the maximum available short circuit current. This is an important safety parameter, because the interrupting capacity signifies the highest short circuit current that a circuit breaker can safely interrupt. If the interrupting capacity is lower, there is a possibility that the circuit breaker will shatter into pieces under a bolted three-phase fault.
Derating The Circuit Breaker - The ambient temperature limit for circuit breakers is 75ºF. If there is a possibility for the temperature to be higher, the unit should be derated based on the manufacturer's recommendations. Two other conditions also necessitate derating the circuit breaker: loads that are frequently cycled, and power systems that are rich in harmonics. Molded circuit breakers are normally derated by 20 percent of their continuous current rating because of the heat that accumulates in them; the same applies when several units are installed in the same enclosure, since heat builds up within the enclosure. Derating the circuit breakers is critical because the system may otherwise experience frequent nuisance trips. On the other hand, if the trip limit of the circuit breaker is set at a level higher than what the load can withstand, electrical loads such as motors, lights and other equipment may be damaged.
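As a numeric illustration of the 20 percent derating rule above (the breaker rating used here is hypothetical):

```python
def usable_breaker_current(continuous_rating_amps, derate=0.20):
    """Apply the 20 percent derating for accumulated heat in molded
    circuit breakers, per the guidance above."""
    return continuous_rating_amps * (1 - derate)

# A 100 A molded breaker should carry no more than 80 A continuously
print(usable_breaker_current(100))  # 80.0
```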
It is only on rare occasions that a facility manager has the luxury of joining an institution where all of the systems are brand new and every element is working as intended. In most organizations, the power system is cluttered with a near-indecipherable network of components of varying age, condition and relative importance. So the challenge that practically all managers face is how to develop and implement
a successful capital renewal program that addresses all of the needs of a power distribution system.
One of the major barriers to implementing a program is the overall cost. Unless they head a large facilities department, most senior-level corporate managers do not fully appreciate the significant sums of money needed in this regard. So financial justification is one of the important responsibilities of the facility manager. As the reader well knows, in most organizations there are more projects and programs than total funding available to satisfy all needs. This means a facility manager must be able to successfully defend the need for capital renewal funds for the electrical distribution system.
To secure funds, one should be able to concisely explain the condition of the distribution system, potential consequences if the funds are not spent, how the probability of failure will increase, and what that will mean for the organization in terms of opportunity loss. One cannot overemphasize the importance of this crucial step.
In many cases, the amount of funds needed to replace all of the needed elements may be so large that it cannot be handled in one business cycle. In this event, the facility manager should develop a medium- to long-range capital plan. The institution will have more time to budget the required funds and the whole process can be done more smoothly.
Another important issue to consider is the needed downtime to complete the project. Even if the total funds are available right away, we might not be able to interrupt the operation for long enough to accomplish the entire project in a short time-frame.
Needs Assessment In Two Steps
The first phase of the capital renewal process is performing a needs analysis. This constitutes a comprehensive audit of the current condition of the equipment, including its prior maintenance records, the critical nature of the systems fed by the power circuit, and technical obsolescence. In many cases, the condition of different circuit elements may vary quite significantly based on ambient temperature and other factors impacting the system.
The second step is to investigate what is available in the industry. Depending on the specific element that needs to be upgraded, a number of more compact, easier-to-maintain and more-efficient products might be available that can help justify the investment financially and via improved performance.
The Critical Path Method
Every process consists of many discrete tasks, and it is critical to know the relationships among these individual tasks in a network arrangement, including which ones must be completed before others can begin. To help with this job, we can use the critical path method.
The purpose of the critical path method is to find the minimum amount of time required to complete a particular project. This is accomplished by arranging all of the individual tasks, based on their logical relationships, in the order they can be accomplished, and then finding the length of every possible path through the network. The longest path in the network is referred to as the "critical path," and it determines the minimum amount of time the project needs to be completed. Instead of concentrating on every task in the network, we need only concentrate our efforts along the critical path.
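A minimal sketch of the critical path idea: arrange the tasks by their dependencies and take the longest path through the network. The task names and durations below are hypothetical.

```python
def minimum_project_time(durations, depends_on):
    """Return the length of the longest (critical) path in the task network."""
    finish = {}

    def earliest_finish(task):
        # A task can start only after all of its predecessors finish.
        if task not in finish:
            start = max((earliest_finish(p) for p in depends_on.get(task, [])),
                        default=0)
            finish[task] = start + durations[task]
        return finish[task]

    return max(earliest_finish(t) for t in durations)

# Hypothetical renovation tasks, durations in weeks
durations = {"design": 4, "order_gear": 6, "demolition": 2,
             "install": 5, "test": 1}
depends_on = {"demolition": ["design"],
              "install": ["order_gear", "demolition"],
              "test": ["install"]}
print(minimum_project_time(durations, depends_on))  # 12
```

Here the critical path is order_gear, install, test (6 + 5 + 1 = 12 weeks); shortening the design task would not shorten the project.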
Bidding The Project
After the design phase of the project is completed, it is time to find a construction or electrical contractor. Since construction projects today involve many steps and parties, the agreements are almost always in writing.
A construction document has two general parts which are called the special conditions and the general conditions.
Special Conditions - The special conditions contain plans, drawings and specifications that are unique to a particular project. They denote the individual trades needed, the licenses required, working hours, the maximum amount of time allowed for the project, and the level and quality of work and performance.
General Conditions - The general conditions deal with other surrounding issues such as legal matters, the role and level of subcontractors, the approval process, the bonding and insurance requirements, etc.
The main elements of a general contract include a notice to bidders, bid forms and instruction to bidders, an agreement contract, and other legal requirements. Construction contracts are generally developed as one of two types: lump-sum price or cost-plus.
Lump-Sum Contracts - The lump-sum contract stipulates that the contractor will be paid a certain fixed amount of money
regardless of the cost to the contractor. In this technique, the risk related to price fluctuations is passed on to the contractor. Needless to say, the contractor in this scenario normally adds a percentage to the bid as protection against potential fluctuation. If they manage the job well, they may maximize their profit, or end up with a loss if the job is managed poorly.
Cost-Plus Contracts - The cost-plus system is used in two situations:
1. When the project is complicated, and significant cost uncertainties preclude using a lump-sum contract.
2. If the institution has worked with a contractor for a long time and a good comfort level has developed between the two parties.
Here, both entities make an a priori agreement on the percentage of overhead that the contractor will be paid as profit for the project. They determine what costs are reimbursable and what costs are included in the overhead and profit percentages. Overhead costs might be determined by an agreed-upon scale such as the prevailing wage rate for an area. So the contractor has to submit payroll sheets plus invoices for all materials as a backup to verify his expenditure.
Soliciting Bids - There are two main ways that institutions solicit bids from contractors: Invite specific contractors to make a proposal, or advertise to all potential bidders to tender a bid.
In the public sector, the latter is the most common approach, and many agencies are bound by legislation to accept the lowest responsive bid. This approach is open to criticism that quality and performance may be sacrificed. To address this concern, many public sector agencies have instituted pre-qualification of contractors for larger contracts.
By contrast, non-public entities can evaluate all bids and decide which proposal meets their needs in terms of service, quality and price. In addition, unlike the public sector, they will be in a position to negotiate with a contractor.
Timing - An important element of a contract, especially in a renovation, is the length of time for the contract to be completed. This can be particularly important for electrical projects. To safeguard
against delays in the projects, in many cases the bids will include liquidated damages and/or bonuses for early completion. If a contractor fails to complete a project in the stated amount of time, then for every unit of time the contract is late, a penalty in the amount stated in the contract is assessed. If a contract has a bonus clause, then the contractor will be paid extra for on-time or early completion.
It should be noted that liquidated damages and bonuses should be reasonable and have some logical connection to the potential opportunity loss for the company. If the liquidated damages are very low, it will not have any effect, while if they are set at a very high level the contractors will add a larger fudge factor in their bids and the bottom line will be a higher cost to the institution.
Payment - Other important elements of a contract include progress payment methodology, the approval of subcontractors, and change order procedures.
Payment Schedule. The payment schedule is usually arranged so the contractor will be paid only for the work that is completed; often, there is a lag time until the contractor receives payment. If the time lag stated in the contract is larger than the industry norms, the contractor will naturally add a certain percentage to cover the additional cost.
Subcontractor Approval Process. The subcontractor approval process is put into a contract to safeguard against the general contractor putting excessive pressure on subcontractors. To clarify this point: When a project is being bid, the subcontractors will give prices to different general contractors. When the successful general contractor is determined, that contractor will sometimes shop around for new subcontractors or pressure the existing ones to lower their earlier prices. The approval process is intended to minimize such practices.
Change Order Procedures. Change order procedures are a mechanism to add items that were not in the original contract. When possible, it makes good sense to ask for unit prices in the bid specification. This is one way to keep the cost of change orders in check. But in practice, one of the main reasons for change orders is to accommodate additions to the work scope that stem from errors and omissions in the contract.
Usually, renovation projects are more prone to such change orders than new construction. Change orders can be minimized by taking more care in writing the specification and checking the actual
conditions. Make sure that the specification is complete, clear and not subject to different interpretations. Otherwise, the door is left open to cost overruns and potential delays, litigation and other problems.
Life Cycle Costing
As mentioned earlier, having a better understanding of financial decision-making is required for all facility managers in today's environment. One important topic is life cycle costing.
When attempting to choose between competing capital expenditures, it is temptingly easy to make the decision based solely on the initial outlay. But over the long term, this can lead to a poor financial outcome. More and more organizations are realizing that in choosing between options, the total cost over the useful life of the equipment must be considered, taking into account operating and maintenance costs as well as initial cost. This discipline is called life cycle costing.
The most common scenario is the choice between two electrical equipment items. One has a low first cost but a high energy and maintenance cost. The other has a higher first cost but is more energy-efficient and is easier on maintenance. Traditionally, the lower first cost item would win the competition for purchase, but with the skyrocketing cost of energy and the pressing demand on maintenance departments, facility managers in increasing numbers are retrofitting their building systems to produce a satisfying payback on the electric bill and free up maintenance personnel for more important tasks. One need look no further than the lighting retrofit boom of the early '90s that has moved into areas such as HVAC and energy management to see this trend in action.
There are a number of capital budgeting techniques that can assist us making such decisions. The most popular techniques are payback, net present value and internal rate of return.
Simple Payback - Payback is the technique which determines how long it will take to recover the extra initial capital for equipment that costs more to buy but less to operate. If the recovery period meets corporate standards or is otherwise quick, the option will be accepted. For example, suppose we need to decide between two electric motors when our old motor has failed. Motor A costs $5,000 and Motor B costs $5,500. The annual energy cost of the motors, due to losses, is $200 and $100 respectively. Choosing
Motor B requires a $500 additional initial cost, but consumes $100 less in energy annually. So the additional $500 can be recovered in 500 ÷ 100 = 5 years. This means Motor B has a 5-year payback.
Now if our decision criterion for investment accepts payback periods of five years or longer, Motor B will be accepted. Otherwise, it will be rejected. Note that if we were to consider retrofitting the existing motor rather than installing a more-efficient motor at failure, the payback would be much too long. As a side note, another lesson learned is that while retrofit payback periods work for lighting, in many cases they do not work very well for motors, which is why motors are evaluated for upgrade at failure.
The advantage of the simple payback method is its inherent simplicity. But it has a number of serious shortcomings. First, the time value of money is not considered. This is especially important if the extra money has to be financed with debt. In addition, the benefit of the option is not taken into consideration after the payback period, unless we add a few more factors and project further to generate a simple cash flow statement.
Despite these deficiencies, simple payback is the most popular method because it is quick and easy to apply.
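The motor example above reduces to a single line of arithmetic; the dollar figures come from the text.

```python
def simple_payback_years(extra_first_cost, annual_savings):
    """Years to recover the extra first cost, ignoring the time value of money."""
    return extra_first_cost / annual_savings

# Motor A: $5,000 first cost, $200/yr energy; Motor B: $5,500 and $100/yr
print(simple_payback_years(5500 - 5000, 200 - 100))  # 5.0
```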
Net Present Value (NPV) - This technique addresses the aforementioned shortcomings of the payback method. The net present value method accounts for the individual savings over the useful life of the equipment and then each future saving is discounted to its present value. Essentially, it takes into account the future value of money and translates that into today's value of money.
The discount rate is chosen based on the acceptable interest rate for the institution. The sum of all future savings discounted to present value, minus the additional capital outlay, is called the net present value of the option.
If the net present value is positive, the option is accepted. Going back to the earlier example, assume that the motors have an average useful life of 15 years and the discount rate for the time value of money is 12 percent. The present value multiplier of the annuity will be 6.81 (this factor can be determined from an interest table). The net present value of Motor B will be (6.81 × $100) - $500 = $181. Therefore, Motor B is accepted. It should be recognized that the acceptance or rejection of an option is heavily influenced by the discount rate chosen. To illustrate this point, assume the discount rate is changed from 12 percent to 20 percent. In this case, the multiplier will be 4.68 and NPV = (4.68 × $100) - $500 = -$32. Motor B will be rejected. Net present value is a very objective method of calculating the financial viability of a project, as well as evaluating different options.
The strength of the method lies in the fact that other financial considerations such as tax implications, depreciation and debt financing can also be incorporated. The disadvantage of the method is its relative complexity and the unfamiliarity of many facility managers with NPV.
Internal Rate of Return (IRR) - The internal rate of return is similar to the net present value method. Here, we are trying to find the discount rate at which the NPV equals zero. If the required rate is lower than the calculated IRR, the option will be accepted. Going back to the above example, the IRR for Motor B will be about 18 percent. Therefore, if the required discount rate is less than 18 percent, the option will be accepted. The IRR is slightly more complicated than NPV and is more commonly used by banks than by facility managers.
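Both calculations can be sketched briefly using the standard ordinary-annuity present value factor. The motor figures come from the text; the IRR is found here by simple bisection rather than from an interest table.

```python
def pv_annuity_factor(rate, years):
    """Present value of $1 per year for `years` years at `rate`."""
    return (1 - (1 + rate) ** -years) / rate

def npv(extra_cost, annual_saving, rate, years):
    """Discounted savings over the equipment life, less the extra outlay."""
    return annual_saving * pv_annuity_factor(rate, years) - extra_cost

def irr(extra_cost, annual_saving, years):
    """Bisection search for the discount rate at which NPV is zero."""
    lo, hi = 1e-6, 1.0
    for _ in range(60):
        mid = (lo + hi) / 2
        if npv(extra_cost, annual_saving, mid, years) > 0:
            lo = mid  # NPV still positive: the rate can go higher
        else:
            hi = mid
    return (lo + hi) / 2

print(round(npv(500, 100, 0.12, 15), 2))  # positive -> accept Motor B
print(round(npv(500, 100, 0.20, 15), 2))  # negative -> reject Motor B
print(round(irr(500, 100, 15), 3))        # roughly 0.18 (18 percent)
```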
Power generation occurs when mechanical energy is converted into electrical energy. The electricity will travel to its point of utilization to be converted, most often, into light by lamps or back into mechanical energy by electric motors. In this chapter, we will review how electric power is generated both at the utility level and on-site for emergency and remote needs, or to save operating costs via peak shaving and cogeneration.
Generators are electromechanical devices that convert mechanical energy into electrical energy. The machine rotates a shaft within a magnetic field. As the generator rotates, the magnetic flux changes cyclically, inducing alternating voltage in the armature winding.
Electric generators can be DC or AC devices. The AC generators can be synchronous or asynchronous units. However, the most common type of generators today are synchronous. In a synchronous generator, the armature winding is on the stator and the field winding is on the rotor. Although it is technically possible to reverse the roles, the main reason for this arrangement is to minimize the current flowing to the rotor.
The prime movers for generators also vary greatly. The most common ones are hydro-turbines, nuclear or fossil-based steam turbines, gas turbines, and reciprocating engines.
Hydroelectric units convert the potential energy of running water into electricity with the help of a water turbine. Because these plants require large amounts of water flowing down a significant incline, hydroelectric power is restricted to specific geographic locations. Usually, these plants require a large amount of capital to build, but the operating costs are much lower compared to other types of plants. In the United States, less than 10 percent of the total power generated comes from hydroelectric plants, while the percentage is much higher in Canada.
Nuclear energy is generated by the fission of heavy isotopes such as uranium. The caloric energy released in the fission reaction is removed by a coolant such as water or gas. In the United States, two types of reactors are in commercial use: boiling water reactors and pressurized water reactors.
Boiling Water Reactor - The water coolant in the reactor boils into steam which drives a turbine.
Pressurized Water Reactor - The high-pressure, high-temperature coolant water from the reactor transfers its thermal energy through a heat exchanger and steam is generated for the turbine. It is important to note that nuclear energy is converted to thermal energy and from that point onward, in principle, there is no difference between a nuclear plant and a fossil fuel plant.
Fossil Fuel-Based Generators
There are different fossil fuel sources used for generating electricity. The most common ones are coal, fuel oil and natural gas. All coal-fired plants burn coal in boilers to produce steam, which feeds the steam turbines. To put the energy densities in perspective, although fission converts only about 0.1 percent of the fuel's mass into energy, one pound of Uranium 235 produces the same amount of thermal energy as 1,400 metric tons of coal. By contrast, fuel oil or natural gas generators are internal combustion engines. The smaller units use reciprocating engines, while larger units use gas turbines. These units can be brought on line quickly. Since steam turbines operate at high efficiency, they are used as base-load generators. In contrast, gas turbine generators are used to meet peak needs.
In this section, we will cover the technical aspects of generator operation, including components, operating characteristics and DC generators.
The main components of a generator are the magnetic circuit, DC field winding, alternating armature winding, and the mechanical structure.
Rotors - There are two types of rotors: salient pole rotors and cylindrical rotors. Salient pole rotors are normally used with slow-speed prime movers such as hydroelectric turbines, while cylindrical rotors are used with high-speed prime movers such as gas turbines.
The rotor is normally powered by an auxiliary power source such as a battery or a small DC generator. The current is conducted to the rotor by using slip rings and brushes. A more common way to supply the excitation field to the rotor is by feeding a small part of the armature current through a diode and silicon-controlled rectifier (SCR). The diode converts the armature AC to DC and the SCR provides voltage regulation. The generator is initially excited by residual magnetism.
For relatively larger units, there is yet another way to provide excitation for the rotor. This is accomplished by having a small auxiliary rotor on the same shaft. The auxiliary rotor is excited by the residual magnetism. The AC power generated in the windings will go through a set of rotating diodes before supplying the main rotor. Since the entire exciter field is on the same shaft as the main rotor, there is no need for slip rings and brushes, which is why they are called brushless generators.
Since slip rings and brushes are a source of maintenance, brushless units are preferred.
Both the rotor and the armature are composed of laminated sheets to reduce eddy current losses. These losses, combined with other losses, determine the total unit efficiency.
The armature winding is a function of the desired voltage. The windings can be arranged in a wye or delta connection. If both single- and three-phase loads need to be served, then a wye connection is used. However, it is essential that the single-phase load be distributed evenly across the three phases; otherwise, large phase
imbalances will result in high negative sequence current levels in the armature. This will induce 120 Hz power on the rotor, which heats up the generator and shortens its life.
In this chapter, and in all chapters where we discuss the operation of electrical equipment and conductors, remember that heat is the main enemy to service life.
Windings - The two main types of windings are lap and wave windings. In lap windings, both ends of the coil have a familiar diamond shape, while in wave windings, the two open ends of the coil are connected to opposite ends of poles. Wave windings are used when higher voltage is required, while lap windings are used for higher current.
Synchronous machines (both generators and motors) have unique characteristics regarding the rotating speed and the frequency of both the voltage and current in the armature.
Rotating Speed - The rotating speed is the number of revolutions per minute the machine's shaft makes as its windings cross the magnetic field, inducing the electric potential necessary for current to flow. For a given frequency, the speed of a generator is determined by the number of poles it has.
For a 60 Hz frequency, the synchronous speed will be 3,600 rotations per minute (RPM) for a two-pole machine, 1,800 RPM for a four-pole machine, 1,200 RPM for a six-pole machine, etc.
Typically, steam turbines are fast-running machines and are best fit for 1,800 and 3,600 RPM, while hydro-turbines are better fit for slower speeds and as such will have more poles than steam turbine generators.
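The figures above follow from the standard relationship N = 120f/P, where N is synchronous speed in RPM, f is the line frequency in Hz, and P is the number of poles:

```python
def synchronous_rpm(poles, freq_hz=60):
    """Synchronous speed in RPM: N = 120 * f / P."""
    return 120 * freq_hz / poles

for poles in (2, 4, 6):
    print(poles, synchronous_rpm(poles))  # 3600, 1800, 1200 RPM at 60 Hz
print(synchronous_rpm(4, freq_hz=50))     # 1500 RPM on a 50 Hz system
```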
Sizing - To determine what size generator we need, we must know the type of load it will be servicing. Generators are sized based on the kW and kilovolt-ampere (kVA) requirements. The horsepower of the prime mover is determined by the load kW, while the kVA is determined by the total load current.
When generators are sized, it is important to take into consideration not only the continuous load, but the starting torque of major loads. In addition, if the system power factor is below 90 percent due to magnetic core saturation, the generator output in kVA will drop. When a generator is in operation, the current that is
drawn to the load creates a counter torque that is directly proportional to the current. That is why when more power is drawn from the generator, the counter torque will increase, thus necessitating the prime mover to exert more force to cancel out this opposite force. Otherwise, the generator will slow down and potentially come to a stop.
Generators, therefore, require an automatic mechanism where the torque can change instantly. This is accomplished with the engine governor which controls the generator speed.
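The kVA sizing described above follows directly from the load kW and the power factor; the load values in this sketch are hypothetical.

```python
def required_kva(load_kw, power_factor):
    """kVA the generator must supply for a given kW load and power factor."""
    return load_kw / power_factor

# A 90 kW load at 0.9 power factor needs roughly a 100 kVA generator
print(required_kva(90, 0.9))
# The same kW at a poorer power factor demands more kVA
print(required_kva(90, 0.8))
```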
Generator Ratings - There are three types of generator ratings:
Continuous Duty. The continuous rating represents the constant output of a generator when it operates continuously without any interruption 24 hours a day, seven days a week.
Prime Power. The prime power rating is used when a generator is the sole power source during normal conditions. In most cases, there is a certain level of load variation which means the average load will be somewhat smaller than the continuous load. Therefore, the prime rating will be relatively higher than the continuous rating.
Standby Power. The standby rating takes into consideration that a generator can supply a larger load for a shorter period of time as compared to the continuous rating. That is why the standby rating of a unit will be higher compared to the other ratings. Standby rating is especially important for emergency generators, of course.
Nameplate Ratings - It is important to note that the nameplate rating of a generator is based on 40ºC ambient temperature at sea level. This means if the unit is operated at higher elevations or hotter climates, the unit should be de-rated based on the manufacturer's recommendations.
Operating Multiple Generators In Parallel
Many times, one generator is not large enough to meet the load, so more than one unit is needed to operate in parallel. For generators to operate in parallel, they must have the same number of phases
and the same line voltage. The generators must also be equipped with synchronizing gear so the units can be connected when the phase angle difference between them is essentially zero. To elaborate on this, let us explain how two generators can be operated in parallel.
First, one generator is brought to operating speed and normal voltage, and connected to a moderate load relative to the size of the unit. Then we start the second generator and bring it to a speed which will be about 10 percent higher than the operating frequency. After adjusting the voltage, it is time to synchronize both units, when the voltage phase angle difference between the two units is close to zero.
Note that a speed difference between the two units is essential; otherwise, if both are traveling at exactly the same speed, the phase angle between them will remain constant and they cannot be paralleled. Looking at the synchroscope, we can observe that with the difference in speed, the phase angle between the two changes continuously. When the observed phase angle between the two units is within a few degrees, the second generator should be switched on to electrically connect with the first unit in operation.
Both generators can then be adjusted to share the load in proportion to their kW ratings. If additional units are needed to be paralleled, the same procedure can be repeated. When a generator is connected to the power grid, usually referred to as the infinite bus, it is common to start the generator as a synchronous motor by connecting it to the power grid. As it reaches the synchronous speed, the prime movers fire up. As the prime mover torque increases, the phase angle increases in the positive direction and the unit operates as a generator supplying power to the infinite bus. The real power flow is a function of the sine of the phase angle. That is why the maximum real power transfer occurs when the angle is 90 degrees. The reactive power flow is a function of the voltage difference between the generator and the infinite bus. The reactive power flow will be from the high-voltage source to the low-voltage source.
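The real power relationship described above, P = (EV/X) sin δ, can be checked numerically; the voltage and reactance values below are hypothetical.

```python
import math

def real_power_transfer(e_gen, v_bus, reactance_ohms, delta_degrees):
    """Real power from generator to infinite bus: P = (E*V/X) * sin(delta)."""
    return e_gen * v_bus / reactance_ohms * math.sin(math.radians(delta_degrees))

# Hypothetical per-phase values: E = V = 480 V, X = 2 ohms
p30 = real_power_transfer(480, 480, 2.0, 30)
p90 = real_power_transfer(480, 480, 2.0, 90)
print(p30, p90)  # transfer is maximum at a 90-degree phase angle
```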
The most common type of generators and electric motors are AC units. The reason for this is the inherent simplicity, lower cost and robust nature of AC machines. However, there are many industrial processes where DC power is required. For instance, in many chemical processes such as electrolytic action, unidirectional current is needed. In some cases, AC power is converted to DC for
the particular application, although in many other situations DC generators are used for special-purpose applications. The electric supply for a DC machine can be provided by a motor-generator set or an electronic converter. The main elements of a DC generator are the stator field, the armature, the commutator and brushes. The stator field and armature of a DC generator are similar to those of an AC generator. In DC generators, however, the armature is the rotor and the field is the stator, while the opposite is true for an AC machine.
The commutator is the device which makes it possible to build a rotating machine with a unidirectional current. In other words, the commutator mechanically converts the AC current of the armature to DC current. The brushes are made of copper or carbon and are used to electrically connect the rotating armature to the stationary power system.
There are four different variations of DC generators: separately excited, shunt, series and compound generators.
Separately Excited Generator - As the name suggests, when the stator field is powered by an external source, the unit is called a separately excited generator. As the armature is driven by the prime mover, an electrical potential is induced across the brushes. If the direction of rotation or the polarity of the stator field is reversed, the polarity of the induced voltage will change as well. The voltage level varies directly with the armature speed. Similarly, if the speed is kept constant, the voltage will increase proportionally to the field current until the core is saturated; any further increase in the field current will not change the armature voltage. For such units, the difference between the no-load and full-load voltages is less than 10 percent.
Shunt Generator - With a shunt generator, there is no external source to power the stator field. Here, the stator field is connected across the generator. When the generator is started, a small voltage level is induced due to residual magnetism in the stator. As the generated voltage induces a current in the stator field, the armature voltage will increase. This process will continue until the stator field core is saturated and the voltage level will remain constant from that point onwards.
On the other hand, if the terminals of the unit are short-circuited, the induced voltage will drop rapidly and the generator will not be damaged. One way to control voltage below the saturation level is to connect a rheostat in series with the stator field. The difference between full-load and no-load voltage is about 10-15 percent.
Series Generator - A series generator is obtained if the stator field is connected in series with the armature. When the generator is started during no-load conditions, a small voltage is induced. If the load is increased, the voltage will steadily increase until the core is fully saturated.
Compound Generator - When a generator has series and shunt field windings, it is called a compound generator. If the two magnetic fields have the same polarity, it is called a cumulatively compound generator. If they have the opposite polarity, the generator is then called a differentially compound generator.
For a cumulatively compound generator, the relationship between the full-load and no-load voltage depends on the size of the series winding. When the full-load voltage is lower than the no-load voltage, the unit is referred to as under-compounded. Similarly, if the full-load voltage is higher than the no-load voltage, the unit is over-compounded.
With a differentially compounded unit, the terminal voltage drops drastically when the current is increased. A good application for a differentially compound generator is arc-welding equipment.
Applications of On-Site Generators
Normally, most facilities receive electrical power from the local utility. But there are many circumstances where generators are used by individual facilities, including for remote power, emergency power, peak shaving and cogeneration.
In remote installations where a connection to the utility power grid is not practical, one or more self-contained generation units are required. Examples include roadway, construction and agricultural projects.
There are many critical electrical loads where a power interruption cannot be tolerated, such as mainframe computers, digital PBXs and hospital operating rooms. In such cases, a standby generator is required.
Standby generators are connected to the load via a transfer switch. Both the normal utility power and standby generator will be connected to the transfer switch as well as to the load.
Under normal conditions, the transfer switch will connect the load to utility power. When the main power is interrupted for any reason, the generator will start in about six seconds, and the transfer switch will disconnect the load from the utility source and connect it to the generator.
As soon as normal power is restored, after a predetermined delay the transfer switch will reverse the operation.
There are several issues to deal with when we have a standby generator system. First, the transfer from normal power to standby power (and back again) is an open transition. This means every time the transfer switch operates, there is a momentary power interruption to the load. If such a momentary power loss cannot be tolerated by the load, then the generator has to be part of an uninterruptible power system (UPS).
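The open-transition sequence described above can be sketched as simple control logic. This is an illustrative sketch only: the six-second start time echoes the text, while the retransfer delay and the function shape are assumptions, not vendor behavior.

```python
# Illustrative sketch of an automatic transfer switch (ATS) sequence.
# Timing values are assumptions for illustration only.

ENGINE_START_S = 6        # approximate generator start-up time (per the text)
RETRANSFER_DELAY_S = 300  # assumed delay before returning to utility power

def transfer_sequence(utility_ok: bool, on_generator: bool) -> list[str]:
    """Return the list of ATS actions for the given power state."""
    actions = []
    if not utility_ok and not on_generator:
        # Open transition: the load sees a momentary interruption here.
        actions.append(f"start generator (~{ENGINE_START_S} s)")
        actions.append("open utility contacts")
        actions.append("close generator contacts")
    elif utility_ok and on_generator:
        actions.append(f"wait {RETRANSFER_DELAY_S} s to confirm utility is stable")
        actions.append("open generator contacts")
        actions.append("close utility contacts")
        actions.append("cool down and stop generator")
    return actions
```

Note that both transitions involve opening one set of contacts before closing the other, which is why the load sees a momentary interruption each way unless a UPS rides through it.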
Transfer switches can operate manually or automatically. Most transfer switches change automatically from normal to emergency power, but the transfer from standby power back to normal power is done manually. The reason for this is to reduce the risk of an unwanted power interruption to the load.
A final issue is that the transfer switch needs to work with both normal and standby power. This brings to mind the adage that the most flexible element of a system is both the most powerful and the most critical. Failure of the transfer switch will negatively impact power availability for the load regardless of the power source.
Most utilities in the United States charge large customers not only for the amount of energy consumed in kWh, but also the rate of energy use (demand for power expressed in kW). In a few parts of the country, the utility has a time-of-day rate for energy, while most areas separately charge for demand.
The demand cost will be higher during peak energy usage. For instance, during summer peak periods, the utility's demand charge will be at its highest in the afternoon hours. In most cases, the demand cost can be a significant portion of the electric bill.
To alleviate high demand costs, many institutions try to limit the power demand. There are three ways to achieve this.
One way is to install energy-efficient components in the lighting and mechanical systems wherever possible to reduce total load in kW.
Another way is to shut down non-essential loads during peak demand hours.
If shutting down non-essential loads is not feasible, then a standby generator can be installed to operate when the demand increases beyond a certain predetermined value. In other words, a preset peak point is set. If demand for power exceeds that point, then the building load is automatically transferred to an on-site standby generator. By taking over for the utility during these peak times, the peak demand charge will be lower and the utility may offer a reward for helping it free up resources to meet the peak needs of other customers.
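The preset-peak logic described above amounts to a simple threshold rule: any demand beyond the preset point is picked up by the on-site generator. The threshold value and the sample demand readings below are hypothetical, invented for illustration.

```python
# Sketch of the preset-peak rule for peak shaving. The threshold and the
# sample demand readings are hypothetical values for illustration.

PEAK_THRESHOLD_KW = 1800.0  # assumed preset peak point

def generator_load_kw(demand_kw: float, threshold_kw: float = PEAK_THRESHOLD_KW) -> float:
    """kW the on-site generator must pick up to hold utility demand at the preset."""
    return max(0.0, demand_kw - threshold_kw)

# Example 15-minute demand readings in kW (hypothetical)
readings = [1500.0, 1650.0, 1900.0, 2100.0, 1700.0]
shave_kw = [generator_load_kw(d) for d in readings]  # → [0.0, 0.0, 100.0, 300.0, 0.0]
```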
The peak-shaving generator may run separately by isolating a certain designated load. This will be the least expensive option from an initial-cost point of view. A better option is to equip the generator with synchronizing gear so that it can operate in parallel with the utility power. With this option, no power interruption for any load will occur, which is vital for critical applications such as credit card processing, key chemical processes, operating rooms and computer mainframes.
Any process in which two or more forms of energy are generated simultaneously is cogeneration. More specifically, cogeneration refers to producing electrical and thermal energy simultaneously.
At the turn of the century, many large industrial plants operated cogeneration facilities using the exhaust steam from their processes. By the 1920s, more than half of the electrical power consumed by such industrial processes was generated locally using cogeneration units. As electric utilities built ever-larger plants, they drastically lowered the cost of electricity through economies of scale. The lower cost of electricity made many of these cogeneration units comparatively uneconomical to operate, and most were eventually shut down. By the 1960s, less than 10 percent of the energy consumed by process plants was generated locally.
Following the Energy Crisis of the early 1970s, the U.S. Congress passed several energy laws, including the Public Utilities Regulatory Policy Act (PURPA) of 1978. This opened a new chapter in promoting the use of cogeneration systems.
Why the renewed interest? The typical efficiency of a utility generator is roughly 35 percent, and heating boilers typically have a 78 percent efficiency. By contrast, a cogeneration unit can typically reach efficiencies of about 90 percent, representing a far better utilization of primary fuels. Not everybody was excited about the return of cogeneration, however. Most utilities saw it as a threat and a potential dilution of their revenues. The utilities were protective of their power distribution systems and did not allow customers to connect these units to the power grid.
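Using the efficiency figures just cited, a rough fuel comparison can be sketched. Only the 35, 78 and 90 percent efficiencies come from the text; the electrical and thermal load mix below is a hypothetical example.

```python
# Rough primary-fuel comparison: separate utility power plus a boiler
# versus a single cogeneration unit. Efficiencies are the text's figures;
# the load mix is a hypothetical example.

ELEC_NEED = 1.0    # units of electrical energy required (assumed)
HEAT_NEED = 1.5    # units of useful thermal energy required (assumed)

UTILITY_EFF = 0.35  # utility generator efficiency (text figure)
BOILER_EFF = 0.78   # heating boiler efficiency (text figure)
COGEN_EFF = 0.90    # overall cogeneration efficiency (text figure)

# Conventional approach: fuel burned at the utility plus fuel in the boiler.
fuel_separate = ELEC_NEED / UTILITY_EFF + HEAT_NEED / BOILER_EFF

# Cogeneration: one fuel stream covers both outputs at ~90% utilization.
fuel_cogen = (ELEC_NEED + HEAT_NEED) / COGEN_EFF

savings_pct = 100 * (1 - fuel_cogen / fuel_separate)
```

With this load mix, cogeneration needs roughly 40 percent less primary fuel, which is the arithmetic behind the renewed interest.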
With PURPA, the utilities had no choice but to allow all qualifying facilities to connect to the power grid and also buy back their excess power at a reasonable rate. This can save large utility customers a lot of money in operating costs. The customer will have to decide and plan whether all of the power generated will be used on-site or whether some will be purchased from the utility. Another decision is whether to generate excess power and sell the excess back to the utility.
During the early 1980s, many cogeneration units were built. However, by the end of that decade, with the termination of investment tax incentives, changes in national and international energy markets, and the active role of some electric utilities in creating disincentives, the growth of cogeneration systems slowed drastically. From a technical point of view, cogeneration is still a viable option, but its future is uncertain at this point, being highly dependent on deregulation of the electric power industry.
Types of Cogeneration Systems - A number of different types of cogeneration systems are available. If the primary fuel is first used to generate electricity and the exhaust of the system is utilized to produce useful thermal energy, this is called a topping cycle. If the primary energy is first used for an existing industrial process, and the exhaust is utilized to produce electricity, this is called a bottoming cycle. In some facilities, both cycles are used, called a combined system.
A cogeneration system is base-loaded for either electricity or thermal energy. If it is base-loaded for electricity, the steam output will be load-following, and vice versa. It is difficult, if not impossible, to find a unit that can fully meet both electrical and thermal needs.
From an electrical point of view, there are two types of cogeneration units: synchronous and asynchronous. A synchronous unit utilizes a three-phase synchronous generator with synchronizing gear. With an asynchronous unit, the variable AC power is electronically converted to DC and then inverted back to constant-voltage, constant-frequency AC power. Asynchronous generators are simple, rugged, low-cost units. Due to their higher harmonic levels, however, they are not used for units above one megawatt.
Protective Devices - Depending on the size of the cogeneration unit and the particular electric utility, the level of protective devices required can vary greatly. The typical basic requirements include relays for current fault coordination, protection against islanding, and resonant conditions. Additional protections can include relays against loss of excitation, overspeeding, motoring, overloading, etc.
There are two main reasons why generators will be needed in many facilities:
1. Emergency applications (see Chapter 6).
2. Reducing peak electricity consumption to reduce power costs.
Each will be addressed separately. For a summary of generator types, applications and characteristics, Table 2-1 provides a helpful management summary.
The main purpose of these units is to reduce downtime for critical equipment. The proliferation of technology such as communications and data processing systems, computers and other automation devices has fundamentally changed the maintenance environment. Commercial and industrial activities today rely on electrical systems far more than they did in the past. For example, many information processing systems operate on a real-time basis. Therefore, electrical distribution systems must operate under much more stringent conditions to keep these critical operations moving. The need for standby power will continue to increase for most industries, and facility managers will need a better understanding of its operation, application and maintenance.
Key management considerations for standby generators are presented below. For more information about standby power, see Chapter 6.
Primary Fuels - The first issue to consider for standby power is the type of primary fuel required. A true standby unit must be self-contained, which means it must be able to operate independently
Table 2-1. Management table showing types of generators and key characteristics, including purpose (reducing utility costs, reducing overall energy costs, or serving as the sole electric source for remote power needs), annual operating hours (based on demand, less than 1,000 hours per year, or over 8,000 hours per year), primary fuel (oil or propane; gas or oil; or gas, oil and coal), and the need for synchronization devices.
of electrical input. If the building is cut off from the outside, the generator must still be able to provide electricity for a predetermined period of time. For this reason, a reasonable amount of the primary fuel (propane, fuel oil, etc.) should be stockpiled on-site. (Based on this requirement, a natural gas generator is not considered a true standby unit, because it relies on the integrity of the natural gas distribution system. For life safety applications, this can be an important consideration; the Joint Commission for Accreditation of Hospitals, for example, does not consider a natural gas generator to be a true standby unit.)
Sizing - In sizing standby generators, it is important to identify the critical loads that need to be connected to the generator as well as the power factor of the individual loads. This will determine the total current requirements.
The next important factor is investigating the locked-rotor current of large motors. Since induction motors may require a starting current up to six times higher than full-load current, this needs to be checked if there are large electric motors among the critical loads. The manufacturer's literature can serve as a valuable reference in this area.
If the locked-rotor current of large motors is not taken into account and the generator is not sized accordingly, the starting inrush current can slow down the generator and pull it out of sync, causing it to stall in short order.
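As a sketch of this sizing check, the example below estimates the full-load and locked-rotor current of one large induction motor. The six-times multiple comes from the text; the motor nameplate figures and the remaining load are illustrative assumptions, so actual sizing should rely on manufacturer data.

```python
import math

# Hypothetical sizing check for a standby generator serving one large
# induction motor plus miscellaneous critical load. The 6x locked-rotor
# multiple comes from the text; all load figures are assumptions.

def motor_full_load_amps(hp: float, volts: float, eff: float, pf: float) -> float:
    """Three-phase full-load current estimated from nameplate data."""
    watts = hp * 746 / eff
    return watts / (math.sqrt(3) * volts * pf)

fla = motor_full_load_amps(hp=100, volts=480, eff=0.92, pf=0.85)
locked_rotor = 6 * fla        # worst-case starting inrush (text figure)
other_load_amps = 150.0       # assumed remaining critical load current

# The generator must ride through the inrush plus the running load.
required_amps = locked_rotor + other_load_amps
```

Here a motor whose running current is around 115 A briefly demands nearly 700 A at starting, which is why inrush, not the steady-state load, often governs generator size.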
Control - Finally, for standby generators, transfer switches play a critical role, because this is the only device that has to be in good operating condition both during normal and standby power conditions. Therefore, periodic maintenance of the transfer switch is critical.
Peak Shaving To Save Costs
The application of generators as peak-shaving devices will continue to grow, because the need for managing electric peak demand will increase in the future. Since electric energy cannot be stored in any appreciable quantity, the production and consumption of electricity must constantly match. According to the North American Electric Reliability Council (NERC), before the end of this decade there is a fifty percent chance that demand for electricity will outstrip supply. This situation will prevail even if all currently planned generation capacity comes on line according to schedule.
The problem is, in many areas, the existing transmission lines may not be able to handle the excess load.
This does not mean that the utilities will be unable to meet normal load; the shortages will occur during the hottest and coldest days of the year when aggregate demand for power peaks due to excessive heating and air conditioning needs. During the past few years, for example, extreme temperatures in the summer and winter caused some utilities to curtail electricity to large customers. The economic impact of such cutbacks can be severe.
The combination of electricity shortages and utility deregulation (see Chapter 8) will embolden utilities to ask for rate schedules that will increase the price differential between peak and off-peak times. Utilities may also shift large customers onto curtailable rate schedules. Facility managers must anticipate this new environment by devising methods to flatten peak demand use.
The traditional solution is to turn off non-essential loads (the lights in daylighted areas, for example). This may not be feasible in many situations, however. A better solution is to investigate the installation of a generator which can substitute for the utility during peak demand periods. This has created a new strategy for load management called peak shaving (or peak sharing).
Under peak shaving, a generator is installed in the facility as a standby unit. When the utility directs the customer, the generator operates in parallel with the incoming power. The net effect is to reduce demand on the utility grid, and the utility rewards the customer for helping meet the peak demand.
Normally, such generators operate less than 200 hours per year. The installation cost of such units ranges from $200 to $400 per kW, and the units are usually smaller than 2,000 kW.
Based on annual energy savings, typical payback for peak-shaving generators ranges from 2 to 5 years. The payback period depends strongly on the number of hours per year the utility asks a customer to operate the unit: the less the generator is required to operate per year, the shorter the payback period will be.
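A back-of-the-envelope version of this payback reasoning can be sketched as follows. The installed cost uses the mid-range $/kW figure from the text; the avoided demand charge and net running cost are hypothetical assumptions.

```python
# Back-of-the-envelope payback for a peak-shaving generator. The installed
# cost uses the mid-range $/kW figure from the text; the avoided demand
# charge and net running cost are hypothetical assumptions.

SIZE_KW = 1000
INSTALL_COST = 300 * SIZE_KW                  # $300/kW installed (text range: $200-$400)
DEMAND_SAVINGS_PER_YEAR = 12 * 10 * SIZE_KW   # assumed $10/kW-month demand charge avoided
NET_RUN_COST_PER_KWH = 0.05                   # assumed fuel/maintenance premium, $/kWh

def payback_years(run_hours_per_year: float) -> float:
    """Simple payback: installed cost over net annual savings."""
    running_cost = run_hours_per_year * SIZE_KW * NET_RUN_COST_PER_KWH
    net_savings = DEMAND_SAVINGS_PER_YEAR - running_cost
    return INSTALL_COST / net_savings

# The fewer hours the unit must run, the more of the demand savings
# survive, so the payback period shortens.
```

Under these assumptions, 200 run-hours per year pays back in under three years while 700 hours takes noticeably longer, illustrating why run-hours are the key variable.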
There are a number of utilities that offer incentives for peak-shaving generators. Naturally, utilities that have difficulty meeting the electric demand of the customers are more likely to offer such incentives. These are typically located in the northeast and on the west coast.
Facility managers are encouraged to pursue the economic feasibilities of peak-shaving projects. There are many success stories
for peak-shaving generators in many different applications all around the country, but the important factor they all have in common is the number of hours the unit operates each year. If this factor can be held to less than around 700 hours per year, the economic success of the project will be high.
Cogeneration To Save Costs
Another approach to reducing peak electric demand is the cogeneration plant. Such plants are more expensive than peak-shaving units; in most cases, the cost can range from $700 to $1,500 per kW. In contrast with peak shaving, the economic success of cogeneration relies heavily on minimizing downtime. Cogeneration units should typically stay on line 95 percent of the time. This is because a peak-shaving unit only chips in during peak periods, whereas with cogeneration the facility is operating its own power-generating plant to carry some or all of the electrical load.
The key to the financial success of cogeneration units is having a constant thermal load that can use the rejected heat of the unit, such as a swimming pool, laundry service, domestic hot water need, etc. Based on the uncertainties of proposed power industry deregulation, for the short term facility managers may find it desirable to only evaluate the financial feasibility of small-package type units. These are units which are typically smaller than 1,500 kW in size.
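The sensitivity of cogeneration economics to uptime can be illustrated with a rough sketch. The installed-cost range and the 95 percent availability target come from the text; the net savings per hour of operation is a hypothetical assumption.

```python
# Sketch of why cogeneration economics hinge on staying on line. The
# installed cost uses the text's $/kW range; the net savings per hour
# of operation is a hypothetical assumption.

SIZE_KW = 1000
INSTALL_COST = 1100 * SIZE_KW   # mid-range of the $700-$1,500/kW cited
NET_SAVINGS_PER_HOUR = 40.0     # assumed avoided utility and boiler cost, $/hour

def annual_savings(availability: float) -> float:
    """Savings for a year at the given fraction of hours on line."""
    return 8760 * availability * NET_SAVINGS_PER_HOUR

# At the 95 percent availability target the unit runs about 8,322 hours;
# with these assumptions, each percentage point of downtime forfeits
# roughly $3,500 of annual savings.
```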
Regardless of the type of generator in place, a good maintenance program is a must. There is usually only a moment's notice before a unit is needed, and it must operate reliably.
Periodic maintenance must include both the generator as well as the engine. The maintenance should include lubricating the moving parts of the engine, and checking the engine oil, the batteries, the fuel system, the cooling system, etc. The generator needs to be checked for the insulation value of the stator and rotor windings, the rotor bearings, the rectifiers, etc.
Another use of small- and medium-sized generator applications is to provide the primary source of power in remote locations where it is not economically feasible to connect to the utility service. In
such applications, great care is needed to ensure the units will be able to perform all the time with a high level of reliability. In addition, unit efficiency will be a great factor in choosing a particular unit.
If electric power is the blood that keeps equipment running, the power distribution is the complex system of veins and arteries. Wiring, relays, transformers, substations and switching gear must be selected and sized properly, and fitted together just right to ensure reliable performance. In addition, a variety of instrumentation can be used to monitor the system's performance.
Utility Distribution Systems
Electrical energy is generated by the utility using large generation plants. The power is conducted along overhead transmission lines to transmission substations. From there, it is conducted to local distribution centers and finally to the customer's site.
The power distribution system is analogous to a large manufacturer that has several plants and warehousing facilities across a geographically wide area. In addition, they have local distribution centers for wholesale as well as outlet stores for retail customers. The manufacturer will produce a particular product in every one of the plants based on capacity, cost of manufacturing, distance from warehouse, and total customer demand at each outlet store. The same holds true for a power distribution grid. What makes the electrical systems more complicated is the fact that generation and consumption of electricity must match at all times since the system does not store (inventory) any energy. Thus, the generation capacity must be able to meet any load changes instantaneously.
To meet the basic requirements, a power transmission system must be able to reliably meet customer load demands. In addition, the voltage variation must be within +/- 10 percent of the nominal voltage, the frequency variation should not exceed 0.1 Hertz, and the electricity should be delivered at an economical cost in an environmentally acceptable manner. These are tight requirements.
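These tolerances can be expressed as a simple acceptance check. The sketch below assumes a 480-volt, 60 Hz service and a 0.1 Hz frequency band (a commonly used utility figure); adjust the nominal values to the service in question.

```python
# Acceptance check for delivered power: voltage within +/-10 percent of
# nominal and frequency within a 0.1 Hz band. The 480 V / 60 Hz nominals
# are assumptions for illustration.

def within_tolerance(volts: float, hertz: float,
                     nominal_v: float = 480.0, nominal_hz: float = 60.0) -> bool:
    """True when both voltage and frequency fall inside the tolerance bands."""
    v_ok = abs(volts - nominal_v) <= 0.10 * nominal_v
    f_ok = abs(hertz - nominal_hz) <= 0.1
    return v_ok and f_ok
```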
The first commercial electrical distribution system was installed in New York City in 1882. It was a DC system serving highly localized areas. To meet widely varying voltage requirements and be able to transmit power to larger geographical areas, by 1910 AC systems gained acceptance and popularity. The economic benefits and flexibility of AC made it the sound choice, and today it is the standard electrical system for most uses throughout the world.
On-Site Power Distribution
The voltage selections inside a facility mirror the same issues that utilities evaluate. The incoming voltage level to a facility is directly a function of the load. The higher the power demand, the larger will be the incoming voltage. The distribution voltage among various buildings will also be a function of distance and the power requirements for individual facilities. For most commercial and small industrial facilities, the incoming voltage can be 2,400, 4,160 or 13,800 volts. For some larger commercial or industrial customers, the voltage can be up to 138 kV.
In the past several years, some customers have migrated to higher voltages for purely economic reasons. In many parts of the country, the electric rates are lower at transmission level voltages as compared to distribution level voltages. This trend could possibly intensify with the pending legislation concerning electrical deregulation and retail power wheeling (see Chapter 7).
In old installations, the incoming power is connected to a large transformer which will step down the voltage for practically all loads in individual buildings. In such a system, in-house distribution will be at low voltage in a radial arrangement. This is a simple and straightforward approach. As always, reliability and cost are key concerns.
In the past half a century or more, as the electrical loads of individual buildings gradually increased, the conductor sizes between the main distribution transformer and the buildings became a concern. To address this issue, the in-house distribution systems between individual buildings were changed from low voltage to higher voltages. Today, in most facilities, the incoming voltage at individual buildings is 2,400, 4,160 or 13,800 volts. This reduced the distribution cost quite significantly.

Figure 3-1. Cost comparison between old and new distribution systems. Old systems relied on low voltages, while new systems rely on higher voltages in a configuration that reduces the distribution cost.
For example, to serve a 1,000 kVA load at the three voltage levels above, the installation cost will be 25 percent, 15 percent and 12.5 percent, respectively, of the cost of a low-voltage installation (see Figure 3-1). In addition, the voltage drop will be only 20 percent of that of the old approach, which can significantly improve voltage regulation. This is why, for new installations, most facility managers migrate to higher voltages for in-house power distribution.
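The cost fractions above can be restated as a small worked example. The low-voltage baseline dollar figure is hypothetical; only the percentages come from the text.

```python
# Worked restatement of the installation-cost fractions for serving a
# 1,000 kVA load. The low-voltage baseline dollar figure is hypothetical;
# the 25/15/12.5 percent fractions come from the text.

LOW_VOLTAGE_COST = 100_000.0  # assumed cost of the old low-voltage approach, $

cost_by_voltage = {
    2_400:  0.25  * LOW_VOLTAGE_COST,   # $25,000
    4_160:  0.15  * LOW_VOLTAGE_COST,   # $15,000
    13_800: 0.125 * LOW_VOLTAGE_COST,   # $12,500
}
```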
A large facility may be fed by one or more utility feeders. A two-feeder arrangement is desirable to improve system reliability. For medical facilities and other critical loads, a minimum of two feeders is required.
Figure 3-2. Schematics of double-ended and triple-ended stations. A double-ended station improves reliability, while a triple-ended station adds a third feeder as a tie-breaker so that if service to one of the first two is interrupted, the interruption will be only momentary.
In some installations, a third feeder is brought from a different utility substation to greatly improve reliability and keep power interruptions at a minimum. When a facility is served by more than one feeder such as a double-ended station, the site load is divided roughly equally between them. Moreover, the two feeders will be connected with a tie breaker. Under normal operations, the breakers to the individual feeders will be in the closed positions while the tie breaker will be in the open position. Now, if the service to one of the feeders is interrupted, the particular feeder will be isolated by opening the appropriate breaker followed by closing the tie breaker. This way, the service interruption will only be momentary. After the faulty feeder is repaired, it will be closed and the tie breaker opened. The double-ended station is also useful when the utility company is performing maintenance on individual feeders. The work can be accomplished without service interruption to any part of the facility (see Figure 3-2).
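The breaker sequencing described above can be sketched as a simple interlock rule: the faulted feeder's breaker must open before the tie breaker closes. The breaker names and dictionary representation below are illustrative; a real station uses hardwired interlocks and protective relays.

```python
# Sketch of double-ended station switching: two feeder breakers normally
# closed, tie breaker normally open. The interlock only allows the tie to
# close once the faulted feeder's breaker is open. Names are illustrative.

def respond_to_feeder_fault(breakers: dict, faulted: str) -> dict:
    """Return the new breaker states after isolating the faulted feeder."""
    new = dict(breakers)
    new[faulted] = "open"            # isolate the faulted feeder first
    if new["feeder_a"] == "open" or new["feeder_b"] == "open":
        new["tie"] = "closed"        # pick up the dead bus from the live feeder
    return new

normal = {"feeder_a": "closed", "feeder_b": "closed", "tie": "open"}
after_fault = respond_to_feeder_fault(normal, "feeder_a")
```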
Radial Systems - Moving from the facility substation to individual buildings, there are several feeder arrangements such as primary radial, parallel, looped or network. The radial system is fed from only one transformer and one primary cable. There is no duplication of equipment or cables, and no reserve capacity is needed in any of the distribution equipment (see Figure 3-3).

Figure 3-3. Schematic of a radial system. The radial system presents the lowest first cost and the simplest approach for distributing electric power.
The radial system presents the lowest first cost and the least complicated system. With an electrical system, the less complicated it is, the less likely it will fail.
The loss of a primary cable or a transformer will interrupt the service to the load. Since there is only one path for the power flow, the service can not resume until the affected feeder is repaired.
Parallel Systems - A parallel system is fed by two or more cable-and-breaker systems connected in parallel. All parallel cables are energized and each is routed through a different duct bank to improve reliability.
A parallel system requires four times as many breakers as a radial system, and its cable costs are twice as much.
The idea behind a parallel system is that if any of the cables or breakers experiences a fault, that element can be isolated without any service interruption to the load. Protection and fuse coordination are more complicated than with the radial system. The parallel system is appropriate for critical operations where any power interruption, no matter how small, can have a high negative impact (see Figure 3-4).

Figure 3-4. Schematic of a parallel feeder arrangement. While more expensive and complicated than a radial configuration, a parallel feeder arrangement offers immediate isolation and switching to an operating feeder during a fault or other problem. It is ideal for critical operations, where even a small power interruption is not acceptable.
Secondary Selective Systems - If the two secondary sides of a radial system are connected with a tie breaker, a secondary selective system is formed. Normally, the tie breaker will be open and each transformer supplies its own load. If a fault occurs in the primary cable or transformer, the primary breaker for that feeder and the secondary switching device for the transformer will open to isolate the fault. The secondary tie breaker will then close and energize both busses from the same transformer. This implies that each transformer must have sufficient capacity to carry the total load of both feeders. An adequate interlocking mechanism is needed to ensure that the tie breaker will only close after the faulty feeder and transformer have been isolated. In addition, the fault characteristics of the current protective devices should be coordinated to provide selective operation between all devices. The topology of a secondary selective system is similar to a double-ended substation (see Figure 3-5).

Figure 3-5. Schematic of a secondary selective arrangement. It is similar to a double-ended substation as shown in Figure 3-2 for reliability, but is connected to a tie-breaker, thus acting like a triple-ended substation.
Primary Selective Systems - This system reduces downtime if there is loss of power to a primary feeder.
Here, every circuit is fed by two primary feeders. Under normal conditions, the loads are divided equally among the different feeders. This means that for every circuit, one of the primary breakers is normally closed and the second is normally open.
Should a fault occur affecting one of the feeders serving a particular load, the normally closed breaker will open to isolate the fault, and the second breaker can then be closed to energize the circuit from the other feeder (see Figure 3-6). The primary selective system costs more and is more reliable than the radial system, but less so than a secondary selective system.

Figure 3-6. Schematic representation of a primary selective arrangement. As can be seen, every circuit is fed by two primary feeders. If one fails, the other carries the load. This minimizes downtime. The cost and reliability of a primary selective system are greater than a radial arrangement but less than a secondary selective arrangement.
Primary Looped Systems - With the looped system, both ends of a feeder are connected to a single power source with breakers at both ends. Then two or more points within the loop are connected to individual loads. The connection to each load is done using two breakers. This way every load can be served by either side of the loop (see Figure 3-7).
The amount of cable required for a looped system is roughly equal to that of a radial system, but the cost of the additional breakers makes a looped system more expensive than a radial system, though less expensive than a parallel system. Reliability and service continuity, however, are almost as good as a parallel system.
The looped system is a popular arrangement for many facilities. Normally, each side of the loop is serving half of the load and the other half is isolated by disconnecting two of the breakers. However, if any section of the cable in the loop experiences a fault, that section can be isolated without any loss of service to the load.

Figure 3-7. Schematic representation of a primary looped system. This popular arrangement provides backup protection that is almost as good as a parallel system. Cost and reliability are greater than a radial arrangement but less than a parallel arrangement.
Network Systems - The network system is fundamentally different from the other systems in the way it serves the load. It provides the highest level of reliability and continuity of service of all the feeder arrangements. There are different types of network arrangements. A secondary selective network is similar to a secondary selective system; the difference is that the tie breaker is closed under normal conditions and both transformers operate in parallel to supply the entire load. Should a fault occur, the primary service to that particular feeder is isolated, and the load will not detect any service interruption.
A variation on the above is a primary selective secondary network system, where every primary circuit is fed from two separate feeders. This naturally improves system reliability, but note that each circuit is connected to only one primary feeder.

The network arrangement, shown schematically in the accompanying figure, provides the greatest system reliability. Should one of the feeders fail, there will be no detectable service interruption. This is the ideal arrangement for critical operations such as computer and data operations and healthcare applications.
A final network type is where several primary feeders are simultaneously connected. Moreover, the secondary systems are also tied together, providing several simultaneous paths to feed the load. If there is any fault affecting any of the individual feeders on the primary or secondary sides, it can be automatically isolated and the load will not experience any service interruptions.
The high-voltage distribution in a facility can be overhead or underground.
Overhead Systems - Overhead systems have a number of advantages. They have a lower initial cost, are easier to maintain and easier to troubleshoot.
On the other hand, overhead systems are prone to outages
resulting from bad weather conditions such as severe winds, tree branches falling on cables, or vehicles hitting power poles. In such situations, if a live cable breaks and drops to the ground, it can create a serious safety hazard. Overhead lines can also cause interference problems for voice, data and video systems. In addition, the lines are susceptible to contaminants such as coal dust, acids and pollution, which cause "flashovers" around the insulators if they are not cleaned regularly (see Chapter 6). Finally, overhead systems are not aesthetically pleasing.
For these reasons, overhead systems are only used in industrial environments, construction sites or sparsely populated large facilities.
Underground Systems - By contrast, underground systems overcome almost all of the disadvantages associated with overhead systems. They are more expensive to install and harder to maintain, however.
With underground systems, there are several considerations:
1. First, the cable must be installed in concrete-encased duct banks. Do not install high-voltage cables as a direct-buried system.
2. When designing a duct bank for new installation, we must consider system flexibility and future expansions. In most cases, direct links between buildings might limit flexibility in the future.
3. It is essential to have accurate and up-to-date as-built construction drawings on file.
4. When installing new duct banks, be generous in incorporating spare conduits. The marginal cost of additional conduits is negligible compared to its potential benefit in the future.
5. Another important element of underground systems is proper care of the manholes. Make sure that electrical manholes are accurately identified in the drawings and clearly marked in the field. Manhole drainage must be addressed, and sump pumps should be installed where gravity flow is not possible. The maintenance of manholes has a direct impact on system reliability because all outdoor cable splices in the power system are located in manholes. During wet seasons, in the absence of good drainage, water will accumulate until the cable splices become submerged, leading to cable failure and a subsequent power outage. Therefore, manhole drainage should be checked and cleaned, especially before the rainy seasons in the spring and fall.
6. Lastly, based on the latest OSHA guidelines, individuals working in manholes must be adequately trained to work in confined spaces.
The electrical distribution system in a building is a low-voltage system. The standard low voltages in North America are:
Single-phase, 2-wire, 120V.
Single-phase, 3-wire, 240/120V.
Three-phase, 4-wire, 240/120V.
Three-phase, 4-wire, 208/120V.
Three-phase, 3-wire, 480V.
Three-phase, 4-wire, 480/277V.
Three-phase, 3-wire, 600V.
Three-phase, 4-wire, 600/347V.
In Europe and most other parts of the world, by contrast, the standard low voltage is a three-phase, 4-wire, 380/220V system.
The cables for in-house power distribution should be sized in such a way to limit the voltage drop in the main circuit to two percent and in branch circuits to one percent. The wire conduits should be sized to allow future wire runs if needed. Similarly, it is important to have spare capacity in the main and branch distribution panels. For critical loads, it is a good idea to have dedicated circuits from the main distribution panels.
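To make the sizing guideline concrete, here is a minimal sketch of checking circuits against the two percent and one percent voltage-drop limits. The current and resistance figures are illustrative assumptions, not values from this book, and the calculation considers conductor resistance only.

```python
# Sketch of a voltage-drop check against the guideline above: limit the drop
# to 2% on the main circuit and 1% on branch circuits.
# All circuit parameters below are illustrative assumptions.

def voltage_drop_percent(current_a, resistance_ohm, system_voltage):
    """Approximate (resistive-only) voltage drop as a percent of system voltage."""
    return 100.0 * current_a * resistance_ohm / system_voltage

# Assumed example: 50 A on a main feeder with 0.04 ohm conductor resistance at 208 V,
# and 16 A on a branch with 0.10 ohm conductor resistance at 120 V.
main_drop = voltage_drop_percent(50, 0.04, 208)
branch_drop = voltage_drop_percent(16, 0.10, 120)

print(f"main: {main_drop:.2f}% (limit 2%)")
print(f"branch: {branch_drop:.2f}% (limit 1%)")
```

In this illustrative case the main feeder passes the two percent limit, while the branch exceeds one percent, so a larger branch conductor would be selected.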
Another important factor to consider is load diversity. The idea behind diversity is that all of the connected load in a distribution system is not in service simultaneously. For example, in commercial
applications five duplex convenience outlets are connected to a 20 ampere circuit. Since most of the outlets are not continuously used, it does not cause a problem. However, if there are several loads which will be in continuous service and draw higher than 20 amperes, separate circuits will be needed. Therefore, it is important to realize that design guidelines for any building assume a certain level of load diversity based on typical installations. However, if the load requirements for any particular circuit are higher than the design assumptions, then additional circuits should be installed.
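The diversity reasoning above reduces to simple arithmetic. The per-outlet load and diversity factor below are assumptions chosen for illustration, not design values from this book.

```python
# Illustrative diversity check for a receptacle circuit: five duplex outlets
# on a 20 A branch circuit. The load and diversity figures are assumptions.

outlets = 5
connected_load_per_duplex_a = 6.0   # assumed worst-case load per duplex outlet
diversity_factor = 0.5              # assumed fraction of outlets in use at once

connected_load = outlets * connected_load_per_duplex_a    # total connected amps
expected_demand = connected_load * diversity_factor       # expected simultaneous amps
breaker_rating = 20.0

print(f"connected: {connected_load} A, expected demand: {expected_demand} A")
print("additional circuit needed" if expected_demand > breaker_rating
      else "within circuit rating")
```

With these assumptions the connected load (30 A) exceeds the breaker rating, but the expected simultaneous demand (15 A) does not, which is the essence of the diversity argument; a continuous load above the rating would call for separate circuits.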
Transformers are the most important element of AC electrical systems. They are used to change the line voltage of power systems; a transformer can step line voltage up or down from one value to another. There are no moving parts in transformers, so they are static devices. Energy transfer between the input and the output of the transformer is accomplished through magnetic coupling. Since there is no electrical connection between the input and output of a transformer, it provides electrical isolation. The transformer thus accomplishes two tasks: changing the line voltage and providing an isolated power source for sensitive devices.
Had it not been for transformers, large electrical transmission and distribution systems would not be possible; the line losses of sending power across long distances at low voltage would be too high to be economical. Instead, at the utility, step-up transformers increase the voltage to the very high levels necessary to travel long distances with minimal losses. Once the power approaches the point of use, it is reduced to usable voltages along the way by step-down transformers.
Elements of Transformers
Transformers have three main elements: two electric coils and a magnetic core.
The coils are usually made out of copper, but they can occasionally be of aluminum.
The core is constructed from high-permeability magnetic material (various metallic compounds). The transformer core is not one solid piece; it consists of laminated metal sheets which are insulated from each other. A laminated core, as opposed to a solid one, reduces core losses, which work against the goal of distributing electrical power as efficiently as possible.

Schematic representation of a core-type transformer. Core-type transformers offer good regulation, and the insulation of each winding can be rated to withstand a lower voltage. The core is a construction of laminated metal sheets which are insulated from each other.
The line side of the transformer is called the primary and the load side is called the secondary. The turns ratio between the primary and secondary windings determines the voltage and current ratio between the line and load sides. For a step-up transformer, since the line voltage is lower than the load-side voltage, the primary winding will have fewer turns than the secondary winding. Similarly, the wire size of the primary winding will be larger than that of the secondary winding.
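The turns-ratio relationships can be sketched as follows for an ideal (lossless) transformer; the 480 V step-down example and the winding counts are illustrative.

```python
# Ideal transformer relationships: Vp/Vs = Np/Ns and Ip/Is = Ns/Np,
# so input power equals output power. Numbers below are illustrative.

def secondary_voltage(v_primary, n_primary, n_secondary):
    """Secondary voltage from the turns ratio."""
    return v_primary * n_secondary / n_primary

def secondary_current(i_primary, n_primary, n_secondary):
    """Secondary current: the inverse of the voltage ratio."""
    return i_primary * n_primary / n_secondary

# Assumed step-down example: 480 V primary, 4:1 turns ratio (400:100 turns)
vs = secondary_voltage(480, 400, 100)   # 120 V on the secondary
i_s = secondary_current(10, 400, 100)   # 10 A primary becomes 40 A secondary

print(f"{vs} V, {i_s} A")
```

Note that the lower-voltage side carries the higher current, which is why it needs the larger conductor, as described above.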
The magnetic core of a transformer may be a shell-type or a core-type.
If the transformer windings are on different legs of the core, it is called a core-type winding (see Figure 3-9). With a core-type winding, the insulation of each winding has to withstand only that winding's voltage. In addition, these transformers have better regulation.
With shell-type transformers, the coils are wound on top of each other on the central leg of the core, as shown in Figure 3-10. The magnetic field has two paths for the flux. The insulation of both windings must be able to withstand the voltage difference between them.

Schematic representation of a shell-type transformer. The coils are wound on top of each other on the central leg of the core.
Transformers are among the most efficient pieces of electrical equipment; their efficiency typically ranges between 95 and 99 percent. An ideal textbook transformer has zero losses, but since actual transformers have some losses, the output voltage and current will be slightly lower than in the ideal case. To understand how to improve efficiency, it is important to know the sources of transformer losses.
There are primarily two types of losses with transformers: copper losses and iron losses.
Copper losses represent the winding losses due to the coil resistance, and the extent varies directly with the transformer load.
Iron losses are related to the magnetic core and are independent of transformer loading. This means these losses are the same at full load and no load. Iron losses are due to three types of losses: hysteresis, eddy current and flux leakage.
Hysteresis Losses - Hysteresis losses represent the energy needed to reverse the polarity of the magnetic flux in the core and are a function of the core material and the power frequency.
Eddy Current Losses - As the magnetic flux of the windings cuts the core, an electric voltage is induced in the core which results in currents similar to eddies in streams. The main reason why magnetic cores are not made in one solid piece is to reduce these eddy current losses.
Leakage Losses - In addition to the above two losses, since all of the magnetic flux is not confined to the core, a small fraction leaks out of the system.
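The load dependence of the two loss families can be sketched as below: copper loss scales with the square of the per-unit load, while iron loss is constant regardless of load. The kVA rating and loss figures are assumptions for illustration only.

```python
# Sketch of transformer efficiency from its two loss components.
# Copper (winding) loss varies with load squared; iron (core) loss is constant.
# The 500 kVA rating and loss values below are illustrative assumptions.

def efficiency(load_kva, rated_kva, cu_loss_full_kw, fe_loss_kw, power_factor=1.0):
    load_pu = load_kva / rated_kva
    output_kw = load_kva * power_factor
    copper = cu_loss_full_kw * load_pu ** 2   # load-dependent winding loss
    iron = fe_loss_kw                         # constant core loss, even at no load
    return output_kw / (output_kw + copper + iron)

# Assumed 500 kVA unit: 5 kW copper loss at full load, 1.5 kW iron loss
print(f"full load: {efficiency(500, 500, 5.0, 1.5):.3f}")
print(f"half load: {efficiency(250, 500, 5.0, 1.5):.3f}")
```

Both values fall in the 95 to 99 percent range cited above; note that because iron loss persists at no load, a lightly loaded transformer still dissipates energy continuously.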
In addition to efficiency, losses also affect transformer regulation. Regulation is the percentage change between the no-load and full-load voltages of the transformer. The regulation of a unit is therefore closely tied to the winding resistance: if this resistance is high, the difference between the no-load and full-load voltages will be more appreciable. Regulation is also a function of power factor. Typically, the regulation of a transformer ranges from 3 to 5 percent.
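The regulation definition above reduces to a single formula; the voltages in the example are illustrative.

```python
# Regulation as defined above: the percent change from no-load
# to full-load secondary voltage. Example voltages are illustrative.

def regulation_percent(v_no_load, v_full_load):
    return 100.0 * (v_no_load - v_full_load) / v_full_load

# Assumed example: 124.8 V at no load sagging to 120.0 V at full load
print(f"{regulation_percent(124.8, 120.0):.1f}%")
```

This example works out to 4 percent, within the typical 3 to 5 percent range mentioned above.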
The three types of transformers are named for the coolant used in the unit: dry (air-cooled), oil and PCB.
Dry-Type Transformers - Dry type transformers use air as a coolant. The air flow through the transformer can be accomplished using fans or natural convection. Dry transformers are physically larger in size compared to oil transformers. They have a lower first cost and do not pose a fire hazard.
Dry-type transformers, however, need more maintenance and require more restrictive ambient conditions to operate in. For example, they are primarily used for indoor applications. For outdoor applications, they require weather-proof enclosures.
When installing dry transformers, it is important to install the
unit in such a way that the flow of air is not impeded. In addition, they must not be installed in rooms where they will be subjected to high temperatures. Otherwise, the life of the unit could be drastically reduced. Finally, dry-type transformers rated over 35,000 volts must be installed inside vaults.
Oil-Type Transformers - With oil transformers, the entire winding and core assembly is immersed in dielectric fluid. Oil transformers have been in use since the turn of the century. The oil serves a number of different purposes. First, it acts as a coolant for the transformer. It protects the windings from moisture. And it must withstand voltage surges as well as thermal and mechanical stresses. That is why these units can withstand voltage surges and overloads better than dry types.
Oil transformers are smaller and more compact in size compared to dry-type transformers. The ambient conditions with oil-type transformers are not as restrictive as the dry type. They can be used for indoor as well as outdoor applications.
Since the dielectric fluid used in most transformers is flammable, however, they must be installed in vaults when used indoors. But today, it is easy to find oil-type transformers with non-flammable dielectric fluid, such as silicone-filled transformers, which addresses this limitation.
Oil transformers require little care and maintenance. An important key to continuous operation of oil transformers, however, is to protect the dielectric fluid from contamination. This is because small quantities of moisture can drastically reduce the insulation value of the dielectric fluid. For instance, a water content of less than 50 parts per million (ppm) can reduce the insulation value of oil by half. This translates into one tablespoon of water in a large transformer. In addition, oxidation can result, forming sludge and deteriorating the insulation value. This is caused by the presence of free oxygen in the air and the breakdown of the cellulose in the transformer. The oxygen reacts with hydrocarbons in the oil and sludge is formed. Therefore, periodic testing of the oil to ensure its dielectric value is strongly recommended.
Normally, testing the oil once every 3-5 years is adequate. However, if the transformer has been subjected to a lot of overload and voltage surges, it should be tested more frequently. It is also important to note that the dielectric fluid used in transformers is chemically dynamic. This means that the transformer oil will eventually deteriorate and develop sludge even if the transformer is not in service.
PCB-Type Transformers - PCB transformers are filled with a synthetic liquid as the dielectric fluid. The chemicals used are polychlorinated biphenyls (PCBs), which were used in transformers from the early 1930s to the mid-1970s. PCB has a number of desirable characteristics as a dielectric fluid, such as good insulation value, high effectiveness as a coolant, non-flammability and chemical stability. That is why PCB became the dielectric fluid of choice for many transformers. PCB liquid is known under the generic name of askarel. Common trade names for PCB fluids include Aroclor, Asbestol, Pyranol, Dykanol and Chlorextol.
In 1976, the U.S. Congress banned the manufacture of PCBs and regulated the use of existing PCBs. One of the main reasons for the action was, ironically, the high chemical stability of the product, the very property that had made PCB a desirable transformer fluid. Since PCB is a stable substance, it does not biodegrade after being released into the environment. It can eventually make its way into the human food chain and cause negative health effects. Additionally, after a 1983 fire in San Francisco involving a PCB transformer, traces of polychlorinated dibenzofurans (PCDFs) and polychlorinated dibenzodioxins (PCDDs) were discovered. Both PCDFs and PCDDs are toxic substances and known carcinogens.
Based on these facts, the U.S. Environmental Protection Agency (EPA) issued a series of regulations relating to PCB-containing substances during the 1980s. The regulations apply to transformers with a PCB concentration of 50 ppm or more. This is an extremely small concentration, analogous to comparing the thickness of a credit card with the height of the Empire State Building. It meant that not only PCB transformers but many other non-PCB units that had been contaminated with small amounts of PCBs were regulated. So during the 1980s and early 1990s, many institutions took measures to replace most of their PCB transformers with non-PCB units.
In many cases, instead of replacing the transformer, the PCB fluid was replaced with a non-PCB liquid. One of the biggest challenges of this retrofit process was keeping the PCB concentration below 50 ppm over time. In almost all retrofits, the fill process could reduce the PCB concentration to less than 50 ppm after replacing the fluid and rinsing the windings. However, because of the low 50 ppm threshold, even a minute trace trapped in the windings would eventually dissolve into the new dielectric fluid and raise the PCB concentration back above the 50 ppm level.
Today, there are few PCB transformers in operation. If the reader's facilities still have PCB transformers in use, it is essential to stay current and follow U.S. EPA guidelines regarding handling PCBs.
We have stated that there is no electrical connection between the primary and secondary windings of a transformer. The exception to this principle is the autotransformer, in which the two windings are connected to each other (see Figure 3-11). Autotransformers are commonly used to boost the incoming voltage by only a few percentage points.
Autotransformers have higher efficiency and lower losses. Moreover, they are much smaller in size for a comparable wattage.
Since the primary and secondary windings of an autotransformer are physically connected, there is no electrical isolation between the primary and secondary sides. In addition, because of its lower leakage impedance, the available fault current of an autotransformer is larger than that of a two-winding unit. Therefore, when using autotransformers, it is a good idea to check the interrupting capacity of protective devices on the secondary side of the transformer.
Considerations For Operating Transformers
There are a number of factors that need to be addressed in operating transformers.
Transformer Nameplate Ratings - Foremost is the rating of the transformer. As with other electrical equipment, the limitation is thermal in nature: the rating of a unit is defined by how much it can be loaded without the temperature exceeding a certain level. This is why transformer ratings are based on kVA rather than kW, because kVA better determines the temperature rise. Generally, transformer ratings are based on an ambient temperature of 40°C at sea level. If a transformer operates in a hotter climate or well above sea level, the unit must be derated according to the manufacturer's recommendations.
The kVA rating of a transformer is stated on the unit nameplate data. In addition to the rating, the nameplate will contain other pertinent data such as the primary and secondary voltages, impedance, taps, insulation class, etc.
The insulation class determines the insulation temperature limits. There are four insulation classes: A, B, F and H.
Schematic representation of an autotransformer. Autotransformers are the only type of transformer in which the two windings are connected. They are used in applications that need to boost the incoming voltage by only a few percentage points. They are highly efficient and compact.
Class A insulation implies that the maximum temperature rise in the windings must not exceed 55°C. The maximum allowable temperature rise for the other insulation classes is higher in the above-mentioned order, up to a maximum of 115°C.
Sound Level - For many indoor applications, another important parameter of a transformer is its sound level. Since the magnetic field in a transformer oscillates continually at the line frequency, the unit radiates audible sound energy. In addition, the magnetic force between the laminations produces motion that generates noise. The sound level is usually a function of the load level. The noise level can be reduced by checking for any loose laminations or connections. For some applications, it may be desirable to build an acoustic enclosure.
Transformers Operating In Parallel - Sometimes one transformer is not large enough to meet the load demand, and two or more units have to be installed in parallel. In such cases, it is important to have matched transformers: in addition to the primary and secondary voltages, the winding impedances, wiring polarity and turns ratios must be the same. Otherwise, the units will not be equally loaded and there can be a large circulating current in one of the units. In this case, not only will the losses be appreciably higher, but the system capacity will be much lower than the arithmetic sum of both units' ratings. To illustrate this point, a one percent difference in the windings of two transformers can generate a circulating current of about 20 percent.
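A rough per-unit estimate consistent with the rule of thumb above: the mismatch voltage drives a circulating current through the sum of the two leakage impedances. The 2.5 percent impedance values are an assumption chosen for illustration.

```python
# Rough per-unit estimate of circulating current between two paralleled
# transformers: mismatch voltage divided by the sum of the leakage impedances.
# The 2.5% impedance figures are illustrative assumptions.

def circulating_current_pu(voltage_mismatch_pu, z1_pu, z2_pu):
    return voltage_mismatch_pu / (z1_pu + z2_pu)

# A 1% winding mismatch with two units of 2.5% impedance each:
i_circ = circulating_current_pu(0.01, 0.025, 0.025)
print(f"circulating current: {i_circ:.0%} of rated current")
```

With these assumed impedances, a one percent mismatch yields a circulating current of 20 percent of rated current, matching the figure quoted above; this current flows even at no load.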
There are many ways to connect transformers. First, transformers can be single-phase or three-phase. Three-phase transformers can have all three windings on the same core. In some cases, it is more practical to use three single-phase units connected together as a three-phase transformer. The windings of both primary and secondary may be delta or wye.
This gives rise to four possible configurations: delta-delta, wye-wye, delta-wye, and wye-delta.
Delta-Delta Connection - A delta-delta unit has a number of advantages. First, any third-harmonic current present will be trapped by the delta winding. In addition, if one of the three windings fails, the transformer can still function as an open delta system providing 58 percent of the full rating. This characteristic is particularly helpful when the present load is moderate but will increase over time: two single-phase units can be connected as open delta, and when the load increases, a third unit can be added. This delays part of the capital investment.
The main disadvantages of an open delta system are higher losses, poorer voltage regulation, and a rating about 13 percent lower than the combined ratings of the individual single-phase transformers. In addition, the probability of ferroresonance problems is higher with delta-delta transformers.
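The open-delta figures quoted above fall out of simple ratios, sketched here: with one of three equal units lost, the bank delivers 1/√3 of its original rating, and the two remaining units are utilized to √3/2 of their combined nameplate rating.

```python
# Standard open-delta arithmetic behind the percentages quoted above.
import math

# Capacity after losing one of three equal units, relative to the full bank:
bank_fraction_after_failure = 1 / math.sqrt(3)   # ~0.577, the "58 percent"

# Capacity of two units in open delta, relative to their combined rating:
open_delta_utilization = math.sqrt(3) / 2        # ~0.866

print(f"{bank_fraction_after_failure:.1%} of the original bank rating")
print(f"{open_delta_utilization:.1%} of the two units' combined rating")
```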
Wye-Delta Connection - The wye-delta transformer is normally used to step up voltage. Its advantage is providing a good ground that will not shift with unbalanced single-phase load. A typical application is coupling a cogeneration unit with the utility power grid. This way, the cogeneration system will have a good ground, and no third-harmonic currents will be transferred to the utility power grid.
Delta-Wye Connection - A delta-wye transformer is normally used as a step-down unit for in-house distribution systems. The wye provides a good ground as well as the capability of serving three-phase and single-phase loads. Moreover, any third-harmonic currents generated by in-house equipment will be trapped by the delta winding and will not affect the utility system.
Wye-Wye Connection - A wye-wye transformer is usually employed to connect two delta-delta transformers. It provides a good ground connection and achieves a neutral for the system. Both the primary and secondary sides can have single- and three-phase connections. One problem with this connection is the inability to trap the third harmonic. To overcome this shortcoming, a tertiary delta winding is added.
Transformers play a critical role in power distribution systems. A comprehensive transformer maintenance program will contain several components:
1. The first element is visual observation of components for tightness, cleanliness and winding temperature. The insulators, surge arresters and busbars should be cleaned and checked for any insulation cracks and oil leakages periodically.
2. The insulation resistance of the windings should be measured with a megger and compared with prior readings.
3. For liquid-filled transformers, the dielectric fluid should be tested periodically.
4. For larger and more critical transformers, additional tests for insulation, power factor, turns ratio and excitation are recommended.
Switching stations play a critical role in power distribution systems. The switching station is the nerve center of an electrical distribution system: the incoming power from the electric utility is terminated there, and the outgoing feeders to individual buildings originate there as well.
Switching stations can be indoor or outdoor installations. The main elements of a switching station are high-voltage load-break switches, circuit breakers and disconnecting switches. In addition, there are a number of lesser items such as fuses, protective relays, metering and other devices. Since most of these are covered elsewhere in this book, here we will concentrate on circuit-interrupting devices.
Switches are used to connect or disconnect currents during normal conditions. Circuit breakers connect the electrical circuit non-automatically, but can open the circuit automatically or manually under normal or faulted conditions.
Both switches and circuit breakers are sized based on rated system voltage, continuous current capacity, and short circuit current-interrupting capacity. The power flowing through a large electrical system can be substantial. Disconnecting the flow of such immense energy is a complex process, especially for high-voltage systems.
Since electrical systems spend all but short periods of time in steady state conditions, facility managers primarily concern themselves with steady state behavior. In fact, few have a clear understanding of the transient behavior of power systems, and few references address the topic to any extent. In reality, an adequate knowledge of the transient response of an electrical distribution system is important for anticipating the severe and excessive voltages and currents a system may experience. Such stresses may disable equipment, shut down a facility, cause major blackouts and create unsafe conditions.
High-voltage interrupters consist of switches and circuit breakers that can open or close electrical circuits. When the contacts of these devices are in close proximity, just before the circuit opens or closes, the electric field rises rapidly. This tends to ionize the surrounding air molecules; in conjunction with the arc, high temperatures are generated and the contacts tend to weld together. These devices must therefore be able to withstand the simultaneous electrical, mechanical and thermal stresses that are exerted.

Safety is always a paramount concern with electrical distribution equipment operation and maintenance. Shown here is a lockout device designed to secure an individual circuit breaker without locking out the entire panel. It can be added to existing equipment. Courtesy: Panduit Corp.
There are a number of techniques used to extinguish the arc safely and reduce the stresses that are exerted on the contacts. This has led to the development of different types of interrupting technologies.
Since we talk a lot about switches and circuit breakers in this chapter, it is important to differentiate the two.
Switches - A switch is a device that can make or break the connection of an electrical circuit under normal conditions. It is not designed to interrupt fault currents. The switch insulators are coordinated with the rest of the system's Basic Impulse Level (BIL) so the switch will be able to withstand voltage surges. Switches contain two sets of contacts: stationary and moving.
The stationary contact is mounted on suitable insulators to meet the system BIL requirements. The moving contacts are hinged on one end to facilitate making or breaking of the electrical connection.
Circuit Breakers - A circuit breaker is a device that can close a circuit and interrupt the flow of current during normal and abnormal conditions. This means circuit breakers are able to open during short circuit conditions. Just like switches, the circuit breaker insulation can withstand the system BIL. In addition, a circuit breaker must be able to withstand the short-duration, high-magnitude currents that are induced during a short circuit condition. Circuit breakers are closed non-automatically, but if the current reaches a certain preset value, they operate automatically and interrupt the circuit.
The important parameters of a circuit breaker include the maximum steady state current it can carry, the maximum interrupting current, the maximum voltage and the interrupting time in cycles. The interrupting time is normally between 2 and 8 cycles, or 33 to 133 milliseconds. Quick circuit interruption is critical during a fault condition because large amounts of energy flow through the circuit and can cause serious damage.
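The cycles-to-milliseconds figures above assume a 60 Hz system, and the conversion is straightforward:

```python
# Converting breaker interrupting time from cycles to milliseconds.
# At 60 Hz, one cycle lasts 1/60 s, so 2 cycles is about 33 ms and
# 8 cycles about 133 ms, matching the range quoted above.

def cycles_to_ms(cycles, frequency_hz=60):
    return 1000.0 * cycles / frequency_hz

print(f"{cycles_to_ms(2):.0f} ms")
print(f"{cycles_to_ms(8):.0f} ms")
```

On a 50 Hz system the same cycle counts would correspond to 40 and 160 milliseconds.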
To accommodate such a quick response, the breaker must rapidly deionize the arc and cool the contacts. The contacts are opened by an automatic tripping mechanism, which consists of an overload relay that detects abnormally high currents. The tripping mechanism is usually powered by an external DC battery bank. When the current exceeds a preset value, the primary relay energizes and closes the contacts for the DC tripping coil. As the DC tripping coil is energized, the breaker's main contacts open and the main circuit is interrupted. There are a number of different kinds of circuit breakers in electrical systems. The most important types are air-magnetic, oil, vacuum and sulfur hexafluoride breakers, which are introduced below.
Air-Magnetic Circuit Breakers - Air-magnetic circuit breakers use air to extinguish the arc by elongating it with the help of a magnetic field. These breakers are common for installation voltages of up to 15 kV. The principle behind this technique is that when the length of the arc is greatly increased, the arc becomes weak, eventually cannot sustain itself, and is extinguished.
In addition to the main contacts, there is an auxiliary set that are called the arcing contacts. Sometimes, there is a third set of
intermediate contacts to improve the transfer of current between the arcing and main contacts. The intermediate contacts are of particular significance if the heavy-rated main contacts cannot be located near the arcing contacts. The arc chutes are located immediately above the arcing contacts. The arc chute is a V-shaped or semicircular device which has a number of insulation fins next to each other with an air gap between adjacent plates.
It should be noted that subsequent restrikes will contain less energy as compared to the original one. It should also be mentioned that the arc chutes in most breakers manufactured prior to the early 1980s might contain asbestos, which means the damaged units cannot be discarded in regular trash. As an asbestos-containing substance, it should be disposed of as a hazardous material in accordance with appropriate federal, state and local laws.
Air-magnetic breakers have a lower cost compared to other types. They can be installed indoors or outdoors. Since they do not pose any fire hazard, air-magnetic breakers can be installed indoors without the need for a fire-proof vault. On the other hand, they are bulkier and require a larger footprint, so if space is limited, they might pose difficulties.
Air-magnetic breakers must be installed in clean, dry environments. For indoor applications, the mechanical rooms that house such breakers must be supplied with filtered fresh air. They require more maintenance than other breaker types. These breakers have many more moving parts, so the mechanical upkeep is important to ensure safety and reliability. The breaker maintenance should include visual inspection of the parts for signs of wear and discoloration due to overheating. Any contact damage or charring due to arcing should be cleaned and repaired. The blow-out coil needs to be tested. The breaker alignment must be closely checked. If there is any misalignment, all three phases of the breaker will not be making electrical contact simultaneously. This can induce a ferroresonance problem (see Chapter 6).
Another important maintenance task is testing the speed at which the contacts open (in units of cycles) and comparing it with the nameplate characteristics of the breaker. The speed of opening is important for all types of breakers.
Finally, the trip mechanism and the associated DC battery bank circuit should be examined. It is important for facility managers to pay close attention to the battery bank and check the charge. Many times, a breaker has failed to operate and de-energize the circuit during a fault because of inadequate charge in the batteries. One way to avert such costly accidents is to periodically check the batteries as well as the battery charger. Since batteries have a finite useful life, much shorter than that of the breaker, battery replacement should be planned every few years.
Oil Circuit Breakers - In an oil circuit breaker, the contacts and all other live parts of the unit are immersed in a steel tank filled with dielectric fluid. When the circuit breaker is closed, the current flows through one set of porcelain bushings to the fixed contacts, through the moving contacts, and finally out the other set of bushings. When the circuit is interrupted, the contacts open and an arc is generated. The heat of the arc vaporizes the dielectric fluid, forming non-ionized gas. The decomposition of the oil produces a gas rich in hydrogen, and since hydrogen resists ion-pair production, the arc is extinguished and the circuit is cleared.
There are two types of oil breakers: dead tank and live tank.
Dead tank units are bulkier and contain a lot more oil than the live tank units. The interrupting capability of the breaker can be enhanced by adding oil chambers. As the contacts open, a fresh jet of oil flows over them and the arc is extinguished rapidly.
Live tank oil breakers contain small quantities of oil and are designed so the arc is deflected and elongated by a self-induced magnetic field. The arc is blown against a number of insulating plates that will break up the arc and cool it quickly.
Oil circuit breakers are smaller in size compared to the air-magnetic type. They have long been the workhorse in power distribution, so the industry has a lot of experience with this technology.
Oil breakers require minimal maintenance. They can be installed in manholes, on top of poles, or indoors in mechanical rooms. Moreover, oil breakers have a higher BIL (basic impulse insulation level) rating than other breaker types, so they can safely withstand higher voltage impulses. Oil breakers are used in systems from 4 kV up to 345 kV.
Since oil breakers have few movable parts, they are rugged and reliable devices. That is why power utilities use them frequently.
The main disadvantage of oil breakers is that the dielectric fluid is a flammable liquid. So for indoor applications, they must be installed in a fire-rated vault.
To ensure the reliable operation of oil breakers, the oil should be tested regularly. Dielectric fluid is a chemically dynamic
substance, so even if the breaker is not operated, the oil will eventually develop sludge. Obviously, if the breaker operates frequently, sludge will be formed much faster. Another thing to watch for is keeping the oil away from moisture because even a minute quantity of water can severely reduce the dielectric value of the oil.
Vacuum Circuit Breakers - Vacuum circuit breakers operate on the same principle as other types of breakers. An arc is formed as the result of ionization of the medium around the contacts. So if the contacts are opened in a vacuum, there is nothing to generate and sustain an arc. The circuit is interrupted the first time the current passes through zero, and there is no risk of reignition.
Vacuum breakers consist of contacts inside hermetically sealed cylinders which are kept under a high vacuum. Because of the high vacuum, a one-quarter-inch gap is sufficient to interrupt 100 kV.
Vacuum breakers are simple in construction. They are compact, light-weight and can be installed almost anywhere at any orientation, making them ideal to accommodate tight spaces. The energy needed to operate these breakers is small. They require practically no maintenance, so they can be installed in less accessible spaces.
Vacuum breakers are used for up to 138 kV, but the current is limited to 4,000 amperes with current technology. Vacuum breakers have a lower BIL rating than oil breakers. So when specifying a vacuum breaker, it is important to check the system BIL requirement.
The main disadvantage of vacuum breakers is, ironically, their ability to extinguish the arc quickly. An energized electrical network contains an appreciable amount of stored energy. As the circuit is interrupted, that stored energy needs to be drained, and the arc provides a means of dissipating it. But with a vacuum breaker, because the arc extinguishes prematurely, an appreciable amount of energy remains trapped in the electrical system. When the circuit current drops toward zero, the trapped energy manifests itself as a high-voltage impulse. This phenomenon is called "wave chopping" and can damage electrical equipment. An easy way to avoid wave chopping is to have a small load connected in the secondary circuit when the breaker is to be opened. The load safely drains the system energy and avoids wave chopping.
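The order of magnitude of a chopping overvoltage can be estimated by equating the magnetic energy trapped in the circuit inductance with the energy the stray capacitance can absorb (½LI² = ½CV²). A minimal sketch; the inductance, capacitance and chopped-current values below are purely illustrative, not from the text:

```python
import math

def chop_overvoltage(i_chop_amps, inductance_h, capacitance_f):
    """Peak overvoltage when a chopped current's magnetic energy (1/2 L I^2)
    transfers into the circuit's stray capacitance (1/2 C V^2)."""
    return i_chop_amps * math.sqrt(inductance_h / capacitance_f)

# Illustrative values: 5 A chopped against 10 mH and 2 nF of stray capacitance
v_peak = chop_overvoltage(5.0, 10e-3, 2e-9)
print(f"{v_peak / 1000:.0f} kV")  # 11 kV
```

Even these modest assumed values produce an impulse far above normal system voltage, which is why a small damping load is worthwhile.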
Vacuum breakers are hermetically sealed, with no indication of the vacuum level. Therefore, some facility managers think of installing a pressure gauge to monitor the vacuum level. This temptation should be resisted, because installing a pressure gauge will more than likely break the vacuum and the breaker will be damaged.

[Figure caption: Vacuum interrupters provide reliable overcurrent protection with the added benefits of no maintenance, light weight, simple construction, compact size and low energy requirements. The model shown is electronically controlled, fuseless and resettable. Courtesy: G&W Electric Co.]
Vacuum breakers have been used in the industry for almost half a century now for higher voltages. In the past two decades, their cost has come down to the point where they are competitive for voltages as low as 2400V. They provide a viable alternative in retrofit applications.
Sulfur Hexafluoride Circuit Breakers - Sulfur hexafluoride circuit breakers are similar to oil breakers, but instead of a liquid dielectric medium, sulfur hexafluoride gas is used to extinguish the arc. The gas is a colorless, odorless, non-corrosive, non-toxic and non-flammable inert substance with desirable electrical characteristics. It can extinguish an arc a hundred times more effectively than air. These breakers have all of the desirable characteristics of oil breakers. In addition, they pose no fire hazard, are light in weight, and like vacuum breakers can be installed in any orientation. They are available in all voltages up to 765 kV. These breakers have been used by utilities for several decades, and since the mid-1980s they have been competitively priced for voltages as low as 2400V. Sulfur hexafluoride units also lend themselves to rotary designs, making multiposition switching possible in compact assemblies.

[Figure caption: Vacuum interrupters are available with solid state controls to permit extremely accurate, consistent protection curve characteristics. The controls shown here can also coordinate with substation breakers or other non-fuse devices, eliminating the need to recoordinate the distribution system. Courtesy: G&W Electric Co.]
The contacts in these breakers are usually a blade and knife arrangement. This is because in the presence of high temperatures, sulfur hexafluoride can decompose and form a white powder with metal vapors. So the blade and knife arrangement gives the blades a natural wiping action.
These breakers require hardly any maintenance. Their construction often includes a sight glass so we can visually inspect the blades to see if the unit is in the ON or OFF position. Since the gas pressure in the breaker is only a few atmospheres, they are usually equipped with a gauge to monitor the gas pressure. If the gas pressure drops below the manufacturer's recommended level, the breaker vessel can easily be charged with additional gas. Finally,
there are no ambient restrictions for the location of these breakers. Generally speaking, they are the units with the best overall characteristics.
Air-Blast Circuit Breakers - These breakers use the power of compressed air to blow out and extinguish the arc. The common voltage range of application for these breakers is 34 kV to 765 kV. The compressed air is stored in a tank under pressures of up to 435 pounds per square inch. As the breaker operates and the arc is established, the high-pressure compressed air is applied across the arc and the circuit can be cleared in about two cycles. These breakers are non-flammable and relatively small, not counting the air compressor. The noise accompanying the blast is quite loud, so when these breakers are used near residential or commercial areas, noise suppression is desired. They can be used both for indoor and outdoor applications.
Interrupter Switches - Interrupter switches appear like a simplified form of air-magnetic breakers. The arc is extinguished by a lengthening and cooling process. The switch contacts are blade and knife type. As the moving contact separates from the stationary contact it travels within a V-shaped horn, which is the arcing chamber. The arc is elongated, squeezed and eventually extinguished. These switches are simple in construction and can be used for voltages up to 34 kV, are capable of interrupting 1,200 amperes and can withstand a momentary current inrush of up to 60,000 amperes.
Disconnecting Switches - Unlike interrupter switches, these units cannot interrupt any current, so they must only be opened and closed when the current is zero. They basically act as isolation devices for carrying out maintenance work or rerouting power flow. They are usually equipped with a latch to prevent the switch from opening under the severe magnetic forces that can accompany short circuits. The construction of these switches is simple and straightforward. It is important to understand that before they can be operated, the breaker or switch upstream of them must be de-energized first. Otherwise, exercising them under load generates large arcs and unsafe conditions.
Low-Voltage Circuit Interrupters
The disconnecting devices for voltages below 600V are much simpler in construction when compared to their high-voltage counterparts. However, they might have to interrupt larger currents and withstand even larger momentary short circuit currents. The three most common low-voltage interrupting devices include safety switches, power circuit breakers and molded-case circuit breakers.
Safety Switches - These switches come with or without fuses. They are operated from a handle outside the enclosure. In fact, there is an interlocking mechanism to prevent opening the enclosure when the switch is in the ON position. Therefore, safety switches must be turned off before they can be opened. These switches are available for ratings of up to 6,000 amperes capable of withstanding a momentary fault current of 200,000 amperes.
Power Circuit Breakers - These breakers are open-construction assemblies housed within a metal frame. The individual parts of these breakers can easily be replaced in the field. Moreover, the tripping mechanism is interchangeable and can also be field-adjusted. The tripping unit can be an electromagnetic or solid state overcurrent device. These breakers must be sized to carry the maximum quarter cycle asymmetrical fault currents of the system to ensure their safe operation.
Molded-Case Circuit Breakers - These breakers contain a switching device coupled with an automatic protective device, housed in an integral assembly of insulating material. Molded-case circuit breakers are sealed to prevent tampering, so they are not designed to be maintained in the field. The tripping mechanism can be one of several types: a thermal-magnetic trip provides instantaneous tripping for short circuits as well as delayed action for sustained overloads, while a magnetic-only or thermal-only trip provides just one of these characteristics, respectively. They are available in the same ratings as power circuit breakers.
The term switchgear refers to an assembly of switches, circuit breakers, associated control devices, metering and relaying devices, interconnecting busbars, other accessories, and the enclosures. The switchgear in systems up to 34 kV is installed in a metal-enclosed housing. The metal-enclosed switchgear is classified into four types:
metal-clad switchgear and interrupter switchgear for high-voltage systems, and circuit breaker switchgear and distribution switchboards for low-voltage systems.
Metal-Clad Switchgear - In metal-clad switchgear, all of the major parts are enclosed in a grounded metal enclosure. The individual breakers are removable. The primary busbars are insulated. The front face of the switchgear contains all of the metering and other instrumentation.
Interrupter Switchgear - An interrupter switchgear contains interrupter switches and fuses capable of manually disconnecting the circuit at full continuous load. The busbars are not normally insulated. The switches are stationary and cannot be racked out for repair. All manual operating handles, as well as any metering and instrumentation, are on the front face of the switchgear. This switchgear can also have an interlocking arrangement to assure a predetermined switching order.
Circuit Breaker Switchgear - Low-voltage circuit breaker switchgear consists of draw-out or molded-case circuit breakers, which can be in a dead-front or switchgear compartment arrangement. The breakers can be operated manually or electrically. Individual breakers are stacked on top of each other to accommodate common vertical busbars. Any associated metering is on the front face of the switchgear.
Distribution Switchgear - Metal-enclosed distribution switchgear is used as a secondary substation for commercial applications. It can be built as front- or rear-accessible, and can be wall-mounted or floor-mounted.
Maintenance - Proper care of the switchgear is an important part of the electrical preventive maintenance program. Such a program at a minimum must contain tightening of exterior connections, inspecting gaskets for oil leakage, observing the alignments, testing the oil, and cleaning the bushing and other insulators.
Switchgear Rating - Finally, the rating of a switchgear is a function of ambient temperature, altitude, duty cycle, etc. If a unit is in operation at high ambient temperatures or high elevations, the switchgear needs to be de-rated in accordance with the manufacturer's recommendations.
It is well known that one can only control what one can measure. That same analogy can be used in electrical systems. Measurement instruments play an important role in managing electrical networks. They give us the ability to determine a variety of parameters for monitoring, measuring and controlling purposes. That is why accuracy, precision, repeatability and reliability are critical attributes of measurement instruments.
Every measuring instrument has three building blocks in series: the primary detector, the intermediate means and the end device. This means the aforementioned attributes must be evaluated for all three blocks to ensure satisfactory results.
In this section, a number of common measurement instruments are examined. Although the list is not exhaustive, it gives one a general idea of the fundamental aspects of these important devices. Current and voltage are the most commonly measured quantities in electrical systems because through these two parameters the essential nature of an electrical system can be determined. Based on their principles of operation, there are three mechanisms for measuring electrical quantities.
First, electromagnetic instruments operate based on the magnetic forces generated when a current-carrying conductor is in a magnetic field.
Electrostatic instruments operate due to the force on a charged conductor in an electrostatic field.
Thermal instruments operate based on the expansive characteristics of electrically heated metal.
In addition to these instruments, with the advent of solid state technology, digital meters are a fourth important group. These devices do not have any moving parts and operate based on electronic processing and measurement of electrical parameters.
There are several types of electromagnetic devices: moving coil, moving iron, electrodynamometer and induction type. In these instruments, the deflection angle of the pointer is a function of the torque generated by the magnetic field interaction. A spring acts as a balancing torque and moves the pointer back to zero when no current flows through the unit.
Moving Coil - The permanent magnet moving coil device, also known as the D'Arsonval galvanometer, is the building block for many different types of meters (see Figure 3-15). It is the most widely used device for measuring direct current. These are inherently sensitive devices for measuring small direct currents in the microampere and milliampere ranges. Typically, the full-scale range of these meters is on the order of 10 microamperes to 50 milliamperes. The scale is linear over most of the instrument range except at the low end. These instruments are bi-directional because the direction of needle rotation is a function of the current polarity in the armature coil. They are normally used for DC applications. When used with AC circuits, a bridge rectifier is needed to convert the current from AC to pulsating DC; otherwise, the needle will simply vibrate around zero.
Moving Iron Instruments - There are two types of moving iron instruments: the attraction type and the repulsion type. The current that passes through the coil of wire produces the torque which moves the needle.
For the attraction type, a small piece of iron is drawn into a core as the current flows through the coil.
In the repulsion type, there are two thin iron plates inside a coil where one plate is fixed and one is movable.
When the current flows through the coil, both plates are magnetized with the same polarity, thus a repulsion force between the two is generated. The torque generated will be proportional to the square of the current in the coil. The vanes in moving iron instruments can move up to 90 degrees. The meter scale is nonlinear, especially at the low end of the scale. Regardless of the direction of the current, the iron instruments will always move in one direction. Because of this unpolarized characteristic, these instruments can be used both for DC and AC applications.
Electrodynamometer - If the permanent magnet is replaced with an electromagnet, then the moving coil instrument is called an electrodynamometer. The two windings can either be in series or in parallel. The torque developed will be a function of both the fixed and moving magnetic fields. Therefore, with a dynamometer the field strengths can be proportional to the square of the system current, voltage, or voltage multiplied by the load current. The meter deflection can measure the current, voltage or power.
[Figure 3-15 caption: A schematic representation of a D'Arsonval meter. This meter is extremely sensitive and is the most popular device for measuring DC applications.]

In some cases, instead of utilizing a needle and spring, the meter utilizes a rotating disc whose angular velocity is proportional to the power consumed. The disc rotation is recorded by a counter, which gives us the basic building block for a kWh meter. On the other hand, if one of the coils is in series with a resistor and the other is in series with an inductor, the two currents will be 90 degrees out of phase with each other. If the system power factor is unity, the meter will experience maximum torque and the needle will show full deflection. If the system load is totally reactive, the needle will remain at zero. For any other power factor, the meter deflection will fall somewhere in between. This is the basic building block for a power factor meter.
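The disc-and-counter principle of a kWh meter amounts to multiplying disc revolutions by the meter constant Kh (watt-hours per revolution, printed on the nameplate). A minimal sketch; the Kh value of 7.2 is a common residential figure, used here only as an illustration:

```python
def disc_energy_kwh(revolutions, kh_watthours_per_rev=7.2):
    """Energy registered by an induction-disc meter.

    Kh, the meter constant in watt-hours per disc revolution, is printed
    on the nameplate; 7.2 is a common residential value (an assumption
    here, not a figure from the text)."""
    return revolutions * kh_watthours_per_rev / 1000.0

# 500 disc revolutions on a Kh = 7.2 meter:
print(f"{disc_energy_kwh(500):g} kWh")  # 3.6 kWh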
Induction Type - These instruments can only be used for AC applications. The advantage of these instruments is the large-scale deflection, which can be up to 300 degrees. However, this means an increased stress in the control springs because the stress is proportional to the deflection. The frequency and temperature can also introduce variation. Since the maximum current that can flow through these instruments is in the milliampere range, series
resistors and shunt strips are needed to enable these devices to measure large quantities of voltages and currents. If a large resistor is added in series with these instruments, where the maximum needle deflection current will correspond to the maximum voltage, then the instrument is converted to a voltmeter.
Sometimes, a number of different resistors are added through a selector switch to achieve several voltage ranges. If, on the other hand, a shunting strip is connected across the moving coil circuit where the majority of the current is diverted, the instrument can be used as an ammeter. The ratio of the current between the shunting strip and the instrument coil is inversely proportional to the ratio of the resistances. Similar to a voltmeter, a number of shunting strips may be installed in parallel through a selector switch in order to attain several current ranges.
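The series-multiplier and shunt arithmetic just described can be sketched as follows. The 50-microampere, 2,000-ohm movement is a hypothetical example, not a value from the text:

```python
def series_multiplier(v_full_scale, i_fs_amps, r_meter_ohms):
    """Series resistor that converts a movement into a voltmeter:
    total resistance must limit current to full scale at v_full_scale."""
    return v_full_scale / i_fs_amps - r_meter_ohms

def shunt_resistor(i_full_scale, i_fs_amps, r_meter_ohms):
    """Shunt that converts the movement into an ammeter; the shunt
    carries everything beyond the movement's full-scale current."""
    return r_meter_ohms * i_fs_amps / (i_full_scale - i_fs_amps)

# A hypothetical 50 uA, 2,000-ohm movement:
print(f"{series_multiplier(10.0, 50e-6, 2000):,.0f} ohms")  # 198,000 ohms for a 10 V range
print(f"{shunt_resistor(1.0, 50e-6, 2000):.4f} ohms")       # 0.1000 ohms for a 1 A range
```

Note how the shunt value comes out inversely proportional to the current it must divert, exactly the resistance-ratio relation stated above.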
In contrast to electromagnetic instruments, which respond to current, electrostatic instruments respond to electric potential. Since the electrostatic mechanism produces low operating torques, these devices are inherently not as accurate and sensitive as electromagnetic devices. Their principle of operation is based on induced forces of attraction and repulsion between two electrified conductors. The electrified conductor is composed of a fixed and a movable plate in an air capacitor. When the capacitor is charged, the moving plate tends to shift to increase the capacitance between the two plates. Electrostatic devices are normally used to measure voltages. If the meter deflection is directly proportional to the voltage difference, it is called the heterostatic method; if the movement of the needle is based on the square of the voltage, it is called the idiostatic method.
With these instruments, the meter deflection is proportional to the heat caused by the system current. The current in thermal instruments moves through a thermocouple unit, and the flow of current causes it to deflect. A common thermocouple combination used for metering applications is platinum-iridium with gold-palladium cold junctions. The reason for the deflection is that the thermocouple element consists of two material plates with different coefficients of expansion. Since one plate expands faster than the other and the two plates are attached to each other, the element will start to bend. The instrument deflection is proportional to the heat generated, which
in turn is a function of the square of the current. These instruments read the RMS current and regardless of current polarity the needle will always move in one direction, which makes them applicable to both AC and DC circuits.
Digital instruments cover a variety of electronic devices that can measure one or more electrical parameters. They are available in a wide range of scales to measure voltage, current, resistance, power factor, etc. These instruments have a digital display based on either light-emitting diodes (LED) or a liquid crystal display (LCD). Some units have memory capabilities that store measurements for later use. They can also connect through an RS232C interface to communicate with a personal computer. Their principle of operation is based on rectifying the incoming current to a pulsating DC, and then integrating the area under the current wave to calculate the average value. The reading can be scaled appropriately to reflect average, peak or RMS values.
These instruments are rugged since they do not have any moving parts. They can be mounted in any position. In addition, unlike moving-needle instruments, they are not subject to drift, so periodic calibration is not needed. The cost of these instruments has gradually come down in the past two decades, which explains their popularity. They can measure AC or DC parameters.
It should be kept in mind that if the circuit is rife with harmonics, the digital meter reading will not be accurate. This is because the meter scale is calibrated using the form factor of a pure sinusoidal wave, which no longer applies to a distorted waveform.
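The error can be illustrated by comparing an average-responding reading, scaled by the sinusoidal form factor, against the true RMS of a distorted waveform. A sketch under the assumption of uniformly sampled data:

```python
import math

def meter_reading_vs_true_rms(samples):
    """Compare an average-responding meter (calibrated for a sine wave)
    with the true RMS of a sampled waveform."""
    form_factor = math.pi / (2 * math.sqrt(2))   # ~1.1107 for a pure sinusoid
    rectified_avg = sum(abs(s) for s in samples) / len(samples)
    reading = form_factor * rectified_avg        # what the meter displays
    true_rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return reading, true_rms

# A flat-topped (square) wave reads about 11 percent high on such a meter:
square = [1.0 if i < 500 else -1.0 for i in range(1000)]
reading, rms = meter_reading_vs_true_rms(square)
print(f"meter {reading:.3f}, true RMS {rms:.3f}")  # meter 1.111, true RMS 1.000
```

For a pure sinusoid the two figures agree; the discrepancy appears only when harmonics distort the wave shape, which is the point of the caution above.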
In most electrical circuits, the instrumentation cannot directly monitor the circuit parameters because of high system voltage or current. Instrument transformers are used to couple the circuit parameters with the instrumentation. The key issue with instrument transformers is making sure that the current, voltage and phase angle of the primary circuit is faithfully replicated with an acceptable level of accuracy.
The use of instrument transformers as opposed to direct connection has a number of advantages. First, they permit the instruments to be isolated from high-voltage levels, which is an important safety consideration as well as a benefit in smoothly
operating the system. Moreover, instrument transformers change the wide range of voltage, current or power values to a convenient and standardized range. Finally, instrument transformers can total a number of different voltages and/or currents and provide a combined parameter.
Instrument transformers are used to couple both metering and relaying devices. For metering applications, it is important to measure circuit parameters accurately under steady-state, normal conditions. For relaying applications, in contrast, accurate measurement is important during transient conditions.
Instrument transformers are divided into two types: potential transformers (PT) and current transformers (CT). A PT is similar to a normal high-quality transformer, where the turns ratio between the primary and secondary windings is tightly controlled to ensure accuracy. The primary winding is directly connected to the system voltage and the secondary winding is connected to the measuring instrument. For both CTs and PTs, the external load impedance connected to the secondary side is called the transformer "burden." For a PT, the burden should ideally have an infinite impedance; that is why a high burden impedance is desirable to ensure better measurement accuracy.
The primary side of PT terminals is designated as H1 and H2, while the secondary side is designated as X1 and X2. The polarity of the PT is determined by dots on the primary and secondary windings. If the current is flowing to a dot for the primary circuit, for the secondary circuit the current will be flowing out of the dot. However, if the polarity is not shown, the following test can be used.
To determine the polarity, connect one of the primary and secondary terminals to a voltmeter. Connect the other primary and secondary terminals with shunting strips. The next step is to energize the primary circuit of the transformer with a known voltage source. Depending on the polarity connection of the PT, the magnetic fields between the primary and secondary windings will either be additive or subtractive. In other words, if the voltmeter reading is higher than the primary voltage, it implies that the shunted terminals are H1 and X2 or H2 and X1. If the voltage reading of the voltmeter is lower than the primary voltage, then the shunted terminals are X1 and H1 or X2 and H2 (see Figure 3-16). Normally, PTs are fused on the primary side.
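The decision logic of the polarity test above can be captured in a few lines (terminal labels follow the H1/H2, X1/X2 convention in the text):

```python
def shunted_terminals(primary_volts, voltmeter_volts):
    """Interpret the PT polarity test: the voltmeter reads the sum or
    difference of the primary and secondary voltages, depending on
    which pair of terminals was shunted together."""
    if voltmeter_volts > primary_volts:
        return "additive: H1-X2 or H2-X1 were shunted"
    return "subtractive: H1-X1 or H2-X2 were shunted"

# A 120 V primary with a 130 V reading implies the additive connection:
print(shunted_terminals(120.0, 130.0))
```

The same reasoning, with the battery-and-milliammeter variant, applies to the CT polarity test described later.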
For a three-phase system, two PTs are adequate. The accuracy
of PTs is greater than 97 percent. However, for an ungrounded system during a ground fault condition, the PTs connected to the unfaulted phases are subjected to phase-to-phase voltage levels of the power system. This will usually saturate one of the PTs and, due to magnetizing current, the particular PT fuse might blow. It was mentioned earlier that the PT burden has an impact on the accuracy of the instruments. The American National Standard C57 has classified the following standard burden designations for PTs:
[Table of standard PT burden designations, expressed in VA at 120V, not reproduced here.]
A current transformer (CT) is a doughnut-shaped metal core winding that encloses the current-carrying conductor of a power system. The primary winding of a CT is the main circuit conductor, while the secondary winding is the doughnut-shaped winding.
The standard CT ratios range from 100:5 to 1200:5 with 100-ampere increments. Since the number of turns for the primary is one, the corresponding turn ratio for standard CTs ranges from 20:1 to 240: 1.
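The ratio arithmetic is straightforward; a sketch, with the 600:5 ratio and 480-ampere primary current chosen purely for illustration:

```python
def ct_secondary_current(primary_amps, ct_ratio="600:5"):
    """Secondary current for a standard CT ratio such as 600:5."""
    primary, secondary = (float(x) for x in ct_ratio.split(":"))
    return primary_amps * secondary / primary

# A 600:5 CT (turns ratio 120:1) carrying 480 A in the primary conductor:
print(ct_secondary_current(480.0, "600:5"))  # 4.0
```

The 5-ampere secondary rating is why relay and meter inputs are standardized at 5 A.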
The American National Standards Institute (ANSI) and the Institute of Electrical and Electronics Engineers (IEEE) have developed class designations for CTs that consist of two integer parameters separated by the letter "T" or "C," such as 10T400 or 5C200. The letter "C" signifies that the characteristics of the CT can be calculated analytically. The letter "T" signifies that, due to design uncertainties, the CT performance cannot be calculated analytically and therefore has to be determined by testing. The two integers determine the accuracy of the CT.
The first integer designates the maximum error when the voltage at the secondary terminal is equal to the second integer and the current in the transformer is 20 times its rated value. Since the CT secondary current is rated at 5 amperes, 20 times the rated value corresponds to 100 amperes. Therefore, the designation 5C200 means the maximum error will be 5 percent when the secondary current is 100 amperes, for a burden impedance that produces 200 volts at the secondary terminals. The polarity of a CT designates the relative directions of the primary and secondary currents. The polarity of CTs and PTs is important to assure proper operation of measuring instruments. If the polarity is wrong, a meter will run backward or a relay may trip falsely. As with a PT, the industry convention is that if the current enters the primary dot, it leaves the secondary dot.

[Figure 3-16 caption: Polarity test for a potential transformer.]
To determine the polarity of a CT, a battery with a switch and a milliammeter should be connected in series as shown in Figure 3-17. If the polarity of the CT is correctly chosen, the current will rise suddenly when the switch is turned on and then drop to zero. Current transformers are constant-current devices, which means that their secondary circuit must always be closed when they are energized. If the secondary circuit is opened, extremely high voltages between the secondary terminals will be generated. This can cause severe equipment damage and personal injury. That is why CTs are never fused and moreover, when working on CTs, shunting strips are used to ensure closure on their secondary circuits.
A final note concerns the limitation of CT performance when a CT goes into saturation. Relays which depend on secondary currents operate poorly in such conditions. This problem needs to be taken into account when designing and analyzing relay systems.
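The class-designation arithmetic described earlier (e.g. 5C200) can be sketched as a small parser. The burden figure assumes the standard 5-ampere secondary rating mentioned in the text:

```python
def parse_ct_class(designation):
    """Split an ANSI CT accuracy class such as '5C200' into its parts and
    derive the maximum standard burden (at 20x the 5 A rated secondary)."""
    for letter in ("C", "T"):
        if letter in designation:
            error_pct, voltage = designation.split(letter)
            max_burden_ohms = float(voltage) / (20 * 5)   # volts / 100 A
            return float(error_pct), letter, float(voltage), max_burden_ohms
    raise ValueError("not a C/T class designation")

# 5C200: at most 5% error at 100 A secondary into a 2-ohm burden
print(parse_ct_class("5C200"))  # (5.0, 'C', 200.0, 2.0)
```

Keeping the actual connected burden below this limit is what preserves the stated accuracy at high fault currents.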
Instruments used to detect and measure harmonics and harmonic distortion are covered in Chapter 5.
Substations And Switchgear
Let us discuss the issues surrounding replacement of switching substations. There are a number of reasons why substations must be replaced.
The obvious reason is the age of the equipment. The average life expectancy of a substation is roughly 25 years. When a substation gets close to the end of its useful life, the reliability of the system can be significantly affected.
Another common reason for replacement is equipment obsolescence. For instance, many manufacturers who built air-magnetic circuit breakers or oil switches discontinued producing such equipment, making spare parts for this equipment quite scarce. Therefore, many owners, in an effort to protect themselves from any appreciable service interruptions, have developed a capital renewal plan to replace the older technology with state-of-the-art equipment.
There are two other cases where switching substations will need to be upgraded or replaced. One is increasing the size of the service. If the power requirement of the load is larger than the maximum capacity of the switching substation, an upgrade will be necessary regardless of the age and condition of the equipment. It is important to do an analysis to see whether an upgrade or a total replacement will be the best choice. The second case will be if more automation and monitoring systems are desired. In the past decade, with the advent of new electronic power technologies, many remote monitoring and operating features that were not economical in the past are now accessible. Adding these features may require additional upgrades so these capabilities can be supported.
Select The New Equipment - When the decision to replace the old switching substation is made, the next step is to examine the technologies available.
As mentioned earlier, much of the air-magnetic breaker and oil switch equipment is no longer manufactured. The two obvious replacement technologies are vacuum and sulfur hexafluoride. Since both technologies have desirable characteristics, either of the two can be a satisfactory choice. Therefore, cost becomes a key consideration in the purchase. However, since sulfur hexafluoride units can withstand a higher fault current than a comparable vacuum switch, they may be more appropriate where fault currents are high. Similarly, since wave chopping can potentially be a problem for vacuum switches, sulfur hexafluoride may be a better choice in this case as well.

[Figure 3-17 caption: Polarity test for a current transformer.]
Decide Whether To Replace Switches And Breakers - After deciding on the switching technology, evaluate whether to replace the entire switchgear or only replace the switches and breakers. In many cases, it is possible to replace the old air-magnetic and oil units with vacuum or sulfur hexafluoride units without replacing the remainder of the switchgear. This is an important point to consider because it carries significant cost implications. Several of the equipment manufacturers make new vacuum and hexafluoride units that can be installed in the old cubicles of air-magnetic breakers. Some degree of adjustment or modification on the existing busbars may be called for.
Another element that can impact the above decision is the
location of the switchgear and how easily the old unit can be removed. If it is located outdoors, or indoors with easy access, replacing the entire switchgear can be accomplished relatively cheaply. But if access is difficult, it makes sense to consider switch and breaker replacement in the existing cubicle of the old unit.
Note that when these old switches and breakers are replaced with new technologies, future maintenance requirements will be drastically reduced. The maintenance consideration may even be a contributing reason for migrating to new technologies.
For an oil-filled unit, there is an additional benefit. Since the dielectric fluid is highly flammable, when it is replaced with vacuum or hexafluoride units, the fire hazard is reduced as well. This may have a positive impact on the owner's insurance premium.
Decide Whether To Upgrade Protective Relaying And Metering - As part of the switchgear upgrade, another candidate for replacement is the protective relays and meters. In any switchgear that is more than 20 years old, the metering and relays are more than likely electromechanical devices. This will be a good time to upgrade these devices to solid state units if appropriate. With such an upgrade, many of the parameters can be monitored and controlled remotely. Table 3-1 summarizes the major characteristics of various types of circuit-interrupting devices, switches and circuit breakers.
Transformers are generally reliable and have a long life compared to many other elements in an electrical distribution system. The main reason is the lack of moving parts. However, if a transformer must be replaced, the following issues should be considered.
Efficiency - It is essential to look at the efficiency of new units. Although the initial cost of more-efficient transformers may be higher, in the long run they may be a good investment. However, replacing transformers merely for efficiency gains normally does not have an attractive payback, because all transformers are inherently highly efficient.
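To make the payback point concrete, the sketch below estimates the simple payback of a higher-efficiency unit. All of the numbers used (efficiencies, load factor, energy price, price premium) are hypothetical assumptions for illustration, not figures from this chapter.

```python
def simple_payback_years(kva, load_factor, eff_old, eff_new,
                         cost_per_kwh, price_premium):
    """Rough simple-payback estimate for a higher-efficiency transformer.

    Treats losses as (input minus output) at the average load and assumes
    the unit is energized year-round (8,760 hours).
    """
    load_kw = kva * load_factor                    # average load carried, kW
    loss_old_kw = load_kw * (1.0 / eff_old - 1.0)  # losses of the old unit
    loss_new_kw = load_kw * (1.0 / eff_new - 1.0)  # losses of the new unit
    annual_savings = (loss_old_kw - loss_new_kw) * 8760 * cost_per_kwh
    return price_premium / annual_savings

# Hypothetical 500-kVA unit at 50% average load: 98.5% vs. 99% efficient,
# $0.10/kWh, and an assumed $8,000 price premium for the better design.
years = simple_payback_years(500, 0.5, 0.985, 0.99, 0.10, 8000)
```

Even a half-point efficiency gain yields a payback of roughly seven years under these assumptions, which illustrates why efficiency alone rarely justifies replacement.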
Harmonics - Where harmonic-rich loads are present, K-factor rated transformers can be selected to safely carry the additional heating caused by harmonic currents.
Dry-Type Or Oil-Type - The first thing to consider is whether the existing transformer is a dry type or an oil type, and whether it makes sense to stay with the same type of unit or to change.
If the existing transformer is a dry type, a dry-type replacement is appropriate if the environment is clean and dry. Moreover, a sufficient supply of filtered, clean air must be available. Otherwise, it might make sense to consider an oil transformer for replacement. If this is the case, the next step to consider is fire separation. Since the dielectric fluid in an oil transformer is flammable, the fire rating of the transformer vault should be considered. In addition, the transformer weight should be checked against the loading of the slab where the transformer will be installed; because of the weight of the dielectric fluid, oil transformers are heavier than dry-type units of the same rating. Physical size will not be an issue, as oil transformers are smaller and more compact than dry-type units.
If the existing transformer is an oil type and the replacement will be a dry type, examine the ambient conditions to ensure an adequate supply of clean, dry air. The next point to consider is the dimensions of the new transformer. As mentioned earlier, dry-type units are physically larger than oil transformers, so if the space is tight, the clearances around the new unit may be less than satisfactory.
Connection Arrangement - Another important element to consider between the new and the old transformer is connection arrangement. In some transformers, both the primary and secondary connections may be at the top of the unit while in others they may be on one or two sides. If the connection arrangement of the new and the old units is different, the installation cost may increase significantly. Additional connections and splices will be needed, increasing labor costs and installation time. If power downtime is a concern, this issue has to be considered in advance.
Installation - The next important issue to consider is how easily the old unit can be pulled out and a new unit be installed. This is a function of transformer location. For outdoor units, the task will be relatively easy if heavy rigging equipment, such as cranes, can be used for the installation. For indoor units, the location of the transformer can heavily impact the replacement cost. For instance, if the unit is deep inside a building away from an exterior wall and the building is not equipped with a freight elevator, the installation
cost will be higher. If the transformer is on the roof or the penthouse, a long boom crane may be required. Today, for most high-rise buildings, such rigging tasks can be done economically using helicopters. But it is vital to make sure that the helicopter flies over at a time when the building and surroundings are secured before the rigging job begins.
Availability - When one is contemplating replacement of a medium- to large-size power transformer, consider the availability of the unit. Power transformers are not off-the-shelf items; they are made to order. This means the delivery time can range from a few weeks to several months; therefore, preplanning is critical to the success of the project. The project should be staged at a time when it will have relatively minimal impact on the operation of the facility. Similarly, the weather may play a role in determining the appropriate time to replace an old unit.
Disposal - When replacing a transformer, another decision that needs to be made is what will be done with the old unit. If the owner has a better use for the unit, it can be applied to that use. If this is not the case, the easiest thing to do is let the contractor have it for salvage.
Table 3-1. Summary of major characteristics of various types of circuit-interrupting devices.
Wiring And Cabling For Power And Communications
Wires and cables are the conductors that allow us to transmit electricity, light and data signals from the point of generation to the point of use. Proper selection, installation and maintenance is essential to ensure many years of reliable service. Communications wiring and cabling present special challenges that must be addressed in design. In this chapter, we will cover all aspects of wiring and cabling, from physical construction through management concepts.
A conductor is a material that easily conducts electricity from the point of generation to the point of use. Only a little electrical stimulation is needed to induce an electric current.
A variety of materials can be used to transmit electricity. Although copper and aluminum are common conductors, copper has an excellent cost-to-conductivity ratio versus aluminum. A few decades ago, when the price of copper skyrocketed, aluminum wire became popular. But since aluminum only has 61 percent of the conductivity of copper, has a low tensile strength, and is difficult to solder, its use dropped dramatically when the copper price went down. Today, aluminum is almost exclusively used for high-voltage transmission lines.
The current-carrying capacity of a conductor is determined by its physical size and the ambient temperature. Therefore, in the design of a circuit it is important to ensure that the rate at which heat is rejected by the conductor is equal to or greater than the rate at which heat is generated in it.
In addition to size, the other important characteristics of conductors include the degree of hardness and whether the conductor has any coating. The degree of hardness is related to the number of strands that a conductor has. For instance, small conductors can be solid. In addition, the conductor can be bare copper or coated with tin, silver or nickel to impede corrosion and facilitate soldering.
Although a conductor's resistance is very small, it is not zero. When current passes through the conductor, heat is generated. To maintain a proper heat rejection rate, wire size must be increased at a faster rate than wire capacity.
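The heat balance described above can be illustrated with a short calculation. The resistivity figure of about 10.4 ohm-circular-mils per foot for copper near room temperature is a standard handbook value, not one given in this chapter, so treat the sketch as illustrative only.

```python
def heat_watts_per_foot(current_a, size_cmil, rho=10.4):
    """I^2*R heating per foot of a single copper conductor.

    rho is resistivity in ohm-circular-mils per foot (about 10.4 for
    copper near room temperature, a handbook value assumed here);
    size_cmil is the conductor area in circular mils.
    """
    r_per_ft = rho / size_cmil        # resistance of one foot, ohms
    return current_a ** 2 * r_per_ft  # watts dissipated per foot

# 20 A on No. 12 wire (6,530 circular mils) dissipates about 0.64 W/ft;
# the same current on larger No. 10 wire (10,380 CM) only about 0.40 W/ft,
# showing how upsizing the conductor reduces the heat to be rejected.
```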
In this chapter, we will cover wires and cables for low-voltage (below 600 volts), high-voltage (above 600 volts) and communications applications. We will discuss the fundamental issues applicable to low-voltage systems first, followed by a discussion of high-voltage systems. The balance of the chapter deals with communications systems.
Wire size is measured in circular mils (CM). A mil is one thousandth of an inch. A circular mil is the area of a one-mil diameter circle. The American Wire Gauge (AWG) is the most commonly used measure in the United States. AWG should not be confused, however, with the gauge used to measure steel wire for non-electrical applications.
The AWG scale consists of numbered wire sizes, starting with No. 40 (representing a wire diameter of about 0.003 inches). The smaller the wire number, the larger the cross-sectional area.
Building wiring applications commonly use wire sizes of 14, 12, 10, 8, 6, 4 and 2. Wires larger than that are called 1/0 ("one-naught"), 2/0, 3/0 and 4/0. A wire larger than 4/0 is not designated by a numerical size, but rather by its cross-sectional area in thousands of circular mils (MCM). One MCM is equal to 1,000 CM. Sizes such as 250 MCM, 500 MCM and 750 MCM are used for building wiring.
Odd-number size designations are commonly used for magnetic wires such as those used in motors and transformers. No. 8 or smaller wire may be solid or stranded, but No. 6 and larger wire must be stranded to achieve the desired flexibility.
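The circular-mil definitions above reduce to one line of arithmetic. The sketch below is a minimal illustration of the units, not a sizing tool.

```python
def circular_mils(diameter_inches):
    """Area in circular mils: the square of the diameter expressed in mils
    (1 mil = 0.001 inch)."""
    diameter_mils = diameter_inches * 1000.0
    return diameter_mils ** 2

def to_mcm(area_cm):
    """Convert circular mils to MCM (1 MCM = 1,000 circular mils)."""
    return area_cm / 1000.0

# A 0.5-inch-diameter conductor is 500 mils across:
# 500^2 = 250,000 CM, i.e. 250 MCM.
```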
Grounded wires are designated only by white or gray insulation. The grounding wire is identified by green or green-with-yellow-stripes insulation or, in some cases, is uninsulated.
Hot wires have black, red or blue insulation, in that order, depending upon the number of hot conductors.
Indoor building wire is suitable for voltages up to 600 volts. The most common wire insulation, thermoplastic, comes in many different types.
Types TW and THW are the most common and can be used in wet or dry applications. When type TW is used, wire temperatures should not exceed 140ºF; when type THW is used, temperatures should not exceed 167ºF.
RHH and THHN types are used only in dry locations, and the wire temperature should not exceed 194ºF.
THHN and THWN types have an oil-resistant final insulation layer that adds strength and greater insulating capacity.
The XHHW type has a cross-linked synthetic polymer, which also adds strength and even higher-quality insulation.
In addition to ampacity, an important factor in wire-size selection is the voltage drop, a negative voltage variation: the line delivers less than its rated voltage. The voltage drop on branch circuits should be kept below 2 percent and, on the feeder and branch circuits combined, below 3 percent.
We must keep the voltage drop to a minimum because in addition to the energy loss, voltage drop has a negative effect on electrical equipment. A 5 percent voltage drop on an electric motor, for example, means a 10 percent drop in output power. Similarly, for incandescent lamps, a 5 percent voltage drop means a 16 percent drop in light output; for fluorescent lamps, a 10 percent voltage drop means a 3 percent drop in light output.
Use this formula to determine the appropriate wire size in circular mils:

CM = (K x L x I) / VD

where:

K = resistivity of the conductor: 22 ohms per circular-mil foot for copper and 36 for aluminum (values which account for the two-wire round trip)

L = length of the conductor (one way), in feet

I = current in amperes

VD = voltage drop in volts
For three-phase applications, the result of the formula should be multiplied by 0.866. Higher voltages are preferred for larger loads because they limit the voltage drop. For example, 277V is preferred over 120V for fluorescent lamps and 480V is preferred over 208V for larger motors.
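The formula and the three-phase correction can be combined into one routine. The AWG circular-mil areas in the table below are standard published values; the gauge-selection logic is a sketch under those assumptions, not a code-compliant design tool (it ignores ampacity, derating and NEC minimums).

```python
# Standard circular-mil areas for common building-wire gauges.
AWG_CM = {14: 4110, 12: 6530, 10: 10380, 8: 16510,
          6: 26240, 4: 41740, 2: 66360}

def required_cm(current_a, length_ft, vd_volts,
                material="copper", three_phase=False):
    """Circular mils needed to hold voltage drop to vd_volts.

    K (22 for copper, 36 for aluminum) already accounts for the two-wire
    round trip, which is why three-phase circuits use the 0.866 multiplier.
    """
    k = {"copper": 22.0, "aluminum": 36.0}[material]
    cm = k * length_ft * current_a / vd_volts
    return cm * 0.866 if three_phase else cm

def smallest_gauge(cm_needed):
    """Smallest listed AWG gauge meeting the circular-mil requirement."""
    for gauge in sorted(AWG_CM, reverse=True):  # No. 14 down to No. 2
        if AWG_CM[gauge] >= cm_needed:
            return gauge
    raise ValueError("load requires a conductor larger than No. 2")

# 20 A over 100 ft on a 120-V branch circuit with a 2% drop (2.4 V):
# about 18,333 CM, which calls for No. 6 wire from the table above.
```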
Wires In Parallel
In some instances, two or more wires can be used in parallel instead of a single wire. The NEC specifies that parallel wires must be No. 1/0 or larger, be of the same material, have the same length and cross-sectional area, and terminate in the same manner to ensure safety and reliability.
Wire Splices And Terminations
The weak links in building wiring systems are usually wire splices and terminations. It is important to ensure that splices and terminations are electrically and mechanically correct.
Wires on screw-type terminals should wrap at least two-thirds of the way around the screw in the clockwise direction, with no overlap.
Aluminum wires or copper-clad aluminum wires commonly require solderless connections, since aluminum rapidly forms aluminum oxide, a poor conductor, when exposed to air. Because of aluminum's thermal expansion and cold flow, standard copper connectors cannot be used; the connectors must be larger or deeper than the standard ones.
In addition, the aluminum connectors should be able to bite through the oxide layer. Today, soldered connections are seldom used for building wiring. As mentioned earlier, an effective way to impede electrolytic action for both copper and aluminum is to coat the conductor with a neutral material like tin or nickel. This makes achieving a good electrical connection much easier.
A conduit wiring system provides a high level of mechanical protection for the electrical circuits. This system reduces the probability of fire due to overloaded or short-circuited conductors. Circuit wires are easily replaced and removed, and new circuits can be pulled easily if there is space. Conduits may be buried in walls or surface-mounted.
The ambient conditions determine the type of conduit, the type of coating, and the type of fitting.
Dust-tight, vapor-tight or water-tight conduits are available in 10 ft. lengths. The size is determined by the internal diameter in inches. The standard sizes are 1/2 inch, 3/4 inch, 1 inch, 1-1/4 inch, 1-1/2 inch, 2 inch, 2-1/2 inch, 3 inch, 3-1/2 inch, 4 inch, 4-1/2 inch, 5 inch and 6 inch.
The most common types of conduit are rigid galvanized, intermediate metal, electric metal tubing, rigid PVC and flexible conduit.
Rigid Galvanized Conduit
Rigid Galvanized Conduit (RGC) provides the highest level of mechanical protection. RGC is made of heavy-wall steel that is either hot-dipped galvanized or electro-galvanized to reduce the damaging effects of corrosive chemicals found in installations. RGC provides good fire protection, as sparks cannot escape the conduit. Therefore, it is often recommended for hazardous locations. If it is properly installed, RGC can provide an excellent equipment ground. It normally comes in 10 ft. lengths with one threaded coupling.
RGC differs from water pipe in that the interior surfaces are prepared so that wires can be easily pulled. The wall is approximately 0.109 inch thick.
The disadvantages of RGC are its high cost, heavy weight and the difficulty of installation, i.e., cutting and bending.
Intermediate Metal Conduit
Intermediate Metal Conduit (IMC) is similar to RGC but the wall thickness is about 25 percent less, making it lighter, less expensive and easier to install. Despite the thinner walls, IMC can withstand severe mechanical abuse.
Couplings for IMC can be threaded or threadless. IMC is available from half-inch to four inches in diameter.
Some Hospital Grade (green dot) connectors, plugs and receptacles are available with transparent housings for inspection. The items shown here include 2x proprietary magnification windows that allow better inspection of completed terminal housings. Courtesy: Marinco Specialty Wiring Devices.
Electric Metal Tubing
Electric Metal Tubing (EMT) has a wall thickness about 40 percent less than RGC, making it lighter and less expensive. EMT is used mostly for branch circuits above suspended ceilings. Unlike RGC and IMC, EMT is not threaded into a fitting or box.
EMT mostly uses compression or set-screw fittings. EMT can be jacketed with polyvinyl chloride (PVC) to make it resistant to corrosive chemicals. Since EMT is light-weight, it should not be subjected to severe physical abuse. Proper care is a must to prevent damage to the PVC jacket during cutting or bending.
Rigid Aluminum Conduit
Rigid Aluminum Conduit (RAC) is light-weight, rust-proof and provides a better grounding system than RGC. Since aluminum is a non-sparking metal, it is safe when used near explosive gases found in hazardous locations. Due to RAC's relatively fragile nature, it
Plugs and connectors are available for rough conditions to seal out water, dirt, dust and particle contaminants. Courtesy: Leviton Manufacturing Co., Inc.
should not be installed in concrete slabs. Rigid steel elbows usually are used with RAC.
Rigid PVC Conduit
PVC conduit is light-weight and works well even in highly corrosive areas or places where moisture and condensation are a problem. Two advantages of PVC conduit are that it has no voltage limitation, and it resists aging from ozone and sunlight exposure. Since PVC is not conductive, a grounding conductor may also be required.
The two common PVC conduits are Schedule 40 and 80. Schedule 80 has a thicker wall and is more durable than Schedule 40.
Flexible Metal Conduit
Flexible conduit is used when a connection is needed with vibrating or moving parts, such as motors, or when rigid conduit
cannot be formed to a required contour. Flexible conduit normally is used for short distances of no more than 60 ft. PVC-jacketed, liquid-tight, flexible conduit is used for damp locations.
Flexible metal conduit can be used for grounding if the flex is not more than 6 ft. and is protected with an overcurrent protection device of no more than 20 amperes.
Liquid-Tight Flexible Conduit
Liquid-tight flexible conduit is similar to a flexible metallic conduit, with an outer plastic jacket impervious to water, oil and chemicals. It cannot be used as a grounding conductor in sizes of 1-1/2 inches or larger. Note that we must also use the appropriate water-tight connectors with such conduit. If flexible conduits are used in lieu of fixed conduit, an equipment grounding conductor is required.
Busways are factory-built sheet-metal enclosures for conductors. The conductors can be rectangular copper or aluminum busbars supported by insulators. Busways are usually manufactured in 10 ft. sections and field-assembled on-site.
Designers like busways because of the flexibility they offer when use of the space within a plant changes over time. Switches can be installed along the busway with great ease, and busways can be extended vertically and horizontally. They are primarily used for low-voltage distribution systems. For high-voltage systems, cables are appropriate for most applications.
In addition to the factors mentioned earlier, we must consider the mechanical strength of the conductors. Overhead transmission lines must be strong enough to carry any mechanical load that may be reasonably expected. The most severe loads are experienced during winter ice and wind. Underground conductors must be able to withstand the allowable stress to which the cable is subjected during installation. The need for adequate cable strength will be a factor in determining its electrical conductive capacity.
When comparing costs of overhead and underground transmission lines, it should be noted that underground construction is much more expensive. Overhead lines, however, are considered
A cable tray system. Shown is a system designed for maximum flexibility for future wiring upgrades, as well as in applications including suspended ceilings, open spaces and raised floors. This system can be used to house power and communications wiring. Courtesy: The Wiremold Company.
unsightly and detract from an attractive environment. As a consequence, underground lines are most commonly used in new installations. One thing to watch out for: Cables are usually located in duct banks, so insulation that weakens prior to failure cannot be seen, often resulting in inadequate maintenance. Traditionally, underground cables are the system component that receives the least attention, perhaps since ''out of sight is out of mind."
Paper-impregnated lead cable (PILC) and varnished cloth (VC) have been the insulation workhorses of the industry since 1910.
PILC has compound migration problems if used on vertical risers, and termination and splicing also are more difficult.
VC cables are relatively more expensive for the quality of the dielectric, but they do not have the compound migration problems.
The combination of VC cables for vertical risers and PILC for horizontal runs has long been used successfully. During the past
two decades, the petrochemical industry has introduced a variety of polyethylene compounds as insulation materials with good insulating characteristics, such as high moisture resistance, good low-temperature performance, high ozone resistance, and greater abrasion resistance. These cables are lighter in weight compared to PILC, and terminations and splicing are relatively easier. The two common types of cable here are ethylene-propylene rubber (EPR) and cross-linked polyethylene (XLPE).
EPR cables have excellent electrical properties. They have good corona and ozone resistance as compared to XLPE, thus making them suitable for high-voltage applications. Like rubber, both materials can burn and are not particularly oil-resistant. Therefore, the insulated cable must be provided with other forms of protection in such applications. (Both EPR and XLPE burn without releasing toxic gases. If judged in terms of oxygen index, EPR burns slower than XLPE.)
For low voltages, either product can serve wet locations; however, EPR has a better service record in higher-voltage applications should the cable come into contact with moisture. In addition, EPR requires less insulation material, making it lighter than XLPE. On the other hand, XLPE has high mechanical strength and resistance to sunlight and heat.
In addition to EPR and XLPE, some of the other insulation materials used for cables and their common industry names are:
Isobutylene isoprene (Butyl).
Styrene butadiene rubber (SBR).
Chlorosulfonated polyethylene (Hypalon).
Ethylene tetrafluoroethylene (Tefzel).
Methyl chlorosilane (Silicone).
Shielding is the practice of confining the dielectric field within the cable insulation. Shielding is either a thin copper tape or concentrically wrapped copper wires. For operating voltages less
than 2,000 volts, shielding is normally not used. Above 2,000 volts and below 35,000 volts, the NEC and ICEA require shielding. The NEC does, however, allow non-shielded cable up to 8,000 volts provided the cables are listed by a nationally recognized test lab and approved for such usage. Since shielded cables are more expensive, and their terminations are complex and require more space, non-shielded cables are used in most 2,400V and 4,160V systems.
For a shielded cable, the voltage equipotential surfaces are uniform concentric cylinders between the conductor and the shield. The lines of force and stress are uniform and radial, which means there is no tangential or longitudinal stress within the insulation or on its surface. In a non-shielded cable, on the other hand, the voltage equipotential surfaces are cylindrical but not concentric, so the cable surface will be at different potentials. If the electric field is intense, surface discharge will take place and cause ionization of the air particles. Surface tracking, burning and destructive discharges to ground will deteriorate cable insulation or jackets. If the surface is moist or covered with salt or dirt, the condition will be worse.
One problem with non-shielded cable is radio and television interference, especially if such equipment is close to cables in damp conduits. A non-shielded cable can also be more hazardous from a safety point of view; shielding can remove fire and explosion risks by grounding electrical discharges, which is especially important in gaseous locations. Finally, it is important to make sure that the shield is always at or near ground potential by providing an adequate connection between the shield and ground. An ungrounded or floating shield can be more hazardous than unshielded cable. Therefore, stress cones should be used at all shielding terminations, such as potheads, according to industry standards.
There are a number of factors used in determining the proper size of cables: the voltage level, current-carrying capacity and physical strength.
The insulation must withstand the electric stress both at normal and faulted conditions. So the rating is based on the phase-to-phase voltage of the system where cable is to be used, and whether the system is grounded or ungrounded.
With grounded systems, where the ground-fault protection can clear such a fault within one minute, 100 percent voltage-rated cables are applicable. If this clearing criterion is not met, 133 percent-rated cables are used if there is assurance that the fault will be cleared within one hour; otherwise, the cable rating should be 173 percent.
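The 100/133/173 percent selection rule reduces to a small decision function. The sketch below encodes the rule as stated above; actual selection should follow the cable manufacturer's and ICEA guidance.

```python
def insulation_level_percent(system_grounded, clearing_time_seconds):
    """Cable voltage-rating level (percent of system phase-to-phase voltage)
    per the fault-clearing rule described in the text."""
    if system_grounded and clearing_time_seconds <= 60:
        return 100   # grounded system, fault cleared within one minute
    if clearing_time_seconds <= 3600:
        return 133   # assured clearing within one hour
    return 173       # no assurance of clearing within one hour

# A grounded 13.8-kV feeder whose relaying clears ground faults in
# a few seconds can use 100 percent-rated cable.
```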
The current-carrying capacity is based on thermal heating from both the load current and other nearby cables; ampacity tables show the minimum sizes. Technical considerations include ambient temperature, future load growth, voltage drop and short circuit heating. We must ensure that the cable can withstand the short circuit current without any thermal damage until the fault is removed by protective devices.
When cable sizes bigger than 500 to 750 kcmil (kilo circular mil) are needed, two or more cables are used in parallel. This is because for larger sizes, the current-carrying capacity per circular mil drops off due to skin effect, lower relative surface area, proximity effect, etc. However, it should be noted that with parallel lines the overload devices cannot protect individual cables, thus additional line current limiters must be used.
As mentioned earlier, the short circuit current rating must be considered, because in addition to thermal stresses, mechanical stresses are exerted on the conductor during a short circuit. Cables must also be able to withstand the mechanical stresses they encounter during handling and installation. Some of the limitations that should be taken into account are the cable bending radius, pulling tension and abrasion tolerance. The bending radius should normally not be smaller than 12 times the outer diameter of the cable. The pulling tension exerted on the cable should not rise above 100 pounds per ft. Since the abrasion tolerance of cables is low, cables must not be dragged along rough surfaces like gravel, metal edges or any sharp object. It is important to keep in mind that cable almost always fails at individual spots rather than over the entire length; that is why one damaged spot in the cable jacket can cause a failure even if the rest of the cable is in perfect shape. Finally, in specifying cable, the items listed below can serve as a checklist:
Number of conductors in the cable.
Installation ambient conditions.
Applicable UL listing.
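The handling limits above can be captured as simple checks. The 12x bend-radius multiplier comes from the text; treat it as a rule of thumb and defer to the cable datasheet for a specific product.

```python
def min_bend_radius_inches(outer_diameter_inches, multiplier=12):
    """Minimum bending radius: at least 12 times the cable outer diameter,
    per the rule of thumb in the text."""
    return multiplier * outer_diameter_inches

def bend_ok(actual_radius_inches, outer_diameter_inches):
    """True if a proposed bend respects the minimum bending radius."""
    return actual_radius_inches >= min_bend_radius_inches(outer_diameter_inches)

# A 1.5-inch-OD cable needs at least an 18-inch bending radius,
# so a 12-inch training bend would be too tight.
```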
Termination And Splicing
There are different splicing kits available, and manufacturers have a wide variety of techniques for splicing. Therefore, it is important to first make sure that the proper size and type of splice is used for every situation, that the manufacturer's recommendations are followed, and that the work is performed by skilled personnel.
Cable splices and terminations are usually the weakest points in a cable system, so adequate attention has to be devoted to them during installation and subsequent maintenance. Although in a shielded system the dielectric field consists of a symmetrically distributed radial stress, at the point of termination there is longitudinal stress over the surface of the cable insulation. The combination of longitudinal and radial stress at the cable ends gives the termination the cable's minimum dielectric strength. The common method of reducing this stress is to gradually add insulation at the termination point, built up in the shape of an inverted "V" all around the cable. This is commonly called a stress relief cone. Generally speaking, cable terminations are divided into four types:
Taped Terminations - A taped termination is used for systems of up to 15,000 volts, both for indoor and outdoor applications and for shielded and unshielded cables. As the name states, the termination is achieved by building successive layers of tape to obtain the required level of insulation.
Armored Terminations - For cables with a metallic jacket, in
addition to taped termination, a special fitting is used to ground the armor. They are sized individually for different cable sizes.
Potheads - For PILC cables, potheads are used. These consist of a hermetically sealed porcelain insulator with a metallic body. The assembled unit is filled with dielectric fluid or some other insulating compound. Potheads provide an excellent, reliable seal against moisture and mechanical damage and thus are recommended for many outdoor applications. There are two common types of potheads: capnut and solder seal.
Preassembled Terminations - During the past two decades, preassembled terminations have become popular. They can be installed relatively quickly and easily, and maintain a high degree of consistency and overall quality. Preassembled splices are applicable when a waterproof seal for the cable jacket is required, as for submersible cables, direct burial of cables, and other situations where the jacket should provide the same level of protection as the cable insulation.
Since electric cables are the arteries of an electrical system, adequate care and maintenance is an essential ingredient of system reliability. There are a variety of tests used on cable to make sure it is working properly. The most common ones are the megohmmeter test, dielectric absorption test and high potential (hypot) testing.
Megohmmeter Test - This test determines the insulation resistance between the conductor and ground. A megohmmeter is used to measure the resistance. It is basically a high-voltage ohmmeter consisting of a small DC generator and a milliampere meter. Megohmmeters generally have ranges of 100V to 5,000V.
Good insulation is indicated by an initial dip of the milliampere meter pointer toward zero, followed by a steady rise; the initial dip is due to the capacitive effect of the cable.
If the pointer makes slight twitches down scale, however, this implies current leakage along the surface of dirty insulation.
To compare the insulation with its historical record, a spot test is performed: the megohmmeter is applied for 60 seconds, a reading is taken, and the result is compared with past readings to see how much the insulation has degraded.
Dielectric Absorption Test - This test provides better information than the spot test and takes considerably longer than the megohmmeter test. Since the current is inversely related to time, insulation resistance will rise gradually if the cable is good, and flatten rapidly if the insulation is faulty. The insulation resistance is plotted against time.
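A common way to reduce the time-resistance curve to a single number, not named in the text but standard practice, is to take the ratio of a later reading to an earlier one (the 10-minute to 1-minute ratio is known as the polarization index). A minimal sketch:

```python
def absorption_ratio(r_later_megohms, r_earlier_megohms):
    """Ratio of a later to an earlier insulation-resistance reading.

    Good insulation keeps charging: resistance rises with time, so the
    ratio is comfortably above 1. A ratio near or below 1 means the
    resistance curve flattened early, suggesting faulty insulation.
    """
    return r_later_megohms / r_earlier_megohms

# 10-minute reading of 400 megohms vs. 1-minute reading of 200 megohms:
# ratio 2.0, a healthy result by the usual rule of thumb.
```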
High Potential Test - The above two tests cannot determine the dielectric strength of cable insulation under high-voltage stress. A high potential (hypot) test applies stress beyond what a cable encounters under normal use. It is the only way to obtain positive proof that the cable insulation has the strength to withstand overvoltages caused by normal system surges. There are two types of hypot tests, AC and DC (see Figure 4-4).
The AC hypot is used almost exclusively to test insulation to the point of breakdown. The DC hypot, if applied properly, is a nondestructive test, so it is commonly used for maintenance. However, proper care is needed, because an improperly applied DC hypot can also be destructive.
Cable is tested before shipping from the factory using the AC hypot test at voltage levels set by the Insulated Power Cable Engineers Association (IPCEA), about three times the system operating voltage.
When a cable is installed in the field, the test voltage should be no more than 80 percent of the factory level. During routine maintenance, the cable is tested at about 60 percent of the factory level, and the conversion factor from AC to DC test voltage should be in the range of 1.7 to 3.
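As a rough sketch, the rules of thumb above can be combined into one calculation (the function name and the 15 kV example are illustrative only; actual test levels must come from the applicable standard and the cable manufacturer):

```python
def hypot_test_voltages(operating_kv, ac_to_dc=1.7):
    """Rule-of-thumb hypot test levels, in kV, as described in the text:
    factory AC test at about 3x operating voltage, field acceptance at
    no more than 80% of the factory level, routine maintenance at about
    60% of the factory level, and an AC-to-DC conversion factor of
    1.7 to 3 (1.7 used here)."""
    factory_ac = 3.0 * operating_kv
    return {
        "factory_ac": factory_ac,
        "field_ac_max": 0.8 * factory_ac,
        "maintenance_ac": 0.6 * factory_ac,
        "maintenance_dc": 0.6 * factory_ac * ac_to_dc,
    }

# Illustrative example for a cable operating at 15 kV:
print(hypot_test_voltages(15.0))
```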
The test frequency has been the subject of controversy in the past two decades. One school of thought believes that when cables are tested regularly, it reduces their life expectancy because of the higher-than-normal system voltages that cables are subjected to during the test. Another school says that regular testing is needed to know if a cable is getting weak, and so it can be replaced before an actual failure.
One method of developing a testing frequency is to examine the probability of cable failure versus service life. Cable has a high failure rate during the first few years after installation; this high mortality is primarily due to weak spots caused by manufacturing defects. If the cable survives this period, the probability of failure drops to a low figure and then gradually increases over the expected average life of the cable. Therefore, after a cable is installed and tested, it should be tested annually for the first three years. Afterward, a test frequency of once every 5-7 years is recommended. One final note to remember is that cables should be tested only by qualified individuals. Otherwise, a system is better off not tested.
Portable device used for non-destructive DC hypot field-testing of cables and equipment. Reprinted by permission of Associated Research, Inc.
Until two decades ago, communications wiring was a narrow topic which was exclusively the responsibility of the telephone companies. This is because the majority of the wiring was used for voice, and later for low-speed occasional data transfers. With the breakup of AT&T and major technological breakthroughs in computer technology, data communication took on a life of its own. By the '80s, professional offices experienced a proliferation of personal computers as people grasped the significant breakthroughs in productivity offered by computers. This dramatically changed the paradigms of office operations.
Originally, one of the main reasons for connecting computers in a network was to share expensive output devices such as printers. Today, however, access to databases and different computer systems is a basic need of most offices, and with the proliferation of the Internet and World Wide Web, adequate communication systems have become essential to the survival of many businesses. Premises wiring and cabling are thus becoming an increasingly important element of building design.
Before we discuss communications cabling, let us cover a few of the basic concepts pertaining to communication systems in general.
"Topology" refers to the geometric arrangements of the data links with the input/output devices. There are three basic topologies: star, ring and linear bus. For star topology, all devices are connected to a main hub. So failure of the hub will result in the failure of the whole systemhowever, the failure of individual units will not affect the rest of the system. Telephone systems and some local area networks (LAN) are star configurations.
The system throughput is a function of the central node's capacity and the maximum number of connected units. In a ring topology, the units are connected to form a closed loop; if one unit fails, the entire system fails unless there is a bypass. In a linear bus, individual nodes hang from a common bus, and the failure of an individual node does not affect the rest of the system.
Transmission Path Establishment
Before a message is transmitted between two devices, a communication path has to be established. There are two major ways of establishing a path: circuit switching and packet switching.
Circuit Switching - In circuit switching, a communication link is set up exclusively and continuously for the devices prior to transmission. This suits voice communication systems, where establishing the connection is a small percentage of the entire process. Data communication, however, consists of short bursts, so circuit switching is inefficient for it.
Packet Switching - In packet switching, the message is divided into standard segments with a message header and trailer, referred to as packets. There are two variations of packet switching: store-and-forward and broadcast.
Channel Access And Allocation
In a communication system, the channel is allocated by time-division multiplexing or frequency-division multiplexing.
In time-division multiplexing, unique time slots are allocated to each device, which during its slot has access to the entire channel bandwidth. By contrast, in frequency-division multiplexing, the channel bandwidth is divided into unique segments that are allocated to individual devices, and every node can communicate at any time.
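The two allocation schemes can be sketched in a few lines (the helper names are invented, and real multiplexers also handle framing and guard bands, which this ignores):

```python
# Round-robin TDM: each device in turn gets the whole bandwidth for one slot.
def tdm_schedule(devices, n_slots):
    """Assign n_slots consecutive time slots to the devices in rotation."""
    return [devices[i % len(devices)] for i in range(n_slots)]

# FDM: the band is divided into fixed sub-channels, one per device.
def fdm_allocation(devices, bandwidth_hz):
    """Give each device an equal, permanent share of the bandwidth."""
    share = bandwidth_hz / len(devices)
    return {d: share for d in devices}

print(tdm_schedule(["A", "B", "C"], 6))    # → ['A', 'B', 'C', 'A', 'B', 'C']
print(fdm_allocation(["A", "B"], 8000.0))  # → {'A': 4000.0, 'B': 4000.0}
```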
Channel access can be centralized or distributed, and deterministic or random. The most common of the deterministic methods is the token-passing technique. When a node holds the token, it has control of the network and uses it for as long as necessary; the token is then passed to the next node.
One of the drawbacks to this system is "hogging," where a particular node has control of the network for a long time. The token-passing system works best when the data transfers between nodes are short and frequent.
A good example of the random technique is the Carrier Sense Multiple Access with Collision Detection (CSMA/CD) method. CSMA/CD is based on the idea of "listening before talking": when a node wants to transmit a message, it first listens to see whether the channel is free, and only then transmits the data. If more than one node transmits at once, a collision is detected, and the colliding nodes stop transmitting and wait to retry later. CSMA/CD is more appropriate for less-frequent, longer messages.
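The listen-before-talking idea can be sketched as a toy, single-round model (entirely illustrative; real CSMA/CD adds binary exponential backoff and retransmission, which are omitted here):

```python
import random

def csma_cd_round(channel_busy, nodes, p_transmit=0.5, rng=None):
    """One round of a toy CSMA/CD model: every node senses the carrier
    first; if the channel is idle, each ready node transmits with
    probability p_transmit, and simultaneous transmissions collide."""
    rng = rng or random.Random()
    if channel_busy:
        return "deferred"          # carrier sensed: everyone waits
    senders = [n for n in nodes if rng.random() < p_transmit]
    if len(senders) > 1:
        return "collision"         # all senders stop and back off
    if len(senders) == 1:
        return f"sent by {senders[0]}"
    return "idle"

print(csma_cd_round(True, ["A", "B"]))  # carrier present, so all defer
```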
A number of national and international agencies have been involved in setting standards for this industry.
The main national groups are the American National Standards Institute (ANSI), Computer and Business Equipment Manufacturing Association (CBEMA), Electronic Industries Association (EIA), Advanced Data Communications Control Procedure (ADCCP), and the Institute of Electrical and Electronic Engineers (IEEE).
The major international organizations involved in developing standards are the International Electrotechnical Commission (IEC),
European Computer Manufacturers Association (ECMA), International Federation of Information Processing (IFIP), Consultative Committee on International Telegraph and Telephone (CCITT) and the International Organization for Standardization (ISO). The principal group involved in developing standards is ISO, which has devised a seven-layer model, the Open Systems Interconnection (OSI) model, which has gained acceptance and popularity with most manufacturers.
Cabling is one of the least noticeable elements since it is commonly out of sight, but of course it is a critical component of communications architecture.
There are three types of communication cable: coaxial cable (coax), twisted pair and fiber optics.
Coaxial cable consists of a center wire surrounded by insulation, a foil or braided shield, and an outer jacket.
Coax originally was the workhorse of broad-band communication systems such as video applications, Ethernet, etc. It can generally accommodate longer distances and higher data transmission rates. Due to its inherent design feature, coax is not as susceptible to radio frequency and electromagnetic interference as other types of cables.
The main disadvantage of coax is cost; it also can be difficult to work with.
The standard coax used for Ethernet and IEEE 802.3 LANs has an impedance of 50 ohms.
Twisted Pair Cables
Twisted pair cables have become popular for data communication. The cable properties are measured by the diameter or gauge of the wire, the number of twists per foot, and the type of insulation. Twisted pair cable may be shielded or unshielded.
Shielded Twisted Pair - A shielded twisted pair (STP) consists of a foil or mesh shield surrounding the twisted pairs. In some applications, the individual twisted pairs have individual shields. STPs support higher data transmission rates than unshielded pairs and are easier to work with than coax cable; however, they are more expensive.
Unshielded Twisted Pair - By contrast, unshielded twisted pair (UTP) consists of one or more twisted pairs in an insulation jacket. At one time, UTP was perceived to have limited use in data transmission. However, today's UTP cables can support data rates of above 100 Mbps (megabits per second). UTP is currently supported by Ethernet, Token Ring, ARCNET, Apple Local Talk and other common LANs. It has a low cost and is easy to install. UTP can support voice and data communication systems.
However, compared to coax, the length of the runs is limited. And as mentioned, UTPs support lower data transmission rates than STPs.
Fiber Optic Cable
Fiber optic cable has been in use since the late seventies. Compared to copper cable, it supports a vastly higher data transmission rate, is not susceptible to electrical interference, and is more difficult to tap into. So if security and the transmission of large amounts of data, such as real-time video, are priorities, fiber optic cable is desirable.
The size of fiber cable is at least an order of magnitude smaller than copper cable. For instance, the diameter of single mode fiber is 0.000472 inches while the average human hair is about 0.004 inches.
Although the cost of fiber cable has continually dropped for the past decade, it is still the most expensive cabling choice. One reason for the high cost is the termination electronics required at both cable ends to convert electronic data to light and vice versa. The technology is most cost-effective for longer runs in inter-building connections and backbones.
Although the above discussion covers the common media for data communications, there have been developments in infrared, laser beam, microwave and radio transmission. With today's technology, these media are currently limited as to speed, bandwidth and coverage. In some situations for low-speed data, however, they can be a viable alternative if cable installation proves costly.
The Electronic Industries Association (EIA) and the Telecommunications Industry Association (TIA), in collaboration with the American National Standards Institute (ANSI), have published voice and data cabling standards. The most important is the Commercial Building Wiring Standard (ANSI/EIA/TIA-568), which is designed to support a multivendor environment.
The Canadian Standards Association (CSA) has adopted a similar standard, CSA T529.
Internationally, the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) have developed the ISO/IEC JTC 1/SC 25 N 106 standards. Both the Canadian and international standards have incorporated much of EIA/TIA-568. The 568 supplements, EIA/TIA TSB-36 and EIA/TIA TSB-40, define three grades of UTP cable, corresponding to UL's Level III, Level IV and Level V designations: Category 3 supports speeds up to 10 Mbps, Category 4 up to 16 Mbps, and Category 5 up to 100 Mbps. STP can be used for speeds up to 150 Mbps. Category 5 is making 50-ohm coax cable (which has a maximum speed of 100 Mbps) obsolete even for video applications.
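The category speed limits just quoted can be captured in a small lookup (an invented helper for illustration, not part of any standard):

```python
# Maximum data rates (Mbps) for the UTP categories cited in the text.
CATEGORY_MAX_MBPS = {3: 10, 4: 16, 5: 100}

def minimum_category(required_mbps):
    """Return the lowest UTP category that supports the required data
    rate, or None if no listed category is fast enough."""
    for cat in sorted(CATEGORY_MAX_MBPS):
        if CATEGORY_MAX_MBPS[cat] >= required_mbps:
            return cat
    return None

print(minimum_category(16))   # → 4 (16 Mbps Token Ring needs Category 4)
print(minimum_category(100))  # → 5
```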
Regarding distribution of the cables, we must consider whether the application requires an inter-building or an intra-building system.
Inter-Building Systems - If the facilities in a multibuilding campus need to be connected, then the inter-building issues must be attended to carefully. There are three common methods of installing inter-building cabling: underground conduits, direct-buried cables, and tunnels. On rare occasions, overhead lines are installed. Since a significant part of the cost of underground cabling is trenching and backfill, it is important to install a ducting system that can accommodate future changes in network topology, and spare conduits should be installed.
The key design elements are trench location and depth, separation from other utilities, maximum angles and curve radii, duct arrangements, manhole size and location, and manhole frame and cover load requirements. Keeping accurate as-built drawings can save the facility manager many headaches in the future.
In addition, the installation should be in accordance with NEC, National Electric Safety Code (ANSI C2) and the Uniform Building Code (UBC), as well as local ordinances.
Intra-Building Systems - Intra-building distribution systems are installed within a building, where all of the communications runs take place.
Control Of Ambient Conditions. The walls, ceiling and floor of the closet should be treated to minimize dust; in addition, false ceilings should be avoided. The closets should be kept dry and located where there is no danger of flooding.
Location Of Closets. Communication closets should have a minimum size of 18 sq.ft. and a continuous wall of at least 5 ft. In a multistory facility, the closets should be aligned vertically, with a riser shaft for vertical cable runs.
The location of closets on a floor requires careful attention. Closets should be strategically located in an area which will minimize wiring runs. They need to be away from any source of electromagnetic interference such as copying centers, power transformers, etc. Finding a good location for communication closets can be a challenge for designers, especially in renovation projects, because in many cases the old telephone closet simply does not meet these needs, and negotiating and carving space from currently assignable square footage can become a major political issue among departments.
The cable distribution system can be in the ceiling or in the floor.
Ceiling Distribution Systems. The ceiling distribution is normally run in the space between the suspended ceiling and the structural floor above. Cables are run down through the walls or poles, or up to the floor above. The ceiling system offers an important advantage: common access to most of a floor. On the other hand, clearance under the air conditioning ducts and electrical wiring is not always adequate. Moreover, lack of sufficient ceiling support to hold the cable, as well as potential electrical interference, are issues that need to be addressed.
The ceiling distribution system comprises four methods: zone, home run, raceway, and poke-through.
With a zone method, the floor is divided into several zones and then a common cable is pulled to the central area of each zone, terminating at an adapter. From there a smaller cable is run through the walls and poles to every station in that zone. This is an economical system and offers flexibility (see Figure 4-5).
With the home run method, a separate cable is run from the closet to every desired outlet. This system offers more flexibility and eliminates the possibility of interference between analog and digital signals in the same cable sheath.
A raceway is especially useful in larger buildings, which have a more complex distribution system (see Figures 4-3 and 4-6). It provides support and mechanical protection for the cables. However, a raceway is more expensive to install and requires an extra support system, which could limit flexibility in the future. Some newer products are designed to allow more easily for future growth.
The poke-through method involves placing the cable in the suspended ceiling of the floor below and drilling holes through the floor. One problem with this approach is that effective sealing around the cable is critical to maintaining the fire barrier between floors. The poke-through method is not recommended and is normally used only as a last resort.
Floor Distribution Systems. There are also four methods for floor distribution systems: underfloor conduit, underfloor channel, cellular floor and raised floor.
The underfloor conduits consist of installing home runs between the closet and the location of the terminal equipment. If there is a relatively small number of outlets and the final location of the outlets is established, this is an economical method. However, it lacks flexibility if the work station arrangements need to be changed.
The underfloor channel consists of a row of distribution metal channels that is often enclosed in concrete and accessed through junction boxes. There is a partition in the channel so that it accommodates both electrical and communication wiring. It is a secure method if wire-tapping is a concern; in addition, it has lower electrical interference problems. The system needs to be considered at the early stages of the building design.
The cellular floor consists of corrugated, cardboard-like channels through which cables can pass. The distribution cells can be constructed from steel or concrete. It has all of the advantages of the above two systems, in addition to high capacity for large cables; however, the initial cost is higher and some floor weight is added.
The raised floor distribution consists of square plates that rest on metallic pedestals (see Figure 4-8). The square covers have a bottom steel layer to sustain the expected floor loading, covered with a laminated wood core, vinyl tiles or carpet tiles. The raised floor is also used as a plenum for air conditioning. Raised floor systems provide the ultimate flexibility for cable distribution. Disadvantages include the higher cost and the acoustic problems caused by the sounding-board effect as people walk over the plates. In addition, if the raised floor is at a different elevation than the rest of the floor, constructing access ramps to meet Americans with Disabilities Act (ADA) requirements will be costly, in addition to losing useful assignable space.
With the zone method of cable distribution, a common cable is fed to the center of zones, where it terminates at an adapter that in turn runs smaller cable to each workstation. Shown is a unit with a Utility Center that serves as a mini tele/data wiring closet for clusters of 8-16 workstations. Courtesy: Powerflor.
Other Methods. Ceiling and floor distribution systems are appropriate in new buildings. In renovation projects, however, they might not offer feasible solutions due to structural constraints, the presence of asbestos in the floor or ceiling overspray, and other cost constraints. In these cases, less expensive alternatives are baseboard raceways, overfloor ducts, molding raceways and flat wire. Baseboard raceway and molding raceway cables are surface mounted on the floor or wall in a plastic, wood or metal channel; both provide easy access to the cable. An overfloor duct consists of an enclosed rubber or metal duct secured to the floor, with the cable run through the channel inside; its exterior is shaped to minimize trip hazards, and it is normally used in low-traffic areas. Flat wire is typically installed under carpet tiles where the thickness of regular wire is a problem. It is not recommended for digital signal applications because the wires run parallel rather than twisted.
Raceways are used to house wiring and cabling, allowing for future expansion and protecting the cable. Courtesy: Panduit Corp.

Communication wiring is a challenge for designers and sometimes a nightmare for facilities managers. In some cases, the design can be more of an art than a science. The introduction of new products by many vendors has been a great help in solving the most common as well as unusual situations. The designer needs to be sensitive to the fact that cost does not end with installation. As technology continues to evolve at a dramatic pace, flexibility takes on paramount importance. In the author's opinion, systems should be designed based on the simple guidelines below:
The communications electronics installed today will be obsolete within 3-5 years.
The communications cable installed will be obsolete in 5-10 years.
The conduits and cable trays will have an average life of 20-30 years. Given this, it is important that adequate effort and care be taken to assign communication routes that can accommodate future changes. In the long run, the communications routes will determine the main cost of future upgrades.
Communications wiring will remain a major challenge for designers, facilities managers, and owners for many years to come.
Workstation modules allow convenient and adaptable power, data, video and voice connection to the workstation. Courtesy: AMP, Inc.
Two categories of issues must be examined for both power and communication cabling systems: the inter-building and the intra-building concerns.
Today, many commercial and industrial facilities are experiencing capacity shortfalls over time. The degree of severity varies greatly with the age of the building as well as factors particular to the installation. This assertion appears counterintuitive at first glance, because the power consumption of most electronic devices has decreased over time. Although this is true, the number and capability of electronic devices have increased at a much faster rate, so the power requirements of today's equipment-intensive commercial and industrial facilities have outstripped the normal spare capacity built into the original design. The distribution systems in many facilities face potential overload conditions that necessitate capacity expansion.
Zone distribution of cable for power, data and communications often requires a raised floor that is easily accessible for changes. Courtesy: Tate Access Floors, Inc.

The inter-building distribution system normally consists of conduit duct banks that are embedded in concrete. When installing new duct banks among buildings in a complex, it is important to:
Allow adequate spare conduits to accommodate future expansion.
Pay close attention to routing alternatives. The direct connection between two buildings might be the lowest cost, but may also reduce future flexibility.
Before digging the ground for new trenches, locate existing underground utilities to the fullest extent possible. Otherwise, the damage to existing underground utilities can cause interruptions and additional cost to the project.
Keep good records of the as-built drawings. Often, the field changes in construction projects are not reflected in the construction drawings. As a result, the record drawings of the facility will be erroneous.
Make sure that manholes are marked correctly in the drawings. Manholes must have an adequate drainage system. In some cases, lift pumps may be necessary. This can reduce future maintenance problems.
Install a built-in ladder in the manhole if possible. It can facilitate the maintenance work.
Intra-building issues can be expensive because of building construction materials, architectural design and aesthetic considerations. If asbestos is present, the cost will be much higher. When faced with such decisions, the solutions may be at odds with one another. For example, installing surface-mounted wiring will be lower in cost, but may not be aesthetically acceptable. With some of the new products available for premises wiring, the task has become relatively easier. However, it is still important to install additional spare circuits during such capacity expansion projects. In the long run, they pay for themselves.
Another factor that should be considered here is the downtime required for cut-over of the new capacity. Since most operations cannot afford extended downtimes, a parallel system needs to be built and the cut-over accomplished during low periods (such as evenings or weekends).
Imagine it is late in the afternoon and you are finishing an important document for a critical meeting with the board of directors. As you put the finishing touches on your worksheet, the computer suddenly locks up and you lose all of your data. Or worse, you notice smoke and the smell of burning insulation at the outlet on the partition wall. Such horror stories about "dirty power" have become common in the past decade. What can facility managers do?
Power Quality Problems
Today, the electrical distribution system is cluttered with a wide variety of devices that generate power disturbances and interferences. These disturbances, commonly referred to as electrical noise, are the result of transients and harmonics. They can cause hardware failures, logic errors, database corruption, keyboard lockups and other problems. In this chapter, we will discuss the impact of harmonics and power transients, and ways to protect equipment from their effects.
The presence of modern electronic equipment has drastically changed our lives for the better. However, it has also changed the characteristics of electrical loads by introducing a plethora of "non-linear loads" such as personal computers, variable frequency drives, microwave ovens, lighting dimmers and a myriad of other appliances. These loads have created very diverse effects in our electrical distribution systems in the form of voltage distortion, excessive neutral current, overheated transformers, decreased distribution capacity, lower power factor and increased motor bearing wear. The problem with non-linear loads is their non-sinusoidal current draw, which is the root cause of harmonics in the distribution system.
Before discussing the impacts of harmonics, let us look at the principles behind the theoretical concept. Ideally, AC electricity is a pure sinusoidal wave of one single frequency, which can also be referred to as "clean power." More specifically, in the United States this frequency is 60 cycles per second (60 Hz). In mathematical terms, such a voltage or current can be represented as (see Figure 5-1):

v(t) = Vmax sin(2πft)

where Vmax is the peak amplitude, f is the frequency and t is time.
In a power distribution system, as long as the circuit devices consist of linear elements such as resistors, unsaturated inductors and capacitors, the AC power wave shape will remain the same as shown in Figure 5-2. However, as soon as non-linear equipmentswitching devices, asymmetrical devices, saturated inductorsare introduced to the distribution system, the wave shape of the AC power will be distorted as shown in Figure 5-3.
The distorted wave can be analyzed mathematically, using a Fourier series, as a sum of sinusoids at integer multiples of the fundamental power frequency, known as harmonics. In other words, for a 60 Hz system, the frequencies of the second, third and fourth harmonics are 120 Hz, 180 Hz and 240 Hz respectively, as shown below:

i(t) = I1 sin(ωt + θ1) + I2 sin(2ωt + θ2) + I3 sin(3ωt + θ3) + ...

where ω = 2πf is the fundamental angular frequency.
The first term stands for the fundamental frequency and the others represent the specific harmonics. One characteristic of this series is that the magnitude of the harmonics steadily decreases; in other words, the amplitude falls as one moves to higher harmonics. For instance, if a device generates third, fifth and seventh harmonics, the dominant harmonic will be the third, followed by the fifth and then the seventh. In a non-linear device, the load current is not proportional to the instantaneous voltage.

Sinusoidal AC wave.
Moreover, the load current is not continuous. In other words, current flows for only a small part of the cycle, usually about 2-3 milliseconds per half cycle, in contrast to drawing current for the full cycle of 16.67 milliseconds (see Figure 5-4).
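The make-up of such a distorted wave can be sketched numerically as a fundamental plus decreasing-amplitude harmonics (the amplitudes below are illustrative, not measured values):

```python
import math

def distorted_wave(t, fundamental_hz=60.0,
                   harmonics=((3, 0.30), (5, 0.15), (7, 0.08))):
    """Instantaneous value of a 60 Hz wave distorted by odd harmonics.
    Each (order, amplitude) pair adds amplitude * sin(order * w * t),
    with the amplitudes shrinking as the order rises."""
    w = 2.0 * math.pi * fundamental_hz
    v = math.sin(w * t)                      # unit-amplitude fundamental
    for order, amp in harmonics:
        v += amp * math.sin(order * w * t)
    return v

# At the fundamental's quarter-cycle peak (t = 1/240 s for 60 Hz),
# the odd harmonics alternately subtract and add:
print(round(distorted_wave(1.0 / 240.0), 4))  # → 0.77
```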
One of the common measures of the influence of harmonics is total harmonic distortion (THD), which gives a general idea of the overall condition without giving a definite view of a specific situation.
THD is defined as the square root of the sum of the squares of the RMS values of all harmonics, expressed as a percentage of the RMS value of the fundamental frequency component; this ratio of harmonic content to fundamental magnitude is an important figure of merit.
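That definition translates directly into a calculation (the current values are illustrative):

```python
import math

def thd_percent(harmonic_rms, fundamental_rms):
    """Total harmonic distortion: the square root of the sum of squares
    of the harmonic RMS values, as a percentage of the fundamental RMS."""
    return 100.0 * math.sqrt(sum(h * h for h in harmonic_rms)) / fundamental_rms

# Illustrative figures: 10 A RMS fundamental with 3 A, 1.5 A and 1 A
# of third, fifth and seventh harmonic current.
print(f"THD = {thd_percent([3.0, 1.5, 1.0], 10.0):.1f}%")  # → THD = 35.0%
```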
Sources Of Harmonics
Typical non-linear devices that cause harmonic wave distortions include:
Variable frequency drives.
Fluorescent lighting systems.
Copying machines and facsimiles.
Computers, modems and other peripheral computer equipment.
Heat pumps and air conditioners.
All sources, however, fall into three major classes of non-linear devices: switching devices, ferromagnetic devices and arcing devices.
Switching Devices - Several electronic components are used in power circuits that switch the flow of electric current. They include diodes, thyristors, gate turn-off thyristors (GTOs) and triacs.
A diode is a two-terminal element which fully conducts current in one direction and blocks the flow of current in the opposite direction.
Response to a non-linear circuit element.
Therefore, when a diode is used in an AC circuit, the result is a pulsating DC current rich in third, fifth and seventh harmonics. Diodes are commonly used in half-wave and full-wave rectifiers to convert AC power to DC.
A thyristor, also known as a silicon-controlled rectifier (SCR), serves a similar purpose. However, it conducts only when a pulse is applied to its control gate. Therefore, not only the direction of current flow but also the quantity of power can be controlled. Because of the additional asymmetry inherent in thyristors, they generate more harmonics than diodes do.
One way to reduce this effect is to use GTOs, in which the conducting half cycle of the current wave is symmetrically chopped. Thyristors and GTOs are used to convert constant-voltage AC power to variable-voltage DC power. A triac is a bidirectional thyristor that chops the current in both directions. It is frequently used to control the RMS voltage applied to a load.
Triacs are commonly used as light dimmers and heater controls.
Waveshape effects on measurement.

The above switching devices are found in variable frequency drives (VFDs), uninterruptible power supplies (UPSs), self- or line-commutated converters, cycloconverters, asynchronous generators and similar equipment. The harmonics generated are of order np ± 1, where p is the pulse number of the converter and n is any integer. Therefore, for six-pulse equipment, the fifth and seventh harmonics are of main concern. Typically, such equipment generates 10-30 percent THD.
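The np ± 1 relationship is easy to tabulate (a sketch; the choice of how many multiples to list is arbitrary):

```python
def characteristic_harmonics(pulses, n_max=3):
    """Harmonic orders h = n*p +/- 1 produced by a p-pulse converter,
    for n = 1 .. n_max."""
    orders = []
    for n in range(1, n_max + 1):
        orders += [n * pulses - 1, n * pulses + 1]
    return orders

# A six-pulse converter produces the 5th and 7th, then 11th/13th, etc.
print(characteristic_harmonics(6))  # → [5, 7, 11, 13, 17, 19]
```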
Ferromagnetic Devices - These are devices with a coil wrapped around an iron core, such as transformers, motors, reactors and similar equipment. The typical magnetic flux relationship in these devices is linear. However, if they are overloaded, the flux-current relationship no longer remains linear at saturation; thus the third, fifth and possibly seventh harmonics will cause a problem. In a three-phase system, if the winding is delta-connected, the third harmonic will remain as a circulating current within the device.
Arcing Devices - Arcing devices include fluorescent, sodium vapor and mercury vapor lighting, as well as arc furnaces.
Effects Of Harmonics
For transformers, the presence of harmonics dramatically increases hysteresis, eddy current and stray losses in the iron core and conductors. In some cases, a transformer feeding non-linear loads might need to be derated by as much as 50 percent according to IEEE Standard 519-1992.
Circuit breakers and other protective devices, as well as measuring devices, will not function properly in the presence of harmonics.
For example, many of these devices do not measure the true RMS value of the current directly. Instead, they either rectify the current signal and multiply the average by 1.11, or measure the peak of the wave and divide by 1.414. This is because the form factor, defined as RMS current divided by average current, is 1.11 for a true sine wave. Similarly, the crest factor, defined as peak current divided by RMS current, is 1.414 for a true sine wave. In the presence of harmonics, the form factor can typically range from 1.5 to 5 and the crest factor from 2 to 3. That is why the meter readings will be incorrect and the relays and circuit breakers will underprotect the system.
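The meter error described here can be reproduced numerically. The sketch below (Python; the waveform and its harmonic amplitudes are hypothetical) compares a true RMS computation with the scaled average-responding and peak-responding readings:

```python
import math

def waveform(t, harmonics):
    """Value at time t of a wave given as (order, amplitude) pairs."""
    return sum(a * math.sin(h * t) for h, a in harmonics)

def meter_readings(harmonics, n=10000):
    """Return (true RMS, average-responding reading, peak-responding reading)."""
    ts = [2 * math.pi * k / n for k in range(n)]
    vals = [waveform(t, harmonics) for t in ts]
    rms = math.sqrt(sum(v * v for v in vals) / n)
    avg = sum(abs(v) for v in vals) / n      # rectified average
    peak = max(abs(v) for v in vals)
    return rms, avg * 1.11, peak / 1.414

# Pure sine: all three readings agree at about 0.707.
print(meter_readings([(1, 1.0)]))
# With a 30% fifth and 20% seventh harmonic, the scaled readings diverge
# from the true RMS value, so a non-true-RMS meter misreads the current.
print(meter_readings([(1, 1.0), (5, 0.3), (7, 0.2)]))
```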
Generally, the effects of harmonic current are localized. However, if the magnitude of the current distortion is large, it will lead to voltage distortion and the problem can spread to much of the rest of the electrical system.
Some of the other problems that are caused by current harmonics include:
Deterioration of electronic equipment performance.
Erratic operation of controls and protective relays.
Faulty readings of watt-hour meters.
Failure of fluorescent or mercury lighting ballasts.
Failure of electrical system components.
Additional heating in transformers, conductors and switchgear.
Creation of hot spots in the windings without any overall appreciable temperature rise.
Low power factor that can lead to utility surcharges in addition to capacity problems.
Overvoltage of system components and voltage distortion.
Nuisance tripping of circuit breakers and adjustable frequency drives.
Capacitor fuse blowing.
With harmonic voltage problems, an additional set of difficulties can occur. First, most computer systems are equipped with sensors to shut off the system when the voltage drops below a certain minimum value. This is to protect information in process and prevent disk head crashes. In cases of severe voltage distortion, an undervoltage sensor might cause the computer system to shut down unnecessarily. In addition, computers and many other electronic devices use the zero crossing of the voltage waveform as a timing mechanism. Normally, the voltage wave crosses the zero line twice every cycle, or once every 1/120 of a second on a 60 Hz system. Voltage distortion near the zero crossing can cause three crossings instead of one, resulting in a timing signal three times faster than intended. Moreover, as the load changes, the timing may return to normal speed.
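The multiple-crossing effect is easy to demonstrate. In the sketch below (Python; the 15th harmonic and its 20 percent amplitude are arbitrary illustrative choices), distortion near the zero crossing turns the two crossings of a clean cycle into six, three clustered at each nominal crossing:

```python
import math

def zero_crossings_per_cycle(wave, n=4096):
    """Count cyclic sign changes over one sampled period of `wave`."""
    samples = [wave(2 * math.pi * (k + 0.5) / n) for k in range(n)]
    return sum(
        1
        for a, b in zip(samples, samples[1:] + samples[:1])
        if (a > 0) != (b > 0)
    )

def clean(t):
    return math.sin(t)

def distorted(t):
    # A strong high-order harmonic wiggles the wave near its zero crossings.
    return math.sin(t) - 0.2 * math.sin(15 * t)

print(zero_crossings_per_cycle(clean))      # 2
print(zero_crossings_per_cycle(distorted))  # 6
```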
Therefore, the effects of voltage distortion are:
Unnecessary computer shutdowns.
Timing errors due to multiple zero crossings.
Metering and relay errors.
Lower power interruption tolerances.
Increased heating of transformers, motors and switchgear.
Methods Of Reducing Harmonic Effect
The response to harmonics is a function of the distribution system inductance, capacitance and resistance. If the natural frequency of the system components is close to one of the harmonics generated by non-linear elements, then the amplitude of that harmonic will increase. To minimize this, make sure that the square root of the system capacitive reactance divided by the inductive reactance is above 8.5. An undesirable harmonic can be prevented from flowing into the system by installing a filter tuned to the specific frequency of interest: a series filter creates a high-impedance condition for the particular harmonic, while a shunt filter diverts the harmonic current through a low-impedance path.
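Expressed as a formula, the guideline says the resonant harmonic order, the square root of the capacitive-to-inductive reactance ratio at the fundamental, should stay above 8.5. A sketch with hypothetical reactance values:

```python
import math

def resonant_harmonic_order(x_c, x_l):
    """Harmonic order at which a capacitor bank (reactance x_c at 60 Hz)
    resonates with the system inductance (reactance x_l at 60 Hz)."""
    return math.sqrt(x_c / x_l)

# Hypothetical values: 29 ohms capacitive, 0.4 ohms inductive.
print(round(resonant_harmonic_order(29.0, 0.4), 2))  # 8.51 -- just clears the guideline
# A different combination can land right on a dominant harmonic:
print(resonant_harmonic_order(25.0, 1.0))  # 5.0 -- resonance at the 5th
```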
Several approaches can reduce the impacts of harmonics:
Derating Distribution Transformers - In the absence of any harmonics, transformers can be fully loaded to their rated value, under normal ambient conditions, without any problems. If there are harmonics in the system, then the transformer must be derated accordingly.
For instance, if the third and fifth harmonics are 20 percent each, then the transformer should be derated by 8 percent. If the third and fifth harmonics are about 70 percent each, then the transformer will be overloaded by connecting it to even a small load.
Originally, power engineers spent a great deal of time looking for ways to eliminate the effects of harmonics on electrical systems. But as we can see from the above example, derating in harmonic-rich situations cannot make them go away. We can at least minimize the problem, however, by employing K-rated transformers that are specifically designed to handle harmonics. Many of these are more efficient than older equipment, which is a bonus.
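K-ratings are commonly matched to a load by computing a K-factor from its harmonic current spectrum. The sketch below (Python) uses one common formulation of the K-factor, not necessarily the exact one the author intends, and a made-up six-pulse load spectrum:

```python
def k_factor(spectrum):
    """K-factor of a load whose harmonic currents are given as
    {harmonic order: amplitude in per-unit of the fundamental}."""
    num = sum((a ** 2) * (h ** 2) for h, a in spectrum.items())
    den = sum(a ** 2 for a in spectrum.values())
    return num / den

# Hypothetical six-pulse load spectrum (5th, 7th, 11th, 13th harmonics).
load = {1: 1.0, 5: 0.20, 7: 0.14, 11: 0.09, 13: 0.08}
print(round(k_factor(load), 1))  # 4.7 -- a K-4 transformer would be marginal here
```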
Oversizing The Neutral Conductor - In a three-phase wye system, the neutral conductor carries only the unbalanced current. Therefore, if the system is balanced, the fundamental current flowing in the neutral wire is zero. However, this is only true of the fundamental frequency current. If there is a third harmonic (or any other triplen: sixth, ninth, etc.) current in the three-phase system, the harmonic currents add up in the neutral instead of canceling out as the fundamental does. The neutral current can potentially be as high as 1.73 times the phase current. Since the neutral circuit is not normally protected by an overload device, such an overload will likely result in burned wires and electrical fires, especially at the connectors and splices.
Therefore, avoid the use of a shared neutral wire when supplying single-phase non-linear loads, or double the size of the neutral when a shared neutral must be used.
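The additive behavior of triplen currents in the neutral can be verified numerically. In this sketch (Python; the 33 percent third-harmonic level is an arbitrary example), the balanced fundamentals cancel while the third harmonics add in phase:

```python
import math

def neutral_rms(third_harmonic_pu, n=10000):
    """RMS neutral current of a balanced wye load whose phase currents
    carry a third harmonic (per-unit of a 1.0-peak fundamental)."""
    total = 0.0
    for k in range(n):
        t = 2 * math.pi * k / n
        i_n = 0.0
        for phase in range(3):
            shift = phase * 2 * math.pi / 3
            i_n += math.sin(t - shift)                            # fundamental
            i_n += third_harmonic_pu * math.sin(3 * (t - shift))  # 3rd harmonic
        total += i_n * i_n
    return math.sqrt(total / n)

# Balanced fundamentals cancel: no harmonic means no neutral current.
print(round(neutral_rms(0.0), 6))   # 0.0
# A 33% third harmonic alone puts nearly a full phase current on the neutral.
print(round(neutral_rms(0.33) * math.sqrt(2), 2))  # 0.99 per-unit peak
```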
Loading Circuit Breakers - Voltage harmonic distortion can result in errors in relays and metering devices. This may reduce
the power interruption tolerances of switching devices. Current harmonic distortion affects the calibration of overload devices and meters, so harmonics can cause nuisance trips of circuit breakers operating near their design trip point. This is due to the peak-current heating of the contacts and the vibrations induced by the higher harmonic currents.
It is recommended that when serving non-linear loads, including computers, the panel circuit breakers should not be loaded above 80 percent of their continuous-load capability.
Electric Motors - The quality of electric power can also affect motor efficiency. Harmonic currents produce their own rotating magnetic fields. Half of these fields rotate forward, and the other half rotate backward: the 5th, 11th, 17th, 23rd, etc. rotate backward, while the 7th, 13th, 19th, 25th, etc. rotate forward. There are several losses associated with this effect. Although the harmonic currents are relatively small, copper losses will be much higher because of the skin effect at the higher harmonic frequencies. The backward-rotating fields produce a torque in a direction opposite to the motor rotation. The slip for harmonics is large, and the current crowds toward the top of the rotor cage, increasing the rotor copper losses. Since the iron losses are a function of frequency, they will be substantially higher as well. These factors raise the overall motor temperature, which not only reduces efficiency but also shortens the useful life of the motor.
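The forward/backward pattern follows from the harmonic order modulo 3, assuming a balanced three-phase system. A small sketch:

```python
def rotation(h):
    """Direction of the rotating field a harmonic of order h produces
    in a balanced three-phase system."""
    r = h % 3
    if r == 1:
        return "forward"    # positive sequence: 1, 7, 13, 19, 25, ...
    if r == 2:
        return "backward"   # negative sequence: 5, 11, 17, 23, ...
    return "none"           # triplens (3, 9, 15, ...): zero sequence

for h in (5, 7, 11, 13, 17, 19):
    print(h, rotation(h))
```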
Power Factor Capacitors - The presence of harmonics reduces the system power factor. Adding more capacitors could cause resonant conditions, attracting high-frequency currents and resulting in overheating or failure of the capacitors.
To avoid such problems, install a harmonic trap, which is a series LC (inductor/capacitor) circuit tuned to the lowest troublesome harmonic. In addition, there are a number of devices that can reduce harmonic effects. The common ones are phase-shifted transformers and filters, discussed below.
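Sizing such a trap comes down to the series resonance formula f = 1/(2π√(LC)). A sketch with a hypothetical capacitor value:

```python
import math

def trap_inductance(c_farads, harmonic, f0=60.0):
    """Inductance that tunes a series LC trap to the given harmonic of f0,
    from the resonance formula f = 1 / (2*pi*sqrt(L*C))."""
    f_tune = harmonic * f0
    return 1.0 / ((2 * math.pi * f_tune) ** 2 * c_farads)

# Hypothetical 100 uF capacitor, trap tuned to the 5th harmonic (300 Hz).
L = trap_inductance(100e-6, 5)
print(round(L * 1000, 2), "mH")  # 2.81 mH
```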
Phase-Shifted Transformer - A phase-shifted transformer cancels the harmonic current from one part of the load with the harmonic current from another part. Unlike a filter, which converts harmonic current into heat, current cancellation is the operating principle here. When a load is connected to the secondary of such a transformer,
the fundamental voltage is shifted by the same amount as the phase differential between the windings. However, each harmonic is shifted by that amount multiplied by its harmonic order.
For example, in a transformer with two outputs phase-shifted 30 degrees apart, the fifth harmonic currents from the two outputs end up 180 degrees out of phase with each other, as do the seventh harmonic currents, resulting in total cancellation of these currents.
Not all harmonics cancel out entirely. However, by varying the phase shift and utilizing more outputs, harmonics up to the twenty-fifth can be canceled effectively. The triplen harmonics are canceled by the delta winding of the primary.
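The cancellation described above is simple trigonometry: two harmonic currents 180 degrees apart sum to zero. A sketch (Python; unit amplitudes assumed):

```python
import math

def residual_fifth(n=1000):
    """Worst-case sum of two fifth-harmonic currents 180 degrees apart."""
    worst = 0.0
    for k in range(n):
        t = 2 * math.pi * k / n
        i_a = math.sin(5 * t)            # section fed by one winding
        i_b = math.sin(5 * t + math.pi)  # section fed 180 degrees away
        worst = max(worst, abs(i_a + i_b))
    return worst

print(residual_fifth())  # ~0: complete cancellation
```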
Phase-shifted transformers are reliable and do a credible job of protecting the equipment from the adverse effects of harmonics.
Passive Filters - Passive filters control harmonics by blocking harmonic currents with high-impedance series filters and diverting them through low-impedance shunt filters.
A series filter can be nothing more than a line reactor, which is usually a coil with a specific inductance. Since inductive reactance is directly proportional to frequency, the reactor looks almost like a short circuit to the 60 Hz fundamental, while its reactance at the harmonic frequencies is high, thus blocking the harmonic currents. Series filters must carry the full load current and withstand the full line voltage.
A shunt filter consists of one or more capacitors in parallel. Since capacitive reactance is inversely proportional to frequency, the harmonic currents will be shunted while the fundamental current is largely blocked. A shunt filter only needs to carry a small fraction of the line current, but it must be able to withstand the full line voltage. Using an inductor-capacitor combination, a passive filter absorbs harmonics at the source. The values of the inductor and capacitor are chosen based on the particular harmonic that is most dominant.
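The frequency dependence that makes series and shunt elements work can be tabulated directly. A sketch (Python; the reactor and capacitor values are hypothetical):

```python
import math

def x_l(l_henries, h, f0=60.0):
    """Inductive reactance at harmonic h: rises linearly with frequency."""
    return 2 * math.pi * h * f0 * l_henries

def x_c(c_farads, h, f0=60.0):
    """Capacitive reactance at harmonic h: falls with frequency."""
    return 1.0 / (2 * math.pi * h * f0 * c_farads)

# Hypothetical 3 mH line reactor and 50 uF shunt capacitor: the reactor
# impedes the harmonics, while the capacitor gives them an easy path.
for h in (1, 5, 7):
    print(h, round(x_l(3e-3, h), 2), round(x_c(50e-6, h), 2))
```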
One word of caution about passive filters: they are bidirectional devices, which means they present both a sink and a source for harmonics. It is therefore conceivable that passive filters will trip circuit breakers, or blow their own components, as they try to trap harmonics arriving from the utility power system.
Active Power Line Conditioners - Active power line conditioners (APLCs) consist of two filters: one in series with the
load and another in parallel with it. Both filters are connected by a common DC link, and digital controls operate both filters with high-speed switching using pulse width modulation.
The series filter provides continuous, instantaneous voltage regulation and input-voltage harmonic compensation. With an input voltage variation of up to 15 percent and an input THD of 10 percent, the APLC output voltage will vary by less than one percent, with a THD of less than one percent. Moreover, the response time of the APLC is less than one millisecond, which exceeds the requirements of most sensitive equipment. The parallel filter supplies all of the harmonic currents drawn by the load, so loads with a THD of more than 100 percent can be accommodated. Even if the load harmonic distortion is worse than this, the APLC will still provide harmonic reduction, keeping the harmonic currents drawn from the source below 3 percent. Moreover, a power factor as low as 70 percent is corrected to unity.
It is important to recognize that an APLC is adaptive to load changes and shifts in line power quality, so it adjusts itself automatically and instantaneously. In addition, since it does not interact with other filters or source impedances, no unexpected and potentially damaging resonances are produced (unlike passive filters).
Electronic Ballasts - In the late 1980s and early 1990s, electronic ballasts installed in fluorescent lighting systems to save energy were often blamed for increasing harmonic distortion levels. At that time, although electronic ballasts represented only a small part of the total load, they could produce THD levels as high as 30 percent, which received a lot of attention in the trade press. Most manufacturers responded with reduced harmonic (RH) ballasts that lowered THD to less than 20 percent so as to qualify for most utility rebate programs. Today, these ballasts are widely available. Manufacturers are in a competitive race toward 0 percent THD, and so far have been able to get it below 10 percent. When specifying electronic ballasts, ensure that they are RH ballasts and will not adversely affect THD levels.
Other Considerations For Harmonic Reduction - Use low-impedance distribution transformers connected in a delta-wye configuration whenever feasible. This way, the third harmonic will be trapped in the delta winding. IEEE Standard 519-1992,
Recommended Practices and Requirements for Harmonic Control in Electrical Power Systems, has established quantitative guidelines for harmonic distortion levels. The Standard limits voltage THD to 5 percent, and to 3 percent for special critical applications. The Standard also defines current distortion limits for various ratios of source short-circuit current to load current.
Generally speaking, further investigation is warranted if any of the conditions below are present:
1. For a branch circuit when ...
a. THD for voltage is more than 6 percent.
b. THD for current is more than 20 percent.
c. THD is caused by one harmonic.
d. THD is concentrated at higher harmonics (i.e., above 17).
2. At the service entrance when ...
a. THD for voltage is more than 4 percent.
b. THD for current is more than 10 percent.
3. At the transformer when ...
a. All THD results from one frequency.
b. THD is concentrated at high frequencies.
c. The unit is loaded more than 70 percent and any harmonic current is present.
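The branch-circuit checklist above can be turned into a small screening routine. A sketch (Python; the function shape and the 90 percent "dominated by one harmonic" cutoff are assumptions added here, while the numeric thresholds come from the list above):

```python
def needs_investigation_branch(v_thd, i_thd, dominant_share, worst_order):
    """Screen a branch circuit against the rule-of-thumb limits listed above.
    dominant_share is the fraction of THD contributed by the single
    largest harmonic (the 0.9 cutoff is an added assumption)."""
    reasons = []
    if v_thd > 0.06:
        reasons.append("voltage THD above 6%")
    if i_thd > 0.20:
        reasons.append("current THD above 20%")
    if dominant_share > 0.9:
        reasons.append("THD dominated by one harmonic")
    if worst_order > 17:
        reasons.append("THD concentrated above the 17th harmonic")
    return reasons

print(needs_investigation_branch(0.07, 0.15, 0.5, 7))  # ['voltage THD above 6%']
```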
The other major problem area with power quality is power disturbances. They can be classified as outages, undervoltage and overvoltage conditions, sags and swells, transients, and electrical noise.

Outages

When the power level drops below 80 percent of its nominal value for more than two seconds, it is considered an outage. Outages
can cause system failures, component damage to sensitive electronic devices, database corruption, loss of data, disk head crashes and destruction of files.
Undervoltage And Overvoltage

When the line voltage drops below 80 percent of nominal, it is considered an undervoltage condition. Similarly, when the line voltage rises above 110 percent of nominal, it is considered an overvoltage condition. The impacts of both on the system are similar to those of power outages.
Sags And Swells
If the voltage drops below 80 percent of nominal for two seconds or less, it is called a sag. Similarly, when the voltage rises above 110 percent of nominal for two seconds or less, it is referred to as a swell.
Sags and swells are usually caused by the starting or stopping of heavy equipment. Both can result in unwarranted equipment shutdowns.
Transients

Transients are random, high-amplitude, high-frequency power spikes, only microseconds in duration. There is no exact definition of a transient in power systems; a wide range of voltage magnitudes and wave shapes are included under the term. More specifically, transients can be as low as twice the system voltage or as high as thousands of volts, and can last from half a microsecond up to 200 microseconds. A transient can be of the impulse type or the oscillatory type.
The impulse type has a rapid rise and decay and is unipolar, while the oscillatory type has a fast rise time but decays exponentially in an oscillatory manner. Common causes include lightning strikes, switching operations (particularly of large capacitors and inductors), arcing faults, static discharges and the firing of silicon-controlled rectifiers and triacs.
Lightning strikes can produce powerful transients through either direct or nearby hits. These transients can be diverted to ground by using lightning arresters.
Switching capacitors may also cause severe transient overvoltages with oscillations.
The impact of transients can be immediate and severe, ranging
from breakdown of solid-state components to more subtle and mysterious effects such as erratic and erroneous operation of computing devices (by introducing spurious commands or negating valid command signals, as well as causing permanent memory loss or program damage).
Electrical Noise

Electrical noise is an unwanted distortion of the normal sine wave caused by a much higher frequency, relatively low-magnitude wave superimposed on the line power. Noise can be steady-state or intermittent. It is caused by switching power supplies, arcing loads, power electronic circuits, etc. There are two types of noise: normal mode noise and common mode noise.
Normal mode noise is voltage noise that appears between the line conductors, or between a line conductor and the neutral conductor.
Common mode noise occurs between the line conductor and grounding conductor, or between the neutral conductor and the grounding conductor.
The common causes of noise are broadcast transmission sources such as microwave radiation, corona discharge, electrostatic processes, arcing faults and sparking commutations of motors and generators. Unlike transients, the causes of noise do not have to be physically connected to the power system. In other words, the coupling can be achieved electromagnetically or electrostatically.
Noise has a lower amplitude than transients and is repetitive as well as longer in duration. Noise appears more as a ripple than a spike superimposed on a fundamental frequency.
The majority of problems caused by electrical noise center on low-voltage devices that operate with fast internal clocks. All microprocessor-based equipment, such as workstations, display monitors, data acquisition computers and controllers, can be affected by logic errors, database corruption and system lock-ups.
Other noise interferences that can affect sensitive electronic equipment result from radio frequency interference (RFI) and electromagnetic interference (EMI) radiated or conducted along data or power lines. Such noise can come from automobile ignition systems, mobile radios, electrical power transmission systems, etc.
Again, electrical noise can cause spurious, erroneous and erratic operation of computers. In addition, it can result in slow degradation of computer components. The installation of an electrostatic-shielded isolation transformer can minimize noise problems. Another problem to be recognized is the fact that some of the devices that are susceptible to noise are capable of generating noise themselves, such as computer peripheral equipment.

(Figure: graphical representation of transient impulse and noise on an AC power sine wave.)
Power Conditioners: Tools To Reduce Power Disturbances
To minimize the impact of power disturbances, there are a number of different power conditioning products available. However, selecting the proper solution requires first identifying the source of the problem: the voltage fluctuations, transients, frequency variations and noise levels involved. The optimal solution is usually a compromise between economics and the nature of the problem.
It is essential to clearly understand the capabilities and limitations of the various power conditioners that are available on the market. There is no one type of power conditioner that is appropriate in all cases. In addition, the most expensive unit is not necessarily the best one. Some of the common techniques for minimizing the impacts of power disturbance are discussed below.
Dedicated Lines - A dedicated line is a separate circuit run from a point well upstream in the power system to the protected equipment. This is usually recommended for mainframe computers or other major sensitive devices. With a dedicated line, the equipment is isolated from disturbances produced by other loads downstream of the point of connection, and electromagnetic and electrostatic coupling along the length of the power line is reduced.
However, dedicated lines do not solve any power problems originating upstream of the point of connection. That is why, in many cases, they are supplemented by additional power conditioners.
Transient Suppressors - A wide range of devices are classified as transient suppressors. The simplest are nothing more than electronic fuses intended to self-destruct in suppressing a transient; the most complex are highly engineered, sophisticated devices that can cost tens of thousands of dollars. They are very effective at chopping high-frequency spikes; however, they cannot do anything about frequency variations, voltage problems or power interruptions. Moreover, they have little effect in suppressing electrical noise.
Another problem with the cheap units is the lack of any indication that the unit has failed; we can go on believing the system is protected when in reality it is not.
Voltage Regulators - Voltage regulators maintain the output voltage over a prespecified range of input voltage variation. The common types are ferroresonant, saturable reactor and automatic tap-changing regulators. They are effective in holding the output voltage within a narrow range of about one percent, even with an input voltage variation of up to 15 percent. However, voltage regulators have no effect on other power quality problems.
Isolation Transformers - These are usually one-to-one transformers that prevent the transfer of electrical noise on the power line to the equipment. The isolation between input and output is enhanced by electrostatic shielding. They are energy-efficient. However, while they can effectively block common mode noise, they cannot block normal (transverse) mode noise or address any other power quality problem.
Uninterruptible Power Supplies - Static uninterruptible power supplies (UPS) provide an effective solution to most power quality
problems, including frequency variations, undervoltage and overvoltage fluctuations, electrical noise, and momentary or sustained power outages.
The cost of a UPS is higher than that of the other devices mentioned. Another limitation is that in the event of a sustained power failure, unless it is supplemented by an emergency generator, operation will be limited by the size of the battery bank. UPS is covered more fully in Chapter 6, where we discuss emergency power.
Power Quality Measurement
The analysis of power quality is becoming an important issue for many facilities managers today. Again, when we refer to a power quality problem we are referring to a variety of possible abnormalities, so it is important to initially identify all possible sources and the type of the problem (or at least make an educated guess).
The best way to measure harmonics is to use a spectrum analyzer; however, there are other less expensive devices that can give us some information about harmonics.
For transient disturbances, a power disturbance analyzer is commonly used. Both the spectrum analyzer and the power disturbance analyzer are portable units. In some cases, however, where constant monitoring is required, they can be stationary units connected to a computing device to continuously monitor and store information about power quality.
A visual inspection of the electrical equipment, using harmonic-detection devices, can give us the first indication that harmonics might be the source of our power quality problem. The next step is to see if any of the common harmonic effects are being experienced by the system.
The simplest test is to measure the system voltage or current and see if it is in the expected range. For a sinusoidal wave, the true RMS value is 0.707 times the peak and 1.11 times the average value. The ratio of the peak value to the RMS value is called the crest factor; for a sinusoidal wave, the crest factor is equal to 1 ÷ 0.707 = 1.414.
Many electric meters sense the peak or average value and display
the RMS value by applying the appropriate multiplier. If the wave is a true sinusoid, the reading will be accurate. If harmonics are present, however, the crest factor and the peak-to-average ratio will be much different. Therefore, if the reading from a peak- or average-responding meter differs from that of a true RMS meter, it implies the presence of harmonics. This is a simple technique for detecting harmonics, but it cannot identify the specific harmonics.
A quick way of determining the presence of triplen harmonics (i.e., third, ninth, etc.) is to investigate the neutral conductor. The neutral conductor carries any unbalanced current in a three-phase system. If the system is balanced, the fundamental current in the neutral will be zero, but the currents from triplen harmonics will add up in the neutral conductor and can be as high as 176 percent of the rated line current. So if the neutral conductor is warmer than the other conductors, it is a quick indication of triplen harmonics. A better way is to measure the neutral current: if it is higher than the load imbalance accounts for, triplen harmonics are present in the system. Finally, if the neutral connectors for electrical partitions fail, there is a good chance that triplen harmonics are present.
Another quick method of identifying excessive neutral current is to measure the voltage difference between neutral and ground. This voltage must be measured while the load is still connected to the system. If the difference is below 2 volts, the neutral current is within safe limits. If it is above 5 volts, there is a good chance of excessive neutral current. If the voltage is between 2 and 5 volts, the situation is questionable.
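These thresholds translate directly into a trivial classification routine (Python; the wording of the returned messages is illustrative):

```python
def neutral_to_ground_check(volts):
    """Classify a neutral-to-ground voltage measured with the load connected."""
    if volts < 2.0:
        return "neutral current within safe limits"
    if volts > 5.0:
        return "likely excessive neutral current"
    return "questionable: investigate further"

print(neutral_to_ground_check(1.4))
print(neutral_to_ground_check(6.2))
print(neutral_to_ground_check(3.0))
```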
Now that we have identified the sources and symptoms of harmonics, a qualitative way of measuring harmonics can be accomplished by using an oscilloscope. The display monitor of an oscilloscope can give us a visual display of what the current or voltage wave looks like, but cannot give any detail about the specific harmonics present in the system. To get such detail, a spectrum analyzer is needed.
A power monitor or spectrum analyzer breaks the wave down into its separate harmonics and measures the percentage, as well as the phase angle, of every harmonic in relation to the fundamental line frequency. In addition, it can calculate THD, which is the square root of the sum of the squares of the individual harmonic amplitudes, expressed relative to the fundamental. THD is an important parameter for setting up system tolerances for harmonics. The power monitor can display the information or print
a hard copy of the data in tabular and graphical form. The printout also includes useful information such as the date and the time of the analysis. These devices can be connected to a computer to record trend data for a long period of time.
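The THD computation itself is straightforward. A sketch (Python; the spectrum is a hypothetical analyzer reading), with each harmonic expressed in per-unit of the fundamental:

```python
import math

def thd(spectrum):
    """THD of a wave given as {harmonic order: amplitude in per-unit
    of the fundamental}: root-sum-square of everything above order 1."""
    return math.sqrt(sum(a * a for h, a in spectrum.items() if h != 1))

# Hypothetical reading: 20% fifth and 14% seventh harmonic.
print(round(100 * thd({1: 1.0, 5: 0.20, 7: 0.14}), 1), "% THD")  # 24.4 % THD
```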
Power Disturbance Measurement
The most common device used in monitoring such abnormalities is a power disturbance analyzer. The disturbance analyzer is a programmable device capable of accepting preset thresholds for a number of parameters such as overvoltages and undervoltages, power line sags and swells, transient and noise impulses, and neutral-to-ground voltages. This means whenever any of these disturbances are experienced by the system, the analyzer will record the date and time of occurrence in addition to the type, duration and amplitude of the power disturbance.
When using a disturbance analyzer, pay close attention to the threshold settings before connecting it to the circuit to be monitored. Overly tight settings will produce many unnecessary recordings; conversely, settings that are too loose will miss important information. Reasonable initial settings for such a test are:
Voltage limits: from 88% to 108%
Minimum sag or swell duration
Transient impulse duration: 0.5 microseconds to 1,000 microseconds
Noise voltage level: 10 volts peak
For successive tests, the thresholds may be adjusted each time the original setting is exceeded. This suppresses multiple recordings of the same disturbance with minor variations. It is good practice to keep the analyzer connected to the system for a full business operations cycle so that all possible sources of the disturbance can be identified.
After the characteristics of the disturbance have been recorded, the next step is to analyze the results. Today, for more complex systems there is computer software available to assist in the analysis. In applications where several sources need to be monitored, various disturbance analyzers can be multiplexed and connected to the same computer for analysis and data storage.
Commercial, institutional and industrial environments are experiencing further proliferation of electronic equipment. This has dramatically increased the percentage of nonlinear loads in electrical systems. Additionally, the use of more sophisticated, sensitive devices demands a higher power quality for their reliable operation. Lack of attention to power quality problems can result in equipment downtime, premature equipment failure and high service cost. The cost of dirty power in the United States is estimated to be in the billions of dollars annually. There is no standard solution that can work in every situation. This is yet another challenge that facility managers of the '90s must overcome. Determining the source of these problems is critical.
Before jumping to conclusions and using any expensive testing device, it is desirable to look for the obvious sources of the problem. This can be accomplished by a visual survey of the equipment, followed by determining whether the probable cause of power quality problem is due to harmonics or some other power disturbance.
It should be noted that harmonics are an ongoing problem, while other power disturbances will more than likely be an intermittent problem.
This means in addition to having the right kind of testing device, some level of deductive reasoning can go a long way in helping us investigate specific power quality cases based on specific symptoms and tangible evidence. Poor power quality can be deceptive, treacherous and sneaky. However, it should be recognized that although the symptom of a power quality problem in the electrical system might appear bizarre, there is always a scientific explanation. Whenever faced with power quality problems, one should ask the following questions:
1. What Is The Source Of The Dirty Power?
Although a few uncontrollable elements such as weather can be the source of a power quality problem, experience has shown that a large percentage of problems are created by users' lack of understanding of the electrical characteristics of certain equipment and of the interaction of different electrical devices connected in a circuit. Sometimes the individual loads may be fine, but the synergistic effect among them causes the problem.
Often, power quality problems come from very obvious sources such as lack of proper grounding, poor design, etc. In other cases, the problem might not be easy to trace. If the damage is immediate
and severe, the source of the disturbance may be easily identified. However, effects can be subtle and damages may be latent. In some cases, the damage is not caused by a single transient occurrence. The cumulative effect of repeated events may result in hardware failures. If the problem occurs intermittently, detection becomes that much harder.
2. What Specific Type Of Problem (i.e., Harmonic, Transient, Noise, Voltage Sag) Is The Main Concern?
The material below repeats portions of earlier sections in this chapter, but it bears repetition here as a key management step.
A qualitative way of measuring harmonics is by using an oscilloscope. The display monitor of an oscilloscope can give us a visual display of what the current or voltage wave looks like, but cannot give any detail about the specific harmonics present in the system. To get such detailed information, a spectrum analyzer is needed. A power monitor or spectrum analyzer breaks down the wave into its separate harmonics and measures the percentage as well as the phase angle of every harmonic in relation to the fundamental line frequency. In addition, it can calculate the THD, which is the real culprit. THD is a very important parameter for setting up system tolerances for harmonics.
The power monitor can display the information or print a hard copy of the data in tabular and graphical form. The printout can also provide other useful information such as the date and the time of the analysis. These devices can also be connected to a computer to record trend data for a long period of time.
For other power interference problems, the most common device used in monitoring such abnormalities is a power disturbance analyzer. The disturbance analyzer is a programmable device capable of accepting preset thresholds for a number of parameters such as overvoltages and undervoltages, power line sags and swells, transient and noise impulses, as well as neutral to ground voltages. This means whenever any of these disturbances are experienced by the system, the analyzer will record the date and time of occurrence in addition to the type, duration and amplitude of the power disturbance.
When using a disturbance analyzer, pay close attention to the threshold settings before it is connected to the circuit that will be monitored.
3. What Is The Threshold Of Equipment Susceptibility?
The threshold of susceptibility will vary greatly among different electrical loads. These levels can be found in IEEE Standard 519-1992, Recommended Practices and Requirements for Harmonic Control in Electrical Power Systems. In an earlier section of this chapter, some guidelines were provided as a starting point. A good source of information is the manufacturer's specifications and the tolerances stated in their warranties. Prior experience with the equipment and the idiosyncrasies of a particular system can also tell us a lot.
For harmonic problems, a voltage THD above 6 percent on branch circuits or above 4 percent at the service entrance is cause for concern, particularly when most of the distortion comes from a single harmonic. THD concentrated at higher harmonics is another cause of concern. In most situations, finding the source of the problem can be the most challenging part of the process, especially if the problem is intermittent. Test for harmonics with an oscilloscope or a spectrum analyzer. The instrument commonly used to detect power disturbances is a transient disturbance analyzer, which can measure the length and magnitude of spikes. Most of these units have programming capability as well as RS232C interfaces for data transfer to PCs.
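As a rough sketch of how a spectrum analyzer arrives at a THD figure, the fragment below builds a hypothetical voltage wave containing small fifth and seventh harmonics and computes THD from an FFT. The sampling rate and harmonic levels are illustrative assumptions, not values from the text:

```python
import numpy as np

# Hypothetical sampled voltage wave: 60 Hz fundamental plus a 5 percent
# 5th harmonic (300 Hz) and a 3 percent 7th harmonic (420 Hz).
fs = 7680                   # samples per second (assumed rate)
t = np.arange(fs) / fs      # one full second of data -> 1 Hz bin spacing
v = (np.sin(2 * np.pi * 60 * t)
     + 0.05 * np.sin(2 * np.pi * 300 * t)
     + 0.03 * np.sin(2 * np.pi * 420 * t))

# A spectrum analyzer does essentially this: take an FFT, then compare the
# magnitude of each harmonic bin against the fundamental.
spectrum = np.abs(np.fft.rfft(v)) / (len(v) / 2)
fundamental = spectrum[60]             # bin 60 = 60 Hz with 1 s of data
harmonics = spectrum[120:3000:60]      # 2nd, 3rd, ... harmonic bins
thd = np.sqrt(np.sum(harmonics ** 2)) / fundamental

print(f"THD = {thd * 100:.1f} percent")   # close to sqrt(5^2 + 3^2) = 5.8
```

A real power monitor works from continuously sampled data rather than a synthetic wave, but the ratio it reports is the same quantity.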
4. Does The Interaction Between The Power Source And The Equipment Mitigate Or Worsen The Problem?
One issue that is closely related to the interaction of various equipment is the propagation of disturbances. For instance, higher harmonics attenuate very rapidly as they move away from the source. If two devices generate the same harmonic, and if the waves are in phase, the resulting harmonic will be equal to the mathematical sum of the two. This means that if only one of these devices is in operation, they may not cause any problem, but when both units are in operation, the resultant harmonic may be above the threshold of equipment susceptibility. On the other hand, if the same order harmonics of two devices are 180 degrees out of phase, the waves will cancel each other.
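The phase-addition behavior described above can be sketched numerically. Here two hypothetical devices each inject a 10-ampere fifth harmonic; in phase the amplitudes add, while 180 degrees out of phase they cancel:

```python
import numpy as np

# Two hypothetical devices, each injecting a 10 A fifth harmonic (300 Hz).
t = np.linspace(0, 1 / 60, 1000, endpoint=False)  # one fundamental cycle
h5_a = 10 * np.sin(2 * np.pi * 300 * t)                   # device A
h5_b_inphase = 10 * np.sin(2 * np.pi * 300 * t)           # device B, in phase
h5_b_opposed = 10 * np.sin(2 * np.pi * 300 * t + np.pi)   # device B, 180 deg out

# In phase: the amplitudes add arithmetically (10 + 10 = 20 A).
print(round(np.max(h5_a + h5_b_inphase), 1))          # -> 20.0
# 180 degrees out of phase: the waves cancel almost entirely.
print(round(np.max(np.abs(h5_a + h5_b_opposed)), 6))  # -> 0.0
```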
The common approach among many facility managers concerning power quality has been reactive. That is to say, they only take action when a power quality problem has arisen. There are consequences to such a reactive approach. First, due to lack of preventive action, costly damages can be sustained that might otherwise be avoided. Service interruptions not only increase cost, but can contribute to other problems involving safety, system reliability, etc. Finally, there may be a delay between cause and effect in some power quality problems; for example, some power quality problems accelerate the eventual hardware failure of electronic devices. In a reactive modus operandi, the connection may be obscured by other variables for a long time while the equipment is subjected to a greater potential for damage.
A better approach is to proactively examine the potential for a power quality issue before a problem is experienced. Today, all electric power distribution systems are subjected to a great deal of unwanted harmonics, transients and other disturbances. According to MIS Week, in 1980 only 3 percent of data processing equipment downtime was attributed to inadequate power quality in the electrical infrastructure. By 1990, this percentage had climbed to 27 percent, and it is estimated that by the end of the decade it will approach 47 percent. In other words, almost half of the downtime for data processing equipment will be attributable to power quality. In an environment where every business is becoming more and more dependent on real-time computing, on-line systems and shorter response times, a proactive approach is a must.
To put it in proper perspective, consider a credit processing facility where at every minute of the day thousands of transactions have to be processed to authorize merchants to accept payment for goods and services from customers using credit cards all around the world. During the height of the Christmas shopping season, the power system experiences a glitch and the credit card processing is interrupted by half a day. What kind of cost and opportunity loss is incurred by this company? Needless to say, it will be in the millions of dollars. Now consider the impact if the problem occurs several times during the holiday season.
Let us take another example and look at the reservation system for an airline. If the computer system experiences shutdowns because of a power transient or another disturbance for any appreciable period of time, it can mean large losses.
To overcome such problems and reduce power interruptions, many companies end up spending large sums of money on standby systems and in some cases have backups for the standby systems. The cost of these solutions can be huge and at best they only reduce the system downtime. It is still a reactive solution because the presence of the standby system is masking the real issues rather than attacking the root cause of the power system interruptions. If the sources of failure are identified and corrective action is taken, the need for elaborate standby systems will be reduced.
Therefore, facility managers should proactively improve the condition of the electrical infrastructure by examining the system load, determining the characteristics of all major devices connected to the system, monitoring the actual voltage and current wave forms, then taking appropriate actions. Such a systematic approach will reduce the effects of power quality problems.
Another analogy that may shed some light on this problem is a man who is in the high-risk category for a heart attack. The traditional approach has the individual reasoning, "I am feeling fine and have no discomfort; when I feel bad, I will seek medical assistance." With this philosophy, the individual may live to be a hundred and die without any heart problem. However, there is a great probability that the individual will experience a heart attack, become paralyzed and spend the balance of his life in a hospital. Naturally, a more prudent approach is for the person to manage his well-being with periodic checkups and corrective actions when necessary.
For power systems, periodic analysis is analogous to performing an EKG on human beings. It will be a means of determining whether the problem has remained the same or deteriorated over time. That is why for very sensitive and critical devices, constant monitoring instrumentation that can record important electrical parameters is the only way that adequate analysis can be done.
With such an issue as power quality, be careful to manage by facts rather than perceptions that may or may not reflect reality. Power quality problems are here to stay and are increasingly a significant concern for electrical systems.
Short Circuits, Electrical Failures And Emergency Power
A short circuit, electrical failure or power interruption can occur for a variety of reasons. Every electrical system should be designed to anticipate and handle large pulses of overcurrent and overvoltage, or risk failed equipment and possibly electrical fires. In addition, to ensure safe evacuation of people during an emergency, protect data and keep critical processes moving without a damaging interruption, emergency power should be provided.
Electricity always follows the path of least resistance. Under normal conditions, the flow of electrical current is confined within the conductor by the insulation surrounding it. A fault is caused when there is an unintended connection between two or more conductors with a potential difference between them. This usually happens when the insulation between two conductors, or between a conductor and ground, is lost. Consequently, an unusually large current, commonly referred to as short circuit or fault current, will flow. It is called a short circuit current because the current has found a path shorter than the intended path.
The short circuit current is much greater than the designed thermal capacity of the conductors and other distribution elements. It is not unusual for short circuit current to be hundreds of times larger than the normal current; in fact, fault current can be as high as 10,000 times the rated current. The resulting rise in temperature can cause damage by annealing metallic parts and charring insulation, and can cause fires and personal injury.
The value of short circuit current is independent of the load current; in fact the short circuit current is a function of the maximum current the distribution system can supply. When a short circuit occurs, the line voltage in the vicinity of the fault will be zero because of the induced high fault current. To illustrate this assertion, let us look at a water dam analogy. Normally, the amount of water leaving a dam is related to the diameter of the pipe connected to the dam. On the other hand, if the dam breaks, the flow of the avalanche of water will depend on the amount of water available in the dam.
Under normal conditions, the resistance of the distribution components is negligibly small compared to the load resistance, so the circuit current is determined by the load resistance. During a fault, when the load resistance drops to zero, the current is limited by the power system resistance. Therefore, the available fault current of an electrical circuit is a function of the impedance of the distribution components. For instance, assume a single-phase 208V, 2 kVA load can be served by either a 5 kVA or a 2,000 kVA transformer. The impedances of the two transformers are 0.5 and 0.01 ohms, respectively. Under normal working conditions, the load current will be equal to 2,000 ÷ 208 = 9.6 amperes regardless of which transformer it is connected to. If a short circuit occurs, the fault currents are equal to:
1. For 5 kVA transformer ... I = 208 ÷ 0.5 = 416 amperes
2. For 2,000 kVA transformer ... I = 208 ÷ 0.01 = 20,800 amperes
As shown, the available fault current is 50 times higher for the larger transformer. In the first case, the system's protective devices must withstand 416 amperes, while in the latter case they must be able to withstand 20,800 amperes and interrupt safely. The question that needs to be addressed is, where is the source of this current? The most obvious answer is the power utility grid, but in reality there are a number of other sources that contribute to short circuit current.
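The arithmetic in this example can be sketched in a few lines of Python, using the values from the example above:

```python
# The load current is set by the load, but the available fault current is
# set by the transformer impedance (values from the example in the text).
V = 208.0            # volts, single phase
load_va = 2000.0     # 2 kVA load
z_small = 0.5        # ohms, 5 kVA transformer
z_large = 0.01       # ohms, 2,000 kVA transformer

load_current = load_va / V
print(f"Load current: {load_current:.1f} A (same on either transformer)")

for name, z in [("5 kVA", z_small), ("2,000 kVA", z_large)]:
    # During a bolted fault the load resistance is gone, so the current
    # is limited only by the source impedance.
    print(f"{name} transformer fault current: {V / z:,.0f} A")
```

The 9.6 A, 416 A and 20,800 A figures from the text fall directly out of the two divisions.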
Sources Of Short Circuit Current
There are four possible sources of short circuit current: the utility power grid, in-house generators, synchronous motors and induction motors.
Utility Power Grid - The available current from the utility depends on the primary and secondary voltage levels, the transformer size, and the total cable and transformer impedance. The utility transformer for a typical substation can be rated as high as 500 MVA. Based on the earlier example, as the transformer size increases, the impedance of the unit will be that much lower, which in turn translates into higher available short circuit current. The typical impedance of a transformer ranges from 1 to 5 percent, so the available short circuit current can range from 1 ÷ 0.01 = 100 down to 1 ÷ 0.05 = 20 times the rated current. This implies that if the transformer impedance is doubled, the available fault current is cut in half. Therefore, the power utility distribution can normally provide 20-100 times the rated current. In a fault condition, this high current level will continue to flow indefinitely unless it is interrupted by protective devices.
In-House Generators - The second source of short circuit current is the presence of any in-house generators that are in operation during a fault. The amount of current provided by the in-house generator is a function of the size of the unit, the distance from the generator to the fault, and the impedance of the generator. It should also be stated that during the first few seconds of a fault, the impedance of a generator will be about 70 times smaller than steady-state impedance. This means that during the first few cycles, the available short circuit current will be much higher than steady state.
Synchronous Motors - The third source of short circuit current is a synchronous motor in operation. A synchronous motor generates a counter EMF which has the same frequency as the line voltage. This counter EMF is somewhat lower than the line voltage; during a fault, however, since the system voltage has dropped greatly, the synchronous motor will feed the fault. Similarly, the amount of current available will be a function of the motor size and the impedance. The available fault current from a motor will be about 4-5 times the rated value of the motor current for over half a cycle.
Induction Motors - The last source of short circuit current is an induction motor in operation at the time of the fault. The available fault current from an induction motor is about 2-3.5 times the rated full load current of the motor.
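Under the rules of thumb above, the four contributions simply add. A hypothetical tally follows; every rating here is an assumption for illustration, and a real short circuit study would work from actual impedances and one-line diagrams:

```python
# Rough tally of the four fault-current sources described in the text.
# The utility and generator figures are assumed study results; the motor
# multipliers (4-5x and 2-3.5x rated current) come from the text.
utility_contribution = 8000.0     # A, from the utility (assumed)
generator_contribution = 2500.0   # A, in-house generator during first cycles (assumed)

sync_motor_rated = 150.0          # A, total rated current of running sync motors (assumed)
induction_motor_rated = 400.0     # A, total rated current of running induction motors (assumed)

total = (utility_contribution
         + generator_contribution
         + 4.5 * sync_motor_rated         # 4-5 times rated, per the text
         + 3.0 * induction_motor_rated)   # 2-3.5 times rated, per the text
print(f"Estimated total available fault current: {total:,.0f} A")
```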
All Sources - The total available short circuit current is the sum of the above four sources. The short circuit current can be symmetrical or asymmetrical about the horizontal axis. Prior to the short circuit condition, the voltage and current will be symmetrical, and the phase angle between the voltage and current will be a function of the load impedance. When a bolted short circuit occurs, the current will lag the voltage by 90 degrees. Now, if the short circuit occurs at the instant the voltage is at its maximum level, the current will be completely symmetrical and the fault current will be at the minimum level. On the other hand, if the fault occurs at the instant the voltage is crossing zero, the short circuit current will be asymmetrical and will be at the maximum possible level.
It is assumed that the current may be asymmetrical for the first three cycles, after which the current will be symmetrical. The asymmetrical current has two components: a symmetrical AC current and a gradually decaying DC current (see Figure 6-1). The DC component is about 1.4 times the AC symmetrical component, so the asymmetrical current can be as high as 2.4 times the symmetrical current. In practice, there is no way to know at what instant of the voltage wave the fault will occur. So, for short circuit calculations, it is important to take a risk-averse position and calculate the short circuit for the worst-case scenario. This means the system protection devices must be able to clear the peak asymmetrical current safely.
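The decaying-offset picture can be sketched as follows. The 1.4 factor comes from the text; the decay time constant is an assumption chosen so the DC component dies out in roughly three cycles:

```python
import numpy as np

# Asymmetrical fault current: symmetrical AC component plus a decaying
# DC offset. Time constant is an illustrative assumption.
f = 60.0
i_sym_peak = 1.0                  # symmetrical peak, per unit
tau = 0.017                       # s, DC decay time constant (assumed)
t = np.linspace(0, 5 / f, 2000)   # five cycles

i_ac = i_sym_peak * np.sin(2 * np.pi * f * t - np.pi / 2)  # current lags 90 deg
i_dc = 1.4 * i_sym_peak * np.exp(-t / tau)                 # DC offset, ~1.4x AC
i_fault = i_ac + i_dc

# The theoretical worst case approaches 1 + 1.4 = 2.4x the symmetrical peak;
# with any decay at all, the first-cycle peak lands somewhat below that.
print(f"First-cycle peak: {np.max(i_fault):.2f} per unit")
```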
Types Of Faults
Up to this point, the discussion has addressed the three-phase fault, commonly known as the three-phase bolted fault. In practice, fewer than five percent of the faults that power systems experience qualify as three-phase bolted faults. So the question is, with the low probability of such faults, why do we need to be concerned with them? The reason is that the current in a three-phase short circuit is much higher than in any other type of fault, which means that if the protective devices can safely clear these faults, they will not have much problem with any other possible condition. In other words, this is the worst-case scenario, and if the system can handle this, it can handle anything. That is why the three-phase bolted fault current is used for this purpose. Other types of faults include line-to-line, line-to-ground and double line-to-ground faults. With these faults, the current is no longer symmetrical, meaning the current will no longer be the same in all three phases.
In a short circuit, the current is asymmetrical for the first three cycles, after which it becomes symmetrical.
If this asymmetrical current is analyzed, it can be broken into three symmetrical systems designated as positive-sequence, negative-sequence and zero-sequence currents. The positive-sequence current is the same as the balanced three-phase current. The negative-sequence current is a balanced current rotating in the opposite direction relative to the positive-sequence current. The zero-sequence current accounts for the components that are equal in magnitude and in phase in all three phases, such as current returning through the ground or neutral path. After the asymmetrical current is broken into these three symmetrical systems, each can be analyzed separately and the resulting values added together to determine the total impact.
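The standard way to perform this decomposition uses the "a" operator, a unit phasor at 120 degrees. A minimal sketch, with assumed phasor values rather than figures from the text:

```python
import numpy as np

# Symmetrical-component decomposition using the "a" operator.
a = np.exp(2j * np.pi / 3)   # 1 at an angle of 120 degrees

# Unbalanced phase currents as complex phasors (amperes; assumed values).
Ia, Ib, Ic = 100 + 0j, -60 - 80j, -20 + 70j

I0 = (Ia + Ib + Ic) / 3                 # zero sequence
I1 = (Ia + a * Ib + a ** 2 * Ic) / 3    # positive sequence
I2 = (Ia + a ** 2 * Ib + a * Ic) / 3    # negative sequence

# The three balanced sequence sets reconstruct the unbalanced originals:
print(np.isclose(Ia, I0 + I1 + I2))                  # True
print(np.isclose(Ib, I0 + a ** 2 * I1 + a * I2))     # True
```

Each sequence network can then be solved on its own, and the phase quantities recovered by recombining, which is exactly the procedure described above.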
A fault also might be caused by a sustained overload. This commonly results when a circuit is loaded beyond its design limits, resulting in increased operating temperature that leads to premature insulation failure in electrical components. So, unlike the short circuit, the initial impact will not be dramatic; however, if the overload is not addressed promptly, it can cause system damage. In a short circuit condition, the system protection devices will clear the circuit as soon as possible, but in an overload condition there will be a delayed reaction inversely related to the current level.
When a fault occurs, it causes a major disruption to the power distribution system in several different ways. Some of the common impacts will include:
1. Inducing a large fault current that can damage electrical equipment as well as distribution components.
2. Causing arcs and electrical flashes, which may start fires.
3. Changing the system voltage outside the acceptable levels, which may trip many sensitive electronic components.
4. Interrupting the flow of power to the load.
5. Causing brown-outs which can result in three-phase equipment failures.
Fuses

A fuse is an overcurrent protective device which opens a circuit at a predetermined current level. It contains a fusible link held under tension. When the current increases beyond a certain point for a preset amount of time, the fusible link will be heated; as the link begins to melt, the spring pulls and the contact is severed (see Figure 6-2). This interrupts the flow of current in the circuit. The speed of the melting process is a function of the current and the type of fuse.
The time-current characteristics of fuses are an important feature in choosing the appropriate fuse for any application. Fuses are divided into two general categories: instantaneous and time-delay.
Instantaneous Fuses - Instantaneous fuses have no intentional time delay built into them. They are employed to interrupt the circuit as quickly as possible when the current goes beyond a prescribed level. They are also called current-limiting fuses. Since the available current in a system can be large, the key here is to limit the let-through current to a small fraction of the available fault current. This implies that a current-limiting fuse must interrupt the circuit within a small portion of the first quarter-cycle (a fraction of a millisecond). This makes current-limiting fuses fast-acting devices.
Current limitation of a fuse.
Built-In Time Delay Fuses - The reason for a time delay fuse is to address the high in-rush currents that certain electrical equipment have. For instance, the starting current for a Class A induction motor can be as high as seven times the normal full-rated current. If the system design does not take this into consideration and tries to use a current-limiting fuse with an interrupting current of twice the full-load rating, the fuse will blow every time the motor is turned on.
The time delay characteristics of these fuses accommodate such in-rush currents and will operate if the high current levels are sustained beyond a certain predetermined time period. This is why the rating of a time delay fuse can be pretty close to the rating of the circuit. The primary role of these fuses is to prevent sustained overloads. The speed of circuit tripping is inversely related to the level of overload current that the fuse is subject to. In other words, the time delay fuse will trip faster if the overload current is 200 percent versus 150 percent of the circuit rating.
Fuse Selection - There are four important factors that need to be considered in choosing fuses for a system:
1. Voltage Rating - Fuses come in all standard system voltage ratings. The voltage rating must be equal to or higher than the distribution system voltage.
2. Current Rating - This rating designates the maximum current that the fuse can carry without exceeding specified temperature rise levels; within these parameters, the fuse can operate indefinitely. Make sure that the fuse ampacity is the same as the current rating of that particular circuit. If the fuse ampacity is smaller, the circuit can experience unnecessary nuisance trips. On the other hand, if the fuse ampacity is larger than the circuit rating, it cannot provide adequate protection and thus defeats the primary reason for using the fuse in the first place.
3. Interrupting Rating - This refers to the maximum short circuit current that a fuse can safely open, designated in maximum symmetrical fault current. The standard interrupting ratings are 10,000, 50,000, 100,000 and 200,000 amperes. The interrupting rating of the fuse must be equal to or higher than the maximum available short circuit current in the system. This is a reliable way to ensure that the fuse will be able to withstand the mechanical and thermal forces exerted during a short circuit condition.
4. Current Limiting - The current limiting capability of a fuse limits the flow of current in a circuit to a small portion of the first quarter-cycle of a short circuit, thus minimizing the potential damage to the system. When choosing a current-limiting fuse, it is important to consider the in-rush currents of circuit equipment to avoid nuisance trips.
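The first three checks above lend themselves to a simple screening function. This is a sketch only; the function name, parameters and the example ratings are all illustrative, not from any standard:

```python
# Screening a candidate fuse against the selection rules in the text:
# voltage rating >= system voltage, ampacity == circuit rating, and
# interrupting rating >= available fault current.
def fuse_acceptable(fuse_voltage, fuse_amps, fuse_interrupting,
                    system_voltage, circuit_amps, available_fault_amps):
    """Return a list of problems; an empty list means the fuse passes."""
    problems = []
    if fuse_voltage < system_voltage:
        problems.append("voltage rating below system voltage")
    if fuse_amps != circuit_amps:
        problems.append("ampacity does not match circuit rating")
    if fuse_interrupting < available_fault_amps:
        problems.append("interrupting rating below available fault current")
    return problems

# A 600 V, 100 A fuse rated to interrupt 50,000 A, applied on a 480 V,
# 100 A circuit where 65,000 A of fault current is available:
print(fuse_acceptable(600, 100, 50_000, 480, 100, 65_000))
# -> ['interrupting rating below available fault current']
```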
Rating Fuses - The rating of fuses is established by ANSI, NEMA, and UL standards. The designation is displayed on the fuse label as class G, H, J, K, etc.; for current-limiting fuses, it is determined by the maximum let-through current. The descriptions of various low voltage fuse classes are:
1. Class G Fuses - Small fuses which can have a current rating of up to 60 amperes. They are available in different physical sizes. These devices are time delay fuses with a minimum delay of 12 seconds at 200 percent of rated current.
2. Class H Fuses - These fuses have a current rating of up to 600 amperes and are used in residential or small commercial applications. They can have renewable or non-renewable elements. Non-renewable elements are the disposable type, while renewable elements can be taken apart to replace the fusible link. These fuses have an interrupting rating of 10,000 amperes, so they can only be used in circuits with a low available fault current.
3. Class J Fuses - These fuses have the same current ratings as Class H fuses but higher interrupting ratings, which can be as high as 200,000 amperes. So they can be used in areas with higher available short circuit currents, where Class H fuses cannot safely interrupt.
4. Class K Fuses - These fuses also have the same current ratings as the last two classes. They were developed to allow Class H fuse installations to be replaced with current-limiting units. They have the same interrupting ratings as Class J fuses. Class K fuses have a minimum 10-second delay at five times the rated current.
5. Class L Fuses - These fuses have a current rating ranging from 601 to 6,000 amperes. They come in different physical sizes, and because of their high current levels many have bolt-type terminations. They are current-limiting fuses, and the rating specifies the peak let-through current. The minimum time delay for Class L fuses is 4.5 seconds at five times the rated current.
High-Voltage Fuses - High-voltage fuses are covered by ANSI C37-46 and NEMA SG2-1982 standards. For high-voltage fuses, in addition to the parameters mentioned for low-voltage fuses, the BIL dielectric strength that a fuse can withstand is also important. There are two types of high-voltage fuses: expulsion power and current-limiting.
Expulsion Power Fuses. These fuses interrupt the circuit by the deionizing action of the gases liberated from the interrupting chamber of the fuse. When the current reaches above a certain level, the heat generated will begin to melt the fuse link. The arc is lengthened by the spring-charged mechanism within the fuse. As the link is severed and the arc is generated, the deionization process starts and the circuit is cleared. As the fuses clear the circuit, all of
the energy stored in the system is also drained; thus, no high-voltage pulses are created. Expulsion fuses are used exclusively for outdoor applications in substations, away from people.
The limited use of this fuse led to the development of boric acid or solid-material fuses, which have been used in residential, commercial and industrial installations. These power fuses can be used for indoor and outdoor applications. They are available for voltages up to 138 kV and current ratings up to 720 amperes. The fusible link in these fuses is indestructible and does not age. The time-current characteristics of these fuses remain permanently accurate in spite of age, high temperatures or excessive vibrations that might be caused by power surges.
Since these fuses are delayed-time action units, they can be sized close to the full rated current of the system and provide good protection.
Current-Limiting Fuses. These fuses by definition are fast-acting units. Because of this capability, current-limiting fuses can reduce the potential stresses and damage on circuit components during a fault. They employ a fusible link, which is embedded in sand. As the fusible link melts and the arc is formed, the high temperature causes the sand to vitrify and create a glass tunnel enclosing the arc. Rapid cooling and restriction of the arc increases the resistance. Consequently, the current will go to zero quickly before all of the system energy is totally drained.
This phenomenon is called wave chopping.
Unlike power fuses, the fuse link of a current-limiting fuse can be damaged by in-rush currents approaching the link's minimum melting point temperatures. Because of this limitation, a safety zone or a setback allowance for the time-current characteristics of these fuses is required. To accommodate these restrictions, as well as allow normal in-rush currents, the ratings of a current-limiting fuse for a circuit will be much higher than the rated current.
Electronic Fuses - With the advent of solid state electronics, a new type of protective device, called the electronic fuse, was introduced to the market in the mid-1980s. These devices provide very desirable characteristics: a high continuous current rating as well as a wide range of time-current attributes. The principle of operation behind these units is more complex than that of other types of fuses.
Looking at conventional fuses, the main challenge is finding a fusible link that can provide high continuous current capacity under steady state conditions, coupled with desirable time-current characteristics. In most situations, satisfying these two contradictory requirements necessitates a trade-off. This implies that the fuse link does not satisfy either of the two requirements as well as it could individually.
This problem is overcome in an electronic fuse, because unlike a conventional fuse there are two separate links in the fuse assembly and each one is chosen to maximize the particular desired characteristics. These two separate components are the interrupting module and the control module.
The interrupting module consists of two sections. The first carries the current during normal conditions and consists of a copper bar connected to copper tubes at both ends. The second element is a copper ribbon embedded in sand and connected in parallel with the copper bar. Although both elements are connected in parallel, under normal conditions practically all of the circuit current is carried by the copper bar. During a faulted condition, however, the ribbon carries the short circuit current.
The control module is a solid state programmable current sensing device that monitors the current continuously and responds according to preset time-current characteristics. When a fault occurs, the control sensing device sends a signal that activates the power cartridge device. Within one quarter of a millisecond, a high pressure gas of 27,000 pounds per square inch (psi) is generated in the tube, which forces the main current path to be interrupted within another quarter millisecond. As the only current path left is through the copper ribbon, it will melt simultaneously in several locations and the circuit will be opened. The total process takes about 0.65 milliseconds, or about 15 percent of a quarter cycle for a 60 Hertz system.
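The timing claim can be checked with simple arithmetic. The 0.15 ms figure below for the final ribbon melt is inferred from the 0.65 ms total and is an assumption:

```python
# Electronic fuse interruption sequence versus a quarter cycle at 60 Hz.
quarter_cycle_ms = 1000.0 / 60.0 / 4        # ~4.17 ms
interruption_ms = 0.25 + 0.25 + 0.15        # sense, gas pulse, ribbon melt (approx.)

print(f"Quarter cycle: {quarter_cycle_ms:.2f} ms")
print(f"Interruption:  {interruption_ms:.2f} ms "
      f"(~{interruption_ms / quarter_cycle_ms:.0%} of a quarter cycle)")
```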
Electronic fuses offer many desirable characteristics. The time-current characteristics of the fuse can be varied over a wide range, which simplifies fuse coordination for complicated systems. Unlike current-limiting fuses, they are not subject to damage from current surges. They have independent continuous current ratings and current-limiting characteristics. Since they are still relatively new devices, they are not yet common in power installations, but as more individuals become familiar with them, their popularity will certainly increase.
Protective Relays

A protective relay is a sensing device that reacts to changing conditions in an electrical system. It responds to variations in circuit parameters that could potentially affect the operation of the distribution components as well as the equipment served by them. So the main role of protective relays is to minimize damage to electrical systems and equipment during a fault.
Relays must respond quickly and reliably to sudden changes in power systems. They are required to operate only under abnormal conditions, remaining dormant during normal operation, including sudden load increases. So speed, sensitivity and accuracy are important parameters to consider for relays.
Relays can open or close one or more contacts to connect or disconnect several control circuits. There are four basic types of relays: electromagnetic attraction, electromagnetic induction, thermal induction and solid state.
Electromagnetic Attraction Relays - These relays consist of a stationary and a movable contact. The movable contact is held in its original position by a spring. The relay coil can be energized with AC or DC current. As the current flows in the coil, the magnetic field will overcome the spring tension and attract the relay arm. Consequently, the movable contacts will either come into contact with the stationary contact or move away from it, resulting in closing or opening the control circuit. When the current is removed, the movable contact will return to its original position because of the spring tension. The relay construction can be a plunger, a solenoid or a hinged type.
Electromagnetic Induction Relay - An electromagnetic induction relay works on the principles of an induction motor, so it can only work with AC power. The relay consists of a thin disc held by a rod which passes through the center of the disc and terminates on jewel bearings at both ends. There is a horseshoe electromagnet in which the disc can freely rotate through the horseshoe opening. The movable contact is on the disc rod. As the coil is energized, the disc starts rotating and the movable contact engages the stationary contact. The electromagnetic induction relay has a number of desirable characteristics. The time-current characteristics of these relays cover a wide range, and the relay can operate on a single parameter, or on the sum or difference of several parameters, and can provide directional control.
Thermal Induction Relay - Thermal induction relay operation is based on the difference in thermal expansion coefficients between two metals. It contains a bimetallic helix assembly attached to the movable contact. When the current increases beyond a certain point, the bimetallic ribbon is heated; since the two metals expand at different rates, the ribbon bends and rotates, moving the contact. Thermal induction relays are time-delayed devices, applicable for safeguarding against sustained overloads rather than instantaneous faults.
Solid State Relay - Solid state relays consist of a number of electronic components that duplicate the functions of electromagnetic relays. They have no moving parts, and designers have more control over the characteristics of the relay. This makes solid state relays versatile and able to meet unique requirements that can be difficult to accommodate with conventional relays. These relays have gained wide acceptance in the past decade.
Time-Current Characteristics - Similar to fuses, the time-current relationship of relays is represented by a family of hyperbolic curves. This means the higher the current rises above a designated level, the faster the relay will trip. Relays have a pick-up current, which signifies the minimum current needed before the relay will operate. Based on their time-current curves, relays are divided into three categories: inverse, very inverse and extremely inverse. Each suits a different range of applications.
As the name suggests, the extremely inverse relay operates fastest at high fault currents. For fuses, the time-current characteristics are used to determine how quickly the circuit can be interrupted. By contrast, determining the relay with the most appropriate time-current characteristics is more complicated. The underlying reason is that with a fuse, as soon as the filament melts, the circuit is cleared. A relay, on the other hand, does not directly open the electrical circuit.
When a relay trips, it energizes the trip mechanism of the circuit breaker and the current will be interrupted when the circuit breaker clears. This means the time needed to clear the circuit is the sum of the time for the relay to trip plus the time it takes for the breaker to open. So even if the relay trips very quickly, but the breaker operates very sluggishly, the system may not be adequately protected.
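The inverse time-current relationship and the relay-plus-breaker clearing time described above can be sketched numerically. This is a minimal illustration using the IEC 60255 "standard inverse" curve; the pickup current, time-multiplier setting and breaker opening time below are illustrative assumptions, not values from the text.

```python
# Sketch: inverse time-current behavior of an overcurrent relay, using the
# IEC 60255 standard-inverse curve t = TMS * 0.14 / ((I/Ip)^0.02 - 1).
# Pickup current, TMS, and breaker time are assumed example values.

def relay_trip_time(current_a, pickup_a, tms=0.1):
    """Relay operating time in seconds; None below pickup (relay dormant)."""
    multiple = current_a / pickup_a
    if multiple <= 1.0:
        return None  # below pickup: relay does not operate
    return tms * 0.14 / (multiple ** 0.02 - 1)

def total_clearing_time(current_a, pickup_a, breaker_s=0.05, tms=0.1):
    """Fault-clearing time = relay operating time + breaker opening time."""
    t_relay = relay_trip_time(current_a, pickup_a, tms)
    return None if t_relay is None else t_relay + breaker_s

# The higher the current above pickup, the faster the relay trips:
for fault in (400, 1000, 4000):  # amperes, with a 200 A pickup
    print(fault, relay_trip_time(fault, 200))
```

Note that even a fast relay time protects nothing by itself; the breaker term must be added, which is why a sluggish breaker can leave the system under-protected.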
Application of Relays - Fuses protect circuits from overcurrent conditions. However, relays can provide protection for a variety of different circuit abnormalities. Some of the common types of relays are overcurrent, overvoltage, differential, directional and ground-fault.
1. Overcurrent Relays - Overcurrent relays are similar to fuses in application. They are used to monitor and trip the circuit if the current increases above a certain value.
2. Overvoltage Relays - These relays are used to protect the system from overvoltage conditions. If the voltage rises above a predetermined level, the relay will trip the circuit.
3. Differential Relays - Differential relays are used to trip a circuit if there is a difference in current levels between two separate circuits. These relays are used for a number of different applications, such as winding protection for transformers and generators. For a transformer, a differential relay can monitor the current ratio between the primary and secondary windings. When the transformer is operating properly, the net current through the relay will be zero and the relay will not operate. However, if there is even a partial fault which short-circuits only a few turns of one of the windings, the current ratio between the primary and secondary windings will be altered. This results in a net current through the relay, which trips the circuit. Similarly, in the case of a generator, a differential relay can monitor the currents on both sides of the alternator winding; a partial short circuit in the alternator windings results in a current through the relay, which trips the circuit.
4. Directional Relays - These relays are used to monitor the direction of current flow. They are primarily used with generators that are connected to the power grid. Under normal conditions, the current will be flowing from the generator to the distribution system. If the current flows into the generator, it means the unit has been motorized. The relay will sense this change of conditions and trip the circuit.
5. Ground-Fault Relays - Ground-fault relays protect the distribution system against ground faults. There are various connections used to obtain ground fault protection. For a wye-connected, balanced three-phase load, monitoring the current in the neutral circuit is one approach. Normally, the currents of the three phases cancel out, resulting in zero neutral current. However, if any of the phases experiences a fault to ground, the phase currents will no longer be equal. Consequently, the neutral will carry a non-zero current, which trips the relay.
This relay will also trip the circuit during brownouts caused by single-phasing. If the circuit has a combination of three-phase and single-phase loads, ground-fault relaying will include a differential relay which monitors the resultant current of the three phases and compares it with the neutral current. Under normal conditions the two currents cancel each other out. However, during a ground fault the currents will differ and the relay will trip.
Maintenance Of Protective Relays - Protective relays serve a very important role in power distribution systems. Proper care must be taken in testing and maintaining them to ensure proper calibration. This is critical because electromagnetic relays have a tendency to drift over time and if not adjusted may either initiate nuisance trips or not respond when a fault occurs. Since relays are sensitive and delicate instruments, they should be handled and serviced by knowledgeable individuals.
The maintenance of relays should include visual inspection for any abnormalities, cleaning dirt or dust, replacing broken glass, dressing pitted contacts and replacing damaged bearings.
Coordination Of Protection Devices
The main function of protection devices is to detect a potentially damaging overcurrent condition, operate promptly to isolate the fault, and minimize the stress on distribution components and electrical equipment.
Moreover, it is also important that the protective devices interrupt power only to the affected areas. In other words, all of the protective devices should be properly coordinated. This ensures that the device nearest upstream of the fault will open.
The second-nearest upstream device will serve as a backup for the first device, meaning that if the first device fails to operate, the backup unit will trip the circuit. The same idea will be valid if one looks at the third-nearest upstream device and so on.
The way to ensure proper coordination is to draw the time-current characteristics of all of the protective devices upstream of a fault. If the characteristic curves do not cross each other, the system is properly coordinated. For instance, if there is a fault on the secondary side of a transformer, the first upstream device on the secondary side should operate to clear the fault. If for whatever reason that device does not operate, then the protective device closest to the primary side of the transformer should operate.
Today, there are PC-based software packages available that a facility manager or engineer can use to examine the coordination of protective devices. These programs are helpful in re-examining coordination after any changes in the distribution system.
Finally, the choice between using fuses or relays with circuit breakers requires some attention. Both are reliable and can protect a system, but depending on the particular situation, one might be preferable over the other.
Power fuses offer a number of advantages. They are simple to install and require practically no maintenance. They are prompt in clearing a fault. They do not require recalibration, nor can they be recalibrated even if so desired. In addition, unlike circuit breakers, they do not depend on any external power source to clear the fault. Fuses inherently have faster response characteristics than circuit breakers, so they can remove faults rapidly and minimize damage. They also minimize any voltage dip for the remaining loads in the unfaulted sections.
On the other hand, when a fuse element blows, it has to be replaced. Since the time-current characteristics cannot be adjusted, achieving good system coordination may be difficult. By contrast, circuit breakers have a high degree of service continuity at a lower overall cost. The time-current characteristics can be changed relatively easily. Their main disadvantage is complexity and the required periodic maintenance.
The destructive effect of the failures caused by short circuits can be dramatic. The mechanical forces of attraction or repulsion coupled with large temperature rises impact current-carrying conductors with an extraordinary intensity. Electric arcs and flash-overs occur because of loose connections or deteriorated insulation. Such failures can deform busbars, burst transformer casings, melt circuit breakers and produce flying sparks. In this section, we will discuss the strength and effects of mechanical forces generated by an electrical failure, the thermal effects of short circuits, and then move on to types, causes and how to prepare for electrical failures.
During an electrical failure, incredible mechanical forces are created that can damage electrical equipment and cause other damage.
One of the most effective means to avoid a short circuit condition is attentive maintenance of cable insulation. Shown is a series of devices used to measure the resistance properties of insulation. Reprinted by permission of Associated Research, Inc.
Conductors - The mechanical forces between conductors in a cable or busbar are directly proportional to the square of the current and inversely proportional to the distance between conductors. During a short circuit, the current may rise to as much as 10,000 times the normal rated value. This means the forces generated can be as high as a hundred million times those under normal conditions.
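The scaling rule above can be checked with the standard formula for the force per unit length between two parallel conductors, F/L = μ0·I1·I2/(2πd). The currents and spacing below are illustrative assumptions, not values from the text.

```python
# Sketch: force per meter between two parallel current-carrying conductors.
# The force grows with the square of the current and inversely with spacing.
import math

MU0 = 4 * math.pi * 1e-7  # permeability of free space, H/m

def force_per_meter(i1_a, i2_a, spacing_m):
    """Force per meter (N/m) between two long parallel conductors."""
    return MU0 * i1_a * i2_a / (2 * math.pi * spacing_m)

normal = force_per_meter(100, 100, 0.1)        # assumed 100 A rated current
fault = force_per_meter(10_000, 10_000, 0.1)   # assumed 100x fault current
print(fault / normal)  # force scales as 100^2 = 10,000
```

A 100-fold current increase thus multiplies the force ten-thousand-fold, which is why fault currents thousands of times rated value produce such destructive mechanical stresses.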
Transformers - For a transformer, the wire loops in the coils will have a tendency to widen and force outwards during a fault. The pressure exerted on a coil can be as high as 400 pounds per square inch. Therefore, for a large transformer, the radial forces can be in the order of hundreds of tons. The transformer shape can influence the deformity that the transformer casing will experience. Normally, rectangular units will sustain more damage than cylindrical ones.
If the primary and secondary windings of a transformer are wound on top of each other, in addition to the radial forces there will also be axial forces that have a telescoping effect on the unit.
Since the current in the primary winding flows in the opposite direction to that in the secondary winding, the two windings will try to repel each other and move apart along the transformer core. However, it should be pointed out that the axial forces are much smaller than the radial pressures.
Circuit Breakers - The forces developed on a circuit breaker's cross bars and parallel leads can be extensive as well. Typically, a 100,000A short circuit current can exert about 1,500 pounds of force on the cross bars. This force can be substantial enough to cause a circuit breaker to open even when it is not the one that should operate for a particular short circuit.
Rotating Machines - For a rotating machine such as an electric motor or generator, the short circuit current can act like hammer blows on the end connectors. For a generator, the short circuit forces can blow the stator windings away from the rotor, breaking the insulation tubes where they emerge from the stator. This causes a flash-over between the windings and the machine casing, resulting in the burning of insulation around the connectors.
In addition to the mechanical forces, short-circuit current greatly increases the temperature of the conductors in a short period of time. The temperature is a bigger concern for motors, generators and transformers as opposed to busbars because of the high concentration of windings in a relatively small, confined space. Even a modest short circuit current may result in high temperatures.
Needless to say, since the high temperatures and powerful mechanical forces occur simultaneously, the devastation can be significant. Bear in mind that such dramatic failures are usually preceded by one or more indicators that provide warnings. If proper care is taken, a major failure can be avoided.
Causes Of Electrical Failures
The most common cause of failures is the breakdown of insulation. This is primarily caused by excessive heat, vibration, overvoltages, dust and moisture. The excessive heat is normally
caused by sustained overloads, high ambient temperatures and loose connections. These factors can greatly accelerate insulation deterioration. It therefore makes sense to examine these early warnings and sources of failure in order to avoid major, catastrophic failures.
The main precursors to large failures include corona, dielectric fluid failure, treeing, ferroresonance and lightning.
Corona - For an overhead system, the conductors are supported by porcelain insulators, and the different phases (cables) are separated from each other by air. Under normal conditions, the insulation value of air is high enough to prevent a flash-over between the different phases. But a flash-over can occur in bad weather due to an overvoltage caused by lightning strikes or switching surges. It can also occur in fine weather if an insulator is cracked, porous or moistened by rain or fog, leaving dirt and moisture on the conductor.
The flash-over is normally preceded by the presence of a high electric field, causing the ionization of the air and breakdown of the insulation. This phenomenon is called corona discharge. These discharge currents can shatter an insulator.
Corona discharge can be invisible or emit a bluish or green glow around the conductor accompanied by pulsating, hissing, crackling or humming noises. Other symptoms of corona include the presence of gray powder on cables and an ozone odor.
Corona has a number of undesirable characteristics:
1. Significant power losses on transmission lines.
2. Significant radio and TV interference.
3. There can be a sharp rise in the discharge current when the voltage increases beyond a certain value, which introduces harmonics into both the voltage and the current. Corona discharges primarily contain the third, fifth and seventh harmonics.
4. The corona discharge can establish an arc and generate heat. If severe ionization continues, more insulation will be destroyed which can cause a catastrophic failure.
Dielectric Fluid Failure - Dielectric fluids, such as mineral oil, are used in transformers and switches. They normally have desirable insulation characteristics, but severe arcing can cause breakdown: the oil decomposes into gas, generating bubbles of enormous size.
If the arc is not deeply embedded in the oil, the rising gas may not be adequately cooled, thus setting the surface of the oil on fire. This condition may arise if the oil level in the switch or transformer has dropped below the recommended levels.
Even if the oil level is above the arc, the combustible gases collected on the surface of the oil may be ignited by splashing white-hot metal particles from the breaker contacts, causing an explosion. This sudden explosion, with the resulting vaporization and decomposition of oil, develops pressures of hundreds of atmospheres. As the pressure travels away from the arc in a spherical surge, it will be moderated by the oil mass and the large surface area of the unit, but the final force may still be sufficient to cause bulges and cracks in the oil tank. In addition to this high pressure, the arc temperature can reach several thousand degrees, exacerbating the damage.
Oil failure will accelerate with the presence of impurities, sludge and moisture in the oil. If arcing occurs in the proximity of cellulose insulation, carbon monoxide and carbon dioxide will also be formed. Over time, acids, water and other contaminants will significantly reduce the dielectric value of the oil. If this condition is not abated, more dramatic failures will occur.
Treeing - This is a prefailure phenomenon which occurs in solid insulation. Just like the prefailure phenomena for liquid and gaseous insulators, treeing does not cause a major breakdown initially. However, if the symptoms are not attended to in a timely manner, they will eventually result in catastrophic failures.
There are three types of treeing: electrical, water and electromagnetic.
1. Electrical Treeing - Electrical treeing is caused by the decomposition of solid dielectric materials such as polyethylene; by high electric fields; and by voids, imperfections and other contaminants introduced into the insulation during the manufacturing process. Another possible source is the presence of loose fibers that can act as stress risers. Electrical treeing may start in a highly localized area and cause eventual failure. An effective way of reducing electrical treeing at transmission-level voltages is filling cable voids with oil or sulfur hexafluoride gas.
2. Water Treeing - Water treeing is usually temporary and diffuse. It is commonly caused by the presence of water with contaminants and imperfections in the semiconductive sheets of cables. If the water contains ions, electromagnetic treeing will occur on the inner as well as the outer surface of the insulation.
Treeing is a more common phenomenon in new cables because the main source of the problem is manufacturing deformities. So it is a good idea to visually check new cable for tiny treelike cracks after the cable is in service. This practice can help avoid more serious cable failures.
Ferroresonance - This phenomenon is mostly initiated by improper switching in power systems. High currents can flow through the circuit and the system voltage may rise up to six times the rated level. The circuit current will only be limited by the resistance of the transformer and the cable, which will normally have a small value. Ferroresonance is possible as long as the capacitive-to-inductive reactance ratios are between 0.1 and 10. The closer the ratio is to unity, the stronger the impact of ferroresonance.
Common symptoms of ferroresonance include loud humming and vibration of transformers; spark-over of arrestors; insulation failure of system elements; and motors running backward.
Ferroresonance can be avoided using six methods:
1. Grounding the neutral of all wye-connected transformers.
2. Energizing cables first and then the transformers. This implies that an additional disconnect switch near the transformer is needed.
3. Energizing transformers with some load to dampen the effect of ferroresonance.
4. Energizing all three phases simultaneously.
5. Installing fuses both at the cable entrances and at the transformers.
6. Keeping the capacitive-to-inductive ratio greater than 10:1.
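The susceptibility window described above (ferroresonance possible when the capacitive-to-inductive reactance ratio lies between 0.1 and 10, strongest near unity) can be sketched as a simple check. The "severe" band within 2:1 of unity is a hypothetical illustration of "closer to unity, stronger impact"; the reactance values are examples.

```python
# Sketch: screening a circuit for ferroresonance risk by the Xc/Xl ratio.
# The 0.1-10 window comes from the text; the "severe" band is an assumed
# illustrative threshold, not a value from the text.

def ferroresonance_risk(xc_ohms, xl_ohms):
    """Return 'none', 'possible', or 'severe' based on the Xc/Xl ratio."""
    ratio = xc_ohms / xl_ohms
    if ratio < 0.1 or ratio > 10:
        return "none"       # outside the susceptibility window
    if 0.5 <= ratio <= 2:
        return "severe"     # near unity: strongest ferroresonance impact
    return "possible"

print(ferroresonance_risk(1200, 100))  # ratio 12  -> "none"
print(ferroresonance_risk(100, 100))   # ratio 1   -> "severe"
print(ferroresonance_risk(500, 100))   # ratio 5   -> "possible"
```

Method 6 in the list above amounts to pushing a circuit out of this window by keeping the ratio above 10:1.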
Lightning - Lightning is the source of millions of dollars of damage to electrical equipment annually. A lightning strike consists of a gargantuan arc generated by a sudden discontinuous discharge of electricity through the air. Normally, a negative charge builds on the bottom of clouds while a corresponding positive charge is induced on the ground. As the winds move the clouds over the ground, the charge on the cloud gradually increases. When the magnitude of the charge reaches a point where the potential difference to ground can overcome the insulating strength of the air, a lightning strike occurs. The voltage magnitude of a lightning strike can be in the millions of volts.
Lightning flashes consist of 3-4 strokes, each lasting a few milliseconds. The current rises from zero to thousands of amperes almost instantly. The interval between successive strokes is roughly 40 milliseconds, so the total duration of a lightning flash is about 1/5 of a second. Because of the interruptions between successive strokes, the lightning appears to flicker.
The energy released by a single lightning strike is estimated at 10W per ft. covering an average length of two miles, with a total energy of about 100 kWh. So with an average duration of 30 microseconds, the power released by a single stroke is roughly 10 million MW. Globally, the aggregate energy content of all lightning strikes is estimated at around one billion kWh annually, about one sixth of average annual energy production.
There are 2,000 thunderstorms in progress at any given moment around the world. There are 45,000 thunderstorms daily and 16 million annually. The United States experiences about 100,000 thunderstorms annually.
Since lightning strikes generate voltage levels up to one billion volts with currents as high as 200,000 amperes, they can induce damaging voltages in electrical systems. A direct stroke on an outdoor transmission line can induce severe voltages ranging from half a million to 15 million volts between the line and ground. The typical propagation velocity of the voltage wave is around 600 ft. per microsecond. The voltage rises several hundred kV in about one microsecond.
If the voltage wave is such that the insulators cannot withstand the over voltage, a flash-over will occur, tripping the circuit breakers or fuses.
However, if the insulator can withstand the overvoltage, the wave will continue until it encounters a substation, inflicting serious damage.
To reduce the effect of the impulse voltage on the substation and other circuit elements, surge arrestors are utilized to clip off voltage peaks above a certain value.
Protective Devices - Surge Arrestors
Lightning and switching can generate overvoltages in electrical systems. The role of a surge arrestor is to provide a low-resistance path to ground for these overvoltage impulses, so that most of the surge current is diverted to ground rather than continuing through the system. In addition, as soon as the voltage returns to its normal operating level, the flow of current must be broken instantly.
Surge arrestors are connected in parallel with the equipment they protect. Under normal conditions, they are dormant until a voltage impulse is experienced by the system. This is accomplished by having an enclosed gap or a series of gaps that can withstand the operating system voltage. As soon as the higher impulses are generated in the system, the gap will spark over and become conducting-to-ground until the surge is quenched.
There are principally two types of surge arrestors: expulsion-type and valve-type. We will also look at several other types.
Expulsion-Type Surge Arrestors - In an expulsion-type surge arrestor, the gap is arranged in such a manner that the spark must pass over a gas-evolving material. As the arc occurs, the gas is released, which interrupts the flow of current by expulsion action. Every time the arrestor operates, some of the gas-producing material is destroyed, so this type of arrestor can safely operate only a limited number of times. Additionally, the gaseous discharge makes it unsuitable for mounting in enclosed equipment.
Valve-Type Surge Arrestors - In a valve-type surge arrestor, the arrestor exhibits a low-resistance path during overvoltage conditions and the voltage pulse is quickly drained. However, as soon as the voltage returns to normal levels, the resistance is increased to a high value and the current is interrupted when it goes through zero for the first time in the wave cycle. Valve arrestors do not have the limitations that expulsion-type arrestors have. That is why they are almost exclusively used in power distribution systems.
Altitude - Generally, the standard arrestors are considered suitable for up to a 6,000 ft. altitude. However, there are special-purpose arrestors available for altitudes of up to 18,000 ft.
Distribution-, Line- And Station-Type Surge Arrestors - Surge arrestors above 1,000V are divided into three classifications: distribution-type, line-type and station-type.
Distribution-type arrestors are available from 1-18 kV. They are compact, light-weight and easily installed on the cross arm or power poles. They can protect transformers, cables, switching devices and distribution capacitors.
Line-type arrestors are available for voltage levels ranging from 20-73 kV. They are also light-weight, small-sized and relatively lower in price compared to station-type. They are commonly used for transformers and substations in the medium-voltage levels.
Station-type arrestors, while heavier in construction, provide better protection and greater reliability than other types of arrestors. Moreover, station-type arrestors are capable of discharging the most amount of energy, available in standard ratings from 3-800 kV. In most facilities, station-type arrestors are preferred for this reason.
Application - The application of arrestors is analogous to that of fuses. Fuses protect the system from overcurrent, so the fuse interrupting current level is chosen at a value not to exceed the maximum current capacity of circuit elements. Similarly, an arrestor protects the system from overvoltages. This means that the selection of the system insulation should be properly coordinated with the arrestor specification. The time-voltage characteristics of the arrestor should be chosen so that it activates before the voltage pulse can overwhelm the insulation.
Emergency Preparedness and Standby Power Systems
An important measure of an organization's strength is its ability to respond successfully to emergencies, particularly when the loss of human life or property is possible. It is true that no amount of preparation and backup systems can totally eliminate the risk posed by emergencies, but innovative design and careful planning can significantly reduce their impact. Disasters occur infrequently, of course, but when they do, they can destroy a company, so the same attitude that motivates buying a strong insurance policy should also motivate effective emergency design and planning.
Effective emergency response starts with a system based on sound engineering principles. But although this is a necessary requirement, it is not altogether sufficient. An effective operational
plan is also needed. Such a plan should be comprehensive enough to give adequate guidelines for action, yet flexible enough to adapt to sudden changes and varying demands. A good plan not only protects human lives, but also reduces exposure to liability, namely the accountability for actions or lack thereof in view of one's authority and responsibility.
In addition to a sound electrical system and an emergency response plan, we often need to build emergency and standby power supplies to keep critical processes and equipment moving during the emergency.
Ideally, we would like to see a seamless transition between normal and standby power. In real situations, however, it may not be an economical choice. Therefore, compromises must be accepted.
Determining Standby Power Requirements
The first step in determining the standby power requirement is to determine what kind of a power interruption can be tolerated. Based on this criterion, electrical loads fall in one of the following categories:
If a load cannot be interrupted for more than half a cycle (i.e., 1/120 of a second for a 60 Hz system), it is called a critical load.
If a load cannot be interrupted for more than 10 seconds, it is called an essential load.
If a load can be interrupted for the duration of the normal power failure, it is called a non-essential load.
Of course, the standby power requirement for each category is different.
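The three-way classification above reduces to a simple rule on the longest tolerable interruption. This sketch uses the thresholds given in the text (half a cycle, i.e. 1/120 second at 60 Hz, and 10 seconds); the example loads are illustrative assumptions.

```python
# Sketch: classifying an electrical load by the longest power interruption
# it can tolerate, per the critical/essential/non-essential definitions.

def classify_load(max_outage_s, frequency_hz=60):
    """Classify a load by its maximum tolerable outage in seconds."""
    half_cycle = 1 / (2 * frequency_hz)  # 1/120 s on a 60 Hz system
    if max_outage_s <= half_cycle:
        return "critical"        # cannot ride through even half a cycle
    if max_outage_s <= 10:
        return "essential"       # tolerates up to 10 seconds
    return "non-essential"       # can wait out a normal power failure

print(classify_load(0.004))  # assumed 4 ms tolerance -> "critical"
print(classify_load(5))      # assumed 5 s ride-through -> "essential"
print(classify_load(3600))   # assumed 1 h tolerance -> "non-essential"
```

As the following subsections show, each class maps naturally onto a standby source: critical loads need a UPS, essential loads can ride a generator's start-up delay, and non-essential loads need no backup at all.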
Based on the source of electric power, standby and emergency systems are categorized into four types:
Batteries - Batteries are an effective means of providing emergency power for fire alarms, emergency communications, exit signs, protective relays and emergency lighting. These loads are connected to a bank of batteries. A battery charger, connected to the normal AC power supply, charges the batteries. According to the NEC, the system should be capable of maintaining the load for 90 minutes without dropping below 87.5 percent of normal voltage. These systems are simple, reliable and robust. The maintenance requirements are also minimal, although since batteries have a finite life they must eventually be replaced.
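The NEC criterion quoted above can be expressed as a one-line acceptance check. The nominal and measured voltages below are illustrative assumptions.

```python
# Sketch: checking the NEC battery-system criterion cited in the text:
# the system must carry the load for 90 minutes without the voltage
# dropping below 87.5 percent of normal.

def nec_battery_ok(volts_after_90min, nominal_volts):
    """True if the voltage after a 90-minute discharge meets the floor."""
    return volts_after_90min >= 0.875 * nominal_volts

print(nec_battery_ok(110, 120))  # 110 V >= 105 V floor -> True
print(nec_battery_ok(100, 120))  # 100 V <  105 V floor -> False
```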
Generators - The electric generator is the most common source of standby power. The engine is powered by natural gas, gasoline or diesel.
The load is coupled to normal power and the emergency generator through the transfer switch. Under normal conditions, the transfer switch connects the load with the utility power source. When there is loss of utility power, the generator will start running and the transfer switch will connect the load to the generator. This process takes about 10 seconds. When the utility power is restored, the load is transferred to normal AC power after 15 minutes. The advantage of a generator is that it can serve the load for an extended period as long as there is an adequate supply of fuel. Since during a power interruption there is a 10-second delay before the generator can service the load, the load must have more than a 10-second ride-through.
A number of factors must be taken into account when a generator is specified. The type of fuel is important. For example, natural gas generators are efficient and easy to maintain. From a pollution point of view, the flue gas is quite clean. But natural gas generators are not considered true emergency systems, because if there is an interruption in the utility gas supply, the unit will be useless in an emergency. That is why certain agencies such as the Joint Commission for Accreditation of Hospitals do not consider a natural gas unit a true emergency generator.
For critical loads, therefore, gasoline- or diesel-powered generators are considered appropriate. Normally, gasoline is more appropriate for small units. For larger units, diesel is preferable due to its lower operating cost, lower maintenance requirements and improved safety (diesel fuel has a high flash point and low volatility).
Alternate Power Source - Another method of providing standby power is by serving a load with more than one source, via a double- or triple-ended power station. This technique will be effective as long as both feeders are powered by different and
independent sources. Such an arrangement provides redundancy for feeder cable, transformers and circuit breakers for the primary systems. The transfer to the alternate feeder can be manual or automatic. Similar to a standby generator, there will be a momentary power interruption of about 10 seconds during the transfer.
Uninterruptible Power Supplies (UPS) - A UPS is used where continuous power is required. A UPS also protects the load from power disturbances such as harmonics, transients and voltage surges and sags (see Figures 6-4, 6-5, 6-6 and 6-7).
There are generally two types of UPS systems: rotary and static.
A rotary system consists of a motor-generator set which isolates the critical load from normal power. During a power interruption, the motor-generator set can continue to provide power to the load for at least 100 milliseconds from its kinetic energy. This can be extended by many seconds with the addition of a flywheel. However, for an extended power outage, an auxiliary energy source (i.e., batteries or a generator) is needed. In most cases, a by-pass circuit is provided. This enables the system to operate on normal power if the UPS malfunctions. The transfer to the by-pass circuit can be manual or automatic.
A static system utilizes a rectifier and inverter module in connection with backup batteries. The rectifier circuit converts normal AC power to DC power which charges the batteries and supplies the inverter section. The inverter converts DC back to AC. During a power interruption, the batteries will continue to supply the inverter until they are drained or the normal power is restored. Normally, the batteries must be large enough to last a minimum of 20 minutes.
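The 20-minute minimum above drives the battery sizing for a static UPS. As a rough sketch (the load, runtime, DC bus voltage and inverter efficiency below are illustrative assumptions, not values from any manufacturer):

```python
def battery_ah_required(load_kw, runtime_min, dc_bus_v, inverter_eff=0.9):
    """Battery capacity (Ah) needed to carry load_kw for runtime_min
    minutes at dc_bus_v volts DC, allowing for inverter losses."""
    dc_power_w = load_kw * 1000 / inverter_eff  # inverter losses raise the DC draw
    dc_current_a = dc_power_w / dc_bus_v
    return dc_current_a * runtime_min / 60      # ampere-hours

# Illustrative: a 50-kW load, 20-minute minimum runtime, 480-V DC battery string
ah = battery_ah_required(50, 20, 480)
```

A real specification would also derate for battery aging, temperature and end-of-discharge voltage; this sketch shows only the basic energy arithmetic.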
Determining The Appropriate System
Determining the appropriate standby power system is, in principle, no different than any other managerial decision. That is, given a set of constraints, how can we pick a system that will ensure an optimum solution?
One of the first elements for consideration is the quality of service and frequency of utility power failure. This depends on the particular utility company, the geographical location and the time of the year. It is important to gather data for the average duration as well as the range of these failures. Obviously, if the reliability of the utility service is not high or if the average power outage is long it will influence the type and size of the standby power. Moreover, if
the quality of utility power is not satisfactory, a UPS will address this concern as well.
UPS units are used to ensure an instantaneous switch to emergency power for critical operations in the event of a power failure. The unit shown is designed to operate quietly and to filter and condition incoming power so that the power reaching its points of use is low in harmonic distortion. Courtesy: International Power Machines.
The next step is to classify the loads into three functional categories to determine whether the load is supporting people, equipment or the entire building.
The people category encompasses loads associated with human safety, such as life support systems in a hospital, air-traffic control systems, etc. In these cases, a UPS is required.
The equipment category consists of systems such as data processing centers, industrial processes, etc. that are needed to operate a larger system such as a factory or chemical processing plant.
The building category consists of all support systems in a building such as lighting, HVAC, elevators, communications, security, fire alarms, etc.
For the last two categories, only equipment such as a data processing center will utilize UPS, while the rest typically use a battery or emergency generator for backup. Batteries are normally used for smaller loads such as lighting, alarms, control circuits, telephones, fire protection systems and security alarms. For larger loads such as HVAC, elevators and other essential devices, a standby generator is used.
One-line diagram for a UPS system where input voltage equals the output voltage. Courtesy: International Power Machines.
One-line diagram for a UPS system where input voltage does not equal output voltage. This configuration provides a separately derived source and a fully isolated output under maintenance bypass conditions. UPS output external circuit protection is required. Courtesy: International Power Machines.
One-line diagram for a UPS system with a dual input option. In this case, the bypass (reserve) AC input power must be the same voltage, frequency, phase rotation sequence and configuration as the system AC output power. Courtesy: International Power Machines.
In examining electrical failures, let us pause for a moment and look at the basic characteristics of electricity. Electricity is essentially a flow of electrons, made possible when there is enough potential difference between two points to overcome the resistance along the current path (see Appendix I). This is similar to gravity: if we pour a bucket of water at the top of a hill, it will flow down the hill.
When a system is working properly, electrical current flows only through the intended path, that is, through the established conductor. The value of the current is a function of the voltage and circuit resistance, which implies that for a given potential level, the current is limited by the resistance. However, as soon as the integrity of the main resistance (insulation) is compromised, a separate current path may be established, and the intensity of the new current will be independent of the original one. This means that regardless of the original circuit current, in a short circuit condition the current is limited primarily by the capability of the power source; higher available current translates into more damage.
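The contrast can be shown with simple Ohm's-law arithmetic (the voltage and impedance figures below are illustrative, not drawn from any particular installation):

```python
# With the insulation intact, current is set by the load resistance;
# once a short-circuit path forms, only the source impedance limits it.
V = 480.0        # service voltage, volts (illustrative)
R_load = 24.0    # resistance of the intended load path, ohms (illustrative)
Z_source = 0.05  # impedance of the power source, ohms (illustrative)

I_normal = V / R_load    # current through the intended path
I_fault = V / Z_source   # available short-circuit current
```

The fault current here is several hundred times the normal load current, which is why it is the source capability, not the original circuit, that determines the damage.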
Maintenance Can Prevent Short Circuits
Since the probability of short circuit conditions is directly related to the frequency of insulation failures, we need to have an effective program of maintaining the dielectric value of the insulation. Having a good maintenance program reduces the probability of electrical failures. If one looks at any preventive maintenance program, the majority of the activities revolve around ensuring and protecting the longevity of the insulation value of electrical components. The insulation level can be maintained if electrical elements are kept in a clean, cool and dry environment.
Ambient Conditions - First, when looking at any new installations, examine the ambient conditions to decide on insulation materials that can withstand the severities of the given environmental conditions. For example, in a higher-temperature installation, choose electrical distribution components that can withstand these higher temperatures and, if necessary, incorporate adequate ventilation to limit temperature rise.
Vibration - If vibration is a problem, make sure that flexible connections are used. In many projects, due to a lack of good coordination among various utilities, it is not unusual to find water pipes running on top of electrical cubicles, or electrical equipment in close proximity to steam lines, steam traps or heat exchangers. Any leak or failure in these systems can trigger an electrical failure.
Record-Keeping - Keeping a good record will help a great deal because one can track early signs of insulation deterioration. In almost all cases, major electrical failures do not happen without some warning signs. With good records and an effective preventive maintenance program, maintenance personnel can detect initial warning signs.
Minimizing Impact Of Short Circuits
One way to minimize the damage of electrical short circuits is to limit the available fault current. This can be done by installing line reactors in series with the main feeders.
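The effect of a series reactor can be sketched as follows (the voltage and impedance values are illustrative assumptions):

```python
# A series line reactor adds impedance between the source and a fault,
# reducing the available fault current. All values are illustrative.
def fault_current(volts, z_source, z_reactor=0.0):
    return volts / (z_source + z_reactor)

without_reactor = fault_current(480, 0.01)      # no reactor in the feeder
with_reactor = fault_current(480, 0.01, 0.03)   # reactor added in series
```

Tripling the total impedance in this sketch cuts the available fault current to a quarter of its unreactored value.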
In the past two decades, many facilities have installed more-efficient transformers. These units have a lower impedance and thus cannot limit short circuit current to the degree that less-efficient transformers can. If the facility installs more-efficient transformers or upgrades the size of the incoming feeder, it is important to check the available fault current and make sure that the ratings of the protective devices, such as fuses and breakers, are the same as or higher than this value. This is important to ensure that the protective devices can interrupt the circuit safely. Otherwise, fire, explosion or other damaging conditions can result.
Fuses And Circuit Breakers - Generally fuses and circuit breakers are used to protect electrical circuits. Fuses have a lower initial cost. They require no maintenance or adjustment over time and are very reliable. However, they need to be replaced after they have interrupted the circuit.
By contrast, circuit breakers have a relatively higher initial cost. Molded-case circuit breakers require hardly any maintenance and are reliable. When they trip due to an overload or short circuit, they can easily be reset. A protective device must interrupt a circuit carrying a sustained overload as well as one carrying a short-circuit current; if a circuit breaker has both a thermal and a magnetic trip mechanism, it can accomplish both tasks.
In addition, in many installations, fuses and circuit breakers are used in series. The current-limiting fuse will protect the system against short circuits while the breaker will protect against sustained overloads.
Coordination - As mentioned earlier in this chapter, the coordination of protective devices is important to avoid unnecessary blackouts.
One can never eliminate the possibility of a short circuit. However, following the guidelines discussed in this section can significantly reduce the probability of occurrence. Whenever a circuit breaker trips or a fuse blows, before the circuit is restored, it is important to find out what caused the overcurrent condition. One of the common problems experienced by some maintenance staffs is that they restore the circuit without analyzing what caused the failure. If the circuit is restored without any regard to the root cause, the fault may still be present which will trip the circuit very quickly or result in more damage later.
Emergency Preparedness And Standby Power
The primary motivation for providing a standby power source is to improve the overall reliability of the system. As every system is made of many parts, the reliability of individual parts impacts the overall system reliability. An important rule for having a robust system is simplicity, e.g., keeping the number of system parts (especially moving elements) to a minimum. In addition, determine those critical elements that have the most impact on system performance.
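Why fewer parts make a more robust system can be seen by multiplying out part reliabilities (the 99-percent figure below is an illustrative assumption):

```python
# In a series system every part must work, so part reliabilities multiply;
# each added part lowers the overall figure. Numbers are illustrative.
def system_reliability(part_reliabilities):
    total = 1.0
    for r in part_reliabilities:
        total *= r
    return total

three_parts = system_reliability([0.99] * 3)   # about 97 percent
ten_parts = system_reliability([0.99] * 10)    # about 90 percent
```

Even with quite reliable parts, adding components in series steadily erodes system reliability, which is the arithmetic behind the simplicity rule.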
With a standby generator, for example, the transfer switch plays a critical role because it has to operate correctly under both normal power and emergency power. If the transfer switch malfunctions, it might transfer to generator power unnecessarily, thus causing an unneeded power interruption, or it might fail to transfer during a power failure. So one could be worse off with such a backup generator than without one.
With a UPS, if the anticipated utility outages are longer than the capacity of the batteries, an auxiliary generator can be used to charge the batteries.
Another important aspect of standby power is having an effective maintenance program. This way, we can ensure that the system will operate when it is needed. An essential element of preventive maintenance is periodic testing of the total system. In many cases with generator maintenance, scheduling becomes a big challenge because power will have to be shut down.
Maintenance requirements differ widely depending on the equipment. For instance, batteries simply need to be checked for water level, tight leads and the specific gravity of the acid. For a backup generator, the maintenance consists of checking the engine, the generator, the transfer switch and the control circuitry. The maintenance procedure for a UPS is more involved because of more
sophisticated electronic parts. Manufacturers' recommendations should be followed.
When deciding on standby power, a cost/benefit analysis is required to determine the need for such a system. The motivation for standby power is either a code requirement; protection of life, property and profits; or any combination of these. For code requirements, we can refer to the appropriate sections of ANSI/NFPA, ANSI/IEEE, ANSI/UL or ANSI/NEMA for details.
All other standby power decisions can be based on economic considerations. This means: What kind of power interruptions can be tolerated? What is the initial cost of a standby system? What are its operating and maintenance costs? What combination of decisions will give us an optimum solution?
Finally, it should be recognized that the presence of a standby source only improves the reliability of the system on a statistical basis. It cannot guarantee that the facility will never experience a service interruption.
For more discussion on emergency standby power systems, see Chapter 2.
Rate Structures And Power Industry Trends
In 1992, President George Bush signed the Energy Policy Act into law. While most facility managers concerned themselves with the more pressing scheduled ban on inefficient fluorescent lamps, a set of provisions in the massive 900-plus-page document could change how facility managers buy electricity in the future. The provisions called for the deregulation of the electric power industry, with implications that may offer substantial benefits to facility managers.
How Utilities Charge for Electricity
From generation to distribution, the electric power industry is the largest in the United States. Although the commercial electric market is only a century old, it is a $190 billion industry. The U.S. population represents about six percent of the world's population, but the electrical generation in this country is disproportionately about 36 percent of the world's production.
Electricity is a unique industry in one sense: it affects every home, commercial operation and industrial plant. At the same time, it cannot be stored in any appreciable quantity, which means supply and demand must match at all times.
During the past two decades, energy market prices have experienced a roller coaster environment because of major global energy issues. A basic rule of economics is that for commodity
products, if the price of a product increases, consumer demand decreases. Consumer demand for substitute products increases at the same time, however, resulting in higher prices for those substitutes. The prices of coal, gas and oil obey this rule, but electricity does not. In fact, the cost of electricity has been relatively stable compared to other forms of energy. The underlying reason is that fuel is only 3 percent of the total cost of electricity, which means that even if primary fuel prices doubled, the impact on the overall cost of producing electricity would still be relatively small. The largest cost of producing electricity is the capital needed to build power generation plants. The second largest cost is the transmission, distribution and substation equipment.
The unit cost of electricity for large commercial and industrial customers can be a function of a number of parameters, such as energy cost, maximum power demand, power factor, voltage levels, social programs, etc. Some utilities have more than a dozen different rate schedules for customers, and no two are completely alike.
Energy Charge - The energy charge is related to the number of kWh used by a customer during the billable time period. Depending on the utility, there can be a constant unit cost or variable cost for a kWh. In other words, the energy cost at peak hours of the day will be higher than for off-peak hours. This is an effort by the utility to curb consumption during peak hours and encourage consumption during off-peak times, similar to the long-distance telephone rate structure. Or the cost can be on a sliding scale where the more energy that is used, the lower the rate becomes.
Reducing The Energy Charge - Facility managers can reduce the energy charge via energy-efficient technologies in lighting, motors and other products. Energy-efficient electric motors, for example, reduce electrical input without sacrificing the amount of work they can do; the tradeoff is a higher upfront cost. New lighting products, such as energy-efficient lamps, ballasts and automatic lighting controls, can also reduce both components of the energy cost: wattage and the amount of time the system is in use.
Demand Charge - The demand charge is a function of the peak power requirement of a facility in kW, or sometimes in kVA. The power demand can be measured over a fixed window, usually 5, 15 or 30 minutes at fixed intervals, or over a sliding window, which captures the highest power demand during any 5-, 15- or 30-minute period.
The measurements are taken by the utility using a demand-meter.
These charges may constitute a significant part of an electric bill. The reason is that, since the utility company has to make electricity available to the customer on an instantaneous-demand basis, it must in turn make the capital investment for generation, transmission, distribution and substation infrastructure even if the customer only uses it for half an hour a year. In other words, if a facility has a light bulb, whether it turns the bulb on or not is immaterial; the utility must still build the power plant and the transmission and distribution equipment to supply the bulb. The demand charge is meant to recapture this cost.
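The difference between fixed- and sliding-window demand measurement can be sketched on a stream of one-minute kW readings (the interval lengths and load data below are illustrative, not from any utility meter):

```python
def fixed_window_demand(readings_kw, window=15):
    """Highest average over consecutive, non-overlapping window-minute blocks."""
    blocks = [readings_kw[i:i + window] for i in range(0, len(readings_kw), window)]
    return max(sum(b) / len(b) for b in blocks)

def sliding_window_demand(readings_kw, window=15):
    """Highest average over any window-minute span."""
    return max(sum(readings_kw[i:i + window]) / window
               for i in range(len(readings_kw) - window + 1))

# A half-hour of one-minute readings with a 10-minute spike to 200 kW
readings = [100] * 10 + [200] * 10 + [100] * 10
```

Because a spike can straddle two fixed blocks, the sliding window records a higher peak for the same load profile, which matters when comparing rate schedules.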
The Ratchet Clause. Some utilities impose a "ratchet clause" on the facility's electric bill to take this cover-the-cost concept one step further. The ratchet clause locks the demand charge in at its highest point of the year, usually the summer. Since the majority of electric utilities in the United States peak in summer because of large mechanical air conditioning loads, power demand charges in the summer months are usually higher than during the rest of the year. In addition, during the summer months, demand charges for peak hours, such as 10 am-4 pm, are higher than for the rest of the day.
This means that any demand charge billed during the winter months cannot be lower than a percentage of the maximum demand levied during the previous summer. Suppose the local utility uses a rate of 80 percent for its ratchet clause and bills $1,000 as the maximum demand charge during the summer. Later, during the winter, the demand charge is assessed at $400. But because of the ratchet clause, the bill is raised to 80 percent of $1,000, which is $800.
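The worked example above reduces to one line of arithmetic:

```python
# The ratchet example as arithmetic: the billed demand charge cannot
# fall below the ratchet fraction of the summer peak charge.
def billed_demand_charge(actual_charge, summer_peak_charge, ratchet=0.80):
    return max(actual_charge, ratchet * summer_peak_charge)

# Assessed at $400 in winter against a $1,000 summer peak: billed at $800
winter_bill = billed_demand_charge(400, 1000)
```

The practical consequence is that a single summer peak raises every bill for the following year, which is why summer demand control pays off disproportionately under ratchet tariffs.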
Reducing The Demand Charge - To reduce these charges, facility managers develop strategies to reduce demand everywhere possible by shifting load, duty cycling, power peak demand limiting, running peak-shaving generators (see Chapter 2) and upgrading building systems to reduce wattage. Another strategy is to stagger the startup of electric motors in a plant, because motors require a high in-rush of current when starting up.
Power Factor Charge - The power factor charge is usually assessed when a customer has an overall power factor lower than 85 or 90 percent. The reason for the charge is that lower-than-normal power factor equipment increases transmission losses for the utility as well as reduces system capacity. This means that the utility must
make an investment in more generation, transmission and distribution equipment to serve the load. Internally, it takes twice the amount or size of wiring to feed the same amount of equipment if lower power factor equipment is used.
The charge may be formulated:
A = Energy Consumption (kWh) ÷ Facility's Actual Power Factor
B = Energy Consumption (kWh) ÷ Utility's Standard (Typically 0.9)
Power Factor Charge ($) = (A - B) × kWh Rate ($)
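The formula implemented directly (the consumption, power factor and rate below are illustrative assumptions, not any utility's tariff):

```python
def power_factor_charge(kwh, actual_pf, kwh_rate, standard_pf=0.9):
    """The A/B formula above: bill the kWh gap created by low power factor."""
    a = kwh / actual_pf     # consumption scaled by the facility's power factor
    b = kwh / standard_pf   # consumption scaled by the utility's standard
    return (a - b) * kwh_rate

# Illustrative: 100,000 kWh at $0.08/kWh with a 0.75 power factor
charge = power_factor_charge(100_000, 0.75, 0.08)
```

At a 0.75 power factor this sketch yields a penalty of roughly $1,800 on the month's bill, which is the kind of figure that justifies power factor correction equipment.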
Reducing The Power Factor Charge - Use high power factor equipment whenever economically feasible.
Fuel Adjustment Charge - The other elements of an electrical bill may reflect a fuel escalation clause and charges assessed for social programs. Sometimes the price of oil or other fuel rises faster than the utility can go through its regulating body to authorize a rate increase, so this clause allows the utility to unilaterally add a charge to offset the increased expense.
Service Voltage Charge - The service voltage and size of the service have an impact on the cost of electricity. Normally, purchasing electricity at a higher voltage is less expensive than at a lower voltage. The reason is that a customer purchasing power at higher voltages owns the high-voltage substation and is responsible for its operation and maintenance. The rate difference between voltage levels can be significant, and there are many instances where it is cost-effective for a customer to build his own substation. These projects can have a payback of 3-5 years.
Regulation Of Utilities
Electric companies in the United States are regulated utilities governed by public utilities commissions (PUCs) or other regulating bodies. The principle behind this is that if a company has the monopoly to offer its services for a designated territory, the overall societal costs will be lower. The role of the PUCs is to regulate costs and to approve rates that will be fair for the customers and at the same time generate a reasonable profit for utility shareholders.
The rate of electricity has traditionally been determined by agreeing upon a rate of return for the utility assets. This way, the PUCs protect the customers from excessive utility charges and at the same time guarantee the utilities a certain rate of return to ensure their financial sustenance.
Trends in the Electric Power Industry
Regulated monopolies versus open-market competition has long been a subject of discussion among economists. Deregulating electricity and restructuring the electric industry are major issues that will impact all customers in general, and facility managers in particular. To better appreciate this topic, it is important to look at the historical background of the electrical industry as a basis for projecting possible future paths.
The first commercial electric distribution system was built and placed in operation in New York City by Thomas Edison in 1882. This was a DC system with a nominal voltage of 100 volts that served a small number of customers in the vicinity of the plant. The station had a capacity of 560 kW and supplied about 400 incandescent lamps. Soon after, a number of other commercial stations were serving urban and suburban areas as well as industrial plants. For instance, by 1887, there were six electric companies in New York, five in Duluth, MN and four in Scranton, PA.
By 1883, John Gibbs and Lucien Gaulard had developed the AC transformer in England. Three years later, Professor Galileo Ferraris developed polyphase circuitry in Turin, Italy, and in the same year the first power transmission line was built in Turin. This line transmitted 100 kW of electricity for about 20 miles, using step-up transformers to raise the voltage to 2,000 volts for transmission; the voltage was reduced back to 100 volts for use.
The first hydroelectric station was installed in Appleton, WI, and by 1903 the first all-turbine station, with a capacity of 5,000 kW, was installed in Chicago. In the United States, the first AC power distribution system went into operation at Great Barrington, MA, utilizing 500 volts for transmission and serving several stores, hotels, doctors' offices, the post office and other buildings.
With the adoption of AC, the size and span of power distribution systems grew rapidly. Expansion in this era was haphazard and devoid of long-range planning. Practically all of the power transmission and distribution systems were overhead installations. The expansion of these systems was unregulated and basically governed by a laissez-faire economy. The utilities were expanding and setting rates as they saw fit. The number of electric companies steadily increased until 1917.
Table 7-1. Average cost per kWh in the United States, by state. Note the charges listed in the Table do not represent solely the energy charge per kWh, but instead the utility's average revenue per kWh based on all charges and averaged over all utilities in the state. Also note the information is a bit dated, published in 1989. Courtesy: Energy Information Administration, U.S. Department of Energy, 1989.
After 1917, there was a decline in the total number of companies, mainly due to mergers, consolidations and the creation of holding companies. (A holding company is an entity that owns sufficient stock in one or more corporations to influence the management of those companies.) The number of holding companies grew rapidly because they offered superior profits for their owners with relatively small equity. But in general, many practices of these holding companies were detrimental to the public welfare. Congress responded by passing the Public Utility Holding Company Act (PUHCA) in 1935. The Act transformed the electric utility holding companies into regulated vertical monopolies much like they are today.
During the following three decades, 1940-1970, electrical power consumption increased at an average rate of about seven percent per year, almost twice the rate of growth of the gross national product (GNP). This increase was principally due to population growth, increased economic activity, and a dramatic proliferation of electricity in many aspects of our lives. This was the golden era for electric utilities, because all economic conditions were favorable for them. Inflation was low, the cost of capital was reasonable, fuel prices were low, and there was hardly any pressure regarding environmental issues. The utilities were continually building larger and larger plants, still benefiting from economies of scale, which in turn was stimulating more demand for electricity. Such conditions created bullish expansions of nuclear and fossil fuel plants. This was a time when electricity was being promoted as the clean form of energy and nuclear plants were hailed as the panacea. There were some who believed nuclear plants would eventually reduce the cost of electricity to such a low value that installing meters for domestic customers would no longer be needed.
By the early 1970s, these halcyon days came to an abrupt end. The Energy Crisis of 1973 almost doubled the cost of primary fuels in just several weeks. The higher fuel prices triggered higher inflation which significantly increased the cost of capital for new plants. Environmental concerns emerged, placing additional economic burdens and long delays on electric utilities. The Three Mile Island accident opened a new chapter in the eventual demise of any new nuclear plants. The slowdown of the economy, coupled with a reduction in population growth and higher energy costs, reduced or at best flattened the demand for electricity. Moreover, the capacity of plants had reached the point where no more gain in economies of scale could be made.
Consequently, in a period of less than a decade, the electrical utilities changed from a growth industry into a mature industry. These dramatic changes forced many electric utilities to examine their long-range expansion plans and scale back wherever possible. Other utilities that had plants partially built experienced major delays and significant cost overruns, causing some utilities to abandon plants and write off hundreds of millions of dollars of investments. The electric utilities that had nuclear plants under construction experienced an additional burden from the Nuclear Regulatory Commission (NRC). Thanks to the public outcry concerning the safety of these plants, an avalanche of new safety requirements and elaborate emergency evacuation procedures were demanded prior to issuing operating licenses for nuclear plants. The public hearings for the communities surrounding the proposed plants were emotional and the additional costs and delays became economically unbearable for many utilities.
As a result of these developments, many utilities canceled their plans to pursue nuclear plants. For instance, by the early 1980s, Cincinnati Gas and Electric had invested more than 15 years in a nuclear plant along the Ohio River at Moscow, OH, and had almost completed it. As a result of hearings and additional costs, the company decided to convert the Moscow plant into a fossil fuel plant, at significant additional expense and with several more years of delay. On the national level, of the 446 generation units in design or construction between 1975 and 1982, more than 100 were canceled and another 130 were deferred. For the units that survived, cost overruns and delays were huge. In some instances, the final cost of a project ended up 10 times the original estimate.
Contrary to the earlier decades when new plants were lowering the overall cost of electricity due to economies of scale, in the 1970s the cost of electricity was skyrocketing. The reason for the dramatic increase lies in the idiosyncrasies of utility accounting principles. When utilities are building new plants, they need to raise capital to finance the project and pay interest on that capital, in many cases for over a decade before units can be brought on-line. In most situations, electric utilities are not allowed to recover any of these costs when the plant is under construction. The idea behind this is the regulatory agencies do not want the current utility customers to pay for capacity being built for future customers. In addition, they do not want the current earnings of the utility to be whittled away by charges associated with plants that will not be in operation until some time in the future.
To accommodate this approach, an accounting line item is added as "income for allowance for funds used during construction" (AFUDC) in the income statement of the utility. In reality, there is no real income generated by the utility, but this pseudo income works as a plug figure to accommodate a double-entry accounting system. As one would expect, the AFUDC line will continually increase every year while the plant is still in construction. When the plant is completed and brought on line, the construction as well as the financing cost of the plant is capitalized and added to the total asset base of the utility. This asset base is the denominator in the return on utility assets formula: Return On Utility Assets = Income ÷ Utility Assets. As the asset base grows, the rate of return shrinks, and the only way to correct the imbalance is to increase the incomethat is, raise prices. The rate increases were dramatic partly because of major cost overruns, higher interest rates and delays in construction. In addition, the load demand was not increasing at a high rate to fully utilize the additional capacity.
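The pressure on rates described above follows directly from the return formula. As a sketch with illustrative figures (no actual utility's numbers):

```python
# Capitalizing a completed plant grows the asset base (the denominator in
# Return = Income / Utility Assets), so income (the numerator) must grow
# to hold the allowed rate of return. All figures are illustrative.
def income_needed(allowed_return, asset_base):
    return allowed_return * asset_base

before_plant = income_needed(0.10, 2_000_000_000)  # $2B asset base
after_plant = income_needed(0.10, 3_000_000_000)   # after a $1B plant is capitalized
```

Capitalizing a plant equal to half the existing asset base requires income, and hence rates, to rise by half as well, before any cost overruns or AFUDC accumulation are counted.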
The PUCs have traditionally established the rate of return for the utilities based on their fixed assets. Moreover, during the growth period of the industry, the PUCs established higher rates of return to attract more investors. This created a vicious cycle in which the utilities were constantly trying to add to their fixed asset base to increase total revenues and, to attract more capital for financing new fixed assets, were paying attractive rates of return. The impact of this spiral was offset by the increasing demand for electricity and economies of scale. In other words, despite a higher return to investors and an increase in the asset base of the utility, the unit cost of electricity continually declined or remained constant. However, after 1970 this scenario was no longer valid. As the new plants came on-line and higher rates were demanded by the utilities, the PUCs faced stiff resistance to the rate hikes. Heeding public outcry, the PUCs began segregating new utility asset increases into essential and non-essential categories. The idea behind this move was to limit additions to the utilities' asset base to those items considered essential for the customers.
The financial markets reacted to these moves by negatively impacting the credit worthiness of utilities. For instance, in 1970, about 96 percent of the utilities had a credit rating of "A" or better. This figure dropped to less than 67 percent in just one decade.
Responding to the 1973 Energy Crisis, the U.S. Congress enacted a number of energy laws such as the Fuel Use Act (FUA), Natural Gas Policy Act (NGPA), National Energy Conservation Policy Act (NECPA), Public Utility Regulatory Policies Act (PURPA) and others. The principal motivation behind these laws was to reduce U.S. dependency on foreign oil and to utilize more coal. All of them had an effect on the electric utilities.
The one that had the largest impact on electric utilities was PURPA, since it provided incentives for cogeneration projects. As previously mentioned in Chapter 2, some utilities actively created barriers to cogeneration via financial incentives to discourage some customers from building cogeneration plants. Meanwhile, pressure continued to grow on utilities to reexamine their basic supply side approaches and seek new and innovative ways to meet future power demands. This resulted in the pursuit of demand side management (DSM) by many utilities in the early 1980s. By the mid-1980s, the concept had gained popularity among many utilities as a means of improving their financial performance.
Demand Side Management (DSM)
DSM includes all activities planned and implemented by an electric utility to influence the consumption of electricity and attain a desired load profile. DSM is thus a means of intervening in the marketplace through load management, strategic conservation, customer generation and other strategies that modify the overall electric load. These activities include peak shaving, valley filling, load shifting, flexible load shapes, energy conservation and any other measures the utility can identify to attain the desired result. In this way, utilities can influence both the time pattern and the magnitude of electrical demand.
Utilities offered financial assistance to customers to promote energy conservation. This reduced the demand for electricity, which in turn reduced the need for new plant capacity, saving significant amounts of money and increasing the utility's rate of return. In 1985, about 75 utilities invested $582 million in energy conservation as part of their DSM efforts. This offset about 7,240 MW of new capacity, which would have required over $15 billion of capital for new plants.
By 1989, about 90 percent of the investor-owned utilities had spent over $1 billion on DSM, deferring 21 plants. These plants would have cost around $5 billion each for a total capital requirement of $105 billion. In California, as part of its long-range plan, Pacific Gas & Electric was planning to spend $2 billion to offset 75 percent of its projected growth. As can be seen, a dollar invested in DSM goes a long way.
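The avoided-cost arithmetic behind figures like these is easy to check. The short Python sketch below uses the 1985 totals quoted above; the per-kilowatt and per-dollar figures it derives are computed from those totals, not stated in the text:

```python
# Rough avoided-cost arithmetic for DSM, using the 1985 figures quoted above.

def avoided_cost_per_kw(capital_dollars, capacity_mw):
    """Capital cost of new plant capacity, expressed in dollars per kW."""
    return capital_dollars / (capacity_mw * 1000.0)

dsm_spend = 582e6       # DSM investment by about 75 utilities in 1985
avoided_mw = 7240       # new capacity offset by that DSM spending
plant_capital = 15e9    # capital the new plants would have required

print(round(avoided_cost_per_kw(plant_capital, avoided_mw)))  # 2072 ($/kW of avoided plant)
print(round(plant_capital / dsm_spend, 1))  # 25.8 (plant dollars avoided per DSM dollar)
```

The second figure is why the text observes that "a dollar invested in DSM goes a long way."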
Another factor that pushed DSM was the environment. Presently, one-third of the carbon dioxide emitted into our atmosphere is produced by power plants. Carbon dioxide is the primary "greenhouse gas" that some scientists link to global warming. Nitrogen oxides and sulfur dioxide, other by-products of electric generation, contribute to smog and "acid rain." The Clean Air Act amendments of 1990 were intended to pass "environmental externalities" (costs assigned to the pollution associated with generating electricity) on to the power producers. In the next decade, environmental compliance will cost the electrical industry more than $22 billion, which could translate to about another 5 percent rate increase for coal-burning plants. DSM can reduce some of these cost impacts by allowing older, less-efficient plants to be retired.
In short, DSM created an excellent opportunity that resulted in a lower overall cost for all parties. Customers received financial incentives to install more energy-efficient equipment, ambient air quality improved because of the reduction in overall load, and the utilities earned an excellent return on their money thanks to scaled-back plant construction.
Unfortunately, DSM has declined in recent years. With a trend toward deregulation gathering steam, utilities have seriously scaled back DSM programs across the country as they gear up for potential competition in the future.
Deregulation Of The Electric Industry
Today, electric utilities are vertically integrated monopolies in which generation, transmission and distribution services are bundled and sold to consumers at a single rate. The retail electric customer can only buy electricity from the local utility. On the other hand, local utilities have an obligation to serve all customers within their service territory. Meanwhile, wholesale customers have access to utility transmission lines.
There are approximately 3,200 electric utilities operating in the United States. The majority of them are small municipal power systems and rural electrification cooperatives. There are 265 investor-owned utilities, which produce about 80 percent of the nation's electricity. In fact, more than 2,000 of the municipal power systems are embedded in investor-owned utility systems.
Roots Of Deregulation - The Energy Policy Act of 1992 (EPACT) laid the foundation for increased competition in the wholesale and retail electric markets. EPACT created the most fundamental restructuring of the $190 billion industry since the passage of PUHCA in 1935.
EPACT provided the statutory basis for restructuring the market and laid the groundwork for deregulation of the industry. Other EPACT provisions also affected electric utilities, in the areas of DSM, integrated resource planning (IRP) and energy conservation. EPACT required all states to adopt new rate standards that encouraged investment in energy improvement projects.
The IRP provisions emphasized the framework and selection procedures for new energy resources, forcing utilities to evaluate alternatives for providing adequate and reliable service to customers at the lowest possible rate, drawing on new generation capacity, power purchases, energy conservation and efficiency improvements, cogeneration and renewable energy sources. The IRP process essentially means that the utility must forecast short- and long-term demand, then provide a plan for meeting that demand at the lowest cost using all available strategies, including DSM, renewable energy and the other options listed above.
The general framework EPACT prescribed for the electrical industry is in one way fundamentally different from the deregulation of the telecommunications and gas industries. Although EPACT gave the state commissions the ability to play an active role in formulating policies that encourage competition, it barred federal authority from ordering retail power wheeling. This means that although wholesale power transmission is considered interstate commerce, and thus falls under the purview of the Federal Energy Regulatory Commission (FERC), state and local governments maintain ultimate authority at the retail level. Therefore, it is up to each state to address the deregulation of retail power if and when it desires.
How Retail Wheeling Works - If all states go along with deregulation, corporations in California, say, will be able to buy their power from any utility in the state, or in the country for that matter. Corporations could bargain for the lowest power rate and the best services. This is why many utilities have scaled back DSM and put money into marketing and special services for customers, including energy conservation services.
Even without retail wheeling, large corporations have been making deals with their utilities. Rather than having the power come to them at the best rate, which is the essence of retail wheeling, the larger power users simply told the utilities they would leave the
territory to find a better rate and lower costs. Since large sites such as industrial plants are stable, complex power users, highly desired by a utility, many corporations were able to bargain for a low rate and lock it into a long-term contract.
Independent Power Producers - As of the end of 1995, 12 states, among them California, Michigan, Ohio and New Hampshire, have been studying deregulation and retail wheeling. It is important to keep in mind that even today, in the absence of deregulation, the independent wholesale power producers who are exempt from PUHCA account for $41 billion in annual revenues in the electricity wholesale market. Some analysts expect that figure to quadruple when retail wheeling becomes available to all customers. Even at the wholesale level, the forces of the market economy have produced impressive savings. During the past 15 years the Florida power pool has saved more than $1 billion for the local electric utilities. There are many other examples of power pool savings, although the numbers may not be as impressive as this. It is clear that market forces will make electric utilities more competitive.
For instance, the gas industry, which has $90 billion in annual sales, has experienced a cumulative price drop of over $83 billion for the past decade as a result of deregulation. Since the electric industry is more than twice the size of natural gas, it is reasonable to assume that savings for the ultimate customers will be even more impressive.
Problems With Deregulation - But before we get carried away with this impressive potential for cost reduction, the fundamental complexities surrounding deregulation of electricity need to be resolved. These issues include the impact of deregulation on the reliability and quality of power, the obligation of local utilities to serve all customers, the recovery of utilities' sunk costs, the fate of DSM, and the impact on the environment and other social programs.
It should also be kept in mind that deregulation affects only the generation of electricity; the distribution of electricity by the local electric utility will remain regulated and non-competitive. However, customers will have the ability to sign a contract with a third party to purchase electricity and use the local utility distribution grid to receive the power. The breakup of AT&T and the deregulation of long distance with regulated local telephone companies is a good
analogy in this aspect of electrical deregulation.
In the traditional scenario, where electric utilities are monopolies, they are in a better position to estimate future load growth and can develop plans to meet the projected growth, either by negotiating and locking in long-term contracts with independent power producers or another utility, or by planning to construct additional generation plants. It should also be kept in mind that constructing a new plant takes more than a decade and a substantial financial commitment. Since the utility is under obligation to serve all customers within its territory, it has no choice but to take such a long-term view.
In the deregulated environment, it is not clear who will be responsible for such long-term planning, or whether market forces will be allowed to determine future needs, because there are no guarantees that the customer will ultimately be buying its future power from the local utility or a third party. In a different scenario, what will happen if a utility finds that serving a particular customer is not economical for them because of remote location or load profile? In a pure competitive environment, market forces will determine the match between suppliers and customers. Neither of the two has an obligation to enter into an agreement. In other words, the consumers buy a product or service from a number of possible sources and the vendors are not under any obligation to serve any particular customer. However, it is not clear how the PUCs will reconcile the customer's right to shop around for better prices, while holding utilities to their obligation to serve customers who demand service.
The deregulation provisions of EPACT concerning retail wheeling apply only to investor-owned utilities. This is because the PUCs have jurisdiction only over these entities; municipal and rural electric cooperatives are not under the same regulatory regime. Municipal and rural electric cooperatives can compete for customers currently served by the investor-owned utilities without any requirement for reciprocity. This inequity is made worse because many of these cooperatives receive some form of government subsidy or tax-exempt status. Therefore, government policies will need to resolve this inequity.
The recovery of utilities' stranded assets will be the biggest barrier and bone of contention among utilities, PUCs and consumer advocate groups. Based on their obligation to serve the customers and the projected future electric demands, the utilities had to enter into long-term contracts with independent power producers or plan
the construction of new plants. The remaining useful life of many of these plants will easily span another 20-30 years. Currently, the investor-owned utilities have an asset base of over $300 billion. If some or most of their generation capacity is no longer economical in the new environment, who should bear the cost of these investments?
The utilities argue that they made these expenditures in good faith to be proactive and to assure continuity of service to all customers in their service area. Therefore, they feel entitled to recover all of their investments from the ratepayers, because the assets were built for the ratepayers' benefit. On the other end of the spectrum, some consumer groups argue that the utilities invested in these plants at their own risk, and that if the plants are inefficient, the cost should be borne by the shareholders. More than likely, a compromise solution closer to the utilities' position will prevail.
Consequences Of Deregulation - While the intent of deregulation is to inspire competition that will ultimately reduce power rates for consumers, in the near term this may not be the case. As discussed above, if the utilities are allowed to recover the cost of their inefficient generation assets over a period shorter than the assets' useful life, electric rates will probably increase, or at best remain flat, during a transition period that could last longer than a decade.
The specter of deregulation has caused utilities to reassess DSM programs. Electric utilities strongly supported DSM programs in the past decade to reduce the demand for new generation capacity. With retail wheeling, this pressure will diminish drastically. According to an Electric Power Research Institute (EPRI) survey, the number of DSM programs climbed rapidly until it peaked in 1990, and a decline emerged starting in 1993. For instance, in 1993 there were 2,300 DSM programs offered by 600 utilities; one year later, the number had dropped to about 500 utilities offering 2,000 programs. To illustrate these difficult issues further, let us look at deregulation efforts in California. Although the specific circumstances and issues may not be the same as in other regions of the country, California's example still offers insights into what lies ahead for the rest of the country. Additionally, the complex nature and huge size of the California electric market raises most of the conceivable issues that other states will eventually face.
The California Experience - In California, deregulation efforts started earlier than many other states. Responding to EPACT, the
California PUC (CPUC) aggressively laid the groundwork for deregulation.
In February of 1993, the CPUC staff published a paper, referred to as the "yellow book," recommending retail wheeling, and by April of 1994 the CPUC published its formal proposal, in a publication now referred to as the "blue book," with these general goals in mind:
1. Lower Electric Rates: Currently, California's electric rates are 130 to 150 percent of the national average.
2. Reduce Regulation: The existing regulatory regime has created an immense administrative burden on utilities. Reducing this overhead will naturally have positive impacts on the rates.
3. Promote Competition: Positioning California's electric utilities to compete aggressively will create downward pressure on electric rates. In addition, the CPUC wanted to accomplish this without changing its fundamental duty to protect customers by ensuring safe, reliable and reasonably priced electricity in an environmentally sound framework. The CPUC proposed to hold hearings in July and August of 1994, with a tentative plan to implement a performance-based rate-making mechanism starting in January of 1996. The specific proposals included:
a. On January 1, 1996, the CPUC would allow large utility transmission customers to purchase electricity from competing power providers (generators) for retail electric service. This capability would gradually expand to more customers, and by January 1, 2002, all customers would have direct access to power markets.
b. The ratepayers should compensate the utilities for stranded investments that are deemed uneconomic in the new structure.
c. Electric utilities may compete in the generation market while maintaining their current monopoly on the transmission and distribution markets.
d. Utilities would migrate from traditional regulatory rate-making to a performance-based system in which shareholders can reap the rewards of superior performance. This means most regulatory reviews would be eliminated.
e. The CPUC would maintain an environmentally sound energy system. All future DSM programs should be based on competitive solicitations. The CPUC would identify funding for programs oriented toward social objectives.
The three major California electric utilities, Pacific Gas & Electric (PG&E), Southern California Edison (SCE) and San Diego Gas & Electric (SDG&E), came out with their own positions concerning the CPUC proposals.
This was PG&E's position:
1. Strongly supported replacing the current command-and-control regulatory system with a performance-based framework.
2. Supported direct access, but with a slower implementation schedule. PG&E proposed to phase in direct access for all customers over a span of 12 years, with residential customers the last to benefit, by 2008.
3. The utilities should be able to compete in the generation market on an equal basis with the non-regulated entities.
4. Customers should not pay for the transition costs.
5. The DSM programs should be paid for by all customers connected to the utility, regardless of whether they buy power from the utility or from a third party.
This was SCE's position:
1. Promote greater efficiency, wider competition, and reduce electric rates.
2. Supported the concept of direct access, but believed that the proposed form would hurt the investor-owned utilities because of one-way competition. SCE proposed the power pool concept as an alternative.
3. Utilities should be able to recover their investment in uneconomic generation resources. These investments were considered sound decisions based on the information available at the time; thus, there is a need for a fair recovery of these assets.
4. The utilities should be allowed to compete in the generation market on an equal basis with the non-regulated entities.
5. Since the legislators have made their judgments to advance environmental conditions and other policy objectives, the CPUC cannot change this statutory framework.
6. The development of alternate mechanisms to promote environmental and other objectives should be a precondition for direct access transition.
This was SDG&E's position:
1. The CPUC needs to address two issues: the gap between wholesale and retail electric prices and the gap between marginal costs and the embedded costs of generating electricity.
2. The utilities should be able to recover the associated transition cost for the current uneconomic generation capacity within the next 10 years. Afterwards, CPUC can decide whether direct access is needed, because the type of market that supports direct access does not exist today.
3. The utilities should be involved in DSM programs. However, without the availability of ratepayer funding, utility DSM funding will decrease substantially.
The consumer advocates came up with their own position:
1. The rates should be lowered and the transition cost needs to be shared by all stakeholders.
2. Support direct access as the cornerstone of true competition.
3. The DSM programs should eventually be replaced with the services of private energy service companies (ESCOs).
4. Customers who choose direct access need not pay any part of
the DSM funding. The shareholders may participate in DSM funding.
As the CPUC conducted hearings on these issues in 1994, it became obvious that its original implementation date of January 1996 was ambitious and impractical.
By May 1995, the CPUC had issued two proposed decisions. The majority proposal endorsed the pool concept; an alternative proposal considering direct access was kept on the table for further consideration. The CPUC majority decision approved wholesale power pools to commence operating by January 1, 1997. This will be accomplished through the creation of Independent System Operators (ISOs), which will control the transmission lines and serve as a buffer between the generators and the local utility companies. The current investor-owned utilities will be designated as local distribution companies (LDCs). The ISO will be regulated by the Federal Energy Regulatory Commission (FERC). It will purchase power from independent power producers, power pools and other LDCs through bilateral agreements and the spot market. The LDCs will purchase power from the ISO and distribute it to the ultimate customers. Retail rates will be unbundled into individual market components such as generation, transmission, distribution, a competitive transition charge (covering utilities' stranded costs for uneconomic generation capacity), and other system benefit charges based on public policy programs.
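Unbundling means that a retail bill becomes the sum of separately stated per-kWh components. The sketch below illustrates the idea; the component names follow the text, but every rate value is hypothetical, not a CPUC figure:

```python
# Illustrative unbundled retail bill under the restructured model described
# above. All per-kWh rates are hypothetical example values.

UNBUNDLED_RATES_PER_KWH = {
    "generation": 0.050,
    "transmission": 0.010,
    "distribution": 0.025,
    "competitive_transition_charge": 0.015,  # recovers utilities' stranded costs
    "public_benefit_programs": 0.003,        # system benefit / policy charges
}

def monthly_bill(kwh_used):
    """Sum the unbundled per-kWh components into one dollar amount."""
    return kwh_used * sum(UNBUNDLED_RATES_PER_KWH.values())

print(round(monthly_bill(10_000), 2))  # 1030.0 dollars for a 10,000 kWh month
```

The point of the breakdown is that a customer choosing direct access would shop only for the generation component; the other charges would still appear on the bill.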
In addition, the CPUC is proposing direct access to become available for some customers by January 1, 1999. When all of these plans come to fruition, customers will have three options:
1. Purchase power from any electric generator through bilateral contracts.
2. Obtain power through a spot-market power pool.
3. Remain a customer of the local power utility.
However, there are still many major unresolved issues that need to be answered this year if retail wheeling is to become a reality in 1997.
In the meantime, the PUC for the State of New Hampshire passed a resolution approving retail wheeling to start in May 1996.
Electricity is slowly being transformed from a utility service into a manufactured commodity, with three distinct market entities: generators, wholesalers and retailers. This is the antithesis of the existing vertically integrated monopolies.
Since electricity is a unique form of energy that cannot be stored directly in any sizable quantity, methods other than traditional manufacturing models must be used to match instantaneous power demand with production capacity.
Therefore, a well-coordinated plan is necessary that instantly matches power shortages and surpluses while achieving the lowest cost. Some analysts project that the power companies will experience major shake-ups in the next decade as a result of this restructuring.
Traditionally, even in major recessions, the electrical utilities have resisted reducing staff. However, the threat of restructuring has caused many utilities to reengineer many of their processes and improve efficiency. For instance, in the past two years, PG&E has reduced its workforce by thousands of employees. This downsizing will continue for many years to come. It is estimated that more than 40 percent of the utilities will change ownership in this transition period.
The cost implications and opportunities presented by these power industry trends will be enormous for the facility manager.
The new environment might make it possible for retail power customers to receive service in packages that include gas, electricity, communication, water, DSM planning, economic development and more. In the long run, direct access will lower electric costs for all customers. However, tumultuous times lie ahead, and during the next decade or so of transition we will witness major changes and confusion. This will be yet another challenge that facility managers will have to face.
Besides lower electric rates, there are other opportunities and developments that facility managers should be aware of to take advantage of the evolving power industry.
According to most industry analysts, one of the main factors that will determine the cost of electricity in the future is the uniformity of load demand. In other words, although some utilities already have time-of-day rates, the price differential between the cost of electricity at different times of the day will become appreciably larger.
One simple step that facility managers can take right now is to install more sophisticated metering to sharpen the load profile of their institution. In addition, as a result of bigger price differentials between the peak and off-peak cost of electricity, installing peak-shaving generators may be economically attractive (see Chapter 2).
This will be particularly rewarding if standby generators are needed for other considerations and the incremental cost of the synchronizing gear will not require a large investment. If this is the case, the payback will be very attractive.
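The economics of peak shaving described above come down to a simple-payback calculation. The sketch below shows the arithmetic; all input values (incremental cost, demand charge, run hours, rate spread) are hypothetical illustration numbers, not figures from the text:

```python
# Simple-payback sketch for adding peak-shaving capability to an existing
# standby generator. All inputs are hypothetical illustration values.

def simple_payback_years(incremental_cost, kw_shaved, demand_charge_per_kw_month,
                         run_hours_per_year, energy_rate_delta_per_kwh):
    """Years to recover the incremental cost of synchronizing gear from
    avoided demand charges plus the on-peak/off-peak energy price spread."""
    demand_savings = kw_shaved * demand_charge_per_kw_month * 12
    energy_savings = kw_shaved * run_hours_per_year * energy_rate_delta_per_kwh
    return incremental_cost / (demand_savings + energy_savings)

# Example: a standby generator already exists for other reasons; adding
# $40,000 of synchronizing gear lets it shave 500 kW during peak periods.
years = simple_payback_years(40_000, 500, 10.0, 300, 0.05)
print(round(years, 2))  # 0.59 years
```

As the text notes, the payback is attractive precisely because the generator itself is already justified for standby duty and only the incremental cost must be recovered.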
Improvement In Power Quality: Perhaps Guaranteed
Deregulation will have an unexpected side effect: improving power quality. Although only a few states have taken steps toward deregulation, it is clear that the days of the vertical monopolies are numbered. Given that customers will have a choice, they are in a position to be more selective and to require better products and service than they have received from the utilities in the past.
Traditionally, the utilities have tracked frequency, voltage and service availability. The most common index is the Average Service Availability Index (ASAI), for which most utilities claim a factor of 99.99 percent. This implies that, on average, total power failure for a customer amounts to about one hour per year. But up to now, most electric utilities have not been measuring or guaranteeing any power quality levels. Part of the problem is the lack of universal standards that everybody can agree on. Recently in Michigan, however, Detroit Edison Company entered into a long-term agreement with General Motors, Ford and Chrysler in which the three auto manufacturers will remain customers of Detroit Edison; in return, the electric utility guarantees a certain level of power quality. If the utility fails to deliver the minimum power quality, it will compensate the auto makers for their losses.
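The "about one hour per year" claim follows directly from the ASAI figure, as this small Python check shows:

```python
# Verifying that an ASAI of 99.99 percent implies roughly one hour of
# customer outage per year.

HOURS_PER_YEAR = 8760

def annual_outage_hours(asai_percent):
    """Expected customer outage hours per year for a given ASAI."""
    return (1 - asai_percent / 100.0) * HOURS_PER_YEAR

print(round(annual_outage_hours(99.99), 3))  # 0.876 hours, i.e. about an hour
```

Note that ASAI is an average; a single customer on a troubled feeder can see far more outage time than this system-wide figure suggests.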
International Standards - Although the concept of the power quality guarantee is new in the United States, some European countries have established standards for public utilities. In 1994, the International Electrotechnical Commission (IEC), which has members from 50 countries, developed a quality standard, EN 50160, titled "Voltage Characteristics of Electricity Supplied by Public Distribution Systems." The group is working to develop quality benchmarks that can be used both for the power delivered to customers and for the interference emitted from the customer site.
The French Model - There have been a number of projects in the United States, Canada and Europe to monitor power quality conditions across the utility grid. In 1994, Electricité de France (EDF), the French national power company, launched the Emeraude program, in which the utility entered into contracts with 6,500 commercial and industrial customers. Under the terms of this contract, the utility must supply electricity within a certain quality level and, in return, the customer will install the protection equipment needed to limit the propagation of interference from the customer site. The utility will compensate the customers for any losses. The utility has provided this program at no additional cost to its customers.
If the minimum quality levels guaranteed under this contract are still not acceptable, the customer may elect to receive power under the Réseau Plus ("Super Network") contract. In this case, the customer has to share the additional cost of the higher-quality power delivered.
Deregulation gives customers the choice to purchase power from different sources, and so they will demand more responsive vendors. This means that utilities must become innovative, aggressive and service-oriented to survive.
Facility Managers Will Be More Savvy Power Buyers
Facility managers should keep up with the trade journals concerning the latest developments and implications of deregulation. There is no doubt that in the future, decisions surrounding electric purchases will require more knowledge and expertise. Facility managers who have this expertise will be in higher demand.
In some states such as California, which are further along in the deregulation process, many large customers are constantly being contacted by energy brokerage firms and consultants. Some of these
firms claim to save companies large sums of money, and they work strictly on commission. In other words, they want the customer to agree to share a certain percentage of the savings for the next X number of years, or to grant them the rights to be the sole provider of electric energy for the customer. Facility managers should review such proposals with great care. Some of them may be beneficial for certain entities; however, those are the exceptions. Certain electric brokers have been offering deals that may stretch the broker's ability to perform its contract, or that promise conditions the state PUCs have not yet decided on. So it is very important for facility managers to be well-versed in the latest developments before entering into long-term contracts with such brokers or consultants.
Fundamentals of Electricity
During the past century, electricity has tremendously improved the quality of life for a large portion of humankind. In this chapter, we will cover the basic building blocks of knowledge (technical theory and definitions) that lead to an appreciation and understanding of electrical energy.
Electricity and electrical devices surround our daily lives in such a prevalent manner that we almost take the availability of this clean and convenient form of energy for granted. It is difficult to imagine our lives without electricity.
Electrical energy is fundamentally different from other forms of energy. First, most non-electrical forms of energy involve the concept of mass. Second, one can easily sense the size and immense power of other forms of energy. And finally, other forms of energy can usually be stored. By contrast, electricity has no mass. In addition, the immense capability of electricity cannot be directly experienced, because electricity normally appears dormant unless there is a fault. And electricity cannot be stored economically in large quantities.
The purpose of stating this is to emphasize the important role of energy in our lives, but at the same time demonstrate the subtle and fundamental differences between electricity and other forms of energy.
Units of Measurement
The first step to understanding electricity is to know the basic units of measurement. The first question usually raised is whether one uses English-system or metric-system units. Luckily, there is only one set of units for electricity, which has been incorporated into the International System of Units (SI). This naturally eliminates the need to remember many tedious and confusing conversion factors between the English and metric systems. The most common electrical units are shown in Table A-1.
Depending on the particular application, many of these electrical units can be very large or very small. In such cases, a multiplier is needed. For instance, a farad is a large unit for capacitors; normally, one deals with microfarads, or one millionth of a farad. On the other hand, for power applications the watt is too small a unit, and one normally deals with kilowatts (1,000 watts) or megawatts (1 million watts).
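The multiplier arithmetic just described can be sketched as a small lookup table in Python (the function name is illustrative):

```python
# SI prefix multipliers commonly used with electrical units.

SI_MULTIPLIERS = {
    "pico": 1e-12, "nano": 1e-9, "micro": 1e-6, "milli": 1e-3,
    "kilo": 1e3, "mega": 1e6, "giga": 1e9,
}

def to_base_units(value, prefix):
    """Convert a prefixed quantity to base units,
    e.g. 50 microfarads -> farads, 2 megawatts -> watts."""
    return value * SI_MULTIPLIERS[prefix]

print(to_base_units(50, "micro"))  # a 50-microfarad capacitor, in farads
print(to_base_units(2, "mega"))    # 2000000.0 watts for a 2-megawatt load
```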
The common multipliers used for power systems are shown in Table A-2.
These prefixes assist us in using the same electrical units for the entire range of applications.

Table A-1. Units of measurement of electrical properties.

Property                  Unit         Symbol
Potential difference      volt         V
Current                   ampere       A
Resistance                ohm          Ω
Power                     watt         W
Energy                    watt-hour    Wh
Frequency                 hertz        Hz
Capacitance               farad        F
Inductance                henry        H
Magnetic flux density     tesla        T

Now, let us briefly define the aforementioned electrical units: volts, amperes, ohms, watts, frequency, capacitance and inductance.
Water or any other liquid will flow from a higher elevation to a lower elevation thanks to gravity. By the same analogy, to induce a current flow, a potential difference is needed. For electrical systems, this potential difference is called electromotive force (EMF). The EMF is commonly referred to as voltage, and is measured in volts.
One volt is the amount of potential difference that induces one ampere of current when applied across one ohm of resistance. The volt is sometimes described as the "push" of an electric charge through a conductive material.
When electricity flows in a circuit, it faces resistance opposing the push. This electrical resistance is measured in ohms. One ohm was historically defined as the resistance of a column of mercury about 42 inches (106.3 cm) long
Table A-2. Common multipliers used for power systems.
with a mass of 14.45 grams at 32ºF (0ºC). Another way of defining an ohm is the resistance through which one volt of potential difference causes one ampere of current to flow.
Conductance is the reciprocal of resistance. It is measured in mhos or siemens, which are the same unit.
Electric current results when free electrons flow through a conductor, the result of the push overcoming the resistance. By convention, current is assumed to flow in the direction opposite the flow of free electrons. In most cases, the absolute number of electrons flowing is less important than the rate of electron flow. So, in determining the flow of current, both the number of electrons and the time period are needed.
The current flow is normally measured in amperes rather than in electrons per second. An ampere ("amp") is equal to a flow of one coulomb per second; one coulomb is approximately 6,240,000,000,000,000,000 (6.24 × 10^18) electrons.
As we can see, because of the large quantity of electrons in one ampere, dealing with the number of electrons is not practical.
The watt is the unit for measuring the rate of delivering energy and is directly related to the voltage multiplied by the current. For a direct current (DC) system or an alternating current (AC) system with unity power factor, one watt is delivered when one volt of potential is applied and one ampere of current flows through the circuit. For AC in general, the wattage is equal to the volts multiplied by the amperes multiplied by the power factor.
A watt is a small unit, so in power systems one usually deals with kilowatts (thousand watts) and megawatts (million watts). In addition, if we are interested in measuring energy, we need to use the watt-hour. A watt-hour is the amount of energy when one watt of power is delivered for one hour. This energy is normally measured in kilowatt-hours (kWh) and megawatt-hours (MWh).
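The power-times-time arithmetic above can be sketched in a few lines of Python; the device wattage, running hours and utility rate below are illustrative assumptions, not figures from the text:

```python
def energy_kwh(power_watts, hours):
    """Energy in kilowatt-hours: power (watts) times time (hours), divided by 1,000."""
    return power_watts * hours / 1000.0

# A 1,500 W space heater running 8 hours a day for 30 days:
monthly_kwh = energy_kwh(1500, 8 * 30)   # 360 kWh
cost = monthly_kwh * 0.12                # at an assumed $0.12 per kWh
```

The same function works for any load: the kilowatt-hour is simply the product of power and time scaled to thousands of watts.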
We use wattage to describe the power input of a particular electrical device and the watt-hour to describe the amount of energy it consumes, which is the product of power and time. Note that a manufacturer's rated wattage is based on in-house testing and while credible enough, we may see minor fluctuations in the field due to diverse conditions.
Basic symbols used in electrical drawings and schematics.
Electrical parameters can be constant or oscillatory. When a parameter oscillates, it goes through continual change in a periodic manner. The frequency is defined as the number of cycles a parameter goes through in one second.
There are two common frequencies used in AC power: 60 Hertz and 50 Hertz. In the United States, the power systems are 60 Hz, while in most other countries 50 Hz systems are used.
So a fluorescent lamp operating in the United States turns on and off 120 times per second (twice per cycle). This is called lamp flicker and is attributed by some to workplace fatigue. Electronic ballasts raise the oscillation to 25,000-60,000 times per second, eliminating perceptible flicker.
Oscillation is also blamed for the stroboscopic effect caused by high pressure sodium lamps oscillating at the same frequency as rotating machinery. The machinery appears not to be moving, which is dangerous. A simple solution is to select a different light source, such as metal halide, or stagger the lamps across three phases so that they do not operate synchronously.
Capacitance is a measure of the amount of charge stored between the two plates of a capacitor for a certain voltage level.
If a difference of potential of one volt results in one coulomb of charge stored on the plates of a capacitor, the capacitance is equal to one farad.
Inductance is a measure of the electric potential induced when the level of current changes. It is important to note that the key is not the amount of current, but the rate of change of current.
If a potential difference of one volt is induced because of a current change of one ampere per second, the inductance of the circuit element is equal to one henry.
Electrical Properties of Material
Materials can be divided into two types based on their electrical properties. If a material conducts electricity easily, it is called a conductor; if it does not, it is called an insulator. This is where the "rubber meets the road" in specifying power systems. Once we establish a load, or the total demand for power, we must ensure that 1) the proper wiring is in place to distribute sufficient electricity, and 2) the wiring is properly insulated to permit the flow of electricity both efficiently and safely.
The purpose of insulation is to inhibit the flow of electricity. This is the material that prevents fires and electric shocks. Insulation materials are divided into two groups: organic and inorganic. They can be solid, liquid or even gaseous. They are classified by temperature stability, and their service life is severely affected by temperature and other factors.
Organic Insulators - Organic insulators include materials such as rubber, paper, oil, cotton and many thermoplastic and polyurethane compounds composed of long molecular chains of hydrocarbons and other elements such as chlorine and oxygen. These materials cannot withstand temperatures exceeding 150ºC.
Inorganic Insulators - Inorganic insulators include materials
such as mica, porcelain, fiber glass and asbestos. These materials can resist temperatures exceeding 1,000ºC. Among these, asbestos is no longer used because of its adverse health effects.
Forms Of Insulators - Insulators can be a solid, liquid or gaseous substance.
Solid. The most common types of solid insulators are natural polymers, such as rubber, and synthetic polymers, such as polyvinyls, polyester, nylon and other products. These materials are used to insulate wires for motors, transformers, electromagnets and distribution cables and wires. There are many applications where synthetic insulators are used with natural insulators such as cotton and paper for high-voltage conductors. For high-temperature applications such as heater elements, mica and porcelain insulators are used.
Liquid. The most common liquid insulators used in electrical systems are mineral oil and varnish. Varnish is used on wires and coils. Mineral oil is used as a dielectric fluid in transformers both as an insulator and a cooling medium.
Gaseous. The most common gaseous insulator is air. Other important insulating gases are sulfur hexafluoride and hydrogen. Sulfur hexafluoride is used as an insulating gas in circuit breakers and switches.
Classifications - Electrical insulators are divided into a number of different classifications by the temperature stability of the material. The most common insulation classes are:
Class A. Includes materials such as cotton, silk, paper or any combination of these materials immersed in oil. Their comparable thermal life is 105ºC.
Class B. Includes materials and combinations of mica, fiberglass, asbestos and similar products with a comparable temperature life at 130ºC.
Class F. Includes materials and combinations of mica, fiberglass, asbestos and similar products with a comparable temperature of 155ºC.
Class H. Includes materials and combinations of materials such as silicone elastomer, mica, fiberglass and similar products with a comparable temperature of 180ºC.
Class N. Materials or combinations of materials with a comparable temperature of 200ºC.
Class R. Materials or combinations of materials with a comparable temperature of 220ºC.
Class S. Materials or combinations of materials with a comparable temperature of 240ºC.
The last three classes consist of materials such as glass, porcelain, quartz and similar materials. The comparable life expectancy of the insulators is 15 years at the above temperatures.
Temperature Rise And Service Life - Let us examine how the above information is used to determine the allowable temperature rise in various equipment. In the United States, unless otherwise stated, electrical machinery nameplate data is based on an ambient temperature of 40ºC. It is assumed that the temperature difference between the hottest spot and the surface of most insulators can be about 15ºC. Subtracting 55ºC (40 + 15) from the class temperatures above therefore gives the allowable temperature rise for each insulation class.
So for a Class A insulator with a temperature rise of 50ºC (105 - 55), the average life expectancy will be 15 years. Let us examine what happens if the temperature rise is modified. Looking at the temperature-life curve for insulation, it becomes clear that a 10ºC increase in temperature rise will cut the life of an insulator in half. Conversely, a 10ºC drop will double the average life expectancy. Going back to the above example for a Class A insulator, if the temperature rise is increased from 50ºC to 60ºC, the life expectancy drops from 15 years to 7.5 years. Another 10ºC will drop it to 3.75 years, and so on. Conversely, if the temperature rise is held to 40ºC, the average life expectancy increases to 30 years. That is why for all electrical equipment with Class A insulation the maximum allowable ambient temperature is 40ºC.
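The 10ºC halving rule above reduces to a short calculation. A minimal sketch, with the 15-year base life and the 55ºC deduction taken from this section:

```python
def insulation_life_years(class_temp_c, actual_rise_c, base_life_years=15.0):
    # Allowable rise = class temperature minus 55C (40C ambient + 15C hot-spot allowance)
    allowable_rise = class_temp_c - 55
    # Each 10C above the allowable rise halves the life; each 10C below doubles it
    return base_life_years * 2 ** ((allowable_rise - actual_rise_c) / 10.0)

life_rated = insulation_life_years(105, 50)   # Class A at its rated 50C rise: 15 years
life_hot = insulation_life_years(105, 60)     # 10C hotter: 7.5 years
life_cool = insulation_life_years(105, 40)    # 10C cooler: 30 years
```

The same function covers the other classes: for a Class F machine (155ºC), the allowable rise is 100ºC, and running it 10ºC above that likewise halves its life.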
There are other ambient factors that contribute to the deterioration of insulators: humidity, vibration, acidity, oxidation and aging. As insulators are subjected to the above conditions, they will slowly begin to crystallize and become brittle and hard, until even minor mechanical vibrations will break the insulation and cause failures. Proper care of insulators, therefore, is essential: they must be kept dust-free and cleaned regularly.
Finally, note that insulators cannot withstand higher and higher voltages indefinitely. There is a certain critical voltage for any insulator where it suddenly loses its insulating properties and breaks down. The breakdown voltage is a function of the material thickness and type of material.
The ratio of the breakdown voltage to material thickness is called the dielectric strength. Another important parameter for insulators
is the basic impulse level (BIL). This signifies the highest standard-shape impulse voltage an insulator can withstand. The standard impulse used to determine BIL rises to its maximum value in 1.2 microseconds and then decays to 50 percent of that value in 50 microseconds (the 1.2/50 wave). The BIL rating is normally several times higher than the maximum voltage that an insulator is subjected to in an electrical system.
Insulation Value - The insulation value of an insulator is measured in ohms. There are several tests for this purpose. The simplest is the short-time insulation resistance measurement, where the resistance of the insulator is measured with a megohmmeter and compared with previous records. The resistance of a good insulator must be above 1,000 ohms per volt. This means for a 120V system, the resistance of the insulator must be at least 120 kilo-ohms.
The other two tests are the dielectric absorption test and the polarization index. The dielectric absorption test consists of connecting a megohmmeter to the insulator and recording the resistance every minute for 30 minutes. If the resistance continually increases during the test, the insulator is in good shape. If the resistance flattens out early in the test, however, the insulator probably needs cleaning or drying. The polarization index is the ratio of the 10-minute to the one-minute insulation resistance. A good, clean insulator will have a ratio of four or more. If the ratio is about 1.5, the insulator usually needs to be reconditioned. An index of less than one indicates the development of carbonized paths through and around a dirty insulator.
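These two rules of thumb are easy to capture in code. The resistance readings below are illustrative assumptions, not field data:

```python
def min_insulation_resistance_ohms(system_volts):
    # Rule of thumb from the text: at least 1,000 ohms per volt
    return 1000.0 * system_volts

def polarization_index(r_10min_ohms, r_1min_ohms):
    # Ratio of the 10-minute to the 1-minute insulation resistance readings
    return r_10min_ohms / r_1min_ohms

needed = min_insulation_resistance_ohms(120)   # 120,000 ohms minimum for a 120V system
pi = polarization_index(2.0e9, 0.4e9)          # 5.0: four or more indicates a clean insulator
```

A reading near 1.5 would call for reconditioning; below 1.0, suspect carbonized leakage paths.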
Conductors are used to transmit electricity from point A to point B. Electricity flows through the conductive material via the movement of billions of electrons.
Just like insulators, conductors can be solids, liquids or even gases. Conductance is measured in mhos or siemens and is the reciprocal of resistance. For example, if the resistance of a material is five ohms, the conductance is 1/5 or 0.2. Ideally, a conductor has a resistance of zero, which means a conductance of infinity. By the same token, an ideal insulator has zero conductance, or infinite resistance. In reality, however, no substance has exactly zero or infinite resistance.
Materials - Good conducting materials include copper, aluminum, silver and gold. Because of its relatively low cost and availability, copper is the conductor of choice for most applications, followed by aluminum. Similarly, the best insulators are glass, mica, rubber and many plastic compounds.
Resistance - The resistance of a material is a function of three factors: the material type, the length of the material and the cross-sectional area of the material. The resistance is directly related to the length and inversely related to the cross-sectional area.
This means doubling the length of a conductor will double the resistance, and doubling the cross-sectional area will cut the resistance in half. That is why utility transmission wires suspended from high-tension towers are so thick and carry electricity at very high voltages: the electricity must travel long distances. Although facility managers do not deal with transmission lines, the same basic principles apply to wiring a building.
For instance, the resistance of No. 16 copper wire, which has an area of 2,583 circular mils, is about 4 ohms per 1,000 ft. So for a wire 2,000 ft long, the resistance will be 8 ohms. Likewise, the resistance of No. 19 wire, which has half the cross-sectional area of No. 16, is 8 ohms per 1,000 ft (see Chapter 5 for more on wiring and cabling).
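A minimal sketch of the R = ρL/A relationship behind these figures. The copper resistivity used here (about 10.37 ohm-circular-mils per foot at 20ºC) is a standard handbook value, an assumption not given in the text:

```python
RHO_COPPER = 10.37  # ohm-circular-mil per foot at 20C (handbook value, assumed)

def resistance_ohms(rho_ohm_cmil_per_ft, length_ft, area_cmil):
    # R = rho * L / A
    return rho_ohm_cmil_per_ft * length_ft / area_cmil

r_1000 = resistance_ohms(RHO_COPPER, 1000, 2583)       # ~4 ohms per 1,000 ft
r_2000 = resistance_ohms(RHO_COPPER, 2000, 2583)       # doubling length doubles resistance
r_half = resistance_ohms(RHO_COPPER, 1000, 2583 / 2)   # halving area doubles resistance
```

Both scaling rules in the paragraph above fall directly out of the formula.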
Temperature. In addition to the above, the resistance of a material is also a function of the temperature. For a conductor, the resistance is directly related to temperature. So as the temperature increases, the resistance of a conductor will increase, and if the temperature decreases, the resistance will drop.
By contrast, the opposite is true of an insulator. This means if the temperature of an insulator is increased, its resistance value will drop. For both conductors and insulators, the rate of change in resistance is a function of the temperature coefficient of the specific material.
Electrical components can be arranged in a circuit in one of two ways: series and parallel.
In a series circuit, the current is the same at all points in the system (see Figure A-2). To get the total value of resistance in a series circuit, add up all the individual resistance values of the
Resistance And Temperature
The resistance inherent in a conductive material is a function of its length, diameter and temperature. In regards to temperature, a material is rated with a coefficient of temperature rise to help electrical engineers choose the best material for the job.
For example, the coefficient of temperature rise for copper is 0.00393. Assume that the resistance of a particular copper conductor is one ohm at 20ºC and the temperature is raised to 50ºC. The increase in resistance of the copper conductor will be equal to 1 × (50 - 20) × 0.00393 = 0.1179 ohms, for a new total of about 1.12 ohms.
The coefficients of typical conductive materials are approximately: silver, 0.0038; copper, 0.00393; aluminum, 0.0039; and tungsten, 0.0045 (all per ºC).
Let us calculate the temperature rise for a 100W, 120V incandescent lamp when it is turned on. The resistance of the lamp at a room temperature of 20ºC is 11 ohms. When the lamp is turned on, it draws 0.83 amperes. So the hot resistance and temperature of the tungsten filament are:
Rt = 120 ÷ 0.83 = 145 ohms
11 = Ro (1 + a × t) = Ro (1 + 0.0045 × 20) = Ro (1.09)
Ro = 11 ÷ 1.09 = 10.1
Rt = Ro (1 + a × T)
145 = 10.1 (1 + 0.0045T)
T = 2968ºC
This example illustrates the high temperature an incandescent filament reaches before it glows with visible light.
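The filament calculation can be reproduced step by step; carrying the unrounded intermediate values lands near the text's rounded 2,968ºC result:

```python
ALPHA_TUNGSTEN = 0.0045  # temperature coefficient per degree C, as used in the text

r_hot = 120 / 0.83                      # hot resistance from V / I, ~145 ohms
r_0 = 11 / (1 + ALPHA_TUNGSTEN * 20)    # 11-ohm reading at 20C referred back to 0C, ~10.1 ohms
# Solve Rt = R0 (1 + a * T) for T:
t_filament_c = (r_hot / r_0 - 1) / ALPHA_TUNGSTEN   # roughly 2,960C
```

The small difference from the text's figure comes only from rounding 145 and 10.1 before the final division.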
resistors in the circuit.
For instance, if there are three resistors with the individual values of 4, 5 and 20 ohms, the total resistance is equal to 4 + 5 + 20 = 29 ohms. As one would expect, if additional elements are added in a series circuit, the total circuit resistance will increase, which will in turn reduce the current in the circuit. This allows us to control the level of current.
The sum of the voltage drops around a series circuit is equal to the applied voltage. Since there is only one path for the flow of current, if any element is removed from the circuit, the current will be interrupted for the entire circuit. An example of this is when one fluorescent lamp operated by a magnetic ballast goes out: the other lamp will either glow dimly or also extinguish. Other common examples of series circuits are Christmas ornamental lights and, in some cases, street lights.
A parallel circuit is the most widely utilized circuit in electrical distribution systems (see Figure A-3). To get the total resistance in a parallel circuit, add the reciprocals of the individual resistances (the conductances), then take the reciprocal of that sum.
With a parallel circuit, there are multiple current paths. So if an individual element is removed from the circuit, it will not affect the remaining elements in the circuit. Many fluorescent lamps are now operated in parallel; if one lamp goes out, the others continue to light normally.
Unlike a series circuit, the total resistance of resistors in parallel is less than the smallest resistor in the circuit. In addition, if more elements are added to a parallel circuit, the total circuit resistance will drop further. The connection of home appliances to the power system is a typical example of a parallel circuit.
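The series and parallel rules can be sketched as follows, using the 4-, 5- and 20-ohm resistor values from the examples in this appendix:

```python
def series_resistance(resistors):
    # Series: total resistance is the simple sum of the individual values
    return sum(resistors)

def parallel_resistance(resistors):
    # Parallel: sum the conductances (reciprocals), then invert the sum
    return 1.0 / sum(1.0 / r for r in resistors)

r_series = series_resistance([4, 5, 20])      # 29 ohms
r_parallel = parallel_resistance([4, 5, 20])  # 2 ohms, less than the smallest resistor
```

Note that the parallel result is always smaller than the smallest branch, while the series result always grows as elements are added, exactly as described above.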
There are three basic laws that need to be understood when dealing with electrical systems. These are Ohm's Law, Kirchhoff's Current Law and Kirchhoff's Voltage Law.
Ohm's Law - This Law represents the relationship between voltage, current and resistance in an electrical circuit. The principle
Resistance In Parallel Circuits
Suppose we have a parallel circuit with three resistors of 4, 5 and 20 ohms respectively. The total conductance and resistance are:
Conductance = (1 ÷ R1) + (1 ÷ R2) + (1 ÷ R3)
Conductance = (1 ÷ 4) + (1 ÷ 5) + (1 ÷ 20) = 1/2
Resistance = 1 ÷ (1/2) = 2 ohms
behind this Law is that if we know any two of these three parameters, we can determine the third.
Ohm's Law, therefore, can be stated in three ways:
E = I × R
I = E ÷ R
R = E ÷ I
E = EMF in volts
I = Current in amperes
R = Resistance in ohms
Another way of looking at this, then, is that the current in an electrical circuit is directly proportional to voltage and inversely proportional to resistance. This means if we are trying to increase current in a circuit, it can be accomplished by increasing voltage, reducing resistance or both.
Suppose we have a light bulb with a resistance of 50 ohms. If we connect it to a 120V circuit, the current will equal 120V ÷ 50 ohms = 2.4 amperes. Now if the bulb were connected to a 240V circuit, the current would double to 4.8 amperes.
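The light bulb example reduces to a one-line application of Ohm's Law:

```python
def current_amps(volts, ohms):
    # Ohm's Law: I = E / R
    return volts / ohms

i_120 = current_amps(120, 50)   # 2.4 amperes
i_240 = current_amps(240, 50)   # 4.8 amperes: doubling voltage doubles current
```

Rearranging the same relation gives the other two forms: E = I × R and R = E ÷ I.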
Kirchhoff's Current Law - This Law deals with the conservation of electrical current in a circuit. It states that the total amount of current entering a junction is equal to the total amount of current leaving the junction.
This means if one takes any portion of an electrical system and draws a closed boundary around it, the current coming into the area equals the current leaving it. Although this Law sounds intuitively obvious, it plays a fundamental role in solving most power network problems.
Kirchhoff's Voltage Law - This Law states that in any closed electrical circuit, the sum of the voltage rises equals the sum of the voltage drops. This Law can be viewed as a form of voltage conservation: the net sum of all voltage rises and drops around any closed loop is zero.
Schematic of a series circuit.
Electrical systems are either direct current (DC) or alternating current (AC). They can also be single phase or three phase.
With a DC power source, the voltage level and polarity remain constant (see Figure A-4). Similarly, the current level is constant and always flows in one direction. Normally, DC systems are associated with battery systems. Although it was the power system of choice in Edison's time, today DC power is mostly used for special applications such as trains and plays a minor role in electrical distribution systems. However, since DC systems are simpler to analyze, they make a good introductory subject for studying power systems.
AC power is the lifeblood of most typical electrical systems. As the name suggests, AC power continuously oscillates with time. Ideally, the shape of an AC system is a sine wave (see Figure A-5).
Schematic of a parallel circuit.
We can see from the sine wave curve that over 360 degrees, or a full rotation, the wave starts from zero, reaches a positive peak at the quarter point of the cycle, and declines to zero by the middle of the cycle. It then changes direction, reaching a negative peak at the three-quarter point of the cycle, and returns to zero. The oscillation repeats indefinitely.
So with AC systems, there are a number of different ways a parameter can be measured. For instance, with voltage one can have instantaneous voltage, peak voltage, average voltage and root mean square (RMS) voltage. Since we defined the first two earlier, we will define the average and RMS voltage here.
The average voltage is found by dividing the area under the sine wave curve by time, which for a half cycle is equal to about 63.7 percent of the peak voltage.
The RMS value is harder to visualize, yet it is the most important parameter. The RMS value of an AC voltage is the DC voltage that would deliver equivalent power. It is equal to 70.7 percent of the peak voltage. For AC systems, unless otherwise stated, the RMS value is the voltage that we normally work with.
Example: for a 120V system, the peak voltage the system
Schematic representation of DC voltage.
experiences is equal to 120 ÷ 0.707 = 170 volts. It should also be mentioned that this relationship is valid only when the wave shape is perfectly sinusoidal. Otherwise, the factor between the peak and RMS values will be different. Unfortunately, we rarely see a textbook-perfect sinusoidal shape in AC power, which is discussed in more detail in Chapter 5.
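For a pure sine wave, the peak and average values follow directly from the RMS value; the functions below simply encode the 70.7 percent (1/√2) and 63.7 percent (2/π) relationships described above:

```python
import math

def peak_from_rms(v_rms):
    # Valid only for a pure sine wave: Vpeak = Vrms * sqrt(2)
    return v_rms * math.sqrt(2)

def average_from_peak(v_peak):
    # Half-cycle average of a sine wave: (2 / pi), about 63.7% of peak
    return (2 / math.pi) * v_peak

v_peak = peak_from_rms(120)         # ~170 volts for a nominal 120V system
v_avg = average_from_peak(v_peak)   # ~108 volts
```

For distorted (non-sinusoidal) waveforms these fixed ratios no longer hold, which is why a true-RMS meter is needed on such loads.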
Another issue that needs to be addressed with AC systems is the phase difference between the voltage and current phase angles. This means that both current and voltage waves may or may not be in sync with one another. This raises the concept of power factor, which is addressed in Chapter 1.
Single-Phase And Three-Phase Systems
For an electrical circuit to be complete, one needs two wires to establish the current path. In other words, when there is a potential difference between two terminals, if they are connected with a conductor that includes one hot wire and one neutral wire, the current will flow in the circuit. This is the most basic and fundamental electrical circuit, which is referred to as a single-phase system.
Most AC distribution systems and equipment require three-phase
Schematic representation of AC voltage.
power, however. This implies that the power system will have three hot wires and possibly a neutral wire. Before we discuss how three-phase power is generated, let us examine the underlying reason why three-phase power is used in the first place. The three-phase power system has a number of advantages over single-phase:
1. Three-phase electrical devices are cheaper, smaller, and more efficient than their single-phase counterparts.
2. Three-phase systems, although using more wire, require less total copper for transmitting the same amount of electricity compared to single-phase systems.
3. The voltage regulation of three-phase systems is inherently better.
4. With three-phase motors, the direction of rotation can easily be altered by interchanging any of the two phases. Altering direction of rotation in single-phase motors is more complicated.
5. Three-phase motors are self-starting devices, while single-phase motors need a separate starting circuit.
AC Voltage And Current
An AC voltage is expressed as V = Vm sin(ω × t).
An AC current is expressed as I = Im sin(ω × t).
V, I = the instantaneous voltage and current
Vm, Im = the peak values of the voltage and current
ω × t = the angular frequency (2π × frequency) multiplied by time
6. Single-phase motors inherently generate a pulsating torque with a net zero average value in addition to the main positive torque, which results in motor vibration. Three-phase motors have no such torque, so they run smoother. This can affect the life expectancy of the motor and its coupling with the load.
So why not convert every system to three-phase? The answer lies in the complexity of the three-phase system versus its benefits for small loads. So for most equipment in small commercial and residential systems, single-phase power is used, while for larger equipment and distribution systems, three-phase is more prevalent.
Let us examine how three-phase power is established in a wye or delta formation.
Three-Phase Wye System - Assume we have three single-phase voltage sources where the phase angle difference between each pair of voltages is 120 degrees, and each voltage source is connected to an identical load. The current in each circuit will then be the same as in the other two. Now let us examine interconnecting these three single-phase circuits in two different ways. The first approach is to connect the three neutral wires together. Looking at the phasor diagram of the three currents (equal in magnitude but 120 degrees out of phase with each other), it becomes clear that the three currents in the neutral circuit always cancel each other, so the resultant neutral current is zero. Since the current in that conductor is zero, its presence or absence in the circuit does not matter. So by connecting one terminal of each of the three devices together, the need for the neutral wire is eliminated and the power to the three equal single-phase loads can be delivered with just three wires. This arrangement is
called the wye connection three-phase system (see Figure A-6). As demonstrated above, the amount of wire needed to transmit the same amount of energy in a balanced three-phase system is half that needed for three separate single-phase systems.
Three-Phase Delta System - We can connect the same three single-phase circuits in a different way. Connect the first terminal of load A to the second terminal of load B, the first terminal of B to the second terminal of C, and finally the first terminal of C to the second terminal of A. Now examine the net voltage within this closed loop. Looking at the phasor diagram of the three voltages, it becomes clear that their summation is zero at all times, which implies that in a balanced system there will be no circulating current. Looking at the wires of the three circuits, it becomes obvious that each pair of wires runs in parallel to the loads and can be merged, so only three wires need to be extended to the circuit. Again, the amount of conductor needed to serve the three loads is cut in half. This arrangement is called a delta system (see Figure A-7).
Comparison With Single-Phase Systems - Let us examine the relationship between these systems and their equivalent single-phase systems (see Figure A-8). For a wye system, the currents in the single-phase and three-phase systems are the same. The voltage between any two phases is equal to the phase-to-neutral voltage multiplied by the square root of three (1.732). So, Vab = Vbc = Vca = 1.732 × Van
For a delta system, the opposite is true. The single-phase and three-phase voltage levels will be identical for a delta system. Similarly, the three-phase current is 1.732 times the single-phase current. Based on this, the power for a three-phase system is equal to: P = 1.732 × V × I.
The above relationship is true both for delta and wye connections. This means a wye system has higher voltage and lower current as compared to delta systems for the same power requirements.
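These relationships can be sketched as follows; the 480V, 100A, 0.9 power factor feeder used as the example is an illustrative assumption, not a figure from the text:

```python
import math

def three_phase_power_watts(line_volts, line_amps, power_factor=1.0):
    # P = 1.732 x V x I x PF, valid for both wye and delta (using line quantities)
    return math.sqrt(3) * line_volts * line_amps * power_factor

def wye_line_voltage(phase_to_neutral_volts):
    # In a wye, line-to-line voltage is 1.732 x the phase-to-neutral voltage
    return math.sqrt(3) * phase_to_neutral_volts

p = three_phase_power_watts(480, 100, 0.9)   # ~74,800 watts
v_line = wye_line_voltage(277)               # ~480 volts line-to-line
```

The second function shows why a common 480V wye service delivers 277V from any phase to neutral.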
The final topic that needs to be addressed is what happens if the individual loads of the three-phases are not equal. For a delta system, an unbalanced load will result in circulating current around the delta circuit. For a wye system, when the load is unbalanced, a neutral
Three-phase wye connection system.
wire is needed to carry the resulting unbalanced current. A delta connection is a floating system which means there is no neutral connection.
By contrast, a four-wire wye system has a neutral connection and can accommodate both single-phase and three-phase loads (see Figure A-9).
Electricity and Magnetism
In 1831, Michael Faraday discovered that if a conductor is moved through a magnetic field, an electrical potential is induced. Moreover, as he pursued his experiments, he discovered some fundamental relationships between the voltage and the magnetic flux. Magnetism is the main link in our ability to convert mechanical energy to electrical energy and vice versa. This is why any study of electrical systems must include a discussion of the properties of magnetic materials. In fact, many magnetic characteristics are analogous to their electric counterparts, which facilitates and simplifies gaining a fundamental understanding of electromagnetism.
Three-phase delta connection system.
Similar to electrical systems, magnets have two poles: north and south. The designation is derived from the fact that a magnet will try to align itself along the earth's own magnetic poles. This brings to mind the high school science experiment showing that like poles of magnets repel each other, while unlike poles attract ("opposites attract").
The strength of the poles is represented by the magnetomotive force (MMF, which is analogous to EMF), measured in ampere-turns. The magnet poles generate a magnetic field between each other. This field can be visualized as imaginary smooth curves that originate from the north pole and terminate at the south pole. The flux is analogous to electrical current and is measured in webers.
A weber is the amount of magnetic flux that, linking a circuit of one turn, induces an EMF of one volt as the flux is reduced uniformly to zero in one second. The flux density is measured in teslas, where one tesla is equal to one weber per square meter.
The magnetic properties of a material indicate how easily magnetic flux can travel through it. This is quantified as the reluctance of the material, which is a measure of the opposition a magnetic field experiences as it travels through the medium.
Phase differences in a three-phase system.
Similarly, permeability is a measure of how easily a magnetic flux can pass through a material.
Like resistance, the reluctance of a material is directly proportional to the length of the material and inversely related to the cross-sectional area. Moreover, the three basic laws of electric circuits (Ohm's Law, Kirchhoff's Current Law and Kirchhoff's Voltage Law) have direct analogs in magnetic circuits.
There is one difference between electrical and magnetic circuits, however. In almost all electrical systems (except capacitors), when the electrical power is turned off, no energy remains stored in the system. By contrast, in a magnetic circuit, even after the source of flux is removed, some level of magnetism will remain in the material. This phenomenon is called hysteresis. Some materials have a high rate of retention of their magnetic properties, which is helpful for creating permanent magnets. On the other hand, for most electrical devices, since the polarity changes continuously with AC power, high retention translates into high losses.
An important factor in the efficiency of electrical equipment, therefore, rests with the magnetic properties of the magnetic cores in the equipment. Electric motors and transformers are examples of electrical equipment with magnetic cores. As a general principle, high permeability and low hysteresis result in higher efficiency.

[Figure: Four-wire wye system.]
Now let us look at the relationship between electricity and magnetism. The relationship is best articulated in three laws: the Right Hand Rule, Faraday's Law and the Lorentz Law.
Right Hand Rule
This law determines the relationship between the direction of an electrical current and the polarity of the magnetic flux it produces.

The Law states that if one grasps an electrical wire with the right hand so that the thumb points in the direction of the current, the other fingers will curl in the direction of the magnetic field.
Faraday's Law
This Law determines the magnitude of the voltage induced in a conductor as a function of the strength of the magnetic field.
When a conductor crosses a magnetic field, the voltage generated is directly proportional to the strength of the field, the number of turns in the conductor, and the speed at which the field is crossed.
The induced voltage is also a function of the angle at which the conductor crosses the field. The maximum voltage is obtained when the field and the direction of conductor motion are perpendicular to each other.
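The proportionalities above can be sketched as a single expression, EMF = B x l x v x N x sin(angle). The field strength, conductor length, speed and turn count below are hypothetical illustration values:

```python
from math import sin, radians

def induced_emf(b_tesla: float, length_m: float, speed_m_s: float,
                turns: int, angle_deg: float) -> float:
    """EMF of a conductor crossing a field: proportional to field strength,
    conductor length, speed, number of turns, and the sine of the crossing angle."""
    return b_tesla * length_m * speed_m_s * turns * sin(radians(angle_deg))

# Maximum EMF when crossing at 90 degrees; zero when moving parallel to the field.
print(induced_emf(1.2, 0.3, 5.0, 10, 90))  # approximately 18 V
print(induced_emf(1.2, 0.3, 5.0, 10, 0))   # approximately 0 V
```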
Lorentz Law

This Law describes the relationship between the current in a conductor in a magnetic field and the force generated.
When a current-carrying conductor is placed in a magnetic field, a mechanical force is generated on the conductor. The force is directly proportional to the current level, the strength of the magnetic field, and the relative orientation of the two magnetic fields (the original field and the field induced by the current). The underlying cause of the force is the tendency of magnetic fields to align with each other. That is why the force is maximum when the relative angle between the fields is 90 degrees.
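This relationship can be sketched as F = B x I x l x sin(angle) for a straight conductor of length l. The field strength, current and length below are hypothetical illustration values:

```python
from math import sin, radians

def conductor_force(b_tesla: float, current_a: float,
                    length_m: float, angle_deg: float) -> float:
    """Force on a current-carrying conductor: F = B * I * l * sin(angle),
    maximum when the conductor and field are perpendicular (90 degrees)."""
    return b_tesla * current_a * length_m * sin(radians(angle_deg))

# Hypothetical case: 1.5 T field, 20 A current, 0.25 m conductor, perpendicular.
print(conductor_force(1.5, 20, 0.25, 90))  # 7.5 N
```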
Bibliography of Sources
Ahlstrom, J., "Wiring Methods Promise Savings in The Long Run," Electrical Contractor, October 1993.
Alfiere, J.P., "Critical Building Electronic Systems: Ensuring Power Quality And Avoiding Breakdowns," Maintenance Solutions, September/October 1993.
American National Standards Institute, Surge Arrestors for Alternating Current Power Circuits, ANSI C62.1, 1975.
Baltes, R. et al, "Clean Power Factor Communication," Electrical Contractor, July 1993.
Beck, B., "The Emerging Market for Electric Energy," Building Operating Management, October 1994.
Beeman, D. Industrial Systems Power Handbook. New York: McGraw-Hill, 1955.
Berutti, A. and Waggoner, R.M., Quality Power for Sensitive Electronic Equipment. Overland Park, Kansas: Intertec Publishing Corporation, 1993.
Bonnett, A., "An Update on AC Induction Motor Efficiency," IEEE Transactions on Industry Applications, Vol. 30, No. 5, September/October 1994.
Bouman, D. and Basista, D., "K-Rated Transformers Put a Damper on Harmonics," Consulting-Specifying Engineer, Mid September 1993.
Croft, T. et al. American Electricians' Handbook. New York: McGraw-Hill, 1970.
DeMarco, A., "Harmonics Wreak Havoc in The Workplace," Facilities Design & Management, January 1995.
Douglas, C. and Munger, E. Construction Management. Englewood Cliffs, New Jersey: Prentice-Hall, 1969.
Dozier, M., "An Rx for Hospital Harmonics," Consulting-Specifying Engineer, April 1993.
Eaton, J.R. Electric Power Transmission Systems. Englewood Cliffs, New Jersey: Prentice-Hall Inc., 1972.
Edelston, B., "Bulk Power Sales And Retail Wheeling: An Industry in Transition," Presentation to The National Conference on Transmission Access, Wheeling & Deregulation of America's Utilities, Arlington, Virginia, May 24, 1994.
Electric Power Research Institute, Study of Distribution System Surge Harmonics Characteristics, EPRI EL-1627, RP 1024-1, McGraw-Edison Company, November 1986.
Elgerd, O.I. Basic Electric Power Engineering. Reading, Massachusetts: Addison-Wesley, 1977.
Engelmann, R.H. Static and Rotating Electromagnetic Devices. New York: Marcel Dekker, 1982.
Griffith, D.C., "Harmonics in Power Distribution Systems," AIPE Facilities, July/August 1993.
Gruzs, T.M., "The How's And Why's of Isolated Grounding," Power Quality Assurance, March/April 1995.
Hoevenaars, T. and Meri, J., "Current Electrical Issues in Facility Management," IFMA Conference Proceedings, 15th Annual Conference & Exposition on Facility Management, St. Louis, November 6-9, 1994.
Hoevenaars, T. and Meri, J., "Harmonics: the Electrical Facility Management Issue of the 1990s," FM Journal, May/June 1994.
Institute of Electrical and Electronics Engineers, Grounding of Industrial And Commercial Power Systems, IEEE Standard 142-1982.
Kalbach, J.F., "Electrical Environment for Computers," Conference Record of The Industrial And Commercial Power System Technical Conference, St. Louis, MO, May 1981.
Kessler, P., "Harmonic Currents," Today's Facility Manager, October 1994.
Knable, A. H. Electric Power Systems Engineering Problems And Solutions. New York: McGraw-Hill, 1967.
Lazer, I. Electrical Systems Analysis And Design for Industrial Plants. New York: McGraw-Hill, 1980.
Loorya, J., "Learn How to Perform a Power Quality Survey," Electrical Contractor, January 1994.
Matteson, G., "Electrical Energy Service Options: Deregulation Presents New Opportunities," Business Officer, June 1995.
McPartland, J.F. Handbook of Practical Electrical Design. New York: McGraw-Hill, 1984.
Micheals, K.M., "Effective Grounding of Electrical Systems," EC&M, April 1994.
Nelson, J., "Impacting Bottom Line by Purchasing Utilities Effectively," FM Journal, March/April 1994.
Nelson, N., "The On-Site Power Generation System," AIPE Facilities, March/ April 1993.
Neuenswander, J. Modern Power Systems. Scranton, Pennsylvania: International Textbook, 1971.
Pansini, A.J. Power Transmission And Distribution. Lilburn, Georgia: The Fairmont Press, 1991.
Qayoumi, M.H., "Demand Side Management," Facilities Manager, Vol. 8, No. 4, Fall 1992.
Qayoumi, M.H. Electrical Distribution & Maintenance. Alexandria, VA: Association of Physical Plant Administrators for Colleges And Universities, 1989.
Qayoumi, M. H., "Solid State Electronics," Facilities Manager, Vol. 3, No. 3, Fall 1987.
Qayoumi, M.H., "The Cogeneration Alternative: Feasibility And Factors," Facilities Manager, Vol. 3, No. 2, Summer 1987.
Qayoumi, M.H., "High Voltage Cables," Facilities Manager, Vol. 3, No. 1, Spring 1987.
Qayoumi, M.H., "Clean Power: A Case of Avoiding 'Power Corruption'," Facilities Manager, Vol. 2, No. 4, Winter 1986.
Qayoumi, M.H., "Variable Frequency Drives," Proceedings of 73rd APPA Annual Meeting, Boston, Massachusetts, July 1986.
Qayoumi, M.H., "Premises Wiring: The Architecture of Communication," Facilities Design & Management, July 1995.
Qayoumi, M.H., "Standby Power Systems," Maintenance Solutions, May 1995.
Qayoumi, M.H., "Motors and Maintenance: Enhancing Energy Efficiency," Maintenance Solutions, September 1995.
Qayoumi, M.H., "Power Quality Testing," Maintenance Solutions, February 1996.
Reason, J., "Nine Ways to Break a Current," Power, May 1980.
Reinbach, A., "Wire/Cable Distribution: More Than Meets The Eye," Buildings, November 1993.
Smith, G.W. Engineering Economy. Ames, Iowa: Iowa State University Press, 1987.
Stallcup, J.G. Designing Electrical Systems. Homewood, Illinois: American Technical Publishers Inc., 1992.
Stein, R., and Hunt, W. Jr. Electric Power System Components. New York: Van Nostrand Reinhold Co., 1979.
Stringfellow, M.F., "Don't Be Deceived by Grounding Myths," Electrical Contractor, November 1993.
Viemerster, P. The Lightning Book. New York: Doubleday, 1961.
Waller, M., "Power Quality: A Growing Priority for Plant Engineers," AIPE Facilities, May/June 1993.
Willis, E.M. Scheduling Construction Projects. New York: John Wiley & Sons, 1986.
Wiswell, P., "Surge Suppressors," PC Magazine, May 27, 1986.
Woodall, L.M., "Is Your Power Dirty?", Electrical Contractor, November 1992.
Zanis, L., "Cogeneration: A Viable Option for Commercial Buildings," Building Operating Management, April 1992.
Alternating current (AC), 37, 39, 230-232
Apparent power, 9
fault-tree analysis, 18
network method, 17
simulation method, 18
Batteries, 181-182
Cable tray, 127
termination and splicing, 115-116
Capacity, 1-2, 3
air magnetic, 79-81
sulfur hexafluoride, 83-85
Circuit-interruption mechanism, 77
optical fiber, 122
twisted pair, 121-122
inter-building, 123, 128-130
intra-building, 123-127, 130
transmission path establishment, 119-120
Conduit systems, 107-110
cost-plus contracts, 31
lump-sum contracts, 30-31
soliciting bids, 31
Coordination of protective devices, 171-172
Cost savings, 5, 33
Critical path method, 30
role of facility manager, 24-25
sizing circuits for motors, 24-25
sizing, generally, 27
sizing wiring for lighting systems, 27
sizing wiring for other resistive circuits, 27
Demand charge, 194-195
Demand side management (DSM), 203-204
Direct current (DC), 37, 39, 230
Dirty power, 151-152
Electric motors, 32
Electrical failures, 172-180
Electrical noise, 145-146
electrical properties of material, 222-226
units of measurement, 218-222
Emergency power (see Standby Power)
Energy savings, 5, 33
Fuel adjustment charge, 196
Fuses, 162-167, 189
DC generators, 39, 42-44
fossil-fuel based, 38
operating characteristics, 40-41
harmonic distortion, 132
measurement, 133, 148-150
methods of reducing harmonic effect, 138-143
total harmonic distortion (THD), 133, 152
High potential test, 117
High-voltage conductors, 110-118
High-voltage interrupters, 77-85
Independent power producers, 206
current transformers, 94-95
harmonic measurement, 133, 148-150
instrument transformers, 92-93
potential transformers, 93-94
network systems, 63-64
primary looped systems, 62-62
primary selective systems, 61-62
radial systems, 58-59
secondary selective systems, 60-61
system topology, 57-64
Insulation value, 225
Internal rate of return (IRR), 35
Kirchhoff's Current Law, 229
Kirchhoff's Voltage Law, 229
Life cycle costing, 33-35
Low-voltage circuit interrupters, 86
generally, 16, 19-23
preventive (PM), 16, 19
Management aspects, 48-53
Mean time between failures (MTBF), 14-15, 18
National Electrical Code (NEC), 3
National Energy Policy Act (EPACT), 207
Needs assessment, 29
Net present value (NPV), 34-35
Ohm's Law, 228-229
Parallel circuits, 228
Parallel system, 13
Peak-shaving, 45, 214
Polarity test, 97
disturbances, 143-148, 150
factor charge, 195-195
quality, 131-155
Project management, 28-35
Raised floor distribution, 125
Ratchet clause, 195
Rate structures, 193-216
Real power, 9