
Electrical Testing & Measurement Handbook
Volume 7

Electrical Testing and Measurement Handbook Vol. 7
Published by The Electricity Forum

The Electricity Forum Inc.


One Franklin Square, Suite 402
Geneva, New York 14456
Tel: (315) 789-8323 Fax: (315) 789-8940
E-mail: forum@capital.net

The Electricity Forum


215 - 1885 Clements Road
Pickering, Ontario L1W 3V4
Tel: (905) 686-1040 Fax: (905) 686-1078
E-mail: hq@electricityforum.com

Visit our website at www.electricityforum.com

ELECTRICAL TESTING
AND
MEASUREMENT HANDBOOK
VOLUME 7
Randolph W. Hurst
Publisher & Executive Editor
Khaled Nigim
Editor
Cover Design
Don Horne
Layout
Ann Dunbar
Handbook Sales
Lisa Kassmann
Advertising Sales
Carol Gardner
Tammy Williams

Printed in Canada

The Electricity Forum


A Division of the Hurst Communications Group Inc.
All rights reserved. No part of this book may be reproduced without
the written permission of the publisher.
ISBN 978-0-9782763-2-4
The Electricity Forum
215 - 1885 Clements Road, Pickering, ON L1W 3V4

© The Electricity Forum 2007

Electrical Testing and Measurement Handbook Vol. 7

TABLE OF CONTENTS
ELECTRICAL MEASUREMENT AND TESTING, CONTACT-LESS SENSING AND THE AUTO-DETECT INFRASTRUCTURE
Foreword - Khaled Nigim . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .5
DON'T RISK IT: USE CORRECT ELECTRICAL MEASUREMENT TOOLS AND PROCEDURES TO MINIMIZE RISK AND LIABILITY
Larry Eccleston . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7
ISOLATION TECHNOLOGIES FOR RELIABLE INDUSTRIAL MEASUREMENTS
National Instruments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .11
RESISTANCE MEASUREMENTS, THREE- AND FOUR-POINT METHOD
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .15
CLAMP-ON GROUND RESISTANCE TESTER, MODELS 3711 & 3731 STEP-BY-STEP USAGE
Chauvin Arnoux, Inc. and AEMC Instruments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .21
MEASURING MAGNETIC FIELDS, ELECTRIC AND MAGNETIC FIELDS
Australian Radiation Protection and Nuclear Safety Agency . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .23
ELECTRIC AND MAGNETIC FIELDS, MEASUREMENTS AND POSSIBLE EFFECT ON HUMAN HEALTH,
WHAT WE KNOW AND WHAT WE DON'T KNOW IN 2000
California Department of Health Services and the Public Health Institute
California Electric and Magnetic Fields Program . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .25
A NEW APPROACH TO QUICK, ACCURATE, AFFORDABLE FLOATING MEASUREMENTS
Tektronix IsolatedChannel Technology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .31
HIGH-VOLTAGE MEASUREMENTS AND ISOLATION - GENERAL ANALOG CONCEPTS
NI Analog Resource Center. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .35
STANDARD MEASUREMENTS: ELECTRIC FIELDS DUE TO HIGH VOLTAGE EQUIPMENT
Ralf Müller and Hans-Joachim Förster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .39
IDENTIFICATION OF CLOSED LOOP SYSTEMS
NI Analog Resource Center . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .43
SELECTING AND USING TRANSDUCERS FOR TRANSFORMERS FOR ELECTRICAL MEASUREMENTS
William D. Walden . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .45
HOW TO TROUBLESHOOT LIKE AN EXPERT, A SYSTEMATIC APPROACH
Warren Rhude, Simutech Multimedia Inc. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .53
ELECTRICAL INDUSTRIAL TROUBLESHOOTING
Larry Bush . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .55
THE ART OF MEASURING, LOW RESISTANCE
Tee Sheffer and Paul Lantz, Signametrics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .59
STANDARDS FOR SUPERCONDUCTOR AND MAGNETIC MEASUREMENTS
National Institute of Standards and Technology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .63
MULTI CHANNEL CURRENT TRANSDUCER SYSTEMS
DANFYSIK . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .67
FALL-OF-POTENTIAL GROUND TESTING, CLAMP-ON GROUND TESTING COMPARISON
Chauvin Arnoux, Inc.. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .69
AN INTRODUCTION TO ANTENNA TEST RANGES, MEASUREMENTS AND INSTRUMENTATION
Jeffrey A. Fordham, Microwave Instrumentation Technologies, LLC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .71


DERIVING MODEL PARAMETERS FROM FIELD TEST MEASUREMENTS


J.W. Feltes, S. Orero, B. Fardanesh, E. Uzunovic, S. Zelingher, N. Abi-Samra . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .79
TESTING ELECTRIC STREETLIGHT COMPONENTS WITH LABVIEW-CONTROLLED
VIRTUAL INSTRUMENTATION
Ahmad Sultan, Computer Solutions, Inc. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .85
ASSET MANAGEMENT, THE PATH TO MAINTENANCE EXCELLENCE
Mike Sondalini, Feed Forward UP-TIME Publications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .87
THINK SYNCHRONIZATION FIRST TO OPTIMIZE AUTOMATED TEST
ni.com . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .89
USING NATIONAL INSTRUMENTS SYSTEM IDENTIFICATION, CONTROL DESIGN AND SIMULATION PRODUCTS
FOR DESIGNING AND TESTING A CONTROLLER FOR AN UNIDENTIFIED SYSTEM
ni.com . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .93
MAGNETO-MECHANICAL MEASUREMENTS FOR HIGH CURRENT APPLICATIONS
Jack Ekin, NIST- Electromagnetic Division . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .101
A BASIC GUIDE TO THERMOGRAPHY
Land Instruments International Infrared Temperature Measurement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .105


ELECTRICAL MEASUREMENT AND TESTING, CONTACT-LESS SENSING AND THE AUTO-DETECT INFRASTRUCTURE
Foreword by Khaled Nigim
Maintaining a highly functional electric system depends on the operational and maintenance level of the integrated components that work together to serve the customer. An effective preventive maintenance setup depends on the reliability of the sensing devices and relaying instrumentation, as well as on the operator's understanding of the process functionality.
Early measuring devices were based on electromechanical indicating instrumentation. Their stand-alone operation required around-the-clock operator attention. Such devices were accurate but offered limited adaptability for interfacing with today's centralized centers.
As semiconductor integrated-circuit devices entered the market, many instruments became able to interact with each other, and some can sense and record data from various sensing elements in a sequential manner and generate their own diagnostic reports within a very brief time. Today's sensors are built around a plug-and-play infrastructure based on the IEEE 1451.4 standard, which brings plug-and-play capabilities to the world of transducers. With plug-and-play technology, the operator stores a Transducer Electronic Datasheet (TEDS) directly on a sensor. The sensor identifies itself with all needed information once it is hooked to a data bus. TEDS-compatible measurement systems can auto-detect and automatically configure these smart sensors for measurement, reducing setup time and eliminating the transcription errors that commonly occur during sensor configuration. This enables the operator to focus on overall system operation rather than on individual component operation.
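To make the auto-detect idea concrete, the short Python sketch below mimics a TEDS-style workflow: the sensor carries its own datasheet and the measurement system configures itself from it. The class and field names are invented for illustration and do not reproduce the actual IEEE 1451.4 TEDS templates or binary format.

    # Illustrative sketch of the plug-and-play idea described above. The class
    # and field names are invented; they do not follow the IEEE 1451.4 TEDS
    # templates or binary encoding.
    from dataclasses import dataclass

    @dataclass
    class TEDS:
        manufacturer: str
        model: str
        serial: str
        measurand: str      # e.g. "temperature"
        sensitivity: float  # engineering units per volt
        units: str

    @dataclass
    class SmartSensor:
        teds: TEDS          # the datasheet stored on the sensor itself

    def auto_configure(sensor: SmartSensor) -> dict:
        """Build the channel configuration from the sensor's own datasheet,
        instead of having the operator type it in by hand."""
        t = sensor.teds
        return {
            "channel_name": f"{t.measurand} ({t.model} s/n {t.serial})",
            "scale_factor": t.sensitivity,
            "units": t.units,
        }

    probe = SmartSensor(TEDS("Acme", "T-100", "0042", "temperature", 10.0, "degC"))
    print(auto_configure(probe))  # configuration comes from the sensor: no transcription step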
Furthermore, the technologies of measuring and relaying units and their associated sensing elements have advanced rapidly over the past 20 years. A particular advancement is noted in contact-less measuring sensors and in measured-data handling capability. This progression in the testing and measurement field provides a wider scope of applications and a shorter time for interpreting early failure signals. As an example, infrared imaging techniques are now part of the routine maintenance of distribution transformers. The infrared image indicates the hottest spot and the temperature distribution inside a large distribution transformer without the need to embed sensors. Earlier techniques for measuring temperature were based on collecting data from various temperature sensors entrenched inside the transformer windings. If one or more sensors were faulty, the gathered data would be incomplete and the transformer would have to be taken out of service; replacing the sensors is a time-consuming and costly procedure. Today's data-handling processors, whether they control the data flow from one or more sensors or form part of the human-machine interface supervisory system, can run self-diagnostic routines to alert the operator to any abnormal behavior from the various sensing elements and generate a checklist to help identify any culprits.
This edition of the Electrical Testing and Measurement Handbook introduces the fundamental applications of electrical testing and instrumentation, gives guidelines on correct procedures, and shows how to interpret and diagnose measurement reports so that the operator can maintain a high degree of system functionality with minimum interruption.
This handbook addresses various practical aspects of today's electrical engineering infrastructure through selected articles available for scientific sharing.
The articles are grouped into four sections. Section 1 addresses the basics and fundamentals of electric testing techniques using the various measuring sensors normally incorporated in many of today's measuring instruments. Section 2 addresses safe operation, procedures and handling of instruments. Section 3 introduces various sensing and measuring devices that can be used in a wide range of applications. And finally, Section 4 showcases field applications of instrumentation in various parts of the electrical engineering industry.
The Electricity Forum endeavors to provide correct and timely information for its readers in its handbook series. We welcome readers' suggestions, constructive feedback, and contributions. Please submit technical articles that showcase your experience in testing and measurement tools and systems directly to the handbook editor's desk (HB2007@electricityforum.com).


DON'T RISK IT: USE CORRECT ELECTRICAL MEASUREMENT TOOLS AND PROCEDURES TO MINIMIZE RISK AND LIABILITY
Larry Eccleston, Product Testing Manager, Fluke Corporation, Member, IEC Standards Committee

1. EXECUTIVE SUMMARY

Between five and ten times on any given day, arc flash explosions sufficient to send a burn victim to a special burn center take place in the U.S. These incidents and other less serious electrical accidents result in injury (sometimes death), lost work time, medical costs and insurance claims, downtime; the list goes on. The costs to the victim, the victim's family and the company involved are high. Yet many of these accidents can be prevented. The combination of training, good measurement technique, and the use of proper tools can significantly reduce the chance of an accident occurring.
The high-energy electrical systems common in today's workplace bring not only increased efficiency, but increased levels of hazard and risk for electrical workers and their employers.
Workers taking electrical measurements on high-energy systems frequently work close to potentially lethal electrical currents. This danger can significantly increase due to the presence of transient voltage spikes. Transient spikes riding on these powerful industrial currents can produce the conditions that cause the extremely hazardous phenomenon of arc flash.
To help manage the risks inherent in high-energy electrical systems, national and international standards bodies have developed rules that categorize electrical environments according to their potential danger. Personal protective equipment, including test instruments, is categorized according to the NFPA 70E Standard for Electrical Safety Requirements for Employee Workplaces, related to the incident energy levels and arc flash boundary distances.
To help ensure safety in today's high-energy, high-hazard environments, leading manufacturers have re-engineered their test instruments to enhance both reliability and safety. Such tools can help companies avoid the many perils caused by high-energy electrical accidents: disruption of operations, higher insurance costs, litigation and, most importantly, human suffering.
In today's society, where medical costs are escalating and lawsuits are common, wise managers will take every step to reduce the level of risk, help increase employee safety and minimize the organization's operational and financial exposure. This means that management must ensure that employees use appropriate personal protective equipment, including new-generation test tools independently tested to help ensure that they perform up to specification. And employees must use that equipment correctly, and receive training in safe electrical measurement procedures.

IS YOUR COMPANY AT RISK? HOW WOULD YOU ANSWER THE FOLLOWING QUESTIONS?

1. Do you have a documented electrical measurement safety program?
2. Do you regularly inspect your electrical measurement equipment for damage that could imperil safety?
3. Do your workers involved in taking electrical measurements receive annual, intensive training on how to work safely?
4. Does your organization ensure that only properly rated test instruments are used in your facility?
If you answered yes to three of the questions above, congratulations: you're doing a better job than most employers to reduce the chance of accidents associated with taking electrical measurements. But there's still room to do more. This resource kit was designed to help you develop an electrical measurement safety program that significantly reduces your risk.

2. INTRODUCTION: MANAGING HAZARDS IN THE ELECTRICAL ENVIRONMENT

Today's industrial and business electrical supply systems deliver high levels of electrical energy: up to 480 volts in the United States, and up to 600 volts in Canada. Such high-energy circuits can create significant hazard and risk.
Another characteristic of most high-energy electrical supply systems is the presence of short-duration voltage kickback spikes, called transients.
When such spikes occur while measurements are being made, they can cause a plasma arc to form inside the measurement tool, or outside it. The high fault current available in 480-volt and 600-volt systems can make the resulting arc flash extremely hazardous.
Mitigating such risks requires the use of Personal Protective Equipment (PPE), including test instruments engineered and tested to meet appropriate standards, adherence to safe measurement procedures, and proper inspection and maintenance of test instruments.
In this paper we will cover:
Understanding the High-Energy Environment
Voltage Transients
The Danger of Arc Flash
Measurement Categories: CAT I, CAT II, CAT III and CAT IV
Measurement Tools as Part of Personal Protective Equipment
Safety Requirements for Measurement Tools
Test Tool Inspection and Maintenance
Safe Measurement Processes and Procedures
Conclusions and Recommendations

3. UNDERSTANDING THE HIGH-ENERGY ENVIRONMENT


Businesses simply could not survive without large
amounts of electrical power. Manufacturing operations and office
heating, ventilation and air conditioning systems require large
amounts of power, and computer systems have now become
major power users.
The need to supply large amounts of power in the most
cost-effective way has led firms to choose higher-energy, higher-voltage supply systems, which cost less to install.

As a result of these trends, industrial and business operations today incorporate higher levels of electrical energy, which
can lead to increased hazard and risk for those who build and
maintain these systems. It is common for industrial and commercial maintenance workers and electricians to work with high levels
of energy. In the U.S., 480-volt, three-phase electrical supply
systems are commonplace. In Canada, systems use up to 600 volts.
Although classified as low voltage, both 480-volt and 600-volt
systems can easily deliver potentially lethal amounts of current
sufficient to fuel an arc flash, an extremely hazardous occurrence.

4. VOLTAGE TRANSIENTS: DANGER IN A MICROSECOND


The presence of voltage kickback spikes, called transients, is another characteristic of electrical supply systems that
adds to the potential danger encountered when taking electrical
measurements.
Transients are present in almost every electrical supply
system. In industrial settings, they may be caused by the switching
of inductive loads, and by lightning strikes. Though such transients
may last only tens of microseconds, they may carry thousands of
amps of energy from the installation. For anyone taking measurements on electrical equipment, the consequences can be devastating.
When such spikes occur while measurements are being
made, they can cause a plasma arc to form inside the measurement tool, or outside. The high fault current available in 480-volt
and 600-volt systems can generate an extremely hazardous condition called arc flash.

5. UNDERSTANDING ARC FLASH


How can such a problem develop? A transient of sufficient magnitude can cause an arc to form between conductors
within an instrument, or across test leads. Once an arc occurs, the
total available fault current (similar to the bolted fault current) can feed the arc and cause an explosion.
The result may be an arc flash, which can cause a plasma
fireball fueled by the energy in the electrical system. Temperatures
can reach about 6,000 degrees Celsius, or 10,000 degrees
Fahrenheit.
Transients are not the only source of arc-flash hazard. A very common misuse of a handheld multimeter can trigger a similar chain of events.
If the multimeter user leaves the test leads in the amps input terminals and connects the meter leads across a voltage source, that user has just created a short through the meter. While the voltage terminals have a high impedance, the amps terminals have a very low impedance. This is why a meter's amps circuit must be protected with fuses.
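As a rough, hedged illustration of why that fusing matters, the short calculation below applies Ohm's law with assumed values; the 0.01 ohm shunt, 0.05 ohm of lead resistance and the 480 V source are illustrative figures, not specifications of any particular meter.

    # Rough illustration of the amps-input misuse described above.
    # The resistance values are assumptions for the example only.
    source_voltage = 480.0        # volts, a common US industrial supply
    shunt_resistance = 0.01       # ohms, order of magnitude for an amps-input shunt (assumed)
    lead_resistance = 0.05        # ohms, both test leads combined (assumed)

    # Ohm's law: with the leads left in the amps jacks, almost nothing limits the current.
    prospective_current = source_voltage / (shunt_resistance + lead_resistance)
    print(f"Prospective current: {prospective_current:,.0f} A")   # -> 8,000 A

    # The same meter on the volts function presents a very high input impedance instead.
    volts_input_impedance = 10e6  # ohms, a common value (assumed)
    print(f"Current on the volts function: {source_voltage / volts_input_impedance * 1e6:.0f} uA")  # -> 48 uA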
Another common and dangerous misuse of test equipment
is measuring ohms or continuity on a live circuit. These measurements should be made only on circuits that are not energized.

6. ARC FLASH AS A SAFETY ISSUE


Detailed information on the frequency and cost of arc flash
accidents is difficult to find. Accident reports may not distinguish
arc flash from electric shock. In addition, employers may be
reluctant to discuss or report incidents that can be so dangerous
and costly.
Dr. Mary Capelli-Schellpfeffer of the University of
Chicago provides the most authoritative estimates of arc flash frequency. Her firm, CapSchell, Inc., a Chicago-based research and
consulting firm, estimates that between five and ten times a day,
arc flash explosions sufficient to send a burn victim to a special burn center take place in the U.S.

7. MEASUREMENT CATEGORIES: CAT I, CAT II, CAT III AND CAT IV
To provide improved protection for users, industry standards organizations have taken steps to clarify the hazards present in electrical supply environments. The American National
Standards Institute (ANSI), the Canadian Standards Association
(CSA), and the International Electro-Technical Commission
(IEC) have created more stringent standards for voltage test
equipment used in environments of up to 1000 volts.
The pertinent standards include ANSI S82.02, CSA 22.2-1010.1 and IEC 61010. These standards cover systems of 1000
volts or less, including 480-volt and 600-volt, three-phase circuits. For the first time, these standards differentiate the transient
hazard by location and potential for harm, as well as the voltage
level.
ANSI, CSA and IEC define four measurement categories
of over-voltage transient impulses. The rule of thumb is that the
closer the technician is working to the power source, the greater
the danger and the higher the measurement category number.
Lower category installations usually have greater impedance,
which dampens transients and helps limit the fault current that
can feed an arc.
CAT (Category) IV is associated with the origin of
installation. This refers to power lines at the utility connection, but also includes any overhead and underground outside cable runs, since both may be affected by
lightning.
CAT III covers distribution level wiring. This includes
480-volt and 600-volt circuits such as 3-phase bus and
feeder circuits, motor control centers, load centers and
distribution panels. Permanently installed loads are also
classed as CAT III. CAT III includes large loads that can
generate their own transients. At this level, the trend to
using higher voltage levels in modern buildings has
changed the picture and increased the potential hazards.
CAT II covers the receptacle circuit level and plug-in
loads.
CAT I refers to protected electronic circuits.
Some installed equipment may include multiple categories.
A motor drive panel, for example, may be CAT III on the 480-volt
power side, and CAT I on the control side.
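As a quick reference only (a simplification of the category descriptions above, not a substitute for the standards), the sketch below maps typical work-location descriptions to the minimum CAT rating they suggest; the keyword list and the conservative default are our own assumptions.

    # Simplified quick-reference mapping of the measurement categories described
    # above. The keywords and the conservative default are assumptions; always
    # confirm against the ANSI/CSA/IEC standards and the actual installation.
    LOCATION_KEYWORDS = {
        "utility connection": "CAT IV",
        "outside cable run": "CAT IV",
        "feeder": "CAT III",
        "bus": "CAT III",
        "motor control center": "CAT III",
        "distribution panel": "CAT III",
        "receptacle": "CAT II",
        "plug-in load": "CAT II",
        "protected electronic circuit": "CAT I",
    }

    def minimum_category(work_location: str) -> str:
        """Return the minimum CAT rating suggested for a described work location."""
        text = work_location.lower()
        for keyword, category in LOCATION_KEYWORDS.items():
            if keyword in text:
                return category
        return "CAT III"  # conservative default when the location is unclear (assumption)

    print(minimum_category("480-volt distribution panel"))    # -> CAT III
    print(minimum_category("Receptacle circuit in an office"))  # -> CAT II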

8. MEASUREMENT TOOLS: PART OF PERSONAL PROTECTIVE EQUIPMENT
Another organization playing an important role in establishing safety standards for electrical workers is the National Fire
Protection Association (NFPA). NFPA establishes guidelines for
electrical measurement tools in its standard 70E, Standard for
Electrical Safety Requirements for Employee Workplaces, 2004
Edition.
Standard 70E also includes important requirements
regarding the use of other Personal Protective Equipment (PPE)
in various environments and installation/maintenance activities.
The NFPA standard makes it clear that test instruments
and accessories must be matched to the environment where they
will be used. These are the pertinent sections:
"Test instruments, equipment, and their accessories shall be rated for circuits and equipment to which they will be connected." (Part II, Chapter 3, Paragraph 3-4.10.1)
"Test instruments, equipment, and their accessories shall be designed for the environment to which they will be exposed, and for the manner in which they will be used." (Part II, Chapter 3, Paragraph 3-4.10.2)
A table included in NFPA Standard 70E, Table 3-3.9, Hazard Risk Category Classifications, provides additional guidance regarding the personal protective equipment recommended for use in work on a variety of equipment types at various voltage levels.[i]

9. SAFETY REQUIREMENTS FOR MEASUREMENT TOOLS


Management must ensure that, in compliance with NFPA
70E, test tools meet the standards for the environment where
they are used. The entire testing system, including the meter
and its internal fusing system, as well as the test leads and attachments, must comply with regulations for measurement environment and hazard level.
In addition, tools must be included as an integral part of
the Personal Protective Equipment that technicians are required
to use when working on high-energy systems.
Beyond these requirements, however, management must
ensure that the measurement tools in use are designed, certified
and maintained so that they will meet the more advanced and
stringent safety requirements of today. Management must account
for three factors when assessing test tool safety: Category rating
(older, unrated tools were not made for today's electrical environment), independent testing and certification, and regular inspection and maintenance. It is important to note that the category rating for personal protective equipment has no relationship to the CAT ratings identified as part of the markings of test and measurement equipment.
Category rating for PPE: Testers should be rated for
the electrical environment in which they will be used. For example,
a 220-volt, three-phase system requires a tester rated CAT III or
IV. Old, unrated test instruments do not meet IEC guidelines for
required PPE. While they may be perfectly accurate and appear
to perform well, even the best meters of yesterday were designed
for a world where working conditions and safety standards were
far different. Such test tools may not meet contemporary standards.
Independent Testing and Certification: Even in the
vital area of safety, some tools may not perform as promised by
the manufacturer. Measuring devices rated for a high-energy
environment may not actually deliver the safety protections,
such as adequate fusing, claimed on their specification sheets.

THE CRUCIAL DIFFERENCE BETWEEN DESIGNED AND TESTED


It is important to understand that standards bodies such as
ANSI, CSA and IEC are not responsible for enforcing their standards. This means that a meter designed to a standard may not
actually have been tested and proven to meet that standard. It is
not uncommon for meters under test to fail before achieving the
performance their manufacturers claim for them.
The best assurance for users and their employers is to
select test instruments that have been tested and certified to perform up to specification by independent testing laboratories. To
provide an extra measure of confidence, select test tools labeled
to show that they have been certified to meet the appropriate
contemporary standards by two or more independent labs. This
ensures that test instruments have passed the most rigorous tests
and meet every applicable standard. Such independent testing
labs include Underwriters Laboratories (UL) in the United
States, Canadian Standards Association (CSA) in Canada and
TÜV Product Service in Europe.[ii]

10. TEST TOOL INSPECTION AND MAINTENANCE


Regular Inspection and Maintenance: To perform
accurately and safely, test tools must be regularly inspected and
maintained. The need for inspection is clearly recognized by the
National Fire Protection Association. NFPA Standard 70E lays
out the requirement that test tools must be visually inspected frequently to help detect damage and ensure proper operation. Part
II, Chapter 4, Paragraph 4-1.1 provides the details:
"Visual Inspection. Test instruments and equipment and all associated test leads, cables, power cords, probes, and connectors shall be visually inspected for external defects and damage before the equipment is used on any shift. If there is a defect or evidence of damage that might expose an employee to injury, the defective or damaged item shall be removed from service, and no employee shall use it until repairs and tests necessary to render the equipment safe have been made."[iii]
Visual inspection alone, however, may not detect all possible test instrument problems. To help ensure the highest level
of safety and performance, additional inspection and testing
should be conducted:
Additional Visual Inspection: Test tools should be
checked for the following points:
Look for the 1000-volt, CAT III or 600-volt, CAT IV rating on the front of meters and testers, and a double-insulated symbol on the back.
Look for approval symbols from two or more independent
testing agencies, such as UL, CSA, CE, TÜV or C-Tick.
Make sure that the amperage and voltage ratings of meter fuses are correct. Fuse voltage must be as high as or higher than the meter's voltage rating. The second edition of
IEC/ANSI/CSA standards states that test equipment
must perform properly in the presence of impulses on
volts and amps measurement functions. Ohms and continuity functions are required to handle the full meter
voltage rating without becoming a hazard.
Check the instrument's manual to determine whether the ohms and continuity circuits are protected to the same level as the voltage test circuit. If the manual does not indicate this, your supplier should be able to determine whether the meter passed the second edition of IEC 61010 or ANSI S82.02.
Check the overall condition of the meter or tester, looking for such problems as a broken case, worn test leads
or a faded display.
Use the meter's own test capability to determine whether fuses are in place and functioning properly.
Step 1: Plug a test lead into the V/Ω input and select the resistance (Ω) function.
Step 2: Insert the probe tip into the mA input and read the value.
Step 3: Insert the probe tip into the A input and read the value.
Typically a fuse in good condition should show a resistance value close to zero ohms, but you should always check your meter owner's manual for the specified reading.
Inspecting Test Leads and Probes: As integral components of the test tool system, test leads, probes and attachments
must meet the requirements of the testing environment and be
designed to minimize hazard. Test leads must be certified to a
category that equals or exceeds that of the meter or tester.
Examine test leads for such features as shrouded connectors, finger guards, CAT ratings that equal or exceed
those of the meter, and double insulation.



Visually inspect for frayed or broken wires. The length of exposed metal on test probe tips should be minimal.
Test leads can fail internally, creating a hazard that cannot be detected through visual inspection. But it is possible to use the meter's own continuity testing function to check for internal breaks.
Step 1: Insert the leads into the V/Ω and COM inputs.
Step 2: Select the resistance (Ω) function and touch the probe tips together. Good leads measure about 0.1 Ω to 0.3 Ω.

11. SAFE MEASUREMENT PROCESSES AND PROCEDURES


In addition to the consistent use of safe, correctly rated
and inspected test tools discussed in the preceding sections, safe
electrical measurement requires adherence to correct measurement procedures. Safety training programs should incorporate both elements: safe measurement equipment and safe procedures.
In addition to equipment inspection (detailed in Section
10 above), safe measurement procedures include:
Lockout/Tagout procedures: NFPA provides extensive information and guidance on lockout/tagout practices and devices in Part II, Chapter 5 of NFPA 70E.[iv]
Three-step test procedure: Before making the determination that a measured circuit is dead, it is important to
verify that test instruments are operating correctly. To do
so, the technician should use a three-step test procedure.
First, check for correct test tool operation by using the
tool to test a circuit known to be live. Then, test the target circuit.
Finally, as a double check on test tool operation, test the original
known circuit once again. This procedure provides the user a
strong measure of confidence that the test tool is operating correctly, and that the target circuit is performing as measured.
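The logic of that three-step check can be sketched as follows; this is only an illustration of the decision flow, not a replacement for the written procedure, proper PPE or a correctly rated tester, and the reads_live callable simply stands in for the technician's actual voltage measurement.

    # Sketch of the live-dead-live logic described above (illustration only).
    from typing import Callable

    def verify_dead(reads_live: Callable[[str], bool],
                    known_live_circuit: str,
                    target_circuit: str) -> bool:
        """Return True only if the target circuit can be treated as de-energized."""
        # Step 1: prove the tester works on a circuit known to be live.
        if not reads_live(known_live_circuit):
            raise RuntimeError("Tester did not read the known live circuit; stop and check the tool.")
        # Step 2: test the target circuit.
        target_is_live = reads_live(target_circuit)
        # Step 3: re-check the known live circuit to prove the tester still works.
        if not reads_live(known_live_circuit):
            raise RuntimeError("Tester failed after the measurement; the result cannot be trusted.")
        return not target_is_live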
Neutral first and last: The user should attach the test
lead to a neutral contact first, then attach a lead to a hot
contact to conduct the test. In detaching test leads, first
remove the hot contact, then remove the neutral test lead.
One hand only: When possible, it is good practice to follow the old electrician's advice and keep one hand in
a pocket when testing. But common sense must rule.
Conditions at the test location may make it impractical
to use this technique.

12. CONCLUSIONS AND RECOMMENDATIONS


Unlike some other important safety initiatives, the measures
required to bolster the safety of electrical measurement tools and
procedures are not difficult or costly. Yet these steps can provide
important benefits by improving worker safety, avoiding the disruption of business operations, reducing risk and avoiding possible
increases in insurance costs.
Employers should begin by ensuring that technicians are
fully trained in correct use of all personal protective equipment,
including test instruments.
As a companion measure, make sure the required PPE is
readily available, meets today's standards, and is inspected to
ensure it is in optimum condition.
Test instruments are an essential component of PPE.
Employers should inspect all test instruments to ensure they are
rated, tested and certified by independent testing agencies to
meet safety requirements for the environments where they are
used. Replace test instruments that do not meet current standards, because they may create extra hazard, risk and liability.
Finally, personnel should be trained in the correct procedures for taking safe measurements, including methods for personally inspecting and testing their instruments to ensure they
are in good condition and function correctly.
[i] NFPA 70E Standard for Electrical Safety Requirements for Employee Workplaces, 2000 Edition, pages 55 through 58. © 2000 NFPA
[ii] For more information on these testing organizations, visit their websites:
http://www.ul.com/
http://www.csa.ca/Default.asp?language=English
http://www.tuvamerica.com/services/electrical/lowvolt.cfm
[iii] NFPA 70E Standard for Electrical Safety Requirements for Employee Workplaces, 2000 Edition, page 63. © 2000 NFPA
[iv] Ibid, pp 64-66.


ISOLATION TECHNOLOGIES FOR RELIABLE INDUSTRIAL MEASUREMENTS
National Instruments
OVERVIEW

Voltage, current, temperature, pressure, strain, and flow measurements are an integral part of industrial and process control applications. Often these applications involve environments with hazardous voltages, transient signals, common-mode voltages, and fluctuating ground potentials capable of damaging measurement systems and ruining measurement accuracy. To overcome these challenges, measurement systems designed for industrial applications make use of electrical isolation. This white paper focuses on isolation for analog measurements, provides answers to common isolation questions, and includes information on different isolation implementation technologies.

NEED FOR ISOLATION

Consider isolation for measurement systems that involve any of the following:
Vicinity to hazardous voltages
Industrial environments with possibility of transient
voltages
Environments with common mode voltage or fluctuating ground potentials
Electrically noisy environments such as those with
industrial motors
Transient sensitive applications where it is imperative
to prevent voltage spikes from being transmitted through
the measurement system
Industrial measurement, process control, and automotive
test are examples of applications where common-mode voltages,
high-voltage transients, and electrical noise are common.
Measurement equipment with isolation can offer reliable measurements in these harsh environments. For medical equipment in
direct contact with patients, isolation is useful in preventing power
line transients from being transmitted through the equipment.
Based on your voltage and data rate requirements, you
have several options for making isolated measurements. You
can use plug-in boards for laptops, desktop PCs, industrial PCs,
PXI, Panel PCs, and CompactPCI with the option of built-in
isolation or external signal conditioning. Isolated measurements
can also be made using programmable automation controllers
(PACs) and measurement systems for USB.

UNDERSTANDING ISOLATION
Isolation electrically separates the sensor signals, which can be exposed to hazardous voltages [1], from the measurement system's low-voltage backplane. Isolation offers many benefits including:
Protection for expensive equipment, the user, and data from transient voltages
Improved noise immunity
Ground loop removal
Increased common-mode voltage rejection
Isolated measurement systems provide separate ground planes for the analog front end and the system backplane to separate the sensor measurements from the rest of the system. The ground connection of the isolated front end is a floating pin that can operate at a different potential than the earth ground. Figure 1 represents an analog voltage measurement device. Any common-mode voltage that exists between the sensor ground and the measurement system ground is rejected. This prevents ground loops from forming and removes any noise on the sensor lines.
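A rough numeric illustration of that common-mode rejection follows; the voltage values are invented for the example.

    # Illustrative numbers only: why an isolated, differential front end rejects
    # a ground-potential difference between sensor ground and system ground.
    signal = 0.250        # volts, the sensor's true output (assumed)
    common_mode = 12.0    # volts of ground-potential difference (assumed)

    # A single-ended input referenced to system ground sees the offset added in:
    non_isolated_reading = signal + common_mode              # -> 12.25 V (badly wrong)

    # An isolated front end measures only the difference between its two inputs,
    # so the common-mode term appears on both and cancels:
    isolated_reading = (signal + common_mode) - common_mode  # -> 0.25 V
    print(non_isolated_reading, isolated_reading)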

Figure 1. Bank Isolated Analog Input Circuitry


[1] Hazardous voltages are greater than 30 Vrms, 42.4 Vpk, or 60 VDC.

Figure 2. Isolated Data Acquisition Systems


METHODS OF IMPLEMENTING ISOLATION


Isolation requires signals to be transmitted across an isolation barrier without any direct electrical contact. Light emitting
diodes (LEDs), capacitors, and inductors are three commonly
available components that allow electrical signal transmission
without any direct contact. The principles on which these devices
are based form the core of the three most common technologies
for isolation: optical, capacitive, and inductive coupling.

OPTICAL COUPLING

LEDs produce light when a voltage is applied across them. Optical isolation uses an LED along with a photo-detector device to transmit signals across an isolation barrier using light as the method of data translation. A photo-detector receives the light transmitted by the LED and converts it back to the original signal.

Figure 3. Optical Coupling

Optical isolation is one of the most commonly used methods for isolation. One benefit of using optical isolation is its immunity to electrical and magnetic noise. Some of the disadvantages include transmission speed, which is restricted by the LED switching speed, high power dissipation, and LED wear.

CAPACITIVE COUPLING

Capacitive isolation is based on an electric field that changes with the level of charge on a capacitor plate. This charge is detected across an isolation barrier and is proportional to the level of the measured signal.
One advantage of capacitive isolation is its immunity to magnetic noise. Compared to optical isolation, capacitive isolation can support faster data transmission rates because there are no LEDs that need to be switched. Since capacitive coupling involves the use of electric fields for data transmission, it can be susceptible to interference from external electric fields.

Figure 4. Capacitive Isolation

INDUCTIVE COUPLING

In the early 1800s, Hans Oersted, a Danish physicist, discovered that current through a coil of wire produces a magnetic field. It was later discovered that current can be induced in a second coil by placing it in close vicinity of the changing magnetic field from the first coil. The voltage and current induced in the second coil depend on the rate of current change through the first. This principle is called mutual induction and forms the basis of inductive isolation.
Inductive isolation uses a pair of coils separated by a layer of insulation. The insulation prevents any physical signal transmission. Signals can be transmitted by varying the current flowing through one of the coils, which causes a similar current to be induced in the second coil across the insulation barrier. Inductive isolation can provide high-speed transmission similar to capacitive techniques. Because inductive coupling involves the use of magnetic fields for data transmission, it can be susceptible to interference from external magnetic fields.

Figure 5. Inductive Coupling

ANALOG ISOLATION AND DIGITAL ISOLATION

Several commercial off-the-shelf (COTS) components are available today, many of which incorporate one of the above technologies to provide isolation. For analog input/output channels, isolation can be implemented either in the analog section of the board, before the analog-to-digital converter (ADC) has digitized the signal (analog isolation), or after the ADC has digitized the signal (digital isolation). Different circuitry needs to be designed around one of these techniques based on the location in the circuit where isolation is being implemented. You can choose analog or digital isolation based on your data acquisition system performance, cost, and physical requirements. Figure 6 shows the different stages of implementing isolation.

Figure 6a. Analog Isolation

Figure 6b. Digital Isolation


The following sections cover analog and digital isolation


in more detail and explore the different techniques for implementing each.

ANALOG ISOLATION
The isolation amplifier is generally used to provide isolation
in the analog front end of data acquisition devices. ISO Amp
in Figure 6a represents an isolation amplifier. The isolation
amplifier in most circuits is one of the first components of the
analog circuitry. The analog signal from a sensor is passed to the
isolation amplifier which provides isolation and passes the signal
to the analog-to-digital conversion circuitry. Figure 7 represents
the general layout of an isolation amplifier.
Figure 7. Isolation Amplifier

In an ideal isolation amplifier, the analog output signal is the same as the analog input signal. The section labeled isolation in Figure 7 uses one of the techniques discussed in the previous section (optical, capacitive, or inductive coupling) to pass the signal across the isolation barrier. The modulator circuit prepares the signal for the isolation circuitry. For optical methods, this signal needs to be digitized or translated into varying light intensities. For capacitive and inductive methods, the signal is translated into varying electric or magnetic fields. The demodulator circuit then reads the isolation circuit output and converts it back into the original analog signal.
Because analog isolation is performed before the signal is digitized, it is the best method to apply when designing external signal conditioning for use with existing non-isolated data acquisition devices. In this case, the data acquisition device performs the analog-to-digital conversion and the external circuitry provides isolation. With the data acquisition device and external signal conditioning combination, measurement system vendors can develop general-purpose data acquisition devices and sensor-specific signal conditioning. Figure 8 shows analog isolation being implemented with flexible signal conditioning that uses isolation amplifiers. Another benefit to isolation in the analog front end is protection for the ADC and other analog circuitry from voltage spikes.
There are several options available on the market for measurement products that use a general-purpose data acquisition device and external signal conditioning. For example, the National Instruments M Series includes several non-isolated, general-purpose multifunction data acquisition devices that provide high-performance analog I/O and digital I/O. For applications that need isolation, you can use the NI M Series devices with external signal conditioning, such as the National Instruments SCXI or SCC modules. These signal conditioning platforms deliver the isolation and specialized signal conditioning needed for direct connection to industrial sensors such as load cells, strain gages, pH sensors, and others.

Figure 8. Use of Isolation Amplifiers in Flexible Signal Conditioning Hardware

DIGITAL ISOLATION

Analog-to-digital converters are one of the key components of any analog input data acquisition device. For best
performance, the input signal to the analog-to-digital converter
should be as close to the original analog signal as possible.
Analog isolation can add errors such as gain, non-linearity and
offset before the signal reaches the ADC. Placing the ADC closer to the signal source can lead to better performance. Analog
isolation components are also costly and can suffer from long
settling times. Despite better performance of digital isolation,
one of the reasons for using analog isolation in the past was to
provide protection for the expensive analog-to-digital converters. As ADC prices have significantly declined, measurement equipment vendors are choosing to trade ADC protection
for better performance and lower cost offered by digital isolators (see Figure 9).

Figure 9. Declining Price of 16-Bit Analog-to-Digital Converters


Graph Source: National Instruments and a Leading ADC Supplier

Compared to isolation amplifiers, digital isolation components are lower in cost and offer higher data transfer speeds. Digital
isolation techniques also give analog designers more flexibility to
choose components and develop optimal analog front ends for
measurement devices. Products with digital isolation use current- and voltage-limiting circuits to provide ADC protection. Digital
isolation components follow the same fundamental principles of
optical, capacitive, and inductive coupling that form the basis of
analog isolation.

Leading digital isolation component vendors such as
Avago Technologies (www.avagotech.com), Texas Instruments
(www.ti.com), and Analog Devices (www.analog.com) have
developed their isolation technologies around one of these basic
principles. Avago Technologies offers digital isolators based on
optical coupling, Texas Instruments bases its isolators on capacitive coupling, and Analog Devices' isolators use inductive coupling.

OPTOCOUPLERS
Optocouplers, digital isolators based on the optical coupling principles, are one of the oldest and most commonly used
methods for digital isolation. They can withstand high voltages
and offer high immunity to electrical and magnetic noise.
Optocouplers are often used on industrial digital I/O products,
such as the National Instruments PXI-6514 isolated digital
input/output board (see Figure 10) and National Instruments
PCI-7390 industrial motion controller.

Figure 10. Industrial Digital I/O Products Use Optocouplers

For high-speed analog measurements, however, optocouplers suffer from the speed, power dissipation, and LED wear limitations associated with optical coupling. Digital isolators based on capacitive and inductive coupling can alleviate many optocoupler limitations.

CAPACITIVE ISOLATION

Texas Instruments offers digital isolation components based on capacitive coupling. These isolators provide high data transfer rates and high transient immunity. Compared to capacitive and optical isolation methods, inductive isolation offers lower power consumption.

INDUCTIVE ISOLATION

iCoupler technology, introduced by Analog Devices in 2001 (www.analog.com/iCoupler), uses inductive coupling to offer digital isolation for high-speed and high-channel-count applications. iCouplers can provide 100 Mb/s data transfer rates with 2,500 V isolation withstand; for a 16-bit analog measurement system that implies sampling rates in the megahertz range. Compared to optocouplers, iCouplers offer other benefits such as reduced power consumption, a high operating temperature range up to 125 °C, and high transient immunity up to 25 kV/µs.
iCoupler technology is based on small, chip-scale transformers. An iCoupler has three main parts: a transmitter, transformers, and a receiver. The transmitter circuit uses edge-trigger encoding and converts rising and falling edges on the digital lines to 1 ns pulses. These pulses are transmitted across the isolation barrier using the transformer and decoded on the other side by the receiver circuitry (see Figure 11). The small size of the transformers, about three-tenths of a millimeter, makes them practically impervious to external magnetic noise. iCouplers can also lower measurement hardware cost by integrating up to four isolated channels per integrated circuit (IC) and, compared to optocouplers, they require fewer external components.

Figure 11. Inductive Coupling-Based iCoupler Technology from Analog Devices
Source: Analog Devices (www.analog.com/iCoupler)

Measurement hardware vendors are using iCouplers to offer high-performance data acquisition systems at lower costs. National Instruments industrial data acquisition devices intended for high-speed measurements, such as the isolated M Series multifunction data acquisition devices, use iCoupler digital isolators (see Figure 12). These devices provide 60 VDC continuous isolation and 1,400 Vrms/1,900 VDC channel-to-bus isolation withstand for 5 s on multiple analog and digital channels, and support sampling rates up to 250 kS/s. National Instruments C Series modules used in the NI PAC platform, NI CompactRIO, NI CompactDAQ, and other high-speed NI USB devices also use the iCoupler technology.

Figure 12. National Instruments Isolated M Series Multifunction DAQ Uses iCoupler Technology

SUMMARY
Isolated data acquisition systems can provide reliable
measurements for harsh industrial environments with hazardous
voltages and transients. Your need for isolation is based on your
measurement application and surrounding environments.
Applications that require connectivity to different specialty sensors using a single, general-purpose data acquisition device can benefit from external signal conditioning with analog isolation, whereas applications needing lower-cost, high-performance analog inputs benefit from measurement systems with digital isolation technologies.


RESISTANCE MEASUREMENTS
THREE- AND FOUR-POINT METHOD
FOUR-POINT RESISTANCE MEASUREMENTS
Ohmmeter measurements are normally made with just a
two-point measurement method. However, when measuring very
low values of ohms, in the milli- or micro-ohm range, the two-point
method is not satisfactory because test lead resistance becomes a
significant factor.
A similar problem occurs when making ground mat resistance tests, because long lead lengths of up to 1000 feet are used.
Here also, the lead resistance, due to long lead length, will affect
the measurement results.
The four-point resistance measurement method eliminates
lead resistance. Instruments based on the four-point measurement work on the following principle:
Two current leads, C1 and C2, comprise a two-wire current source that circulates current through the resistance
under test.
Two potential leads, P1 and P2, provide a two-wire voltage measurement circuit that measures the voltage drop
across the resistance under test.
The instrument computes the value of resistance from
the measured values of current and voltage.
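A minimal numeric sketch of that principle follows; the resistance and current values are invented for illustration.

    # Illustrative comparison of two-point and four-point (Kelvin) measurements.
    # All values are invented for the example.
    r_test = 0.005   # ohms, the low resistance under test
    r_lead = 0.2     # ohms, resistance of EACH test lead
    i_source = 1.0   # amps circulated by the instrument through C1/C2

    # Two-point method: the current and the voltage reading share the same pair
    # of leads, so both lead resistances are added into the result.
    v_two_point = i_source * (r_lead + r_test + r_lead)
    r_two_point = v_two_point / i_source      # -> 0.405 ohm, dominated by the leads

    # Four-point method: the P1/P2 sense leads carry essentially no current, so
    # only the drop across the resistance under test is measured.
    v_four_point = i_source * r_test
    r_four_point = v_four_point / i_source    # -> 0.005 ohm, the true value
    print(r_two_point, r_four_point)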

Figure 1

THREE-POINT RESISTANCE MEASUREMENTS

The three-point method, a variation of the four-point method, is usually used when making ground (earth) resistance measurements. With the three-point method, the C1 and P1 terminals are tied together at the instrument and connected with a short lead to the ground system being tested. This simplifies the test in that only three leads are required instead of four. Because this common lead is kept short, when compared to the length of the C2 and P2 leads, its effect is negligible. Some ground testers are only capable of the three-point method, so are equipped with only three test terminals. The three-point method for ground system testing is considered adequate by most individuals in the electrical industry and is employed on the TPI MFT5010 and the TPI ERT1500.
The four-point method is required to measure soil resistivity. This process requires a soil cup of specific dimensions into which a representative sample of earth is placed. This process is not often employed in testing electrical ground systems, although it may be part of an initial engineering study.

PURPOSE/TPI INSTRUMENT FEATURES

PURPOSE

The purpose of electrical ground testing is to determine the effectiveness of the grounding medium with respect to true earth. Most electrical systems do not rely on the earth to carry load current (this is done by the system conductors), but the earth may provide the return path for fault currents, and for safety, all electrical equipment frames are connected to ground.
The resistivity of the earth is usually negligible because there is so much of it available to carry current. The limiting factor in electrical grounding systems is how well the grounding electrodes contact the earth, which is known as the soil/ground rod interface. This interface resistance component, along with the resistance of the grounding conductors and the connections, must be measured by the ground test.
In general, the lower the ground resistance, the safer the system is considered to be. There are different regulations which set forth the maximum allowable ground resistance, for example: the National Electrical Code specifies 25 ohms or less; MSHA is more stringent, requiring the ground to be 4 ohms or better; electric utilities construct their ground systems so that the resistance at a large station will be no more than a few tenths of one ohm.

TPI GROUND TEST INSTRUMENT CHARACTERISTICS

To avoid errors due to galvanic currents in the earth, TPI ground test instruments use an AC current source.
A frequency other than 60 hertz is used to eliminate the possibility of interference with stray 60 hertz currents flowing through the earth.
The three-point measurement technique is utilized to eliminate the effect of lead length.
The test procedure, known as the Fall-of-Potential Method, is described on the following page.
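Using only the limits quoted in the Purpose section above, a simple pass/fail check could be sketched as follows; the dictionary and function names are ours, and the governing limit for a real installation should come from the applicable code or authority.

    # Pass/fail check against the maximum ground-resistance limits quoted above.
    # Names and structure are illustrative only.
    GROUND_RESISTANCE_LIMITS_OHMS = {
        "NEC": 25.0,              # National Electrical Code: 25 ohms or less
        "MSHA": 4.0,              # MSHA: 4 ohms or better
        "utility_station": 0.5,   # large stations: no more than a few tenths of an ohm
    }

    def ground_test_passes(measured_ohms: float, authority: str) -> bool:
        """Return True if the measured ground resistance meets the cited limit."""
        return measured_ohms <= GROUND_RESISTANCE_LIMITS_OHMS[authority]

    print(ground_test_passes(18.2, "NEC"))    # True
    print(ground_test_passes(18.2, "MSHA"))   # False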


Figure 2

THREE-POINT FALL-OF-POTENTIAL TEST PROCEDURE


GROUND TEST PROCEDURE
In the Fall-of-Potential Method, two small ground rods, often referred to as ground spikes or probes, about 12" long, are utilized. These probes are pushed or driven into the earth far enough to make good contact with the earth (8" to 10" is usually adequate). One of these probes, referred to as the remote current probe, is used to inject the test current into the earth and is placed some distance (often 100') away from the grounding medium being tested. The second probe, known as the potential probe, is
inserted at intervals within the current path and measures the
voltage drop produced by the test current flowing through the
resistance of the earth.
In the example shown on the following page, the remote
current probe C2 is located at a distance of 100 feet from the
ground system being tested. The P2 potential probe is taken out
toward the remote current probe C2 and driven into the earth at
ten-foot increments.
Based on empirical data (data determined by experiment and
observation rather than being scientifically derived), the ohmic value
measured at 62% of the distance from the ground-under-test to the
remote current probe is taken as the system ground resistance.
The remote current probe must be placed out of the influence of the field of the ground system under test. With all but the largest ground systems, a spacing of 100 feet between the ground-under-test and the remote current electrode is adequate. When adequate spacing between electrodes exists, a plateau will be developed on the test graph. Note: A remote current probe distance of less than 100 feet may be adequate on small ground systems.



When making a test where sufficient spacing exists, the
instrument will read zero or very near zero when the P2 potential probe is placed near the ground-under-test. As the electrode
is moved out toward the remote electrode, a plateau will be
reached where a number of readings are approximately the same
value (the actual ground resistance is that which is measured at
62% of the distance between the ground mat being tested and the
remote current electrode). Finally, as the potential probe
approaches the remote current electrode, the resistance reading will
rise dramatically.
It is not absolutely necessary to make a number of measurements as described above and to construct a graph of the readings.
However, we recommend this as it provides valuable data for future
reference and, once you are set up, it takes only a few minutes to
take a series of readings.
The electrical fields associated with the ground grid and
the remote electrodes are illustrated on AN0009-5. An actual
ground test is detailed on AN0009-6, and a sample Ground Test
Form is provided on AN0009-7. See AN0009-8 for a simple
shop-built wire reel assembly for testing large ground systems.

SHORT-CUT METHOD
The short-cut method described here determines the ground resistance value, verifies sufficient electrode spacing, and saves time. This procedure uses the 65' leads supplied with the TPI instruments. (A small computational sketch of this spacing check follows the steps below.)
1. Connect the T1 instrument jack with the 15' green lead to the ground system being tested.
2. Connect the T3 instrument jack with the red lead to the remote current electrode (spike) placed at a distance of 65' (the full length of the conductor) from the ground grid being tested.
3. Connect the T2 instrument jack with the black lead to the potential probe placed at 40 feet (62% of the 65' distance) from the ground grid being tested and measure the ground resistance.
4. Move the P2 potential probe 6' (10% of the total distance) to either side of the 40' point and take readings at each of these points. If the readings at these two points are essentially the same as that taken at the 40' point, a measurement plateau exists and the 40' reading is valid. A substantial variation between readings indicates insufficient spacing.
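The following minimal Python sketch mirrors the spacing check above. The probe positions are computed from the lead length; the 5% tolerance for calling two readings "essentially the same" and the example readings are assumptions chosen only for illustration.

    def probe_positions(lead_length_ft):
        # Return the 62% point and the two check points offset by 10% of the lead length
        center = 0.62 * lead_length_ft
        offset = 0.10 * lead_length_ft
        return center - offset, center, center + offset

    def plateau_exists(r_minus, r_center, r_plus, tolerance=0.05):
        # True when both side readings are within the tolerance (as a fraction) of the centre reading
        return (abs(r_minus - r_center) <= tolerance * r_center
                and abs(r_plus - r_center) <= tolerance * r_center)

    low, centre, high = probe_positions(65.0)      # about 34', 40' and 47' for the 65' leads
    print(f"Measure at {centre:.0f}' and check at {low:.0f}' and {high:.0f}'")
    print("Plateau:", plateau_exists(4.1, 4.3, 4.4))   # hypothetical readings -> True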

THREE-POINT FALL-OF-POTENTIAL METHOD


INSTRUMENT SET-UP

Figure 3


A NOTE ON INSTRUMENT LABELING CONVENTIONS


The TPI MFT5010 and TPI ERT1500 use
the terminal designations T1 (C1/P1), T2 (P2), and
T3 (C2).
The corresponding lead designations on the
MFT5010 are E (Earth), S & H.
The corresponding lead designations on the
ERT1500 are E (Earth), P (Potential), C (Current).

TEST CURRENT PATH


Test current (AC) flows from instrument terminal T3 to the remote current probe C2 on the red lead.
Test current flows from the remote current probe C2 back through the earth to the ground being tested, as shown by the dashed blue line.
Test current flows out of the ground grid back to instrument terminal T1 on the short green lead.
The black potential lead is connected to instrument terminal T2 (P2) and is taken out at 10' increments. It measures the voltage drop produced by the test current flowing through the earth (P1 to P2 potential).

Figure 4

EQUAL-POTENTIAL PLANES
THE EXISTENCE OF EQUAL-POTENTIAL PLANES
When current flows through the earth from a remote test
electrode (in the case of a ground test) or remote fault, the voltage drop which results from the flow of current through the
resistance of the earth can be illustrated by equal-potential planes. These planes are represented by the dashed lines in the drawings below, where the spacing between concentric lines represents some fixed value of voltage.
The concentration of the voltage surrounding a grounding element is greatest immediately adjacent to that ground. This
is shown by the close proximity of lines at the point where the
current enters the earth and again at the point where the current
leaves the earth and returns to the station ground mat.

Figure 5

In order to achieve a proper test using the Fall-of-Potential


Ground Test Method, sufficient spacing must exist between the
station ground mat being tested and the remote current electrode
such that the equal-potential lines do not overlap. As shown by the
black line in the Sample Plot, adequate electrode spacing will
result in the occurrence of a plateau on the resistance plot. This
plateau must exist at 62% of the distance between the ground mat
and the remote electrode for the test to be valid. Insufficient spacing results in an overlap of these equal-potential planes, as illustrated at the bottom of this page and by the red line on the Sample
Plot.
See the Safety Note on AN0009-6 for information on the
hazards of Step and Touch-Potentials.


Figure 6

ACTUAL FIELD TEST


This actual ground test was conducted on a pad-mount
transformer in a rural mountain area. The single-phase transformer is supplied by a 12470/7200 volt grounded wye primary
and the transformer is grounded by its own ground rod as well as
being tied to the system neutral which is grounded at multiple
points along the line. The distribution line is overhead with just
the dip to the transformer being underground.

Ground Test Data
Remote Current Probe C2 @ 100 Feet

P2 Distance from Transformer (feet)    Instrument Reading (ohms)
                 10                            1.83
                 20                            3.59
                 30                            3.85
                 40                            3.95
                 50                            4.0
                 60                            4.25
                 62*                           4.3
                 70                            4.5
                 80                            5.4
                 90                            7.3
                100                           25.02

* Actual ground resistance.

TEST PROCEDURE
Terminal T1 of the TPI MFT5010 tester was connected to
the transformer case ground with the short green lead. The
remote Current Probe C2 was driven in the ground at a location
100 feet from the transformer and connected to Terminal T3 of
the instrument with the red test lead.

Terminal T2 of the tester was connected, using the 100'


black lead, to the P2 potential probe. This ground stake was inserted
into the ground at 10' intervals and a resistance measurement was
made at each location and recorded in the table above.
The relatively constant readings in the 4 ohm range between
40 and 70 feet are a definite plateau that indicates sufficient lead



spacing. The initial readings close to the transformer are lower, and
there is a pronounced tip-up as the P2 probe approaches the
remote current electrode C2.

The measured ground resistance at 62 feet (62% of the
distance) was 4.3 ohms and is taken as the system ground resistance. This is an excellent value for this type of installation.


SAFETY NOTE: POSSIBLE EXISTENCE OF HAZARDOUS STEP AND TOUCH POTENTIALS
It is recommended that rubber gloves be worn when driving
the ground rods and connecting the instrument leads.
The possibility of a system fault occurring at the time the
ground test is being conducted is extremely remote.
However, such a fault could result in enough current flow
through the earth to cause a possible hazardous step potential
between a probe and where the electrician is standing, or hazardous
touch potential between the probes and the system ground. The
larger the system, in terms of available fault current, the greater the
possible risk.

REEL ASSEMBLY
A SHOP-BUILT GROUND TEST WIRE REEL ASSEMBLY
This simple, low-cost, and easy-to-build wire reel assembly
is handy for making Ground (Earth) Resistance measurements on
large ground systems. The unit shown below has 500 feet of wire
for testing medium-to-large ground fields typical of those found in
industrial plants and substations. For testing even larger systems,
such as those installed for power generating plants, wire lengths of
1000 feet can be used. Wrap-on wire markers are installed every
ten feet on the current lead to simplify placement of the remote
current and potential probes. Your electrical distributor will probably have empty surplus reels available for the asking; the ones
shown below are about 12 inches in diameter. The conductor is
standard #12 THHN. Even though the TPI ERT1500 and the
MFT5010 use an AC test signal, the test results are unaffected by
the inductance of any wire left on the reels.


CLAMP-ON GROUND RESISTANCE TESTER


MODELS 3711 & 3731
STEP-BY-STEP USAGE

Chauvin Arnoux, Inc. AEMC Instruments


1. Turn the instrument on by pressing the green ON/OFF button (far right). Continue holding the green button down until the battery life indicator comes on.
2. Check the battery life indicator; make sure at least 20 percent remains.
3. Check calibration: locate the 25 Ω calibration gauge supplied with the tester and clamp the meter around any leg of the gauge.
4. Observe the instrument reading; the reading should be within 1.0 Ω of the gauge specification (25 Ω). If the reading is correct, proceed to step 5. If not, clean the instrument and repeat steps 3 and 4. If you are not able to get the instrument to read within 1.0 Ω after cleaning it, do not proceed; have the instrument repaired.
5. Remove the instrument from the gauge and observe the reading with nothing in the clamps. The reading should be greater than 1000 Ω or show an over-range indication. If either of these conditions is observed, continue to step 6. If not, clean the instrument (see instructions below) and repeat steps 3 through 5. If, after cleaning the instrument, you are still unable to get it to perform as described in steps 4 and 5, open the jaws approximately 1/2 inch and let them snap shut. Make sure that the jaws close properly. If the unit still does not perform properly, do not proceed; have the instrument repaired.
6. Switch the instrument to Current Mode (press the button labeled A for Amps).
7. Clamp the instrument around the ground wire or rod.
8. Observe the reading: if it is less than 1.0 A, proceed to step 9. If it is between 1.0 and 5.0 A, make a note of the reading and continue to step 9. If it is greater than 5 A, terminate the test, remove the instrument from the ground wire or rod, and correct the problem before re-testing.
9. Switch the instrument to Resistance (Ω) Mode (press the button labeled with the Ohm (Ω) symbol).
10. Wait for the reading to stabilize and record it. Lock the reading by pressing HOLD.
11. Remove the instrument from the ground wire or rod and reclamp it to the gauge.
12. Observe the reading; it should be within 1.0 Ω of the gauge value. If the reading is OK, the measurement is valid. If the reading is wrong, clean the instrument (see instructions below) and repeat from step 4.

CLEANING THE HEADS

To ensure optimum performance, it is important to keep the probe jaw mating surfaces clean at all times. Failure to do so may result in erroneous readings. To clean the probe jaws, use a very fine sandpaper (600 grit) to avoid scratching the surface, then gently clean with a soft cloth. Make sure that the instrument is oriented such that no debris or filings will fall into the unit while cleaning. Check with your finger afterwards to be sure that no foreign material remains on the jaw surfaces (both top and bottom).

CLAMP-ON GROUND RESISTANCE TESTING


The clamp-on ground resistance testing technique offers
the ability to measure the resistance without disconnecting the
ground. This type of measurement also offers the advantage of
including the bonding to ground and the overall grounding connection resistances.

PRINCIPLES OF OPERATION
Usually, a common distribution line grounded system can
be simulated as a simple basic circuit as shown in Figure A or
an equivalent circuit, shown in Figure B. If voltage E is applied to the grounding system under test, Rx, through a special transformer (used in Models 3711 and 3731), current I flows through the circuit, thereby establishing the equation E = I × Rx.
Therefore, Rx = E/I. If I is detected with E kept constant, the grounding resistance being measured can be obtained.
Refer again to Figures A and B. Current is fed to a special transformer via a power amplifier from a 2.3 kHz constant voltage
oscillator. This current is detected by a detection CT. Only the 2.3
kHz signal frequency is amplified by a filter amplifier. This occurs
before the A/D conversion and after synchronous rectification. It
is then displayed on the LCD of the Model 3711/3731 meter.
The filter amplifier is used to cut off both earth current at
commercial frequency and high-frequency noise. Voltage is
detected by coils wound around the injection CT, which is then
amplified, rectified, and compared by a level comparator. If the
clamp is not closed properly, an open jaw annunciator appears



on the LCD.
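As an illustration of the synchronous detection described above, the Python sketch below recovers a 2.3 kHz test current buried in stray 60 Hz earth current and then applies Rx = E/I. The sample rate, test voltage, and loop resistance are assumed values used only to synthesize data; this is not the meters' actual firmware or signal chain.

    import numpy as np

    FS = 100_000        # assumed sample rate, Hz
    F_TEST = 2300.0     # constant-voltage oscillator frequency, Hz
    E_TEST = 1.0        # assumed injected test voltage, volts
    R_X = 12.5          # assumed "unknown" loop resistance used to build the test data, ohms

    t = np.arange(0, 0.1, 1.0 / FS)
    i_wanted = (E_TEST / R_X) * np.sin(2 * np.pi * F_TEST * t)      # 2.3 kHz test current
    i_mains = 0.5 * np.sin(2 * np.pi * 60.0 * t + 0.3)              # stray 60 Hz earth current
    i_noise = 0.01 * np.random.default_rng(0).standard_normal(t.size)
    i_seen = i_wanted + i_mains + i_noise                           # what the detection CT sees

    # Synchronous detection at 2.3 kHz rejects the 60 Hz component and the noise
    ref_i = np.sin(2 * np.pi * F_TEST * t)
    ref_q = np.cos(2 * np.pi * F_TEST * t)
    amplitude = np.hypot(2 * np.mean(i_seen * ref_i), 2 * np.mean(i_seen * ref_q))

    print(f"Recovered 2.3 kHz current: {amplitude:.4f} A")
    print(f"Estimated Rx = E / I = {E_TEST / amplitude:.2f} ohms")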
The important points to consider for proper use of the clamp-on ground tester are:
1. There is a series-parallel resistance path downstream from the measurement point that is lower in resistance than the point being measured.
2. The earth is the return path to the point where the clamp-on meter is connected, not wire or other metal structures (see Figure C).
3. If the measurement point is not connected to a series-parallel low resistance network (as is the case with a single rod), a temporary path may be created by connecting a jumper cable from the measurement point to a low resistance such as a pole ground (see Figure D).


MEASURING MAGNETIC FIELDS


ELECTRIC AND MAGNETIC FIELDS
Australian Radiation Protection and Nuclear Agency
Everything electrical, from a toaster to a high-voltage power line, produces electric and magnetic fields. Both the electric
and magnetic fields are strong close to an operating source. The
strength of the electric field depends on the voltage and is present
in any live wire whether an electrical appliance is being used or not.
Magnetic fields, on the other hand, are produced by electric currents and are only present when an appliance is operating, i.e. there is no magnetic field when an electrical appliance is turned off.

HEALTH EFFECTS
Currently there is no evidence that exposure to electric
fields is a health hazard (excluding electric shock). Whether
exposure to magnetic fields is equally harmless remains an open
question. A large number of scientific studies performed on animals and cells have not found a health risk. Some epidemiological
studies, however, have suggested a weak link between intense and
prolonged exposure to magnetic fields and childhood leukaemia.

MAGNETIC FIELD UNITS


The strength of the magnetic field is expressed in units of tesla (T) or microtesla (µT). Another unit which is commonly used is the gauss (G) or milligauss (mG), where 1 G is equivalent to 10⁻⁴ T (or 1 mG = 0.1 µT).
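For convenience, the two unit systems can be converted with a pair of one-line helpers; this Python snippet is purely illustrative.

    def milligauss_to_microtesla(mg):
        return mg * 0.1          # 1 mG = 0.1 microtesla

    def microtesla_to_milligauss(ut):
        return ut * 10.0         # 1 microtesla = 10 mG

    print(milligauss_to_microtesla(500))   # Earth's static field, about 500 mG -> 50 microtesla
    print(microtesla_to_milligauss(0.2))   # 0.2 microtesla -> 2 mG, a typical background level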

THE GAUSS METER


There is a range of different instruments that can measure
the magnetic field strength. The gauss meter is a hand-held
device that provides a simple way of performing such measurements. ARPANSA has two different gauss meter models available
for hire, which are a Teslatronics Model 70 and a Sypris Model
4080. Both these instruments operate in a similar manner and they
are shown in the figure below.

Both gauss meters measure alternating fields from 25 Hz


to 1000 Hz in units of mG. They do not measure emissions from mobile phones and will give false readings near them. Readings taken very close (a
few cm) to other electronic devices (as distinct from electrical
devices such as heaters, washing machines etc) may also give
false readings. Shaking or vibrating either unit may also give
false readings. Since the meters only measure varying magnetic
fields, they will not measure the earth's magnetic field, which is
static and has a value of approximately 500 mG.
When either meter is turned on, it will perform an initial
self-diagnostic test by showing all available readouts on its digital
display. Following the initial test, the meter will display the magnetic field intensity at the location where it is held or placed and
the intensity reading will change accordingly as the meter is moved. If the negative sign
is still showing after the initial test, that indicates that the meter is
running low on power and the battery needs to be replaced.

PERFORMING MEASUREMENTS
Measurements of the magnetic field in the home are generally taken in the middle of the room at about one metre from the ground
or in locations where people spend a significant amount of time, for
example, the bed. Measurements should also be performed several
times over the course of a day. This is to allow for possible variations
in electricity demand, which presumably would peak during the
evening at about 7.00 pm. Measurements can also be made at any
other locations of interest.
It is important to remember that, as mentioned earlier,
research suggests that if any health effects exist, they are associated with prolonged magnetic field exposure. Measurements taken
with the gauss meter are instantaneous (i.e. measured at one point
in time) and do not accurately reflect prolonged exposure levels.
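One hedged way to approximate a longer-term exposure from such instantaneous readings is to weight each spot reading by the time spent at that location. The Python sketch below does this with entirely made-up readings and hours; it still cannot capture transients, harmonics, or seasonal variation.

    # location: (spot reading in mG, hours per day spent there) -- illustrative values only
    spot_readings = {
        "bedroom":     (0.8, 8),
        "kitchen":     (1.5, 2),
        "living room": (0.7, 5),
        "elsewhere":   (1.0, 9),
    }

    total_hours = sum(hours for _, hours in spot_readings.values())
    daily_average = sum(mg * hours for mg, hours in spot_readings.values()) / total_hours
    print(f"Approximate time-weighted daily average: {daily_average:.1f} mG")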

TYPICAL MAGNETIC FIELD STRENGTHS


Magnetic fields within homes can vary at different locations
and also over time. The actual strength of the field at a given location depends upon the number and kinds of sources and their distance from the location of measurement. Typical values measured
in areas away from electrical appliances are of the order of 2 mG.
Magnetic fields from individual appliances can vary considerably as well, depending on the way they were designed and
manufactured. One brand of hair dryer, for example, may generate a stronger magnetic field than another. In general, appliances,
which use a high current (such as those which have an electric
motor) will lead to relatively high readings. It should also be
noted that different body parts will be exposed to different magnetic field levels from the same appliance, depending on how far
that part of the body is from the appliance when in use. Typical
values of magnetic fields measured at normal user distance from
some common domestic electrical appliances are listed in the
following table.


HOMES NEAR POWER LINES


The power lines that are present in typical neighbourhoods are called distribution lines and they usually carry less
voltage than transmission lines, which carry very high voltages.
As stated earlier, however, it is the current and not the voltage that
is associated with the strength of the magnetic field. Therefore,
proximity to high voltage lines will not necessarily give a high
reading unless those lines are also carrying a large current.
Typical values of magnetic fields measured near power lines and
substations are listed in the table below.


ELECTRIC AND MAGNETIC FIELDS

MEASUREMENTS AND POSSIBLE EFFECT ON HUMAN HEALTH


WHAT WE KNOW AND WHAT WE DON'T KNOW
California Department of Health Services and the Public Health Institute
California Electric and Magnetic Fields Program
INTRODUCTION


Our daily use of electricity is taken for granted, yet scientific and public concern has arisen about possible health effects
from electric and magnetic fields (EMF) that are created by the
use of electricity. Because of this concern, the California Public
Utilities Commission authorized a statewide research, education
and technical assistance program on the health aspects of exposure to magnetic fields and asked the Department of Health
Services to manage it. Even though both electric and magnetic
fields are present with the use of electrical power, interest and
research have focused on the effects of 50 and 60 Hertz (Hz)
magnetic fields, called power frequency fields, from sources
such as power lines, appliances and wiring in buildings. This is
because it is known that magnetic fields are difficult to shield
and because early scientific studies showed a possible relationship
between human exposure to certain magnetic field sources and
increased rates of cancer.
Even now, scientists are not sure if there are health risks
from exposure to 50 and 60 Hz magnetic fields, or if so what is
a safe or unsafe level of exposure. People frequently ask about
EMF risk when they are choosing where to live. This choice
should include consideration of proven risks of the location, such
as the possibility of earthquake, flooding, or fire, or the presence
of traffic, radon, or air pollution. To some people even limited
evidence for a possible EMF risk weighs heavily in their decisions. For others, different considerations take precedence. There
really is no one right answer to these questions because each situation is unique.
The California EMF Program developed this fact sheet to
give an overview of the present state of knowledge and provide
a basis for understanding the current limitations on the ability of
science to resolve questions about the possible health risks of
magnetic field exposure. This paper describes electric and magnetic
fields, high field sources and how to interpret field measurements
once they are made. It includes discussions of the controversy about
possible health effects, as well as current California state policy and
what the government is doing to address public concern.

EMF: INVISIBLE LINES OF FORCE

Wherever there is electricity, there are also electric and


magnetic fields, invisible lines of force created by the electric
charges. Electric fields result from the strength of the charge
while magnetic fields result from the motion of the charge, or the
current. Electric fields are easily shielded: they may be weakened, distorted or blocked by conducting objects such as earth,
trees, and buildings, but magnetic fields are not as readily
blocked. Electric charges with opposite signs (positive and negative) attract each other, while charges with the same sign repel
each other. The forces of attraction and repulsion create electric
fields whose strength is related to voltage (electrical pressure).
These forces of attraction or repulsion are carried through space
from charge to charge by the electric field. The electric field is
measured in volts per meter (V/m) or in kilovolts per meter
(kV/m). A group of charges moving in the same direction is
called an electric current. When charges move they create
additional forces known as a magnetic field. The strength of a
magnetic field is measured in gauss (G) or tesla (T), while
the electric current is measured in amperes (amps). The
strength of both electric and magnetic fields decrease as one
moves away from the source of these fields.

WHAT ARE ELECTRIC AND MAGNETIC FIELDS OR EMF


Before man-made electricity, humans were exposed only
to the magnetic field of the earth, electric fields caused by
charges in the clouds or by the static electricity of two objects
rubbing together, or the sudden electric and magnetic fields
caused by lightning. Since the advent of commercial electricity
in the last century we have been increasingly surrounded by
man-made EMF generated by our power grid (composed of powerlines, other electrical equipment, electrical wiring in buildings,
power tools, and appliances) as well as by higher frequency
sources such as radio and television waves and, more recently,
cellular telephone antennas.

FIELDS VARY IN TIME


An important feature of electric and magnetic fields is the
way they vary in time. Fields that are steady with respect to
direction, rate of flow, and strength are called direct current
(DC) fields. Others, called alternating current (AC) fields,
change their direction, rate of flow, and strength regularly over
time. The magnetic field of the earth is DC because it changes so
little in one year that it can be considered constant. However, the
most commonly used type of electricity found in power lines and
in our homes and work places is the AC field. AC current does
not flow steadily in one direction, but moves back and forth. In
the U.S. electrical distribution system it reverses direction 120
times per second or cycles 60 times per second (the direction
reverses twice in one complete cycle). The rate at which the AC
current flow changes direction is expressed in cycles per second or Hertz (Hz). The power systems in the United States
operate at 60 Hz, while 50 Hz is commonplace elsewhere. This
fact sheet focuses on power frequency 60 Hz fields and not the
higher frequency fields generated by sources such as cellular
phone antennas.

DESCRIBING MAGNETIC FIELDS


The concentration of a chemical in water can be described
by citing a single number. Unlike chemicals, alternating electric
and magnetic fields have wave-like properties and can be
described in several different ways, like sound. A sound can be
loud or soft (strength), high or low-pitched (frequency), have


periods of sudden loudness or a constant tone, and can be pure or


jarring. Similarly, magnetic fields can be strong or weak, be of
high frequency (radio waves) or low frequency (powerline waves),
have sudden increases (transients) or a constant strength, consist
of one pure frequency or a single dominant frequency with some
distortion of other higher frequencies (harmonics). It is also
important to describe the direction of magnetic fields in relation to
the flow of current. For instance, if a magnetic field oscillates back
and forth in a line, it is linearly polarized. It may also be important to describe how a field's direction relates to other physical conditions such as the earth's static magnetic fields.

Table 1. Examples of magnetic field strengths at particular distances from appliance surfaces.

MEASURING MAGNETIC FIELDS AND IDENTIFYING THE SOURCES OF ELEVATED FIELDS
MEASURING MAGNETIC FIELD STRENGTH
The strength or intensity of magnetic fields is commonly
measured in a unit called a Gauss or Tesla by magnetic field
meters called gaussmeters. A milligauss (mG) is a thousandth
of a gauss, and a microtesla (µT) is a millionth of a tesla (one
milligauss is the same as 0.1 microtesla). The magnetic field
strength in the middle of a typical living room measures about
0.7 milligauss or 0.07 microtesla. As noted above, the strength of
the magnetic field is only one component of the mixture that
characterizes the field in a particular area. Measuring only magnetic field strength may not capture all the relevant information
any more than the decibel volume of the music you are playing
captures the music's full impact. The main health studies to date
have only measured magnetic field strength directly or indirectly and assessed its association with disease. Some scientists wonder if the weak association between measured magnetic fields
and cancer in these studies might appear stronger if we knew
which aspect of the EMF mixture to measure. Other scientists
wonder if any such aspect exists.

WHERE ARE WE EXPOSED TO 60 HZ EMF?


There are power frequency electric and magnetic fields
almost everywhere we go because 60 Hz electric power is so
widely used. Exposure to magnetic fields comes from many
sources, like high voltage transmission lines (usually on metal
towers) carrying electricity from generating plants to communities
and distribution lines (usually on wooden poles) bringing electricity to our homes, schools, and work places. Other sources of
exposure are internal wiring in buildings, currents in grounding
paths (where low voltage electricity returns to the system in
plumbing pipes), and electric appliances such as TV monitors,
radios, hair dryers and electric blankets. Sources with high voltage
produce strong electric fields, while sources with strong currents
produce strong magnetic fields. The strength of both electric and
magnetic fields weakens with increasing distance from the source
(table 1). Magnetic field strength falls off more rapidly with distance
from point sources such as appliances than from line sources
(power lines). The magnetic field is down to background level
(supposed to be no greater than that found in nature) 3-4 feet from
an appliance, while it reaches background level around 60-200 feet
from a distribution line and 300-1000 feet from a transmission line.
Fields and currents that occur at the same place can interact to
strengthen or weaken the total effect. Hence, the strength of the
fields depends not only on the distance of the source but also the distance and location of other nearby sources.
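As a rough physics illustration of this distance dependence, the Python sketch below evaluates the textbook field of a single long straight conductor, B = µ0·I/(2πr). The 100 A current and the distances are assumptions; real lines carry balanced currents in several conductors, so their fields fall off faster than this single-wire model suggests.

    import math

    MU_0 = 4 * math.pi * 1e-7     # permeability of free space, T*m/A

    def field_mg_long_wire(current_a, distance_m):
        # Flux density in milligauss at distance_m from a single long straight conductor
        b_tesla = MU_0 * current_a / (2 * math.pi * distance_m)
        return b_tesla * 1e7      # 1 T = 1e4 G = 1e7 mG

    for d in (5, 20, 60, 200):    # assumed distances in metres
        print(f"{d:>4} m: {field_mg_long_wire(100.0, d):.2f} mG")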

IDENTIFYING SOURCES OF ELEVATED MAGNETIC FIELDS


Sometimes fairly simple measurements can identify the
external or internal sources creating elevated magnetic fields.
For example, turning off the main power switch of the house can
rule out sources from use of power indoors. Magnetic field measurements made at different distances from power lines can help
pinpoint them as sources of elevated residential magnetic fields.
Often, however, it takes some detective work to find the major
sources of elevated magnetic fields in or near a home. Currents
in grounding paths (where low voltage electricity returns to the
system in plumbing pipes) and some common wiring errors can
lead to situations in which source identification is difficult and
requires a trained technician. It is almost always possible to find
and correct the sources of elevated magnetic fields when they are
due to faulty electrical wiring, grounding problems, or appliances such as lighting fixtures.

60 HZ MAGNETIC FIELD EXPOSURE DURING A TYPICAL DAY


Exposure assessment studies of adults who wore measurement meters for a 24- to 48-hour period suggest that the average magnetic field level encountered during a typical 24 hours is
about 1 mG. About 40% of magnetic field exposures found in
homes come from nearby power lines, while 60% come from
other sources such as stray currents running back to the electrical system through the grounding on plumbing and cables, current loops due to incorrect internal wiring in the home, and
brief exposure to appliances and electrical tools.


MAGNETIC FIELD SURVEY OF HOMES IN THE SAN FRANCISCO BAY AREA


The California Department of Health Services surveyed


homes in the San Francisco Bay Area in the mid-1990s. In this
study, magnetic field measurements were taken in the middle of
the bedroom, family room and kitchen and at the front door of
these homes under normal power conditions (any appliances or
electrical devices turned on at the onset of the measurement period
were left on). As shown in table 2, about half the houses in the
Bay Area had an average level below 0.71 mG and 90 percent
had average levels below 1.58 mG.

DOSE-RESPONSE RELATIONSHIP

A special problem in the study of health effects of environmental factors is how to measure exposure in a way that adequately reflects the true amount of the person's exposure to the
substance being studied. This true amount is called the dose.
With cigarette smoke and toxic chemicals, there is a positive
relationship between the size (or strength) of the dose and the
adverse health effect it produces: the higher the dose, the greater
the effect. With magnetic fields, however, some laboratory evidence suggests that this is not always the case, and very confusing relationships have been seen. Biological effects or changes
appear at strengths of certain levels, disappear at higher levels,
only to appear again at still higher levels. Varying the frequency
(speed of alternation), for example from 60 Hz to 120 Hz, shows
similar effect windows of magnetic fields. To complicate
things further, some laboratory experiments have shown an
effect with intermittent (pulsed) exposures, others with
spikes or transients, and still others with continuous exposure.
There is some evidence that the orientation of alternating fields
in relation to the direction of the earths static magnetic field is
also important in making a biological effect. Generally, the
effects observed are only biological changes that may or may not
translate into true health effects.

Table 2. Distribution of average magnetic field strength of San Francisco Bay Area homes.



MAGNETIC FIELDS GENERATED BY CURRENT FLOWING THROUGH WIRES CAN BE REDUCED
Two wires with current flowing in opposite directions create
magnetic fields going in opposite directions. If the wires are
placed close together and have currents of similar magnitude the
magnetic fields cancel each other. This principle is often used to
lower magnetic fields. For example, an underground distribution
cable has a hot line (carrying current to the user) and a neutral
line (carrying it away) that generate low magnetic fields when they
are placed close together. The underground cables can be placed
close together because it is possible to insulate them heavily to prevent arcing. Overhead power lines cannot be placed this close
together because of the weight of the needed insulation and the
need for worker safety. For most distribution and transmission
lines, however, California utilities use three-wire or four-wire systems. The current in these lines alternates in strength and direction
in slightly different phases (not alternating completely together). It
is sometimes possible to optimize these phase differences so that
the magnetic fields from the wires cancel each other.
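The cancellation can be illustrated numerically with two long parallel conductors carrying equal and opposite currents, evaluated along the line through both wires. The Python sketch below uses an assumed 100 A current and an assumed 10 m observation distance; it simply shows that the closer the conductors, the smaller the net field.

    import math

    MU_0 = 4 * math.pi * 1e-7

    def wire_field_t(current_a, distance_m):
        # Field of one long straight conductor, in tesla
        return MU_0 * current_a / (2 * math.pi * distance_m)

    def net_field_mg(current_a, separation_m, distance_m):
        # Opposite currents: the two fields subtract at a point beyond both conductors
        b_near = wire_field_t(current_a, distance_m)
        b_far = wire_field_t(current_a, distance_m + separation_m)
        return (b_near - b_far) * 1e7      # 1 T = 1e7 mG

    for sep in (0.02, 0.3, 1.0):           # 2 cm (cable), 30 cm, 1 m conductor spacings
        print(f"{sep} m spacing -> {net_field_mg(100.0, sep, 10.0):.2f} mG at 10 m")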

WHAT CAN WE SAY ABOUT A MEASUREMENT ONCE WE HAVE IT?


A concerned person would like to know if the measurements found in his or her home are safe or unsafe. Right now,
most scientists do not feel that the data are solid enough to make
predictions about the health risks of magnetic field strength.
When magnetic field exposure (or its estimate) increases there is
no evident orderly increase of a health risk. The highest level of
magnetic field strength measured in homes is below the intensity
found in almost all the cellular and animal experiments that have
produced subtle biological effects. This makes scientists and policy makers reluctant to set health-based standards for magnetic
field exposures. However, it is possible to find out how measurements in your home compare to other homes and if these measurements are typical or not. The information in tables 1 and 2
may be helpful in deciding if your home is typical.

LIMITATIONS OF DIRECT MAGNETIC FIELD MEASUREMENTS

Those human health studies investigating the relationship


of magnetic field exposure and cancer measured magnetic fields
using one-time, short-term measures (i.e., for 24 hours) of one
area such as the bedroom, or one-time spot measurements (i.e.,
for one minute) in several different rooms of the participants' homes. It was assumed that these home measurements adequately estimate a person's total exposure. However, these measures cannot be used to assess the biological importance of the length
of exposure, the number of times there are high exposures, or the
presence of other components of the field such as harmonics.
Also, field intensity (strength) varies at different times of day
and different seasons, depending on electricity use. Dinnertime
readings are often higher than readings in the middle of the night.
In addition, an area measure may not reflect a personal exposure
that is dependent on the amount of time a person spends in the
area measured.

CONTROVERSY ABOUT POSSIBLE HEALTH EFFECTS


The controversy about EMF health effects derives from:
1) the fact that many scientists believe power line magnetic
fields emit little energy and are therefore too weak to have any
effect on cells; 2) the inconclusive nature of laboratory experiments; and 3) the fact that epidemiological studies of people
exposed to high EMF are inconclusive.

1. WEAK FIELDS MAY HAVE TOO LITTLE ENERGY TO CAUSE BIOLOGICAL EFFECTS
The electromagnetic spectrum covers a large range of frequencies (expressed in cycles per second or Hertz). The higher
the frequency, the greater the amount of energy in the field. X-rays have very high frequencies, and are able to ionize molecules
and break chemical bonds, which damages genetic material and
can eventually result in cancer and other health disorders. High
frequency microwave fields have less energy than x-rays, but
still enough to be absorbed by water in body tissues, heating
them and possibly resulting in burns. Radio frequency fields
from radio and TV transmitters are another step weaker than microwaves. Although they alternate millions of times per second, they


can't ionize molecules and can only heat tissues close to the
transmitter. Electric power fields (50 and 60 Hz) have much
lower frequencies than even radio waves and hence emit very
low energy levels that do not cause heating or breakage of bonds.
They do create electrical currents in the body, but in most cases
these currents are much weaker than those normally existing in
living organisms. For these reasons, many scientists argue that it
is unlikely that 60 Hz power frequency magnetic fields at the
strengths commonly found in the environment have any physical
or biological effects on the body.

2. INCONSISTENT LABORATORY RESULTS


As stated above, 60 Hz power frequency magnetic fields do
create weak electric currents in the bodies of people and animals.
In the mid-1970s a variety of laboratory studies in cell cultures and
whole animals demonstrated that these fields produce biological
changes when applied in intensities of hundreds or thousands of
milligauss. Some scientists observed effects at lower strengths, but
average daily personal exposure is only about 1 mG. Biological
effects that seem to be attributable to magnetic fields are subtle and
difficult to reproduce. These studies are continuing in an effort to
understand how magnetic fields affect living tissue. Some laboratory scientists have found that magnetic fields can produce
changes in the levels of specific chemicals the human body makes
(such as the hormone melatonin), as well as changes in the functioning of nerve cells and nervous systems of other animals.
However, the jury is still out as to whether this type of change can
lead to any increased risk to human health.
In the mid-1990s, scientists conducted a series of EMF
animal studies. Most of these studies showed little or no association between EMF and cancer or adverse reproductive effects.
This convinced some scientists that EMFs were harmless.
However, others pointed out that the animals' EMF exposures in
these studies might not adequately capture some aspect of EMF
exposure that could have biological effects on humans.

3. INCONCLUSIVE EPIDEMIOLOGICAL STUDIES


Epidemiology examines the health of groups of people,
and epidemiological studies make statistical comparisons about
how often diseases occur in exposed and nonexposed
groups. Studies in which the disease rate is higher for the
exposed group than nonexposed (said to have positive results)
do not necessarily show a direct cause for disease, but rather
indicate that there is some sort of relationship between exposure
and disease. Most epidemiological studies of magnetic fields
have been of two types. One kind focused on children with cancer
to see whether their home magnetic field measurements were
higher or if they were more likely to live in homes with overhead
powerlines carrying high current than a comparable group of
children without cancer. The other type of study looked at rates
of death and disease of adults assumed to be heavily exposed to
magnetic fields at work, with exposure often indirectly assessed
by using job titles, to determine if their rates were higher than
adults assumed to be working in low magnetic field environments.

CHILDHOOD CANCER STUDIES


Public concern has arisen because of media reports about
epidemiological studies that showed an association between
childhood cancer and proximity to high current-carrying overhead power lines. In 1996, a special committee of the National
Research Council (NRC) made a careful review of 11 epidemiological studies examining the relationship between childhood leukemia and residential proximity to this type of power line.1 For these studies, a child's exposure to magnetic fields was estimated three ways. First, the type and proximity of power lines (wire codes) near the child's home was assessed. Those houses with lines nearby with the potential to carry high current were classified as high current configuration and were assumed to have higher magnetic field levels (due to higher current) than houses near lower current configuration power lines (figure 1). Second, exposure was estimated by measurements of magnetic fields taken in the child's home at the time of the study, often many years after diagnosis of their cancer. And third, exposure was approximated by estimating what the home magnetic field levels were right after the children were diagnosed, using line distance from the house and past utility records of current flow in the lines during the appropriate time period.

The NRC made a statistical summary and comparison of these eleven studies. They concluded that children living in high current configuration houses are 1.5 times as likely to develop childhood leukemia as children in other homes. Despite this conclusion, the NRC was unable to explain this elevated risk and recommended that more research be done to help clarify the issue. One reason for this uncertainty is that wire-code classification assumes that houses with high wire-codes have higher magnetic field levels than low wire-code houses, but high wire-codes may also be a proxy for some type of exposure besides magnetic fields that is not yet understood. For example, high wire-code houses tend to have higher traffic density nearby, resulting in higher air pollution levels. However, traffic density seems to be an unlikely explanation for the wire-code association found in these studies.

In 1997, the NRC statement seemed to be contradicted by the findings of Dr. M. S. Linet of the National Cancer Institute in a large epidemiological study.1i Her researchers estimated exposure to magnetic fields in two ways: wire codes as defined above (based on distance of different types of power lines near the home) and home area measurements. The study found no association between living in high wire-code houses and childhood leukemia. On the other hand, the study found that children living in houses with high average magnetic field levels did have higher rates of cancer in general.

Figure 1. Summary of results of power line distance (wire code) and childhood leukemia studies.


THE EMF RAPID PROGRAM WORKING GROUP STATEMENT ON CHILDHOOD LEUKEMIA


In 1998, a working group of experts gathered by the federal EMF RAPID program (see Governmental Regulation, below)
reviewed the research on the possible health risks associated with
EMF. A majority felt that the epidemiology studies of childhood
leukemia provide enough evidence to classify EMF as a possible
human carcinogen, meaning they think it might cause cancer.
This does not mean that it definitely causes cancer, however. The
working group's findings are published in a report posted on the program's Web site (see address below).

IF REAL, HOW IMPORTANT WOULD THIS RISK OF CHILDHOOD LEUKEMIA BE?


Each year an average of six cases of leukemia are diagnosed per 100,000 children. Six percent of American houses are
near high-current-carrying power lines.2 If the epidemiological
association is correct, that means that in such houses there would be three additional cases of leukemia among 100,000 children due to the effects of EMF from the nearby power lines. (This is almost the same as the increased risk of lung cancer for an adult nonsmoker who lives in a smoking household.) Among the 500,000 children
in California who live nearest high-current-carrying power lines
there could be a theoretical 15 extra cases of leukemia each year
compared to the number of cases if they lived further away. In
California, we regulate chemicals whose typical exposures generate a theoretical life-time risk of one per 100,000. An added
risk of three sick children per 100,000 per year is larger than this.
From an individual's point of view, this risk, if real, would be
small: 99,991 out of 100,000 children would not get leukemia
each year.
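The arithmetic in the preceding paragraph can be restated in a few lines of Python, using only the figures quoted above.

    baseline_per_100k = 6          # childhood leukemia cases per 100,000 children per year
    relative_risk = 1.5            # risk reported for "high current configuration" homes

    extra_per_100k = baseline_per_100k * (relative_risk - 1)        # 3 extra cases per 100,000
    children_near_lines = 500_000                                   # California estimate quoted above
    extra_cases_per_year = children_near_lines / 100_000 * extra_per_100k

    print(extra_per_100k)          # 3.0
    print(extra_cases_per_year)    # 15.0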

OCCUPATIONAL STUDIES
The occupational studies looking at magnetic field exposure and various health outcomes show mixed results.
Occupations assumed to have higher than normal magnetic field
levels included electricians, telephone linemen, electric welders,
electronic technicians, utility workers, electrical engineers and
sewing machine operators. In general, but not always, workers of
these occupations were more likely to have higher rates of brain
tumors, leukemia, testicular tumors and male breast cancer than
expected. A particular brain tumor (astrocytoma) occurred more
often among men who worked for many years in jobs with high
estimated exposure levels such as electricians, linemen, and electrical engineers.3 A large study of Canadian and French utility
workers found an association between estimated high magnetic
field exposures based on area measures of certain occupations
and myeloid leukemia, a rare type of blood cancer.4 On the other
hand, another large study found no increase in mortality from
brain tumors, leukemia or other cancers among electrical workers with estimated high magnetic field exposure over many
years.5 Differences among study results may exist simply
because the studies used different study populations and methods
for estimating high occupational magnetic field exposure. Also,
these surrogate measures estimating high occupational magnetic
field levels could be proxies for other types of exposure at work
besides magnetic fields.

COMPARING THE SCIENTIFIC EVIDENCE ON MAGNETIC FIELDS TO THAT OF ENVIRONMENTAL TOBACCO SMOKE
There are regulations in place protecting us from environmental tobacco smoke. They are based on the strength of its association with disease and the consistent epidemiological evidence
for it. What's the difference between this evidence and that for magnetic fields? First, no magnetic field epidemiological study has found an association with disease that is as strong as that implicating a two-pack-a-day smoking habit. The strength of the association found for leukemia in electric train engineers, who are exposed to magnetic fields of hundreds of milligauss all day long, is no stronger than the strength of the association relating residential magnetic field levels (generally less than 10 mG) to childhood leukemia. Second, there is no laboratory evidence about magnetic field exposure that is as convincing as that for lung cancer and smoking; magnetic field animal studies have been inconsistent. These differences make scientists much more cautious about interpreting the magnetic field epidemiology as dangerous than the environmental tobacco smoke epidemiology.

GOVERNMENTAL REGULATION
STATE REGULATIONS
Lack of understanding has kept scientists from recommending any health-based regulations. Despite this, several states
have adopted regulations governing transmission line-generated
magnetic fields at the edge of the right-of-way (ROW, the area
immediately surrounding power lines left clear for access for
maintenance and repairs) because of concern about the risk of
electric shock from strong electric fields present in these areas
(table 3). All current regulations relate to transmission lines; none
govern distribution lines, substations, appliances or other sources
of electric and magnetic fields.
The California Department of Education requires minimum
distances between new schools and the edge of transmission line
rights-of-way. The setback guidelines are: 100 feet for 50-133 kV
lines, 150 feet for 220-230 kV lines, and 350 feet for 500-550 kV
lines. Once again, these were not based on specific biological
evidence, but on the rationale that the electric field drops to
background levels at the specified distances.
Table 3. Transmission line EMF standards and guidelines adopted by certain states for utilities' rights-of-way (ROW).

The California Public Utilities Commission (CPUC),


upon the recommendation of a Consensus Group composed of
citizens, utility representatives, union representatives, and public
officials, recommended that the state's investor-owned utilities carry out no- and low-cost EMF avoidance measures in the construction of new and upgraded utility projects. This means that
4% of the total project cost is allocated to mitigation measures if
these measures will reduce magnetic field strength by at least
15%. The strategy is to address public concern and cope with


potential but uncertain risks until a policy based on scientific fact


can be developed. The CPUC also followed the Consensus
Group's recommendation to establish the research, education and
technical assistance programs of the California EMF Program
under the guidance of the California Department of Health
Services. It is expected to provide information that will be useful
to those responsible for making public policy in the future.


FEDERAL EFFORTS
At the Federal level, the Federal Energy Policy Act of 1992
included a five-year program of electric and magnetic field (EMF)
Research and Public Information Dissemination (EMF-RAPID).
The EMF-RAPID Program asked these questions: Does exposure
to EMF produced by power generation, transmission, and use of
electric energy pose a risk to human health? If so, how significant
is the risk, who is at risk, and how can the risk be reduced?
In 1998, a working group of experts gathered by the
EMF-RAPID Program met to review the research that has been
done on the possible health risks associated with EMF. This
group reviewed all of the studies that have been done on the subject, and then voted on whether they believed that exposure to
EMF might be a health risk. They then published a report
describing their findings. A majority of the scientists on this
working group voted that the epidemiology studies of childhood
leukemia and residential EMF exposures provide enough evidence to classify EMF as a possible human carcinogen.6 This
means that, based on the evidence, these researchers believe that
it is possible that EMF causes childhood leukemia, but they are
not sure. About half of the group's members thought that there is
also some evidence that workplace exposure to EMF is associated with chronic lymphocytic leukemia in adults. The group also
concluded that there was not enough evidence to determine
whether EMF exposure might cause other diseases.6
The EMF-RAPID Program released its final report to
Congress in 1999. This report explains the program's findings,
including the results of its working group and many research
projects. The final report states that the NIEHS believes that
there is weak evidence for possible health effects from [power
frequency] ELF-EMF exposures, and until stronger evidence
changes this opinion, inexpensive and safe reductions should be
encouraged.7 (page 38) The report specifically suggests educating power companies and individuals about ways to reduce EMF
exposure, and encouraging companies to reduce the fields created by appliances that they make, when they can do so inexpensively7 (page 38). For more information on the EMF-RAPID
program or to look at these reports, contact the EMF-RAPID
Program, National Institute of Environmental Health Sciences,
National Institutes of Health, P.O. Box 12233, Research Triangle
Park, North Carolina 27709, or visit their Web site at http://www.niehs.nih.gov/emfrapid. When ordering a copy of
the final report, refer to NIH publication number 99-4493.

CONCLUSION
Public concern about possible health hazards from the
delivery and use of electric power is based on data that give
cause for concern, but which are still incomplete and inconclusive and in some cases contradictory. A good deal of research is
underway to resolve these questions and uncertainties. Until we
have more information, you can use no and low cost avoidance
by limiting exposure when this can be done at reasonable cost
and with reasonable effort, like moving an electric clock a few
feet away from a bedside table or sitting further away from the computer monitor. Table 1 shows how quickly fields fall off as one moves away from appliances; they virtually disappear at 3 to 5 feet. You might stop using an electric appliance you do not really need. You may also consider home testing, which can identify faulty electrical wiring that can produce shock hazards and current code violations as well as elevated magnetic fields. In California, the investor-owned utilities are required by the CPUC to provide magnetic field measurement at no charge to their customers. So far, in the absence of conclusive scientific evidence, there is no sufficient basis for enacting laws or regulations to limit people's exposure to EMF, so it is up to individuals to decide what avoidance measures to take, based on the information available.

REFERENCES
1. a) Wertheimer N et al. Electrical wiring configurations and
childhood cancer. American Journal of Epidemiology.
1979; 109:273-84.
b) Fulton JP et al. Electrical wiring configurations and childhood leukemia in Rhode Island. American Journal of
Epidemiology. 1979; 111:292-96.
c) Savitz DA et al. Case control study of childhood cancer and
exposure to 60-Hz magnetic fields. American Journal of
Epidemiology. 1988; 128:21-38.
d) Coleman M et al. Leukaemia and residence near electricity
transmission equipment: A case-control study. British
Journal of Cancer. 1989; 60:793-98.
e) London SJ et al. Exposure to residential electric and magnetic fields and risk of childhood leukemia. American
Journal of Epidemiology. 1991; 134:923-37.
f) Feychting M. et al. Magnetic fields and cancer in children
residing near Swedish high-voltage power lines. American
Journal of Epidemiology. 1993; 138:467-81.
g) Fajardo-Gutierrez AJ et al. Residence close to high-tension
electric power lines and its association with leukemia in children (Spanish). Biol Med Hosp Infant Mex. 1993; 50:32-38.
h) Petridou ED et al. Age of exposure to infections and risk of
childhood leukaemia. British Medical Journal. 1993; 307:774.
i) Linet MS et al. Residential exposure to magnetic fields and
acute lymphoblastic leukemia in children. New England
Journal of Medicine. 1997; 337:1-7.
2. Zaffanella L. Survey of residential magnetic sources. EPRI
Final Report. 1993; No. TR 102759-v1. No. TR 102759-v2.
3. Savitz DA et al. Magnetic field exposure in relation to
leukemia and brain cancer mortality among electric utility workers.
American Journal of Epidemiology. 1995; 141: 1-12.
4. Theriault G et al. Cancer risk associated with occupational exposure to magnetic fields among utility workers in Ontario and
Quebec, Canada and France. American Journal of Epidemiology.
1994; 139: 550-572.
5. Sahl JD et al. Cohort and nested case-control studies of
hematopoietic cancers and brain cancer among electric utility
workers. Epidemiology. 1993; 4: 104-114.
6. National Institute of Environmental Health Sciences.
Assessment of health effects from exposure to power-line frequency electric and magnetic fields. NIEHS Working Group
Report. 1998.
7. National Institute of Environmental Health Sciences. Health
effects from exposure to power-line frequency electric and magnetic fields. NIEHS Final Report to Congress. 1998.


A NEW APPROACH TO QUICK, ACCURATE,


AFFORDABLE FLOATING MEASUREMENTS
Tektronix IsolatedChannel Technology
Engineers and technicians often need to make floating
measurements where neither point of the measurement is at
ground (earth) potential. This measurement is often referred to as
a differential measurement. Signal common may be elevated to
hundreds of volts from earth.
In addition, many of these differential measurements
require the rejection of high common-mode signals*1 in order to
evaluate low-level differential signals. Unwanted ground currents
can also add bothersome hum and ground loops. Too often, users
resort to the use of potentially dangerous measurement techniques
to overcome these problems.
The TPS2000 Series oscilloscopes use innovative Isolated
Channel technology to deliver the world's first 4-isolated-channel, battery-operated oscilloscope to allow engineers and technicians to make multi-channel isolated measurements quickly, accurately, and affordably, all designed with your safety in mind.

FLOATING AN OSCILLOSCOPE: A DEFINITION


Floating a ground-referenced oscilloscope is the technique of defeating the oscilloscope's protective grounding system, disconnecting signal common from earth, by either defeating the grounding system or using an isolation transformer. This technique
grounding system or using an isolation transformer. This technique
allows accessible parts of the instrument such as chassis, cabinet,
and connectors to assume the potential of the probe ground lead
connection point. This technique is dangerous, not only from the
standpoint of elevated voltages present on the oscilloscope (a
shock hazard to the operator), but also due to cumulative stresses on the oscilloscope's power transformer insulation. This stress
may not cause immediate failure, but may lead to future dangerous failures (a shock and fire hazard), even after returning the
oscilloscope to properly grounded operation.
Not only is floating a ground-referenced oscilloscope dangerous, but the measurements are often inaccurate. This potential
inaccuracy results from the total capacitance of the oscilloscope
chassis being directly connected to the circuit-under-test at the
point where the ground lead is connected.
*1 A common-mode signal is defined as a signal that is present at both points
in a circuit. Typically referenced to ground, it is identical in amplitude, frequency,
and phase. Making a floating measurement between two points requires rejecting
the common-mode signal so the difference signal can be displayed.

A GUIDE TO MAKING QUICK, ACCURATE AND AFFORDABLE FLOATING MEASUREMENTS
There are several products that enable you to make floating measurements, but they may lack the versatility, accuracy or
affordability that you need. In addition, there are four key measurement considerations that a user needs to take into account
when selecting the right product to make an accurate floating or
differential measurement:

Management and Safety in the Workplace


While the subject of this technical note is floating measurements, some definitions of terms and general precautions must be understood before proceeding.
Historically, floating measurements have been made by knowingly defeating the
built-in safety ground features of oscilloscopes or measurement instruments in
various manners.

THIS IS AN UNSAFE AND DANGEROUS PRACTICE AND SHOULD NEVER BE DONE!
Instead, this technical note describes instruments, accessories, and practices that
can make these measurements safely as long as standard safety practices and
precautions are observed.
When making measurements on instruments or circuits that are capable of delivering dangerously high-voltage, high-current power, measurement technicians
should always treat exposed circuits, bus-bars, etc., as being potentially live,
even when circuits have been shut off or disconnected. This is particularly true
when connecting or disconnecting probes or test leads.

1. What is the differential measurement range?
2. What is the common-mode measurement range?
3. What are the loading characteristics of the probe? Are they balanced or unbalanced?
4. What is the Common Mode Rejection Ratio (CMRR) over the measurement frequency range? (See the sketch following this list.)
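As a rough illustration of why considerations 2 and 4 matter together, the short Python sketch below estimates how much of a large common-mode signal leaks into a floating measurement for a given CMRR. The voltages and the CMRR figure are illustrative assumptions, not the specifications of any particular probe or oscilloscope.

    import math

    # Illustrative values only - not the ratings of any particular instrument.
    v_common_mode = 400.0   # common-mode voltage present on both test points (V)
    v_differential = 2.0    # differential signal of interest (V)
    cmrr_db = 80.0          # CMRR at the frequency of interest (dB)

    # Convert CMRR from dB to a ratio and estimate the common-mode feedthrough.
    cmrr_ratio = 10 ** (cmrr_db / 20.0)
    feedthrough = v_common_mode / cmrr_ratio
    error_percent = 100.0 * feedthrough / v_differential

    print(f"Common-mode feedthrough: {feedthrough * 1000:.0f} mV")
    print(f"Error relative to the {v_differential} V signal: {error_percent:.1f} %")

Even at 80 dB, the assumed 400 V common-mode signal still contributes about 40 mV of error, which is why CMRR must be checked over the whole measurement frequency range, where it usually degrades.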

TRADITIONAL OSCILLOSCOPES
Traditional oscilloscopes are limited to making ground-referenced measurements. Let's examine why:
Most oscilloscopes have their signal common terminal
connected to the protective grounding system, commonly
referred to as earth ground or just ground. This is done so
that all signals applied to, or supplied from, the oscilloscope
have a common connection point. This common connection
point is usually the oscilloscope chassis and is held at (or very
near to) zero volts by virtue of the third-wire ground in the power
cord for AC-powered equipment. It also means that, with few
exceptions, all measurements must be made with respect to earth
ground. This constrains the typical oscilloscope (at least in a single
measurement) from being used to measure potential differences
between two points where neither point is at earth ground.
A common, but risky, practice is to disconnect the oscilloscope's AC main power cord ground and attach the probe ground
lead to one of the test points. Tektronix strongly recommends
against this unsafe measurement practice. Unfortunately, this
practice puts the instrument chassis, which is no longer grounded
to earth, at the same voltage as the test point that the probe
ground lead is connected to. The user touching the instrument


becomes the shortest path to earth ground. Figure 1 illustrates


this dangerous situation. V1 is the offset voltage above true
ground, and VMeas is the voltage to be measured.
Depending upon the unit-under-test (UUT), V1 may be
hundreds of volts, while VMeas might be a fraction of a volt.

complexity to the measurement apparatus. They may require an


independent power supply, and their gain and offset characteristics
must be factored into every measurement. Differential probe-equipped oscilloscopes emphasize performance and safety (bandwidth, isolation), trading off form-factor benefits such as portability
and cost.

SIGNAL FIDELITY BEGINS AT THE PROBE TIP

Figure 1: A floating measurement in which dangerous voltages occur on the oscilloscope chassis. V1 may be hundreds of volts.

Floating the chassis ground in this manner threatens the


user, the UUT, and the instrument. In addition, it violates industrial health and safety regulations, and yields poor measurement
results. Moreover, line-powered instruments exhibit a large parasitic capacitance when floated above earth ground. As a result,
floating measurements will be corrupted by ringing, as shown in
Figure 2.

An oscilloscope is actually a measurement system consisting of preamplifiers, acquisition/measurement circuits, displays,


and probes. The role of the probe is sometimes overlooked.
Nevertheless, improper probes or probing techniques can affect
the measurement outcome. Obviously, it's essential to use compatible probes that match the instrument's bandwidth and impedance.
Less understood is the effect of ground-lead inductance. As
lead length increases, parasitic inductance increases (Lparasitic in
Figure A). Lparasitic is in the signal path and forms a resonant LC
circuit with the inherent parasitic capacitance of the oscilloscope
(Cparasitic). As Lparasitic increases, the resonant frequency
decreases, causing ringing (see Figure 2) that visibly interferes
with the measured signal. Simply stated, the common lead must be
as short as physical constraints of the circuit-under-test will allow.
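A minimal Python sketch of that relationship follows; the lead inductances and the parasitic capacitance are assumed values, chosen only to show how a longer common lead pushes the resonant frequency down toward the signals being measured.

    import math

    def ring_frequency_hz(l_parasitic: float, c_parasitic: float) -> float:
        # Resonant frequency of the L-C formed by the ground lead and the
        # instrument's parasitic capacitance: f = 1 / (2*pi*sqrt(L*C)).
        return 1.0 / (2.0 * math.pi * math.sqrt(l_parasitic * c_parasitic))

    c_parasitic = 50e-12                    # assumed parasitic capacitance to earth (F)
    for lead_nh in (100, 200, 400):         # longer ground lead -> more inductance (nH)
        f = ring_frequency_hz(lead_nh * 1e-9, c_parasitic)
        print(f"{lead_nh:3d} nH lead -> ringing near {f / 1e6:.0f} MHz")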

Figure A: Parasitic inductance and capacitance can affect measurement quality
Figure 2: Ringing caused by parasitic inductance and capacitance distorts the signal and invalidates measurements

Battery-operated oscilloscopes, such as the TDS3000B


Series oscilloscopes, when operated from AC line power using a
standard power cord, exhibit the same limitations as traditional
oscilloscopes. However, AC power is not always available where
you want to make oscilloscope measurements. In the case of the
TDS3000B Series oscilloscopes, the optional battery pack
(TDS3BATB) allows you to operate the oscilloscope without the
need for AC power. However, it can only make safe floating
measurements up to 30 VRMS.
Traditional oscilloscopes emphasize performance (bandwidth,
versatility), trading off the ability to make floating measurements.

DIFFERENTIAL OR ISOLATED PROBES


Differential or isolated probes offer a safe and reliable way
to adapt a grounded oscilloscope to make floating measurements.
Neither of the two probe contacts need be at earth ground and the
probe system as a whole is isolated from the oscilloscope's chassis
ground.
Differential probes offer a balanced impedance load to the
device-under-test (DUT). However, they add a layer of cost and

In regard to capacitance, even isolated, battery-powered


oscilloscopes exhibit capacitance with respect to earth ground. In
Figure A, Cparasitic describes the oscilloscope's parasitic capacitance from its ground reference (through the isolated housing) to
earth ground. Like parasitic inductance, Cparasitic must be kept to a
minimum in order to force the resonant frequency of the LC circuit
as high as possible. If Cparasitic is large, ringing may occur within
the test frequency range, hampering the measurement.
An instrument's parasitic capacitance to ground is dictated by its internal design. The physical environment can also
prompt ringing. Holding the instrument or placing it on a large
conductive surface during measurements can actually increase
Cparasitic and lead to ringing. For extremely sensitive measurements, it might even be necessary to suspend the oscilloscope in
mid-air!

A NEW APPROACH TO QUICK, ACCURATE, AFFORDABLE FLOATING MEASUREMENTS
The most common method of isolation in a wide bandwidth
oscilloscope system in use today is a two-path approach in which
the input signal is broken up into two signals: low frequency and
high frequency. This approach requires expensive optocouplers
and wideband linear transformers for each input channel.
The TPS2000 Series uses an innovative approach,
Isolated Channel technology, which eliminates the two-path
method and uses only one wideband signal path for each input
channel from DC to the bandwidth of the oscilloscope. This


patent-pending technology enables Tektronix to offer the world's


first four-input Isolated Channel, low-cost, battery-operated
oscilloscope, featuring eight hours of continuous battery operation. The TPS2000 Series oscilloscopes are ideal for engineers
and technicians who need to make four-channel isolated measurements and need the performance and ease-of-use of a low-cost,
battery operated oscilloscope.
The TPS2000 Series four Isolated Channel input architecture provides true and complete channel-to-channel isolation for
both the positive input and the negative reference leads,
including the external trigger input. Figure 3 illustrates the Isolated
Channel concept.
The most demanding floating measurement requirements
are found in power control circuits, such as motor controllers and
uninterruptible power supplies, and industrial equipment. In such
application areas, voltages and currents may be large enough to
present a threat to users and test equipment.
Isolated Channel technology is the preferred solution for
measurement quality and is designed with your safety in mind.*2
The TPS2000 oscilloscopes offer an ideal solution when a large
common mode signal is present. True channel-to-channel isolation
minimizes parasitic effects; the smaller mass of the measurement
system is less prone to interaction with the environment.


Figure 3: TPS2000 Series oscilloscopes' Isolated Channel architecture provides complete isolation from dangerous voltages

A properly isolated battery-powered instrument doesn't


concern itself with earth ground. Each of its probes has a Negative
Reference lead that is isolated from the instrument's chassis, rather
than a fixed ground lead. Moreover, the Negative Reference lead
of each input channel is isolated from that of all other channels.
This is the best insurance against dangerous short circuits. It also
minimizes the signal-degrading impedance that hampers measurement quality in single-point grounded instruments.
The TPS2000 Series oscilloscope inputs are always floating, whether operated from battery power or connected to AC
power through an AC power adapter. Thus, these oscilloscopes
do not exhibit the same limitations as traditional oscilloscopes.

SPEED DEBUG AND CHARACTERIZATION WITH DRT SAMPLING TECHNOLOGY (TIP)
The TPS2000 Series oscilloscopes offer digital real-time
(DRT) acquisition technology that allows you to characterize a
wide range of signal types on up to four channels simultaneously. Up to 2 GS/s real-time sample rate is the key to the extraordinary bandwidth: 200 MHz in the TPS2024. This bandwidth/sample rate combination makes it easy to capture the high-frequency information, such as glitches and edge anomalies, that eludes other oscilloscopes in its class, so that you can be sure to get a complete view of your signal to speed debug and characterization.

MAKING QUICK, ACCURATE FLOATING MEASUREMENTS WITH TPS2000 SERIES OSCILLOSCOPES
POWER CONTROL CIRCUITS:
Power control technologies use both high-power silicon
components and low-power logic circuits. The switching transistors at the heart of most power control circuits require measurements not referenced to ground. Moreover, the power circuit may
have a different ground point (and therefore a different ground
level) than the logic circuit, yet the two often must be measured
simultaneously.
*2 Do not float the P2220 probe common lead to > 30 VRMS. Use the P5120
probe (floatable to 600 VRMS CAT II or 300 VRMS CAT III) or a similarly rated
passive high-voltage probe, or an appropriately rated high-voltage differential
probe when floating the common lead above 30 VRMS, subject to the ratings of
such high-voltage probe.

The channel-to-channel isolation of the TPS2000 Series


provides a real-world measurement advantage in addition to its
obvious safety benefits. Figure 4 is a screen image depicting
waveforms taken at two different points in a power control circuit. Notice that the lower waveforms are about 200 A p-p, while
the upper trace is about 5 V p-p. Because each of the TPS channels is fully isolated from the others (including the negative reference leads), and equipped with its own uncompromised Digital Real Time digitizer, there's no cross-talk between the two signals. Were the oscilloscope channels not adequately isolated, there might be misleading artifacts coupled from the 200 A signal to the smaller waveform; these might be misinterpreted as a circuit problem when in reality it's an instrument problem. The
ability of the TPS Series to discretely capture two waveforms of
vastly differing amplitudes reduces guesswork and improves
productivity.

Figure 4: The 4-channel TPS2024 oscilloscope's channel-to-channel isolation eliminates cross-talk effects when large and small signals are captured simultaneously

HARMONICS MEASUREMENTS REVEAL UNSEEN POWER PROBLEMS
An understanding of the harmonics within a power grid is
essential to the safe and cost-effective use of electrical power.
Line harmonics are a growing problem in a world moving
increasingly toward nonlinear power supplies for most types of
electronic equipment. Nonlinear loads, such as switching power
supplies, tend to draw non-sinusoidal currents. Their impedance
varies over the course of each cycle, creating sharp positive and
negative current peaks rather than the steady curve of a sine


wave. The rapid changes in impedance and current in turn affect


the voltage waveform on the power grid. As a result, the line
voltage is corrupted by harmonics; the normally sinusoidal shape
of the voltage waveform may be flattened or distorted.
There's a limit to the amount of harmonic distortion that equipment can tolerate. Load-induced harmonics can cause motor and transformer overheating, mechanical resonances, and dangerously high currents in the neutral wires of three-phase equipment.
In addition, line distortions may violate regulatory standards in
some countries.
The TPS2024's comprehensive, four-channel capability, along with its optional power analysis software, enables connection to all three conductors of a three-phase system to measure and analyze line harmonics. Its Harmonics mode, invoked with a single button, captures the fundamental frequency plus harmonics 2 through 50. Using only the oscilloscope's standard voltage probe, it's possible to execute a harmonic voltage measurement. An optional current probe acquires current harmonics
with the same ease.
Figure 5 illustrates a current harmonic measurement. The
amplitudes are computed by the instrument's internal DFT
(Discrete Fourier Transform) algorithm. In this case the bar graph
reveals a very strong fifth harmonic level. Excessive fifth harmonic
levels (along with certain other odd harmonics) are a classic cause
of neutral-wire currents in three-phase systems.
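For readers who want to experiment off the instrument, the Python sketch below extracts harmonic amplitudes from a sampled current waveform with an FFT. It is only conceptually similar to the oscilloscope's built-in DFT display; the waveform, sample rate and harmonic content are invented for the example.

    import numpy as np

    f_line = 60.0                        # fundamental frequency (Hz)
    fs = 30720.0                         # sample rate: 512 samples per line cycle
    n = int(fs * 10 / f_line)            # ten whole cycles keeps the bins aligned
    t = np.arange(n) / fs

    # Synthetic distorted current: fundamental plus a strong 5th and some 3rd harmonic.
    i = (10.0 * np.sin(2 * np.pi * f_line * t)
         + 2.5 * np.sin(2 * np.pi * 5 * f_line * t)
         + 1.0 * np.sin(2 * np.pi * 3 * f_line * t))

    spectrum = np.abs(np.fft.rfft(i)) / n * 2.0      # peak amplitude per bin
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)

    for h in range(1, 8):                            # the instrument goes to the 50th
        k = int(np.argmin(np.abs(freqs - h * f_line)))
        print(f"harmonic {h}: {spectrum[k]:5.2f} A peak at {freqs[k]:.0f} Hz")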

Figure 5: Harmonic distortion measurements

MEASURING SWITCHING LOSS TO IMPROVE PRODUCT EFFICIENCY
Today's power designers face increasing pressure to
improve the efficiency of their power designs. A major factor
affecting the efficiency is the power loss occurring in the switching
section of the design. Optimizing this factor can prove complex.
The TPS Series allows the designer to look at switching
losses in their design through the instrument's one-button application function. The switching loss will be characterized as turn-on loss, turn-off loss, conduction loss and total device loss.
Figure 9 is a TPS Series screen image showing the switching loss
measurements.

Figure 9: TPS Series switching loss display showing turn-on, turn-off and conduction losses

POWER READINGS: MORE THAN JUST WATTS


Voltage and current measurements are by nature straightforward and absolute. A test point has only one voltage and one
current value at a given instant in time. In contrast, power measurements are voltage-, current-, time-, and phase-dependent.
Terms like reactive power and power factor, which were
devised to characterize this complex interaction, are not so much
measurements as computations.
The power factor is of particular interest in these computations. This is because many electrical power providers charge
a premium to users whose power factor is not sufficiently close
to 1.0, the ideal value. At a power factor of 1.0, voltage and current are in phase. Inductive loads, especially large electric motors and transformers, cause voltage and current to shift phase relative to each other, reducing the power factor. Some utility companies apply a surcharge in such cases because the inefficiency
causes energy loss in the form of heat in the power lines. There are
procedures to remedy power factor problems, but first the power
characteristics must be quantified.
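The Python sketch below shows how these quantities relate for one sampled cycle train; the waveforms and the 30 degree lag are illustrative assumptions, not data from the instrument.

    import numpy as np

    f, fs = 60.0, 12000.0
    t = np.arange(int(fs / f) * 10) / fs              # ten complete cycles

    phase_lag = np.deg2rad(30.0)                      # inductive load: current lags voltage
    v = 170.0 * np.sin(2 * np.pi * f * t)             # about 120 Vrms
    i = 14.1 * np.sin(2 * np.pi * f * t - phase_lag)  # about 10 Arms

    true_power = np.mean(v * i)                       # W, mean of instantaneous power
    v_rms = np.sqrt(np.mean(v ** 2))
    i_rms = np.sqrt(np.mean(i ** 2))
    apparent_power = v_rms * i_rms                    # VA
    reactive_power = np.sqrt(apparent_power ** 2 - true_power ** 2)   # var
    power_factor = true_power / apparent_power        # cos(30 deg), about 0.87

    print(f"P = {true_power:.0f} W, S = {apparent_power:.0f} VA, "
          f"Q = {reactive_power:.0f} var, PF = {power_factor:.2f}")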
The TPS Series embraces a full suite of power measurements. Among these are true power, reactive power, crest factor,
phase relationships, di/dt and dv/dt, and of course power factor.
Figures 6, 7 and 8 show TPS Series screen images summarizing
these and other power measurements. All of the measurements,
with the exception of waveform analysis and phase relationships, require a current probe (or its equivalent) and a voltage probe working in tandem. All of these measurements employ the instrument's one-button application function.

Figure 6: TPS Series instantaneous power analysis
Figure 7: TPS Series waveform analysis
Figure 8: TPS Series dv/dt and di/dt cursors (dv/dt cursors shown)


CONCLUSION
Engineers and technicians confront high voltages and currents and must often make potentially hazardous floating measurements. Where other alternatives may lack the versatility, accuracy or
affordability to make floating measurements, the TPS2000 Series
employs unique IsolatedChannel technology to allow engineers and
technicians to make these measurements quickly, accurately and
affordably.


HIGH-VOLTAGE MEASUREMENTS AND ISOLATION - GENERAL ANALOG CONCEPTS
NI Analog Resource Center
OVERVIEW
This tutorial is part of the NI Analog Resource Center.
Each tutorial will teach you a specific topic by explaining the
theory and giving practical examples. There are many issues to
consider when measuring high voltage. When specifying a data
acquisition system, the first question you should ask is whether
or not the system will be safe. Making high-voltage measurements can be hazardous to the equipment, to the unit under test,
and to you and your colleagues. To ensure that the system is safe,
you should provide an insulation barrier, using isolated measurement devices, between the user and hazardous voltages.

WHAT IS ISOLATION?
Isolation is a means of physically and electrically separating
two parts of a measurement device, and can be categorized into
electrical and safety isolation. Electrical isolation pertains to eliminating ground paths between two electrical systems. By providing
electrical isolation, you can break ground loops, increase the common-mode range of the data acquisition system, and level shift the
signal ground reference to a single system ground. Safety isolation refers to standards that have specific requirements for isolating humans from contact with hazardous voltages. It also characterizes the ability of an electrical system to prevent high voltages and transient voltages from transmitting across its boundary to other electrical systems with which you can come in contact.
Incorporating isolation into a data acquisition system has
three primary functions: preventing ground loops, rejecting common-mode voltage, and providing safety.

GROUND LOOPS
Ground loops are the most common source of noise in
data acquisition applications. They occur when two connected
terminals in a circuit are at different ground potentials, causing
current to flow between the two points. The local ground of the
system can be several volts above or below the ground of the
nearest building, and nearby lightning strikes can cause the difference to rise to several hundreds or thousands of volts. This
additional voltage itself can cause significant error in the measurement, but the current that causes it can couple voltages in
nearby wires as well. These errors can appear as transients or
periodic signals. For example, if a ground loop is formed with 60
Hz AC power lines, the unwanted AC signal appears as a periodic
voltage error in the measurement.
When a ground loop exists, the measured voltage, Vm, is
the sum of the signal voltage, Vs, and the potential difference, Vg,
which exists between the signal source ground and the measurement system ground, as shown in Figure 1. This potential is generally not a DC level; therefore, the result is a noisy measurement
system, often showing power-line frequency (60 Hz) components
in the readings.

Figure 1. A Grounded Signal Source

GROUND-REFERENCED SYSTEM INTRODUCES GROUND LOOP


To avoid ground loops, ensure that there is only one
ground reference in the measurement system, or use isolated
measurement hardware. Using isolated hardware eliminates the
path between the ground of the signal source and the measurement
device, therefore preventing any current from flowing between
multiple ground points.

COMMON-MODE VOLTAGE
An ideal differential measurement system responds only
to the potential difference between its two terminals, the (+) and
(-) inputs. The differential voltage across the circuit pair is the
desired signal, yet an unwanted signal can exist that is common
to both sides of a differential circuit pair. This voltage is known
as common-mode voltage. An ideal differential measurement
system completely rejects, rather than measures, the commonmode voltage. Practical devices however, have several limitations,
described by parameters such as common-mode voltage range and
common-mode rejection ratio (CMRR), which limit this ability to
reject the common-mode voltage.
The common-mode voltage range is defined as the maximum allowable voltage swing on each input with respect to the
measurement system ground. Violating this constraint results not
only in measurement error, but also in possible damage to components on the board.
Common-mode rejection ratio describes the ability of a
measurement system to reject common-mode voltages. Amplifiers
with higher common-mode rejection ratios are more effective at
rejecting common-mode voltages. The CMRR is defined as the
logarithmic ratio of differential gain to common-mode gain.
CMRR (dB) = 20 log (Differential Gain/Common-Mode Gain). (Equation 1)

Common-mode voltage is shown graphically in Figure 2.


In this circuit, CMRR in dB is measured as 20 log (Vcm/Vout), where V- = Vcm.
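As a numerical sketch of Equation 1 applied to this measurement circuit (the test values are assumptions):

    import math

    v_cm = 10.0      # common-mode test voltage applied to both inputs (V)
    v_out = 0.001    # residual output with zero differential input (V)

    # CMRR expressed from the measured feedthrough: 20*log10(Vcm/Vout).
    cmrr_db = 20.0 * math.log10(v_cm / v_out)
    print(f"CMRR = {cmrr_db:.0f} dB")   # 10 V in, 1 mV out -> 80 dB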




Figure 2. CMRR Measurement Circuit

In a non-isolated differential measurement system, an


electrical path still exists in the circuit between input and output.
Therefore, electrical characteristics of the amplifier limit the
common-mode signal level that can be applied to the input. With
the use of isolation amplifiers, the conductive electrical path is
eliminated and the common-mode rejection ratio is dramatically
increased.

ISOLATION CONSIDERATIONS
There are several terms with which to be familiar when configuring an isolated system:
Installation Category: A grouping of operating parameters that describe the maximum transients that an electrical system can safely withstand. Installation categories are discussed in more detail later.
Working Voltage: The maximum operating voltage at which the system can be guaranteed to continuously safely operate without compromising the insulation barrier.
Test Voltage: The level of voltage to which the product is subjected during testing to ensure conformance.
Transient Voltage (Over-voltage): A brief electrical pulse or spike that can be seen in addition to the expected voltage level being measured.
Breakdown Voltage: The voltage at which the isolation barrier of a component breaks down. This voltage is much higher than the working voltage, and oftentimes is higher than the transient voltage. A device cannot operate safely near this voltage for an extended period of time.

ISOLATION TYPES
Physical isolation is the most basic form of isolation, meaning that there is a physical barrier between two electrical systems. This can be in the form of insulation, an air gap, or any non-conductive path between two electrical systems. With pure physical isolation, however, we imply that no signal transfer exists between electrical systems. When dealing with isolated measurement systems, you must have a transfer, or coupling, of energy across the isolation barrier.
There are three basic types of isolation that can be used in a data acquisition system:

OPTICAL ISOLATION
Optical isolation is common in digital isolation systems. The media for transmitting the signal is light and the physical isolation barrier is typically an air gap. The light intensity is proportional to the measured signal. The light signal is transmitted across the isolation barrier and detected by a photoconductive element on the opposite side of the isolation barrier.
Figure 3. Optical Isolation

ELECTROMAGNETIC ISOLATION
Electromagnetic isolation uses a transformer to couple a signal across an isolation barrier by generating an electromagnetic field proportional to the electrical signal. The field is created and detected by a pair of conductive coils. The physical barrier can be air or some other form of non-conductive barrier.
Figure 4. Transformer

CAPACITIVE ISOLATION
Capacitive coupling is another form of isolation. A changing electric field alters the level of charge on the capacitor. This charge is detected across the barrier and is proportional to the level of the measured signal.
Figure 5. Capacitor
ISOLATION TOPOLOGIES
It is important to understand the isolation topology of a
device when configuring a measurement system. Different
topologies have several associated cost and speed considerations.


CHANNEL-TO-CHANNEL

The most robust isolation topology is channel-to-channel


isolation. In this topology, each channel is individually isolated
from the others and from other non-isolated system components.
In addition, each channel has its own isolated power supply.
In terms of speed, there are several architectures from
which to choose. Using an isolation amplifier with an analog to
digital converter (ADC) per channel is typically faster because you
can access all of the channels in parallel. A more cost-effective, but
slower architecture, involves multiplexing each isolated input
channel into a single ADC.
Another method of providing channel-to-channel isolation
is to use a common isolated power supply for all of the channels. In
this case, the common-mode range of the amplifiers is limited to
the supply rails of that power supply, unless front-end attenuators
are used.

SAFETY AND ENVIRONMENTAL STANDARDS
When configuring a data acquisition system, you must take


the following steps to ensure that the product meets applicable
safety standards:
consider the operational environment, which includes
the working isolation voltage and installation category.
choose the method of isolation in the design based on
these operational and safety parameters.
choose the type of isolation based on the accuracy needed,
the desired frequency range, the working isolation voltage,
and the ability of the isolating components to withstand
transient voltages.
Not all isolation barriers are suitable for safety isolation.
Even though measurement products may have components rated
with high-voltage isolation barriers, the overall product design,
not just the components, dictates whether or not the device meets
high-voltage safety standards. Safety standards have specific
requirements for isolating humans from contact with hazardous
voltages. These requirements vary among different applications and
working voltage levels, but often specify two layers of protection
between hazardous voltages and human-accessible circuits or parts.
In addition, the standards for test and measurement equipment are not only concerned with dangerous voltage levels and
shock hazards, but also with environmental conditions, accessibility, fire hazards, and valid documentation for explaining the
use of equipment in preventing these hazards. They maintain
specific construction requirements of isolation equipment to
ensure that the integrity of the isolation barrier is maintained
with changes in temperature, humidity, aging, and variations in
manufacturing processes.
When dealing with safety standards, the European
Commission and Underwriters Laboratories, Inc. (UL) have outlined the standards that cover the design of high-voltage instruments. There are approximately 200 individual safety standards
harmonized (approved for use to demonstrate compliance) to the
Low Voltage Directive, which was the initial document that outlined the specifications for the voltage levels that require safety
consideration.
The relevant standard for instrument manufacturers is EN
61010 Safety Requirements for Electrical Equipment for
Measurement, Control, and Laboratory Use. EN 61010 states
that 30 Vrms or 60 VDC are dangerous voltages. In addition to
high-voltage design requirements, EN 61010 also includes other
safety design constraints (such as flammability and heat).
Instrument manufacturers must meet all the specifications in EN
61010 to receive the CE label.
There are two other standards very similar to EN 61010
IEC 1010 and UL 3111. IEC 1010, which was established by
the International Electrotechnical Commission, is the precursor
to EN 61010. The European Commission adopted it and
renamed it EN 61010. UL 3111 is also a child of IEC 1010. UL
took IEC 1010, made some modifications and adopted it as UL
3111. This new, strict UL standard replaces the older, more
lenient UL 1244 standard for measurement, control, and laboratory instruments. For new designs, instrument manufacturers
must meet all of the specifications in UL 3111 to receive a UL
listing.

Figure 6. Channel-to-Channel Multiplexed Topology

BANK
Another isolation topology involves banking, or grouping,
several channels together to share a single isolation amplifier. In
this topology, the common-mode voltage difference between
channels is limited, but the common-mode voltage between the
bank of channels and the non-isolated part of the measurement
system can be large. Individual channels are not isolated, but
banks of channels are isolated from other banks and from ground.
This topology is a lower-cost isolation solution because this
design shares a single isolation amplifier and power supply.

Figure 7. Bank Topology

INSTALLATION CATEGORIES

The IEC defined the term Installation Category (sometimes referred to as Over-voltage Category) to address transient
voltages. When working with transient voltages, there is a level


of damping that applies to each category. This damping reduces


the transient voltages (over-voltages) that are present in the system.
As you move closer to power outlets and away from high-voltage
transmission lines, the amount of damping in the system increases.
The IEC has created four categories to partition circuits with different levels of over-voltage transient conditions.
Installation Category IV: Distribution level (transmission lines)
Installation Category III: Fixed installation (fuse panels)
Installation Category II: Equipment consuming energy from a Category III fixed installation system (wall outlets)
Installation Category I: Equipment for connection to circuits where transient over-voltages are limited to a sufficiently low level by design

Figure 8. Installation Categories

FUEL CELL MEASUREMENT


Fuel cell test systems make a variety of measurements
that require signal conditioning before the raw signal is digitized
by the data acquisition system. An important feature for the testing of fuel cell stacks is isolation. Each individual cell can generate about 1 V, and a stack of cells can produce several kV. To
accurately measure the voltage of a single 1 V cell in a large fuel
cell stack requires a large common-mode range and high common-mode rejection ratio. Because adjacent cells have a similar
common-mode voltage, bank isolation is sometimes acceptable.

HIGH COMMON-MODE THERMOCOUPLE MEASUREMENT


Some thermocouple measurements involve high commonmode voltages. Typical applications include measuring temperature while a thermocouple is attached to a motor, or measuring the
temperature dissipation capabilities in a conductive coil. In these
cases, you are trying to measure small, millivolt changes with
several volts of common-mode voltage. It is therefore important
to use an isolated measurement system with good common-mode
rejection specifications.


TYPICAL APPLICATIONS REQUIRING ISOLATION


SINGLE-PHASE AC MONITORING
To measure power consumption with 120/240 VAC power
measurements, you record instantaneous voltage and current values. The final measurement, however, may not be instantaneous
power, but average power over a period of time or cost information for the energy consumed. By making voltage and current
measurements, software can make power measurements or do
other analyses. To make high-voltage measurements you need
some type of voltage attenuator to adjust the range of the signal to
the input range of the measurement device. Current measurements
require a precision resistor. The voltage drop across the resistor is
measured, and Ohm's Law (I = V/R) produces a current value.
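A minimal Python sketch of that signal-conditioning arithmetic follows; the attenuator ratio, shunt value and samples are illustrative assumptions, not a recommendation for any particular hardware.

    import numpy as np

    attenuation = 100.0     # assumed 100:1 attenuator ahead of the digitizer
    r_shunt = 0.010         # assumed precision current-sense resistor (ohms)

    # Simultaneously sampled channels from the data acquisition device (made-up data).
    v_adc = np.array([1.20, 1.60, 1.10, -0.90, -1.50, -1.20])        # volts at the input
    v_shunt = np.array([0.052, 0.066, 0.047, -0.041, -0.063, -0.050])

    v_line = v_adc * attenuation          # actual line-voltage samples (V)
    i_line = v_shunt / r_shunt            # Ohm's Law: I = V / R (A)
    p_avg = np.mean(v_line * i_line)      # average power over the record (W)

    print(f"average power over this record: {p_avg:.0f} W")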

SERIAL COMMUNICATION
Reliability is a number one concern when designing


equipment to be resistant to the interference inherent in a harsh
environment. Commercial and industrial applications such as
POS networks, ATMs, bank teller stations, and CNC-based production lines are susceptible to voltage spikes and noise.
Isolation reduces the possibility of damaging control systems and ensures that systems can remain operational. Other applications that may require isolation are industrial process control,
factory automation, serial networking devices, high speed
modems, monitoring equipment, long distance communication
devices, printers and remote serial device control.


STANDARD MEASUREMENTS: ELECTRIC FIELDS DUE TO HIGH VOLTAGE EQUIPMENT
Ralf Müller and Hans-Joachim Förster
The standards for personal safety in electric and magnetic
fields have been tightened. Three-dimensional measurement of the
fields and the combination of these components into the equivalent
field strength is now required. Is this extra effort justified? As part
of a study project at the Fachhochschule Reutlingen, high voltage
lines, transformer stations and the working environment were
investigated. The results show that three-dimensional measurement is indeed necessary.

MEASUREMENT METHODS
An E field sensor basically consists of a pair of condenser
plates placed side by side, across which the dielectric current is
measured. The disadvantage of this simple arrangement is its
directional characteristic. To measure accurately, the direction of
the field lines has to be known and the sensor positioned accordingly. This is seldom possible in practice. As a result, the trade
association [1] requires the measurement to be made in each of
the three orthogonal spatial axes and the so-called equivalent field strength calculated as the root of the sum of the squares of the three field components. This is theoretically possible with a simple
probe by making three consecutive measurements in the three
directions, assuming that the field remains constant over time.
The practical answer is to use a sensor that has a three dimensional structure. Modern measuring equipment uses sensors
made up from three plate condensers arranged at right angles to
each other, and calculates the equivalent field strength automatically. The isotropy, i.e. the actual non-directionality of the sensor,
is important in this context. This can be assessed by rotating the
sensor in an homogeneous field; the indicated field strength must
remain constant [3]. This is the only way to ensure that dangerous field strengths are not present.
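A one-line Python sketch of that combination (the component readings are invented for illustration):

    import math

    def equivalent_field(e_x: float, e_y: float, e_z: float) -> float:
        # Root of the sum of the squares of the three orthogonal components.
        return math.sqrt(e_x ** 2 + e_y ** 2 + e_z ** 2)

    # Near masts or crossings the horizontal components are no longer negligible.
    print(f"{equivalent_field(0.4, 3.8, 0.9):.2f} kV/m")    # about 3.93 kV/m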

SIMPLEST CASE: THE HIGH VOLTAGE LINE


Our first example is a high voltage line running across open
land. If the field is measured at the lowest point of the cable sag,
i.e. as far as possible from the masts, it can even be assumed that
the field lines are vertical. As expected, the measurement results of
a three-dimensional (isotropic) and a one-dimensional (so-called "y only") measurement differ only slightly from one another. The maximum difference is below 5%. The slight asymmetry in the measurement curve is due to the terrain, which showed a slight
upward slope from left to right. The phase relationships between
the conductors are of no consequence in this case, as the measurement distance from the conducting cables is too large.

Figure 1: High-voltage line. Results of electric field measurements in one and


three dimensions

MEASUREMENT CONDITIONS
Several factors must be observed if measurements are to
conform to relevant standards [1]:
No person should be present in the immediate vicinity of
the measurement.
Objects in the vicinity that distort the field, such as trees,
bushes, machinery, etc., must be noted.
Environmental effects such as air humidity, temperature,
type of terrain, etc., must also be noted.
No condensation may be present on the sensor or its supporting tripod as this will lead to measurement errors.
The persons operating the measuring instrument must
ensure that they do not stand between the field source and
the probe during the measurement.
These measures are required in order that comparable and
reproducible results can be obtained under varying operating
conditions.

MORE INTERESTING: LINE CROSSING


The second example shows the field profile in the area
where two lines cross. The measurement conditions were:
Voltages 110 kV and 220 kV
Three-phase conductors
Line 1 (top to bottom): 220 kV, Christmas tree masts
approx. 40 m high
Line 2 (left to right): 110 kV single layer masts approx.
26 m high
The ambient conditions at the time of the measurement
were: Temperature 16 °C, average air humidity, very damp ground.
Figure 2 shows the basic measurement path. Some trees
and a number of small bushes were located in the immediate
vicinity of the measurement. The distorting effects of these
objects on the field profile are discussed in the evaluation.


Figure 2: Measurement path beneath two high voltage


lines that cross. Green areas indicate bushes and trees

Figure 3 shows the field


strength profile that was measured.
The starting point of the measurement in the diagram is at the position
of the mast. The last measurement
was made at a distance of 60 m from
this point. The effect of the mast can
be clearly seen up to the area where
the lines cross. The crossing begins
30 m from the starting point. A field
strength maximum occurs at the 28
m point. This is due to the addition
of the field strengths of the two lines.
In the area of the crossing, the field
components of the upper line are
compensated, resulting in a minimum at this point. The field strength
increases again rapidly after the
crossing area, at 52 m. This is due to
the fact that the screening effect of
the mast is now reduced and the area
of the crossing has been left.

Figure 3: Electric field profile where two high voltage lines cross.

Figure 4 shows the relative


difference between the one-dimensional and three-dimensional measurements. The maxima are found at
the entry and exit of the crossing.
The difference is up to 13%. The
lower conductors in the crossing area
compensate out the field components of the upper conductors. The
variation in the field within the area
of the crossing in figure 3 is due to
the uneven terrain. This section is
therefore shown in more detail in
figure 5.

Figure 4: Relative difference between one-dimensional and three-dimensional measurements


Figure 5: Zoomed representation of the crossing area from figure 3

Figure 5 clearly shows the effect of the


terrain on the measurement result. A three-dimensional measurement is clearly to be preferred where the terrain is very uneven. Accurate
results are not given by a one-dimensional measurement or by a computer simulation.

COMPLEX: TRANSFORMER STATION


The Neckarwerke Esslingen AG kindly allowed us to
make measurements in a transformer station. A measurement
path was selected that included several conductor arrangements,
insulators and carriers. It is depicted in figure 6.
The ambient conditions at the time of the measurement
were: Temperature 5 °C, average air humidity, very damp ground.
The measurement results clearly show that significant differences in the results of one-dimensional and three-dimensional measurements occur in the vicinity of crossing conductors,
switching equipment, current busbars and the like. The relative
error is very dependent on the measurement position. Directly
beneath the conductors, it is small, but it can be as much as 60%
at points between the conductors. This difference cannot be
accepted when measurements are made for personal safety, especially where legal settlements are involved. The difference clearly shows that the indicated field strength is lower than the actual field strength, and hence the assumed margin of safety does not actually exist. This
exposes a weakness in IEC standard 833 [4] which exclusively
defines measurement in the vicinity of high voltage lines and is
therefore not applicable in cases where labor laws are involved.

Figure 6: Measurement path in a transformer station (view from above and view from side)
Figure 7: Direct comparison of one-dimensional and three-dimensional measurement results


Figure 8: Relative error between one-dimensional and three-dimensional measurements

UNCLEAR: MOST WORKING ENVIRONMENTS

REFERENCES

The conditions of most working environments in industry


are far removed from the simple case of a high-voltage line;
switching equipment, transformer stations, induction heaters and
machinery may all play a part in the field profile. It is thus not
possible to predict the spatial field profile or its variation with
time. Further uncertainty results from the frequency spectrum.
Several standards specify different limit values for different frequencies. Broadband measurement equipment cannot, therefore,
be used if the frequency of the field is unknown or if several
fields are superimposed. As an example of this, an induction
heater emits radiation at the AC line frequency of 50 or 60 Hz
and its harmonics and also at the frequency of the heating current. The latest test equipment copes with this situation by
employing built-in filters to detect the main radiation components and evaluate their frequencies. The use of three-dimensional measurement techniques coupled with filters is an absolute
must if personal safety measurements are to be made that are
reproducible and which conform to the relevant standards.

[1] Precision Engineering and Electrical Engineering Trade


Association: Rules for health and safety at work involving exposure to electric, magnetic or electromagnetic fields (in German)
[2] Electric and Magnetic Fields Everyday Electricity (in German)
Electricity Industry Information Center (Informationszentrale der
Elektrizitätswirtschaft e.V.) 60596 Frankfurt
[3] Progress Report VDI Series 8: Measurement, Control and
Regulation Dipl.-Ing. Georg Bahmeier, Untermeitingen Field
probes for calibration and for determining the magnitude and
direction of electric field strength (in German)
[4] International Standard IEC 833: Measurement of power frequency electric fields
[5] German Standard VDE 0848 Part 1: Endangerment due to
electromagnetic fields. Measurement and calculation methods
(in German)
[6] German Standard VDE 0848 Part 4: Safety in electromagnetic fields. Field strength limit values for personal safety in the frequency range from 0 Hz to 30 kHz.


IDENTIFICATION OF CLOSED LOOP SYSTEMS


NI Analog Resource Center

Often it is necessary to identify a system that must operate


in a closed-loop fashion under some type of feedback control.
This may be due to safety reasons, an unstable plant that requires
control, or the expense required to take a plant offline for test. In
these cases, it is necessary to perform closed-loop identification.
There are three basic approaches to closed-loop identification. These approaches are direct, indirect, and joint input-output. In this article we outline each approach and the system identification techniques that may be used to implement them.

DIRECT
The first method of interest is the Direct Approach. In this
method, we measure the output of the system y(t) and the input
to the plant u(t), ignoring any feedback and the reference signal,
to obtain the model. This is illustrated in Figure 1. This has the
advantage of requiring no knowledge about the feedback in the
system and becomes an open-loop identification problem.
The suggested system identification model structures
when using this method are ARX, ARMAX and state-space
models. Optimal accuracy occurs if the chosen model structure
contains the true system (including the noise properties) and the
main drawback to the method is that a poor noise model can
introduce bias into the model. This bias will be small when any
or all of the following hold
The noise model is representative of the actual noise
The feedback contribution to the input spectrum is small
The signal to noise ratio is high
Spectral analysis will not provide correct results in the closed-loop case when using the direct approach, so avoid nonparametric methods of identification such as impulse response and Bode response estimation.

Figure 1 Direct Approach to Closed-Loop System Identification.

INDIRECT
The second method of interest in closed-loop identification is the Indirect Approach, as shown in Figure 2. In this method we identify the closed-loop system (Gcl) using measurements of the reference input r(t) and the output y(t) and retrieve the plant model, making use of a known regulator structure. The transfer function for the open-loop plant G, with regulator H, can be retrieved from the relation below.
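Assuming the common unity-feedback configuration in which the regulator H acts on the error r(t) - y(t) (an assumption, since the configuration is defined by Figure 2), the closed-loop transfer function is Gcl = GH / (1 + GH), and the plant estimate can be recovered as

    G = Gcl / (H (1 - Gcl))

If the regulator sits elsewhere in the loop, the corresponding algebra must be used instead.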

The advantages in using the indirect approach are that any


method will work in determining the closed-loop transfer function Gcl and the need for an accurate representation of the noise
model is alleviated. The main disadvantage is that any error in H
(including deviations due to saturations or anti-windup logic)
will be imposed directly into G resulting in bias errors.

Figure 2 Indirect Approach to Closed-Loop System Identification.

JOINT INPUT-OUTPUT
The last method is the Joint Input-Output Approach. As
shown in Figure 3, we consider the plant input u(t) and the system output y(t) as outputs of the system. The inputs to the system are the reference signal r(t) and the noise signal v(t).




Figure 3 Joint Input-Output Approach to Closed-Loop System Identification.

This identification method results in a multidimensional system whose system matrix is comprised of two models: the closed-loop model Gcl and the model relating u(t) to r(t), Gru. The plant model, G, is then estimated from the relation between these two models.
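Under the usual joint input-output formulation (an assumption, since the exact expressions are defined by Figure 3), both measured signals are modelled as responses to the reference, y(t) = Gcl r(t) and u(t) = Gru r(t) plus noise terms, so the plant estimate follows as

    G = Gcl / Gru

(or G = Gcl Gru^-1 in the multivariable case).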

This approach is advantageous because the regulator


structure is not needed nor is an accurate noise model necessary.
It suffers from the disadvantages of requiring additional acquisition hardware (sensors) and requires acquiring a greater quantity
of data.
When using the indirect and joint input-output methods,
the reference signal r(t) should be as informative as possible.
This means it should provide good spectral coverage of the
domain of interest. This may be done by adjustments to the system set points (or adjustments to the regulator) as much as
allowed by the system being identified.

CONCLUSION
It is often necessary to perform identification under closed-loop conditions to increase safety or reduce the costs of the modeling. The three approaches outlined in this article provide accurate estimations of plant dynamics under feedback control using simple measurements. Using the LabVIEW System Identification Toolkit provides the necessary identification algorithms to aid in these closed-loop identification problems.


SELECTING AND USING TRANSDUCERS AND TRANSFORMERS FOR ELECTRICAL MEASUREMENTS
William D. Walden
Transducers for electrical measurement are an essential
part of any monitoring, measuring, or controlling system where
electrical quantities are involved. In order to use these transducers, it is important to know what they do, what kind of signal
they provide, and how to connect them.
Part I provides an introduction to using voltage, current,
and power (watt) transducers along with using potential and current transformers.

POTENTIAL TRANSFORMERS
Most manufacturers' transducers accept up to a maximum
of 600 volts AC direct. For AC voltages greater than 600 volts,
potential transformers are required. Potential transformers are
precision transformers that step the voltage down to 120 volts
AC, a standard transducer input. These transformers, particularly
when used with power or watt transducers, must be instrument
grade transformers. They must not only be precise in stepping
down the voltage but in maintaining the phase or time relationship
of the voltage. This is very important. Do not attempt to save money
by using control class transformers.
Transducer and meter loads are connected in parallel to
the potential transformer. Take care not to exceed the transformer
burden rating. This burden is expressed in VA, or volt-amperes
(the product of volts and amps).
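A minimal Python sketch of that bookkeeping follows; the burden rating and the individual loads are illustrative assumptions.

    pt_burden_rating_va = 25.0      # assumed burden rating of the potential transformer
    secondary_voltage = 120.0       # standard PT secondary (V)

    # VA drawn by each device connected in parallel across the PT secondary.
    loads_va = {"watt transducer": 3.0, "voltage transducer": 2.0, "panel meter": 5.0}

    total_va = sum(loads_va.values())
    print(f"total burden: {total_va:.1f} VA of {pt_burden_rating_va:.0f} VA allowed")
    print(f"secondary current: {1000 * total_va / secondary_voltage:.0f} mA")
    if total_va > pt_burden_rating_va:
        print("burden rating exceeded - accuracy is no longer guaranteed")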

CURRENT TRANSFORMERS
For AC applications, most manufacturers' transducers will not accept direct current input over 20 amperes. For higher amperages, current transformers are utilized.
These transformers are most often the donut type. The current-carrying conductor is passed through the opening or window of the donut. The secondary winding of the current transformer is wound by the manufacturer on the toroidal iron core which makes the donut shape. On most North American manufactured current transformers, the secondary is wound to produce 5 amperes when rated current is passed through the window. The turns ratio is expressed as 100:5 or 3500:5 (read as 100 to 5 and 3500 to 5). The first number represents the rated full-scale primary current. The primary winding consists of the single pass of the current-carrying conductor through the window. The second number represents the full-scale secondary current in amperes. A 100:5 ratio current transformer steps the current from 100 amperes down to 5 amperes. The 3500:5 ratio current transformer steps the current from 3500 amperes down to 5 amperes.
As with potential transformers, only use instrument grade current transformers with power measuring transducers.
Connect the loads on current transformers in series, being careful not to exceed the burden rating. The phase angle shift introduced by current transformers is sensitive to the loading. Therefore, keep the burden to a minimum by using adequate size secondary leads and keeping secondary leads as short as possible.
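The ratio arithmetic is simple enough to sketch in a few lines of Python (the secondary readings are invented for the example):

    def primary_current(i_secondary: float, ct_ratio: str) -> float:
        # "100:5" means 100 A in the primary produces 5 A in the secondary.
        primary_fs, secondary_fs = (float(x) for x in ct_ratio.split(":"))
        return i_secondary * primary_fs / secondary_fs

    print(primary_current(3.2, "100:5"))     # 3.2 A secondary -> 64 A in the conductor
    print(primary_current(4.0, "3500:5"))    # 4.0 A secondary -> 2800 A in the conductor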

CAUTION: Current transformers can and will develop a


lethal voltage and possibly self destruct if the secondary is open
when primary current is present! People have been hurt and
equipment damaged when the secondary winding of a current
transformer was opened. Never disconnect the secondary or
leave it open when there is the possibility of primary current.
It is essential that experienced persons install current
transformers. If you must make a connection to the current transformer while it is in use, SHORT THE SECONDARY WINDING before doing anything. Some current transformers have a
shorting block for this purpose. Auxiliary shorting blocks are
available for this purpose too.


Current transformers are rated for the voltage class for


which they are to be used. These classes are: 600 volts, 5000
volts, 8700 volts, 15 kilovolts, 25 kilovolts, and 34.5 kilovolts.
Make certain that the current transformers are rated for the voltage
with which they are working or that the conductor is insulated for
the class voltage. Current transformers being used on conductors
with voltages greater than 600 volts must have the secondary
grounded to an earth ground.

VOLTAGE TRANSDUCERS
Voltage transducers provide a DC current or voltage output directly proportional to the AC input voltage. AC voltage transducers typically have a transformer input to isolate the transducer from the voltage input. Following the transformer are the electronics.

There are two types of AC voltage transducers.
Absolute average measuring, rms calibrated (or mean value measuring, rms calibrated). These inexpensive transducers simply convert the AC input to DC and have the output calibrated to represent the root mean square (RMS) value for sine wave input. This type is very adequate for situations in which the voltage wave shape is not distorted. Any odd harmonic or discontinuity will introduce large error. Use the true RMS measuring type when distortion of a sine wave is present.
True RMS (Root Mean Square) measuring. These transducers calculate the RMS value of the voltage input and provide a DC output directly proportional to the effective value of the voltage input. This type should be used whenever the voltage is distorted.

Transducer models are available for nominal input voltages of 69, 120, 240, and 480 volts. These typically have a measuring range of 0 to 125% of the nominal input rating. Thus, a 120-volt model has a range of 0 to 150 volts. For voltage input higher than 600 volts, one should use a potential transformer.

CURRENT TRANSDUCERS
Current transducers provide a DC current or voltage output directly proportional to the AC input current. AC current
transducers typically have a transformer input to isolate the
transducer from the current input. Following the transformer are
the electronics.

There are two types of AC current transducers.


Absolute average measuring, rms calibrated (or mean
value measuring, rms calibrated). These inexpensive
transducers simply convert the AC input to DC and have
the output calibrated to represent the root mean square
(RMS) value for sine wave input. This type is very adequate for situations in which the current wave shape is
not distorted. Any odd harmonic or discontinuity will
introduce large error. Use the true RMS measuring type
when distortion of a sine wave is present.
True RMS (root mean square) measuring. These transducers calculate the RMS value of the current input and
provide a DC output directly proportional to the effective
value of the current input. This type should be used
whenever the current is distorted.
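The Python sketch below makes the difference between the two types concrete by "reading" a clean and a distorted waveform both ways; the waveform and the amount of third harmonic are assumptions chosen only for illustration.

    import numpy as np

    t = np.linspace(0.0, 1.0, 10000, endpoint=False)           # one cycle
    sine = np.sin(2 * np.pi * t)
    distorted = np.sin(2 * np.pi * t) + 0.4 * np.sin(2 * np.pi * 3 * t)

    FORM_FACTOR = np.pi / (2.0 * np.sqrt(2.0))   # rms/average ratio of a pure sine (~1.11)

    def true_rms(x):
        return np.sqrt(np.mean(x ** 2))

    def average_responding(x):
        # "rms calibrated": rectified average scaled by the sine-wave form factor.
        return FORM_FACTOR * np.mean(np.abs(x))

    for name, wave in (("pure sine", sine), ("sine + 3rd harmonic", distorted)):
        print(f"{name:20s} true RMS = {true_rms(wave):.3f}   "
              f"average-responding reading = {average_responding(wave):.3f}")

For the pure sine the two readings agree; for the distorted wave the average-responding reading is several percent off, which is the error the text warns about.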
Ohio Semitronics, Inc. has a wide range of models available for various situations. Models are available with or without
current transformers, with current transformers built in, and with
split core current transformers.


POWER OR WATT TRANSDUCERS

SINGLE-PHASE WATT TRANSDUCERS

A watt or power transducer measures true electrical power delivered to a load and converts that measurement to a DC voltage
or current signal proportional to the power measured. To measure
power, the watt transducer must monitor both the voltage and current in a circuit. Further, it must be able to accurately determine the
phase relationship between the voltage and current. This is the angle
by which the current leads or lags the voltage. This measurement is
very important to accurately determine true power.

The most common application for a watt transducer is monitoring a single-phase load such as a heater element or small
motor. This requires a single element watt transducer connected
directly between the power line and the load as illustrated below.

The watt transducer must also measure the power in each of the branches of the circuit. Your house, apartment, or small
office is wired in what is often referred to as the Edison system.
This is a three-wire, single phase system with two power lines
and a neutral. The watt transducer must measure the power in
each of the power lines or mains. This circuit requires a two-element watt transducer. A two-element watt transducer has two-watt
transducers in the same case. The outputs of the two transducers or
multipliers are summed so that the output signal of the entire watt
transducer represents total power. One, two, and three element
watt transducers are discussed in Part II.

The single-phase watt transducer shown above has a single multiplier or element inside the electronics package. Often the
combined loads of an entire house, apartment, or office are monitored with a watt transducer. This requires a two-element model
with current transformers. The two-element, single-phase watt
transducer is connected as shown below.

The two-element watt transducer shown above has two multipliers inside the electronics package. The output of these two multipliers is summed to obtain the total power. The output signal of this watt transducer thus represents the total power being used.

What type of watt transducer to use?
Analog watt transducers, including Hall effect types, provide good accuracy even with distorted wave shapes, discontinuity, or where there is poor frequency regulation.
Electronic watt transducers with sampling or pulse-width, pulse-height type multipliers provide excellent accuracy but may have problems with discontinuity or where there is poor frequency regulation. Before ordering watt transducers, it is to your advantage to assess your specific needs and conditions.

THREE-PHASE WATT TRANSDUCERS

Most motors in industry are three-phase, three-wire motors. These require two-element watt transducers. Do not attempt to save money and use a single element transducer; it will not provide correct or useful information. Smaller three-phase motors may be connected directly to the watt transducer.
Larger three-phase motors will require the use of current and/or
potential transformers. All three cases are shown in the three diagrams that follow.



In special cases where a three-phase, four-wire load is
known to be balanced in load and voltage, a single element watt
transducer may be used to give an indication of total power by
multiplying the value represented by the transducer output by
three.

THE 2 1/2 ELEMENT WATT TRANSDUCER


Monitoring three-phase, four-wire systems frequently
involves using potential transformers. These transformers can
cost much more than the watt transducer. To reduce cost, two potential transformers are used instead of three. The watt transducer can derive the third voltage from the two.

Factories and large stores are typically supplied with three-phase, four-wire power. Heavy loads such as motors
are connected line-to-line in a three-phase, three-wire configuration and lighter loads are connected line to neutral.
Three element watt transducers are required to monitor the
entire facility. This requires the use of current transformers.
The connections are shown below.

OUTPUT SIGNALS FROM TRANSDUCERS


The voltage, current, and watt transducers discussed above are available with DC current or voltage output. The least expensive and
simplest voltage and current transducers are
available only with a current output.
How are these outputs used?
The most common is for metering. The
transducer output is driving either an analog or
digital meter. The use of either is simple.

ANALOG METER
If you are using an analog meter, buy transducers that are
supplied with a 0 to 1 mADC output and a 0 to 1 mADC meter
movement. The meter supplier can scale the meter face to match
the transducer range. Some examples are shown on the next
page.

In the first example the rated output of the transducer is
1000 watts. We would like the digital meter to read 1000. If we
supply 1 volt to the meter, it will read 1.000.
Digital meter manufacturers build their meters so that the
decimal point can be moved. This is done using wire jumpers on
the connection strip of the meter, by DIP switches on the meter,
or by wire jumpers or foil jumpers that the user cuts. Follow the
meter manufacturer's instructions. In our example, set the meter
to display 1000 or 1000.0 when 1 volt is applied.

DIGITAL METER
Some digital meters allow the user to scale the meter to display to the transducer range. If you use one of these meters with a
watt transducer that has a 4 to 20 mADC output representing 0 to
960 kilowatts, simply adjust the meter to read 0 at 4 mADC and
960 at 20 mADC. If you are using a 0 to 2 volt DC input meter that
does not allow scaling, use a scaling resistor. Some examples are
shown below.

How did we get the 1-volt from the transducer?


Use Ohm's law. The value of the resistor equals the desired voltage divided by the current. For our example, R = 1/0.001 = 1000 Ω. Use of a 1000 Ω resistor will provide 1 volt at full scale of 1 mADC. Our meter will read 1000 for 1000 watts.
In the second example we used a twenty ampere current
transducer with a digital meter. We want the meter to read 19.99
at full scale to take advantage of the four digits. (A 3-1/2 digit meter will read to 1.999 volts. Above this it will flash at you to let you know that the meter is over-ranging.)
How do we get 2 volts?
Again use Ohm's Law. The value of the resistor will be R = 2/0.001 = 2000 Ω. Set the decimal point so that the meter will read 19.99 at 1.999 volts. Your meter is now scaled to match
the transducer.
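As a quick sanity check, the scaling arithmetic above can be captured in a few lines. This is only an illustrative sketch of Ohm's law scaling for a 1 mADC transducer output; the resistor values and display ranges are the ones used in the examples above, and the function name is made up.

```python
# Sketch of the load-resistor scaling used above (illustrative only).
def load_resistor(full_scale_volts, full_scale_amps=0.001):
    """R = V / I: resistor that converts the transducer's full-scale
    output current into the desired full-scale meter voltage."""
    return full_scale_volts / full_scale_amps

# 1 mADC into 1000 ohms gives 1 V: meter scaled to read 1000 watts.
print(load_resistor(1.0))   # 1000.0 ohms
# 1 mADC into 2000 ohms gives 2 V: meter reads 19.99 at full scale.
print(load_resistor(2.0))   # 2000.0 ohms
```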

OTHER OUTPUT SIGNALS AVAILABLE


Most transducer manufacturers have transducers available with 0 to 1 mADC, 0 to 5 volts DC, 0 to 10 volts DC, or 4
to 20 mADC outputs. The 0 to 10 volt and 0 to 5 volt outputs are
typically (but not exclusively) used with data acquisition equipment, strip chart recorders, analog input cards for computers, or
control interface devices.
The 4 to 20 mADC output is used with process control
equipment, for long (over 200 feet) transmission of the signal,
and frequently as fail-safe monitoring of the signal.
If the watt transducer output is 4 mADC, then one knows
that the power being monitored is zero. However, if the output signal is zero, something is wrong: the transducer may have failed or it may have lost instrument power. The user can then take corrective action.
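The 4 to 20 mADC "live zero" convention described above lends itself to a simple software check. The sketch below assumes a hypothetical transducer spanning 0 to 960 kilowatts, as in the earlier digital-meter example, and flags a near-zero loop current as a probable transducer or supply failure; the threshold and function name are assumptions for illustration.

```python
def kilowatts_from_loop_current(i_ma, span_kw=960.0):
    """Convert a 4-20 mADC loop current to kilowatts (0 kW at 4 mA,
    span_kw at 20 mA). A reading near 0 mA indicates a fault, not zero power."""
    if i_ma < 3.5:  # hypothetical fault threshold; a healthy loop never reads 0 mA
        raise ValueError("Loop current near zero: transducer failed or lost power")
    return (i_ma - 4.0) / 16.0 * span_kw

print(kilowatts_from_loop_current(4.0))   # 0.0 kW
print(kilowatts_from_loop_current(12.0))  # 480.0 kW
print(kilowatts_from_loop_current(20.0))  # 960.0 kW
```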

In both examples a 2-volt DC digital meter is being used. By applying Ohm's Law (R = E/I, the value of the resistor equals the voltage divided by the current), one can determine the value of the resistor required. Remember that the output of the transducer is in milliamperes, 1/1000 of an ampere.
How did we figure the value of the resistor?
Always base the resistor on the rated output of the transducer; the rated output is the wattage level or current value that is represented by 1 mADC.

CABLES FOR ANALOG SIGNALS

Ohio Semitronics, Inc. recommends using a shielded twisted pair of 22 gauge or larger wire to conduct an analog voltage or current signal from the transducer to the meter or instrument. If you are using a 1-mADC-output transducer and a load resistor, we recommend putting the load resistor on the meter or instrumentation package. Ground the shield at the receiving end only. Do not ground at both ends. Doing so can cause severe problems. I have known shields to melt when a lightning strike has occurred nearby.

EXAMPLES OF POWER, VOLTAGE, AND CURRENT MONITORING


Monitoring voltage, current, and power delivered to a test
load. In this application a refrigerator is being examined.
Transducers used:
CT5-010A current transducer is wired in series with the
load.


VT-120A voltage transducer is wired in parallel with the load.
PC5-010A is wired in series with the load for monitoring
the current and in parallel with the load for monitoring the
voltage. These are used as examples. Other transducers
that may be used include the multifunction board level
transducer PTB. This board provides analog outputs proportional to each phase of true RMS Current, each phase
of true RMS voltage, and total power.
This example uses digital meters, which are scaled using precision load resistors.


Load resistors are selected as follows:


Remember Ohm's Law: R = V/I, where R is the resistance in ohms (Ω), V is the voltage that we want to apply to
the digital meter, and I is the current from the transducer.
The CT5-010A provides an output of 1 mADC at 10
amperes AC through terminals 3 and 4. 1 mADC represents 10 amperes AC. Adjust the decimal point of the
digital meter so that it displays 10.00 with 1 mADC
through a 1000 Ω load resistor.
The VT-120A provides an output of 1 mADC at 150
volts AC applied to terminals 3 and 4. 1 mADC represents 150 volts. Adjust the decimal point of the digital
meter so that it displays 150.0 with 1 mADC through a
1500 Ω load resistor.
The PC5-010A provides an output of 1 mADC at 1000 watts. 1 mADC represents 1000 watts. Adjust the decimal point on the digital meter so that it displays 1000 with 1 mADC through the 1000 Ω resistor. Now all three meters are scaled correctly and may be labeled amperes, volts, and watts. Note that the power does not equal volts times amperes. This is because the refrigerator has a power factor of 0.866, which is normal for older refrigerators. For the single-phase situation, power factor may be determined by dividing the power reading by the product of volts and amperes.
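For the single-phase case, the power factor calculation described above is just the watt reading divided by the volt-ampere product. A minimal sketch follows, using made-up meter readings chosen to be consistent with the 0.866 figure quoted for the refrigerator.

```python
def single_phase_power_factor(watts, volts, amps):
    """PF = real power / apparent power for a single-phase load."""
    return watts / (volts * amps)

# Hypothetical readings: 120 V, 5.0 A, 519.6 W -> PF of about 0.866
print(round(single_phase_power_factor(519.6, 120.0, 5.0), 3))
```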

POWER, VOLTAGE, CURRENT, AND POWER FACTOR


In one example, we are monitoring only one phase for
current, between two lines for voltage, and using a two-element
watt transducer that measures two lines of current and two lines
of voltage with respect to the third.



We are assuming a balanced condition to compute power
factor given one current reading, one voltage reading, and total
power.
Power Factor:
PF = watts / apparent power (VA). Apparent power for a three-phase, three-wire load may be calculated from the product of voltage, current, and the square root of 3 (1.732), or PF = watts / (V * I * 1.732).


COMMENTS
A watt transducer monitoring a three-phase, three-wire load must be a two-element watt transducer because the measured voltage and current are out of phase by 30 degrees at unity power factor, +30 degrees on one leg and -30 degrees on the other leg. Total power measured by the watt transducer is as follows:
Ptotal = [Ia * Vac * cos(θ + 30°) + Ib * Vbc * cos(θ - 30°)]

PF = watts / (V * I * 1.732)
= 81,000 / (479 * 231 * 1.732)
= 0.423

This is a low power factor and is very typical of some lightly loaded induction motors.
Where does the 1.732 come from? It is the square root of
3 rounded to three decimal places. The square root of three
comes from the ratio of line to line voltage to line to neutral voltage
in the three-phase system. Please refer to POWER MONITORING
IN PART TWO of this brochure.
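The three-phase, three-wire power factor calculation above can be checked numerically. The sketch below simply reproduces the worked figures from the example (81,000 W, 479 V, 231 A); nothing else is assumed.

```python
import math

def three_phase_power_factor(watts, volts_line_to_line, amps):
    """PF = W / (V_line-line * I_line * sqrt(3)) for a three-phase, three-wire load."""
    return watts / (volts_line_to_line * amps * math.sqrt(3))

print(round(three_phase_power_factor(81_000, 479, 231), 3))  # ~0.423
```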

PITFALLS
Monitoring AC voltage and AC current is simple enough,
but in monitoring power, one must follow the connection diagrams exactly.
Watt transducers are polarity sensitive. They sense not
only the power but also the direction in which it is flowing. Should a current transformer be installed backwards,
the watt transducer will sense this as reverse power flow
and provide an output reversed in polarity, a negative output.
Watt transducers are also phase sensitive. If a current
transformer is installed on the wrong phase line, the watt
transducer will interpret this as a 120-degree phase angle
shift and give the wrong result.
The most frequent complaint I receive on three-phase watt transducers is "I am not getting the correct output." Conservatively stated, 90% of the time the watt transducer is not correctly connected: a current transformer may be installed backwards or on the wrong line, voltage connections may be cross-phased, or voltage connections may reference the wrong line. The other 9.5% of the time, the following gives the user trouble.
The electrical quantity WATT is a measure of the rate
at which work is being done. If an electric motor is not doing any
work or is doing very little work, it will not consume very much
power in watts even though the electric current is relatively high.
The power factor will be low and a watt transducer monitoring
this motor will have a low output. This is to be expected! The
output from a watt transducer reflects the rate at which the motor
is doing work.
If you encounter incorrect readings from a watt transducer, double check your connections against the connection diagram on the transducer case or connection sheet.

Where:
Ia is the current in leg A
Ib is the current in leg B
Vac is the voltage between leg A and C
Vbc is the voltage between leg B and C
θ is the phase angle shift between the voltage and current (the power factor angle).
At a power factor of 0.866, one reading between two of the legs will be double that between the other two legs. The sum of the two is the correct total power.
At a power factor of 0.500 one reading between two of the
legs will be greater than 0 and the other will be 0. The total of the
two is the correct total power.
At a power factor of 0, the readings between the two sets of legs will be the same in magnitude but opposite in sign. Again, the total of the two is the correct total power: zero!
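The two-element (two-wattmeter) behaviour described above can be verified with a short numerical check. Under the stated balanced-load assumption, each element reads V * I * cos(θ ± 30°) and their sum equals the total three-phase power; the 0.866, 0.5, and 0 power-factor cases quoted above fall out directly. The 480 V and 10 A values below are arbitrary illustrative inputs.

```python
import math

def two_wattmeter_readings(v_line_to_line, i_line, pf):
    """Element readings for a balanced three-phase, three-wire load."""
    theta = math.acos(pf)                      # power factor angle
    w1 = v_line_to_line * i_line * math.cos(theta + math.radians(30))
    w2 = v_line_to_line * i_line * math.cos(theta - math.radians(30))
    return w1, w2

v, i = 480.0, 10.0
for pf in (0.866, 0.5, 0.0):
    w1, w2 = two_wattmeter_readings(v, i, pf)
    total = math.sqrt(3) * v * i * pf          # true three-phase power
    print(f"PF={pf}: W1={w1:.0f}, W2={w2:.0f}, sum={w1 + w2:.0f}, true={total:.0f}")
```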


HOW TO TROUBLESHOOT LIKE AN EXPERT


A SYSTEMATIC APPROACH
Warren Rhude, Simutech Multimedia Inc.
To expertly troubleshoot electrical equipment, problems
must be solved by replacing only defective equipment or components in the least amount of time. One of the most important factors in doing this is the approach used. An expert troubleshooter
uses a system or approach that allows them to logically and systematically analyze a circuit and determine exactly what is wrong.
The approach described here is a logical, systematic
approach called the 5 Step Troubleshooting Approach. It is a
proven process that is highly effective and reliable in helping to
solve electrical problems.
This approach differs from troubleshooting procedures in
that it does not tell you step by step how to troubleshoot a particular kind of circuit. It is more of a thinking process that is used
to analyze a circuit's behavior and determine what component or
components are responsible for the faulty operation. This approach
is general in nature allowing it to be used on any type of electrical
circuit.
In fact, the principles covered in this approach can be
applied to many other types of problem solving scenarios, not
just electrical circuits.
The 5 Step Troubleshooting Approach consists of the following:
Preparation
Step 1 Observation
Step 2 Define Problem Area
Step 3 Identify Possible Causes
Step 4 Determine Most Probable Cause
Step 5 Test and Repair
Follow-up
Let's take a look at these in more detail.

PREPARATION
Before you begin to troubleshoot any piece of equipment,
you must be familiar with your organization's safety rules and
procedures for working on electrical equipment. These rules and
procedures govern the methods you can use to troubleshoot electrical equipment (including your lockout/tagout procedures, testing
procedures etc.) and must be followed while troubleshooting.
Next, you need to gather information regarding the equipment and the problem. Be sure you understand how the equipment
is designed to operate. It is much easier to analyze faulty operation
when you know how it should operate. Operation or equipment
manuals and drawings are great sources of information and are
helpful to have available. If there are equipment history records,
you should review them to see if there are any recurring problems.
You should also have on-hand any documentation describing the
problem. (i.e., a work order, trouble report, or even your notes
taken from a discussion with a customer.)

STEP 1 OBSERVE
Most faults provide obvious clues as to their cause.
Through careful observation and a little bit of reasoning, most
faults can be identified as to the actual component with very little testing. When observing malfunctioning equipment, look for
visual signs of mechanical damage such as indications of impact,
chafed wires, loose components or parts laying in the bottom of
the cabinet. Look for signs of overheating, especially on wiring,
relay coils, and printed circuit boards.
Don't forget to use your other senses when inspecting equipment. The smell of burnt insulation is something you won't
miss. Listening to the sound of the equipment operating may
give you a clue to where the problem is located. Checking the
temperature of components can also help find problems but be
careful while doing this; some components may be live or hot enough to burn you.
Pay particular attention to areas that were identified either
by past history or by the person that reported the problem. A note
of caution here! Do not let these mislead you; past problems are just that, past problems, and they are not necessarily the problem you are looking for now. Also, do not take reported problems as
fact, always check for yourself if possible. The person reporting
the problem may not have described it properly or may have
made their own incorrect assumptions.
When faced with equipment which is not functioning
properly you should:
Be sure you understand how the equipment is designed
to operate. It makes it much easier to analyze faulty
operation when you know how it should operate;
Note the condition of the equipment as found. You
should look at the state of the relays (energized or not),
which lamps are lit, which auxiliary equipment is energized or running etc. This is the best time to give the
equipment a thorough inspection (using all your senses).
Look for signs of mechanical damage, overheating,
unusual sounds, smells etc.;
Test the operation of the equipment including all of its
features. Make note of any feature that is not operating
properly. Make sure you observe these operations very
carefully. This can give you a lot of valuable information
regarding all parts of the equipment.

STEP 2 DEFINE PROBLEM AREA


It is at this stage that you apply logic and reasoning to your
observations to determine the problem area of the malfunctioning
equipment. Oftentimes when equipment malfunctions, certain parts of the equipment will work properly while others will not.
The key is to use your observations (from step 1) to rule
out parts of the equipment or circuitry that are operating properly
and not contributing to the cause of the malfunction. You should


continue to do this until you are left with only the part(s) that, if
faulty, could cause the symptoms that the equipment is experiencing.
To help you define the problem area you should have a
schematic diagram of the circuit in addition to your noted observations.
Starting with the whole circuit as the problem area, take
each noted observation and ask yourself "what does this tell me
about the circuit operation?" If an observation indicates that a
section of the circuit appears to be operating properly, you can
then eliminate it from the problem area. As you eliminate each
part of the circuit from the problem area, make sure to identify
them on your schematic. This will help you keep track of all your
information.


STEP 3 IDENTIFY POSSIBLE CAUSES


Once the problem area(s) have been defined, it is necessary to identify all the possible causes of the malfunction. This
typically involves every component in the problem area(s).
It is necessary to list (actually write down) every fault
which could cause the problem no matter how remote the possibility of it occurring. Use your initial observations to help you do
this. During the next step you will eliminate those which are not
likely to happen.

STEP 4 DETERMINE MOST PROBABLE CAUSE


Once the list of possible causes has been made, it is then
necessary to prioritize each item as to the probability of it being
the cause of the malfunction. The following are some rules of
thumb when prioritizing possible causes.
Although it could be possible for two components to fail
at the same time, it is not very likely. Start by looking for one
faulty component as the culprit.
The following list shows the order in which you should
check components based on the probability of them being defective:
First look for components which burn out or have a tendency to wear out, i.e., mechanical switches, fuses, relay
contacts, or light bulbs. (Remember, that in the case of
fuses, they burn out for a reason. You should find out
why before replacing them.)
The next most likely causes of failure are coils, motors,
transformers and other devices with windings. These
usually generate heat and, with time, can malfunction.
Connections should be your third choice, especially
screw type or bolted type. Over time these can loosen
and cause a high resistance. In some cases this resistance
will cause overheating and eventually will burn open.
Connections on equipment that is subject to vibration
are especially prone to coming loose.
Finally, you should look for defective wiring. Pay particular attention to areas where the wire insulation could be damaged, causing short circuits. Don't rule out incorrect wiring, especially on a new piece of equipment.

STEP 5 TEST AND REPAIR


Testing electrical equipment can be hazardous. The electrical energy contained in many circuits can be enough to injure
or kill. Make sure you follow all your company's safety precautions, rules, and procedures while troubleshooting.
Once you have determined the most probable cause, you
must either prove it to be the problem or rule it out. This can
sometimes be done by careful inspection. However, in many cases the fault will be such that you cannot identify the problem component by observation and analysis alone. In these circumstances, test instruments can be used to help narrow the problem area and identify the problem component.
There are many types of test instruments used for troubleshooting. Some are specialized instruments designed to measure various behaviors of specific equipment, while others, like multimeters, are more general in nature and can be used on most electrical equipment. A typical multimeter can measure AC and DC voltages, resistance, and current.
A very important rule when taking meter readings is to predict what the meter will read before taking the reading. Use the circuit schematic to determine what the meter will read if the circuit is operating normally. If the reading is anything other than your predicted value, you know that this part of the circuit is being affected by the fault.
Depending on the circuit and type of fault, the problem area as defined by your observations can include a large area of the circuit, creating a very large list of possible and probable causes. Under such circumstances, you could use a divide and eliminate testing approach to remove parts of the circuit from the problem area. The results of each test provide information to help you reduce the size of the problem area until the defective component is identified.
Once you have determined the cause of the faulty operation of the circuit, you can proceed to replace the defective component. Be sure the circuit is locked out and you follow all safety procedures before disconnecting the component or any wires.
After replacing the component, you must test operate all features of the circuit to be sure you have replaced the proper component and that there are no other faults in the circuit. It can be very embarrassing to tell the customer that you have repaired the problem only to have him find another problem with the equipment just after you leave.
Please note: testing is a large topic, and this article has only touched on the highlights.
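The "divide and eliminate" idea mentioned above is essentially a half-split search: test near the middle of the suspect chain, keep the half whose readings disagree with your prediction, and repeat. The sketch below is only a schematic illustration, assuming a hypothetical list of test points and a check() function that returns True wherever the circuit still behaves as predicted.

```python
def half_split(test_points, check):
    """Return the first test point at which the circuit stops behaving
    as predicted, using a divide-and-eliminate (binary) search.
    Assumes check(i) is True for every point upstream of the fault."""
    lo, hi = 0, len(test_points) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if check(mid):          # reading matches prediction: fault is downstream
            lo = mid + 1
        else:                   # reading is wrong: fault is here or upstream
            hi = mid
    return test_points[lo]

# Hypothetical example: fault lies between TP4 and TP5.
points = ["TP1", "TP2", "TP3", "TP4", "TP5", "TP6", "TP7"]
print(half_split(points, lambda i: i < 4))   # -> "TP5"
```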

FOLLOW UP
Although this is not an official step of the troubleshooting
process, it nevertheless should be done once the equipment has
been repaired and put back in service. You should try to determine the reason for the malfunction.
Did the component fail due to age?
Did the environment the equipment operates in cause
excessive corrosion?
Are there wear points that caused the wiring to short out?
Did it fail due to improper use?
Is there a design flaw that causes the same component to
fail repeatedly?
Through this process, further failures can be minimized.
Many organizations have their own follow-up documentation
and processes. Make sure you check your organization's procedures.
Adopting a logical and systematic approach such as the 5 Step Troubleshooting Approach can help you to troubleshoot like
an expert!


ELECTRICAL INDUSTRIAL TROUBLESHOOTING


Larry Bush

TROUBLESHOOTING IN THE FIELD MOTOR TESTING


MOTOR CONTROLLER PROGRAMMABLE LOGIC
CONTROLLERS (PLC)
A laptop computer with PLC programming, communication, and operating programs is a necessary tool in today's modern plant. Engineers, production supervisors, maintenance supervisors, maintenance technicians, electricians, instrument technicians, and maintenance mechanics all need to have PLC and
computer knowledge, training and skills in troubleshooting.
On-the-job training on PLCs is usually not very effective
until the person being trained has reached a certain level of
expertise in several areas. Knowledge and skills in electricity,
troubleshooting, and computer operation are necessary prerequisites to effectively assimilate basic PLC training. The author
found that long-term retention of material studied was higher
from a vocational course taken at a local junior college than from
a fast-paced, cram-course through a manufacturer.
The manufacturer's course covered essentially the same
material as a course at the junior college (JC). The major differences were the amount of study time and shop time. The JC
course was four hours of class time per week for 15 weeks. There
were three hours of shop time doing actual hands-on work relating to the problems and material covered in the first hour.
Additional time was spent at home studying the manual and writing programs. Also, the JC was open at night for extra shop time
on PLCs and computers.
In contrast, the manufacturer's course was five eight-hour days. Class work was extremely fast and condensed in order to
cover the amount of material involved. The instructor was very
knowledgeable and covered the course material as we tried to
input the programs into desktop training equipment in order to
see how it worked. By the end of each day, our minds were
jammed with information. By the end of the week, we all passed
the course, but I had a hard time remembering what we had studied on the first day.
Basic troubleshooting techniques apply to every situation
and occupation. Positive identification of the problem(s) is absolutely
essential to solving the problems. Many times, the inexperienced
troubleshooter will mistake one or more of the symptoms for the
problems. Solving the symptom(s) will normally just postpone the
problems to a later date, by which time, the problems may have
grown to mountainous proportions.
An example is when a person experiences a headache and
takes a mild pain reliever, such as aspirin. The actual problem
might be any number of things: eyes need to be checked, medication or lack of medication, muscle strain, stress, tumor, blood
vessel blockage, or old war injury. The same thing occurs in
industry, a fuse in a circuit blows and the maintenance person
gets the replacement fuse and inserts it into the fuse holder.
There are many things that could have caused the fuse to blow,

depending on the complexity of the circuit.


Excess current caused the fuse to open (blow). Excess
current could have been caused by: overload on the load; short
circuit between the wires, grounded wires, short circuit in the
load, ground in the load, voltage spike, voltage droop, etc. If the
maintenance person does not troubleshoot the circuit prior to
replacing the fuse and restoring power, negative consequences
could arise.
It is not uncommon for a process to develop a number of
small problems and continue to function at a degraded level of
operational capability. Then, one more small problem occurs and
the whole process breaks down. Finding and correcting the last
problem will not necessarily restore the operational capability of
the process. The process continued operations with the small
problems, but the small problems may not allow the process to
restart from a dead stop. All the other small problems must be
identified and corrected before the process is restored to full
operational capability.
This situation arises in industry as well as a person. The
person can continue to function with a number of small problems, such as fatigue, blood pressure problems, hardening of the
arteries, artery blockage, but one more small blood clot in the
wrong place could easily cause the death of the person. Clearing
the blood clot does no good to the person. He/she will not be
restored to full operational capability.

TROUBLESHOOTING IN THE FIELD:


Unless prior experience dictates otherwise, always begin at
the beginning.
Ask questions of the Operator of the faulty equipment:
Was equipment running when the problem occurred?
Does the Operator know what caused the problem, and
if so, what, in their opinion, caused the problem?
Is the equipment out of sequence?
Check to ensure there is power
Turn on circuit breaker, ensure motor disconnect switch
is on, and operate start button/switch
Use voltmeter to check the following at incoming and
load side of circuit breaker(s) and/or fuses, ensure that voltages
are normal on all legs and read voltage to ground from each leg:
main power, usually 460 VAC between phases and 272
to ground
control & power, 208/240 between phases and 120 to
ground and 120 VAC to neutral on a grounded system
low voltage control power, usually 24 to 30 VAC and/or
VDC between phases and possibly to ground, usually
negative is connected to ground
Check controlling sensors in area of problem, then make
complete check of all sensors, limit switches to ensure they are
in correct position, have power, are programmed, set, and are
functioning correctly.


If and when a problem is found, whether electrical or


mechanical, the problem should be corrected and the fault-finding begun anew; a seemingly unrelated fault or defect could be
the cause of the problem.
When there is more than one fault, the troubleshooting is
exponentially more difficult. Do not assume that all problems are
solved after completing one. Always test the circuit and operation prior to returning the equipment to service.
If available, check wiring diagrams and PLC programs to
isolate problem.
Variable Frequency Drive (VFD) can be reset by turning
power off. Wait until screen is blank and restore power; on some
VFDs, press Stop/Reset then press Start.
Check that wiring is complete and that wires and connections are tight with no copper strands crossing from one terminal
to another or to ground.
Ensure that the neutral reading is good and that the neutral is complete and not open.

MOTOR TESTING IN SHOP:


Prior to connecting a motor:
Move motor to electric shop motor test and repair station
Connect motor leads for 460 volt operation and wrap
connections with black electrical tape
Check motor windings with an ohmmeter, each reading
between phases should be within one or two ohms of
each other; A to B, B to C, A to C
Use megohmmeter to check insulation resistance to
ground of motor windings on 500 volt scale; minimum
reading is 1000 ohms of resistance per volt of incoming
power that motor will be connected to
Connect motor to power test leads and safety ground
after checking that test lead power is shut off; secure
motor to table to prevent motor from jumping when
started; turn disconnect on; press start button; check T
leads for motor amperage; check for abnormal sounds
and heat in bearings or windings; clean motor shaft; shut
down and disconnect
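The pass/fail arithmetic in the shop checklist above (phase-to-phase readings within one or two ohms of each other; insulation resistance of at least 1000 ohms per volt of supply) can be expressed compactly. The thresholds below simply restate the figures given above; the function names and sample readings are illustrative only.

```python
def windings_balanced(r_ab, r_bc, r_ac, tolerance_ohms=2.0):
    """Phase-to-phase winding resistances should agree within a couple of ohms."""
    readings = (r_ab, r_bc, r_ac)
    return max(readings) - min(readings) <= tolerance_ohms

def insulation_ok(megger_ohms, supply_volts=460):
    """Minimum acceptable insulation resistance: 1000 ohms per volt of supply."""
    return megger_ohms >= 1000 * supply_volts

print(windings_balanced(2.4, 2.5, 2.6))   # True: readings agree within 0.2 ohm
print(insulation_ok(2_000_000))           # True: 2 Mohm exceeds 460 kohm minimum
```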

MOTOR TESTING IN FIELD:


When a motor overload or circuit breaker trips and/or
blows fuses, certain procedures and tests should be carried out:
Lockout and tagout main circuit breaker;
Test insulation resistance of motor wires and windings
by using megohmmeter between T1, T2, & T3 leads and
ground, then;
Test T leads to motor with ohmmeter for continuity
and ohmage of windings between A to B, B to C, A to C;
each resistance should be within 1 or 2 ohms of each
other; if the ohms readings are significantly different, or,
if there is no continuity; go to the motor disconnect box,
turn it off, perform the continuity and resistance test on
the T leads, again; if the readings are good, the problem is in the wires from the motor controller to the disconnect switch.
Check the three wires by disconnecting all three wires
from switch and twist together; go to controller and
check for continuity between A to B, B to C, and A to C; one
or more wires will be open or grounded.
The correct solution is to pull all new wires in from the controller to the motor disconnect switch; whatever caused the problem may have damaged the other wires as well, so replace all wires.


If problem is on motor side of disconnect switch, open
motor connection box and disconnect motor;
Check motor for resistance to ground with megohmmeter. If reading is below 500,000 ohms, motor is grounded and must be replaced.
Test motor windings for ohms between phases with
ohmmeter A to B, B to C, A to C. Readings should be
within 1 or 2 ohms of each other. If readings indicate
open or a significant ohmage difference, replace motor;
If motor test readings are good, test the motor leads
between the disconnect switch and the motor connection
box for continuity and ground resistance. If readings are
not good, replace wires.
If all readings are OK, reconnect motor, remove lockout,
and restore to service. The problem could have been
mechanical in nature; an overload on motor caused by
the chain, belt, bad bearings, faulty gearbox, or power
glitch.

MOTOR CONTROLLER:
Check motor Full Load Amps (FLA) at motor and check
setting on controller overload (OL) device; most newer
OL devices are adjustable between certain ranges, some
older OL devices use heaters for a given amperage.
If circuit disconnecting means in controller is a circuit
breaker, it should be sized correctly.
If the disconnecting means is a Motor Circuit Protector
(MCP), the MCP must be correctly sized for the motor
it is protecting and the MCP has a trip setting unit which
has to be correctly set based on the Full Load Amperage
of the motor. Using a small screwdriver, push in on the screw head of the device and move it to thirteen times the FLA. Example: a motor FLA of 10 amps
would require that the MCP trip device be set to an
instantaneous trip point of 130 amps.
Fuses protecting the motor should be the dual element or
current limiting type and based on the motor FLA.
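The MCP setting rule quoted above (instantaneous trip at thirteen times the motor full-load amperes) is simple arithmetic; the sketch below just restates the 10 A example, with an assumed function name.

```python
def mcp_instantaneous_trip(fla_amps, multiple=13):
    """Instantaneous trip setting = multiple x motor full-load amperes."""
    return fla_amps * multiple

print(mcp_instantaneous_trip(10))   # 130 A, matching the example above
```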

PROGRAMMABLE LOGIC CONTROLLERS (PLC):


Check to ensure main power is on (120 VAC).
Check 24V power available.
Identify problem area.
Check sensor operation in problem area.
Check sensor Inputs to PLC.
Check on PLC that a change in sensor state causes the
corresponding Input LED on the PLC to go on or off.
Identify Output controlled by Input on PLC ladder diagram.
Ensure that Output LED is cycling on/off with Input.
Check that Output voltage is correct and cycling on/off
with Input.
Locate Output device and ensure that voltage is reaching
device and cycling with Input.
Ensure that Output device is working correctly (solenoid
coil, relay coil, contactor coil, etc.)
An Input or Output module can be defective in one area
or circuit and work correctly in all other circuits
If each field circuit is not fuse protected, the modular
internal circuit becomes a fuse and can be destroyed by
a field short circuit or any other over-current condition
Check modular circuit. If bad, the module must be replaced after correcting the field fault.
Shut down PLC main power and 24V power prior to changing any module.
Locate fault in field circuit by disconnecting wires at
module and field device. Check between wires for short
circuit and to ground for short circuit. Replace wire if a short circuit is found.
Check device for ground, short circuit, mechanical and
electrical operation, even when problem found in wires.
Always check device for another fault. Problem in wires
can cause problem in device or vice versa. If device
defective, replace device and then check total circuit
before placing in operation and after restoring circuit,
check again to ensure circuit and module are operating
correctly.
Check power supply module. If no output, shut down
power and replace supply module.
Back plane can go bad, some of the modules with power
and others with no power. Replace backplane.
Sometimes, the PLC can be reset using the Reset key
switch. Ensure that turning the PLC off won't interrupt other running sub-set programs; turn the key switch to the far
right. After 15 seconds, turn to far left, wait, then return
to middle position. This operation should reset program
and enable a restart.
The PLC program can have a latch relay with no reset
under certain conditions. The key switch reset may have
no effect on the latch. Try turning the power to the PLC
off and back on. This operation may reset the latch and
allow the program to be restarted.
The PLC is usually part of a control circuit supplied with
120VAC through a 460V/120V transformer as part of a
system with motors, controllers, safety circuits, and other
controls. Occasionally, cycling the main 480V power
off/on will be necessary to try to reset all the safety and
control circuits.
Possession and use of an up-to-date ladder diagram, elementary wiring diagram, manufacturers manuals & diagrams, troubleshooting skills, operators knowledge,
and time are all required to solve issues involved in
maintaining a modern manufacturing production line.


THE ART OF MEASURING LOW RESISTANCE


Tee Sheffer and Paul Lantz, Signametrics

Don't heap all the blame for a wrong measurement on the DMM (digital multimeter). There can be several less obvious sources of error.
Testing assemblies and components usually includes
checking the continuity of connectors, wires, traces, and low-value resistive elements. Such applications typically require both
a DMM and a switching system.
Many users select a DMM and switching cards based only
on the specifications of the DMM and later are surprised to find that
their measurements are an order of magnitude less accurate than
expected. Many users don't recognize the error as a system problem and conclude that the DMM is not meeting its specification.
Making accurate, stable, and repeatable resistance measurements is an art. There is plenty of technology involved, but
the art is an important part, especially when you are measuring
low resistance values.
To achieve your accuracy goal, you need to understand
the error sources in your application. It is important to start with
a good DMM. But, there are significant error sources outside the
DMM, some of which may not be obvious. Things may be more
complex than they seem, and some types of errors may be misinterpreted.

LIMITATIONS AND ERROR SOURCES


Not all materials are created equal. Most connectors and
test probes are made of beryllium-copper or phosphor-bronze
materials that closely match the electromotive force (EMF) of
copper. For this reason, they do not cause significant thermally
induced voltage errors.
However, relays and some other devices use nickel-iron
alloys that do not match the properties of copper. These can
cause significant thermal EMF errors. Thermal voltages are generated when there is a mismatch of materials combined with a
temperature difference.
This is the same principle that makes thermocouples work
as temperature sensors. If you expect readings that are accurate
within a few milliohms, this is a big issue. This error source also
affects higher value resistance measurements, but to a lesser
degree.
It is easy to overlook second-order specifications of a
DMM, such as current drive levels used for resistance measurement. These specs may be in small print or missing, but they are
important. For measuring low resistance, this spec tells what you
can expect from the DMM. The accuracy specs of the DMM
don't tell the whole story. For example, the Signametrics
SMX2064 PXI DMM uses a 10-mA current source, while most
other DMMs are limited to 1 mA or less. Remember Ohm's Law:
V = I x R means that 10 times the current produces 10 times the
voltage being sensed across the resistor. This larger signal is less
susceptible to external errors and noise and provides more signal
to measure.

The larger signal almost always produces a more accurate


measurement. It is confusing to compare two DMMs having
similar specifications in ohms if one has 10 times the current
drive. The two are not the same. The one with the higher current
will perform better, especially in a system.
Good DMMs can measure signals down to a few microvolts. If you need to measure a resistance down to a few milliohms, a 1-mA test current only produces 1 μV of signal per 1 mΩ of resistance. In other words, you are operating right at the resolution limit of the DMM.
With a 10-mA test current, there are 10 μV of signal per 1 mΩ of resistance. As a result, a DMM that uses 10x as much current for this test will give about 10x improvement in accuracy, stability, and repeatability for very low values.
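The sense-voltage arithmetic above follows directly from Ohm's law; a quick sketch makes the 10x difference explicit. The function name and unit convention are assumptions for illustration.

```python
def sense_voltage_uV(test_current_mA, resistance_mohm):
    """Sensed voltage in microvolts: V = I * R (mA times milliohms gives microvolts)."""
    return test_current_mA * resistance_mohm

print(sense_voltage_uV(1, 1))    # 1 uV per milliohm with a 1-mA source
print(sense_voltage_uV(10, 1))   # 10 uV per milliohm with a 10-mA source
```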
If your test has serious throughput requirements and you
need to make hundreds of measurements per second, having a
stronger signal combined with good noise performance in the
DMM makes a huge difference. Remember that the DMM's
accuracy at higher speeds may be much more important than its
best accuracy.

TWO-WIRE
Everyone knows how easy it is to measure resistance
using a two-wire connection. However, for low resistance, a two-

Figure 1. Two-Wire

wire connection has disadvantages (Figure 1).


Test leads frequently add >1 Ω of resistance, and your test probe may add another 0.1 Ω of contact resistance to the measurement. These errors are significant if you are measuring 20 Ω.
You can eliminate most of the test-lead errors from a two-wire connection by shorting the leads and setting the Relative-Ohms control. This enables the DMM to subtract the test-lead
resistance from the readings that follow. This is a very handy tool
when you are doing manual testing, but it is less useful in an
automated test.
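The size of the two-wire error is easy to quantify. This sketch assumes the figures quoted above: roughly 1 Ω of test-lead resistance plus 0.1 Ω of probe contact resistance on a 20 Ω measurement; the function name is made up.

```python
def two_wire_error_percent(r_true_ohms, r_leads_ohms, r_contact_ohms):
    """A two-wire measurement reads the device, lead, and contact resistances
    in series; the extra resistance appears directly as measurement error."""
    measured = r_true_ohms + r_leads_ohms + r_contact_ohms
    return 100.0 * (measured - r_true_ohms) / r_true_ohms

print(round(two_wire_error_percent(20.0, 1.0, 0.1), 1))   # reads about 5.5 % high
```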

FOUR-WIRE
A four-wire connection is the standard method for measuring low resistance. It eliminates the resistance of the test leads


from the measurement. One pair of leads carries the test current
while the other pair of leads senses the voltage across the resistor under test (Figure 2).

resistance measurements? Two-wire resistance measurements certainly are attractive because you can fit twice as many two-wire
measurements onto a card as you can four-wire measurements.
The economics are attractive. Perhaps you can put a short
circuit on one of the inputs to the switching card and measure
that short to make a Relative-Ohms measurement? This line of
reasoning also might lead you to select the highest density switching card possible.
However, there are reasons to be careful. A typical switching card does not have the same resistance through all of its
channels. Channel 0 may add 0.2 Ω to the reading while Channel 20 may add 0.8 Ω. Consequently, measuring a short on one does
not give a good compensation for the other because they do not
have the same resistance.
Even if you could correct for the difference in channel-to-channel resistance, relays typically have about 50 mΩ of contact resistance that will shift around by 20 mΩ from one reading to the next. You might think that high-current relays will have lower contact resistance, but it doesn't work that way. High-current relays usually have silver-plated contacts that give low resistance for currents above 100 mA. Unfortunately, silver-plated contacts have a high and unpredictable contact resistance for currents less than 50 mA.
Relays are made of nickel-iron materials, and they all have problems with thermal EMFs. Frequently, this error source is not specified for high-density switching cards. If not, the thermal voltages probably are around 100 μV. If your DMM uses 10 mA to make this measurement, the switching card adds 10 mΩ of error to the measurement. If your DMM uses only 1 mA, the switch will add 100 mΩ of error to the measurement.
Keep in mind that this error voltage is made up of all of
the closed relay contacts connected to the sense lines of the
DMM. The more complex the switching system is, the higher the
error will be.
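The contribution of relay thermal EMF to a low-resistance reading is simply the offset voltage divided by the DMM's test current, which is where the 10 mΩ and 100 mΩ figures above come from. The function name below is an assumption for illustration.

```python
def thermal_emf_error_mohm(emf_uV, test_current_mA):
    """Resistance error in milliohms caused by a thermal EMF in the sense path
    (microvolts divided by milliamperes gives milliohms)."""
    return emf_uV / test_current_mA

print(thermal_emf_error_mohm(100, 10))   # 10 mohm of error with a 10-mA source
print(thermal_emf_error_mohm(100, 1))    # 100 mohm of error with a 1-mA source
```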

Figure 2. Four-Wire

The resistance of the current-carrying leads doesn't matter because they are not in the measurement path. The resistance of the sensing leads doesn't matter since they don't carry any current.
A four-wire connection is not immune to thermal EMF
errors caused by mismatched materials. This usually is not
important in manual testing situations, but it is a major issue in
automated systems where a relay switch is used.

SIX-WIRE
What if the resistor you want to measure is in a circuit
with other components or resistors as in networks or on a loaded
circuit board? Then you need a six-wire guarded connection.
This method makes it possible to measure resistance in situations
where it would be impossible otherwise. The SMX2064 DMM
offers this capability (Figure 3).

FOUR-WIRE WITH A SWITCHING CARD


Using a four-wire connection through your switching card
takes care of the resistance issues associated with the switching
card. This accuracy improvement happens at the expense of
reduced channel density. However, it does not take care of the
thermal EMF problems that come with some switching cards
(Figure 4).
Figure 3. Six-Wire Guarded In-Circuit Measurement

A guard amplifier drives the junction of parallel components to a voltage level that prevents any of the test current from
leaking away from the resistor under test. This is a standard
method used by large ATE in-circuit test systems. With the right
DMM, you can implement it too.

MEASURING THROUGH A SWITCH MATRIX


Many applications are for production test. In these cases,
it is almost always necessary to perform multiple tests and measure multiple points. You usually do this by putting a switching
card or a matrix ahead of the DMM. It is important to note that
the switching card can be a major source of error, particularly
when measuring low resistance.

TWO-WIRE WITH A SWITCHING CARD


How does adding a switching card affect your two-wire

THERMAL VOLTAGE ERRORS, NOT CORRECTED


Figure 4. Four-Wire Switching Card

One way to reduce this error is to use a DMM with the


Offset-Ohms function. However, this method is very slow, it
adds noise, and it is limited in its capability to reduce the error.
For best results, start with a high-quality switching card that is
specified for low thermal EMF error.
How big a problem are thermal EMF voltages in relay
switches? A high-quality switching card has about 10 μV while



a typical one has >100 μV of thermal voltages. There are a few instrumentation-quality switches that exhibit 1 μV or less.
Take a look at Figure 5 to see the effect. The yellow plots
depict the specs of two similar DMMs. One of the DMMs uses
1-mA excitation current while the other uses 10 mA. There are
some things to note:

Figure 5a. 1-mA Ohms Excitation
Figure 5b. 10-mA Ohms Excitation

Both DMMs have very similar specifications, as shown by the yellow lines.
As soon as you combine them with a relay card that has a 10-μV offset, the system error is considerably greater than the DMM spec. For the DMM with 10-mA excitation, the system error is almost two times the DMM spec. For the DMM with 1-mA excitation, the system error is almost 10 times the DMM spec.
If you combine the DMMs with a relay card that has a 100-μV offset, the error becomes huge. For the DMM with 10-mA excitation, the system error is almost 10 times the DMM spec. For the DMM with 1-mA excitation, the system error is almost 100 times the DMM spec.
The effect of the relay offset voltages overwhelms the DMM specifications in both cases, but the DMM that uses 10-mA excitation current produces a system spec between five and 10 times better than the DMM that uses 1-mA excitation.

SIX-WIRE WITH A SWITCHING CARD

A six-wire resistance connection works just fine with a switching card as long as the card is organized to support it. Remember that a six-wire connection does not increase the accuracy of your measurement unless other resistors in the circuit need to be guarded. This is still the only way to guard out parallel resistors that otherwise would make the measurement impossible.

EXAMPLES

A manufacturer of semiconductor protection devices uses an SMX2064 on its low-resistance four-wire range to accurately measure resistances around 20 Ω before and after hitting the
device with a high test voltage. Because the SMX2064 can take
an accurate measurement in as little as 1 ms, test throughput is
high.
A manufacturer of hybrid circuits uses an SMX2064 to
measure resistance values of less than 100 mΩ. In this case,
speed is not an issue, but getting a useable measurement is. Other
DMMs that use only 1-mA excitation current did not qualify to
do the job.

CONCLUSION
If you need to measure low resistance values, you benefit
by using a DMM that has a 10-mA excitation current. A 1-mA
source gives a much weaker signal to measure and presents system-level problems, particularly if there are switching cards
involved. If you expect a stable, accurate result, you almost certainly need to use a four-wire connection.
The accuracy spec of the DMM is important but not the
whole story. Remember that everything in the measurement path
affects the accuracy of the measurement, especially switching
cards. Your best bet is to combine a DMM with good ohms specifications and high test current and a switching card with a low
thermal EMF spec, preferably an instrumentation type.


STANDARDS FOR SUPERCONDUCTOR AND MAGNETIC MEASUREMENTS
National Institute of Standards and Technology
GOALS
This project develops standard measurement techniques for
critical current, residual resistivity ratio, and magnetic hysteresis loss,
and provides quality assurance and reference data for commercial
high-temperature and low-temperature superconductors. Applications supported include magnetic-resonance imaging, research magnets, magnets for fusion confinement, motors, generators, transformers, high-quality-factor resonant cavities for particle accelerators, and superconducting bearings. Superconductor applications specific to the electrical power industry include transmission lines, synchronous condensers, magnetic energy storage, and fault-current limiters. Project members assist in the creation and management of international standards through the International Electrotechnical Commission for superconductor characterization covering all commercial applications, including electronics. The project is currently focusing on measurements of variable-temperature critical current, residual resistivity ratio, magnetic hysteresis loss, critical current of marginally stable superconductors, and the irreversible effects of changes in magnetic field and temperature on critical current.

Probe for the measurement of the critical current of a superconductor wire as a function of temperature. The probe is inserted into the bore of a high-field superconducting magnet.

CUSTOMER NEEDS
This project serves the U.S. superconductor industry,
which consists of many small companies, in the development of
new metrology and standards, and in providing difficult and
unique measurements. We participate in projects sponsored by
other government agencies that involve industry, universities,
and national laboratories.
The potential impact of superconductivity on electric
power systems, alternative energy sources, and research magnets
makes this technology especially important. We focus on: (1)
developing new metrology needed for evolving, large-scale
superconductors, (2) providing unique databases of superconductor properties, (3) participating in interlaboratory comparisons needed to verify techniques and systems used by U.S.
industry, and (4) developing international standards for superconductivity needed for fair and open competition and improved
communication.
Electric power grid stability, power quality, and urban
power needs are pressing national problems. Superconductive
applications can address many of them in ways and with efficiencies that conventional materials cannot. Second-generation Y-Ba-Cu-O (YBCO) superconductors are approaching the

targets established by the U.S. Department of Energy. The demonstration of a superconductor synchronous condenser for reactive
power support was very successful and has drawn attention to the
promise of this technology. Previous demonstration projects had
involved first-generation materials, Bi-Sr-Ca-Cu-O (BSCCO).
Variable-temperature measurements of critical current and magnetic hysteresis loss will become more important with these AC
applications, and methods for reducing losses are expected to
evolve as second-generation materials become commercial.
Fusion energy is a potential, virtually inexhaustible energy
source for the future. It does not produce CO2 and is environmentally cleaner than fission energy. Superconductors are used to generate the ultrahigh magnetic fields that confine the plasma in
fusion energy research. We measure the magnetic hysteresis loss
and critical current of marginally stable, high-current Nb3Sn
superconductors for fusion and other research magnets. Because
of the way superconductors are used in magnets, variable-temperature critical-current measurements are needed for more complete
characterization.
The focus of high-energy research is to probe and understand nature at the most basic level, including dark matter and
dark energy. The particle accelerator and detector magnets needed for this fundamental science continue to push the limits of
superconductor technology. The next generation of Nb3Sn and
Nb-Ti wires is pushing towards higher current density, less stabilizer, larger wire diameter, and higher magnetic fields. The
resulting higher current required for critical-current measurements turns many
minor measurement problems into significant engineering challenges. For example, heating of the specimen, from many sources,
during the measurement can cause a wire to appear to be thermally
unstable. Newer MgB2 wires may be used for specialty magnets
that can safely operate at the higher temperatures caused by high
heat loads. We need to make sure that our measurements and the
measurements of other laboratories keep up with these challenges
and provide accurate results for conductor development, evaluation,
and application.
Possible spin-off applications of particle accelerators are
efficient, powerful light sources and free-electron lasers for biomedicine and nanoscale materials production. The heart of these
applications is a linear accelerator that uses high-efficiency, pure
Nb resonant cavities. We conduct research on a key materials
property measurement for this application, the residual resistivity ratio (RRR) of the pure Nb. This measurement is difficult
because it is performed on samples that have dimensions similar
to those of the application. Precise variable-temperature measurements are needed for accurate extrapolations.

TECHNICAL STRATEGY
International Standards: With each significant advance in superconductor technology, new procedures, interlaboratory comparisons, and standards are needed. International standards for superconductivity are created through the International Electrotechnical Commission (IEC), Technical Committee 90 (TC 90).
Critical Current Measurements: One of the most important performance parameters for large-scale superconductor applications is the critical current. Critical current is difficult to measure correctly and accurately; thus these measurements are often subject to scrutiny and debate. The critical current is determined from a measurement of voltage versus current. Typical criteria are an electric-field strength of 10 microvolts per meter and a resistivity of 10^-14 ohm-meters.

Illustration of a superconductor's voltage-current characteristic with two common criteria applied.

Electric field versus current at temperatures from 7.0 to 8.3 kelvins in steps of 0.1 kelvin for a Nb3Sn wire. These are typical curves for the determination of critical current.

Critical-current measurements at variable temperatures are needed to determine the temperature margin for magnet applications. The temperature margin is defined as the difference between the operating temperature and the temperature at which the critical current Ic is equal to the operating current. When a magnet is operating, transient excursions in magnetic field H or current I are not expected; however, many events can cause transient excursions to higher temperatures T, such as wire motion, AC losses, and radiation. Hence the temperature margin of a wire is a key specification in the design of superconducting magnets. Variable-temperature critical-current measurements require data acquisition with the sample in a flowing gas environment rather than immersed in a liquid cryogen. Accurate high-current (above 100 amperes) measurements in a flowing gas environment are very difficult to perform.
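As an illustration of how a criterion-based critical current and n-value are typically extracted from measured voltage-current data, the short Python sketch below fits the commonly used power-law model E = Ec(I/Ic)^n to synthetic data at the 10 microvolt-per-meter criterion. The current values, noise level and starting guesses are illustrative assumptions, not measured results, and the sketch is not the specific NIST analysis procedure.

    import numpy as np
    from scipy.optimize import curve_fit

    # Hypothetical V-I data for a short sample; Ec is the 10 uV/m criterion.
    Ec = 10e-6                                    # electric-field criterion, V/m
    I = np.linspace(180.0, 220.0, 41)             # current ramp, A (illustrative)
    E = Ec * (I / 200.0) ** 30.0                  # synthetic electric field, V/m
    E = E + np.random.normal(0.0, 2e-7, E.size)   # add measurement noise

    # Power-law model E = Ec*(I/Ic)^n; Ic and the n-value are the fit parameters.
    def power_law(current, Ic, n):
        return Ec * (current / Ic) ** n

    (Ic_fit, n_fit), _ = curve_fit(power_law, I, E, p0=[190.0, 20.0])
    print(f"Ic at the 10 uV/m criterion: {Ic_fit:.1f} A, n-value: {n_fit:.1f}")

The same criterion can equivalently be applied by direct interpolation of the measured curve; the power-law fit simply uses all of the data near the transition.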

Residual Resistivity Ratio Measurements: The RRR is defined as the ratio of electrical resistivity at two temperatures, 273 kelvins (0 degrees Celsius) and 4.2 kelvins (the boiling point of liquid helium). The value of RRR indicates the purity and the low-temperature thermal conductivity of a material, and is often used as a materials specification for superconductors. The low-temperature resistivity of a sample that contains a superconductor is defined at a temperature just above the transition temperature, or is defined as the normal-state value extrapolated to 4.2 kelvins. For a composite superconducting wire, RRR is an indicator of the quality of the stabilizer, which is usually copper or aluminum that provides electrical and thermal conduction during conditions where the local superconductor momentarily enters the normal state. For pure Nb used in radio-frequency cavities of linear accelerators, the low-temperature resistivity is defined as the normal-state value extrapolated to 4.2 kelvins. This extrapolation requires precise measurements. We have studied some fundamental questions concerning the best measurement of RRR and the relative differences associated with different measurement methods, model equations for the extrapolation, and magnetic field orientations (when a field is used to drive the superconductor into the normal state).
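In equation form (this simply restates the definition above):

    \mathrm{RRR} = \frac{\rho(273\ \mathrm{K})}{\rho(4.2\ \mathrm{K})}
    \qquad\text{or, for a superconducting sample,}\qquad
    \mathrm{RRR} = \frac{\rho(273\ \mathrm{K})}{\rho_{n}(T \rightarrow 4.2\ \mathrm{K})},

where ρn denotes the normal-state resistivity, taken just above the transition temperature or extrapolated to 4.2 K as described above.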

Magnetic Hysteresis Loss Measurements: As part of our program to characterize superconductors, we measure the magnetic hysteresis loss of marginally stable, high-current Nb3Sn superconductors for fusion and particle-accelerator magnets. We use a magnetometer based on a superconducting quantum interference device (SQUID) to measure the magnetic hysteresis loss of superconductors, which is the area of the magnetization-versus-field loop. In some cases, especially for marginally stable conductors, we use special techniques to obtain accurate results. Measurement techniques developed at NIST have been adopted by other laboratories.

ACCOMPLISHMENTS

Critical current versus temperature of a high-Tc Bi2Sr2CaCu2O8+x wire at various magnetic fields. Such curves are used to determine the safe operating current at different temperatures and fields.
Superconductor Data Enables U.S. Company to Offer Product to Korean Project: New bismuth-based high-temperature superconductor wires are under active consideration for a 600 kilojoule superconducting magnetic energy storage (SMES) project led by the Korea Electrotechnology Research Institute. The purpose of the SMES system is to stabilize the electric power grid. The magnet will be wound with 10-kiloampere superconducting cables composed of many round wires. It will be cooled to 20 kelvins by cryocoolers. A U.S. company turned to us for critical current measurements at 20 kelvins to determine whether its conductor could meet the project's specifications for critical current. Critical current, the largest current a
superconducting wire can carry, is a key performance and design
parameter. Critical current depends on temperature, magnetic
field and, in many cases, the angle of the magnetic field with
respect to the conductor.
We made variable-temperature critical-current measurements on three wire specimens in magnetic fields up to 8 teslas,
at various magnetic-field angles, and at temperatures from 4 to
30 kelvins. NIST has the only such multiparameter, high-current,
variable-temperature measurement capability in the U.S. The
largest current applied to the 0.81 millimeter diameter wire samples was 775 amperes.
The results showed that the angle dependence of critical
current for the wires was less than just 3 percent over the useful
range of field and temperature, and that the round wires could be
used at higher magnetic fields and temperatures than tape conductors. These data will be used to design the safe operating limits of the SMES magnet system.
Key Measurements for the International Thermonuclear Experimental Reactor: Superconducting magnets are used in fusion energy projects such as the International Thermonuclear Experimental Reactor (ITER) to confine and heat the plasma. The superconductors for ITER's large magnet systems are all
cable-in-conduit conductors (CICC), which provide both
mechanical support for the large magnetic forces and a flow path
for the liquid helium required to cool the cable. The superconducting magnet must be operated below the critical current of the
cable, which is a function of magnetic field and temperature.
Temperature is an important variable, and the local temperature
of the conductor depends on the mass-flow rate of the coolant
and the distribution of the heat load along the CICC.
We designed and constructed a new variable temperature
probe that allows us to make measurements in our 52-millimeter
bore, 16-tesla magnet. This probe replaces one that was designed
for our 86-millimeter bore, 12-tesla magnet. Fitting everything
into the smaller bore was difficult, but the new probe performed
as expected and allows us to make measurements at the ITER
design field of 13 teslas. We made measurements up to 765 amperes
with a Nb3Sn sample in flowing helium gas. Measurements were
made at temperatures from 4 to 17 kelvins and magnetic fields from
0 to 14 teslas. Some measurements were made at 15 and 16 teslas
for temperatures from 4 to 5 kelvins; however, these magnetic fields
can be generated only when a sample is measured in liquid helium.

Critical current versus temperature at various magnetic fields for a Nb3Sn wire. These
curves show the current carrying limits for various combinations of temperature and magnetic field.


Electric field versus temperature at currents from 66 to 84 amperes in steps of 1.5 amperes
for a Nb3Sn wire. These are typical curves for the determination of temperature margin.

The results of our unique variable-temperature measurements provide a comprehensive characterization and form a basis for evaluating CICC and magnet performance. We used these data to generate
curves of electric field versus temperature at constant current and
magnetic field. In turn, these give a direct indication of the temperature safety margin of the conductor.
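The reduction from such an electric field versus temperature curve to a temperature margin can be sketched in a few lines of Python. The data points, criterion and operating temperature below are illustrative assumptions only, not ITER or NIST values.

    import numpy as np

    # Hypothetical E-T data at constant operating current and field (illustrative).
    T = np.array([5.0, 5.5, 6.0, 6.5, 7.0, 7.5])           # temperature, K
    E = np.array([0.2, 0.5, 1.5, 5.0, 18.0, 60.0]) * 1e-6  # electric field, V/m
    Ec = 10e-6          # 10 uV/m criterion
    T_op = 5.0          # assumed operating temperature, K

    # Interpolate (in log E) for the temperature at which E reaches the criterion.
    T_cs = np.interp(np.log(Ec), np.log(E), T)
    print(f"Current-sharing temperature: {T_cs:.2f} K, margin: {T_cs - T_op:.2f} K")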
International Standards on Superconductivity
Many of the 14 published IEC/TC 90 standards on superconductivity contain precision and accuracy statements rather than
currently accepted statements of uncertainty. NIST has advocated that TC 90 adopt a more modern approach to uncertainty.
In collaboration with the Information Technology Laboratory, we developed a 50-page report on the possibility of changing statements of accuracy to statements of uncertainty in IEC/TC 90 measurement standards, which was presented at TC 90 meetings in June 2006. The report included proposed change sheets for 13 of the 14 TC 90 document standards. Ultimately, all TC 90
delegates voted in favor of changing to uncertainty statements
during the maintenance cycle of existing standards and during
the development of new standards.
Current Ripple: A Source of Measurement Errors
All high-current power supplies contain some current ripple and
spikes. New high-performance conductors have high critical currents that require current supplies over 1000 amperes. High-current power supplies with the lowest level of current ripple and
spikes are often more than a factor of ten times more expensive
than conventional supplies. In addition, current ripple and spikes
are a greater problem for short-sample critical current testing
than for magnet operation because of the smaller load inductance. Therefore, we need to understand the effects of ripple and
spikes on the measured critical current (Ic) and n-value, the
index of the shape of the electric field-current curve. We focused
on how ripple changes the n-value and showed that, in terms of
percentage change, the effect of ripple on n-value was about 7
times that on Ic. Interlaboratory comparisons often show variations in n-value much larger than the variations in Ic. We examined models and used measurements on simulators to attempt
to reproduce and understand the effects observed in measurements on superconductors. We believe that current ripple and
spikes are sources of differences in n-values measured at different laboratories.
New Method to Evaluate the Relative Stability of
Conductors We recently started measuring voltage versus
magnetic field (V-H) on Nb3Sn wires to assess their relative stability. Voltage versus current (V-I) at constant field is usually measured to determine Ic. Low-noise V-H measurements were
made at constant or ramping current with the same electronic
instruments, apparatus, and sample mount as used in Ic measurements. High-performance Nb3Sn wires exhibit flux-jump instabilities at low magnetic fields, and low-noise V-H curves on
these wires show indications of flux jumps. V-H measurements
also reveal that less stable wires will quench (abruptly and irreversibly transition to the normal state) at currents much smaller
than Ic at the lower magnetic fields. This new method needs to
be further understood and may be standardized to ensure that it
provides accurate and reliable data.

Critical current vs. temperature of a Bi-2212 tape at a magnetic field of 0.5 tesla and various magnetic field angles. Such curves are used to determine the safe operating current at various temperatures and field angles.

STANDARDS COMMITTEES
Loren Goodrich is the Chairman of IEC/TC 90, the U.S. Technical Advisor to TC 90, the Convener of Working Group 2 (WG2) in TC 90, the primary U.S. Expert to WG4, WG5, WG6 and WG11, and the secondary U.S. Expert to WG1, WG3, and WG7.
Ted Stauffer is Administrator of the U.S. Technical Advisory Group to TC 90.

STANDARDS
In recent years, we have led in the creation and revision of several IEC standards for superconductor characterization:
IEC 61788-1 Superconductivity - Part 1: Critical Current Measurement - DC Critical Current of Cu/Nb-Ti Composite Superconductors
IEC 61788-2 Superconductivity - Part 2: Critical Current Measurement - DC Critical Current of Nb3Sn Composite Superconductors
IEC 61788-3 Superconductivity - Part 3: Critical Current Measurement - DC Critical Current of Ag-Sheathed Bi-2212 and Bi-2223 Oxide Superconductors
IEC 61788-4 Superconductivity - Part 4: Residual Resistance Ratio Measurement - Residual Resistance Ratio of Nb-Ti Composite Superconductors
IEC 61788-5 Superconductivity - Part 5: Matrix to Superconductor Volume Ratio Measurement - Copper to Superconductor Volume Ratio of Cu/Nb-Ti Composite Superconductors
IEC 61788-6 Superconductivity - Part 6: Mechanical Properties Measurement - Room Temperature Tensile Test of Cu/Nb-Ti Composite Superconductors
IEC 61788-7 Superconductivity - Part 7: Electronic Characteristic Measurements - Surface Resistance of Superconductors at Microwave Frequencies
IEC 61788-8 Superconductivity - Part 8: AC Loss Measurements - Total AC Loss Measurement of Cu/Nb-Ti Composite Superconducting Wires Exposed to a Transverse Alternating Magnetic Field by a Pickup Coil Method
IEC 61788-10 Superconductivity - Part 10: Critical Temperature Measurement - Critical Temperature of Nb-Ti, Nb3Sn, and Bi-System Oxide Composite Superconductors by a Resistance Method
IEC 61788-11 Superconductivity - Part 11: Residual Resistance Ratio Measurement - Residual Resistance Ratio of Nb3Sn Composite Superconductors
IEC 61788-12 Superconductivity - Part 12: Matrix to Superconductor Volume Ratio Measurement - Copper to Non-Copper Volume Ratio of Nb3Sn Composite Superconducting Wires
IEC 61788-13 Superconductivity - Part 13: AC Loss Measurements - Magnetometer Methods for Hysteresis Loss in Cu/Nb-Ti Multifilamentary Composites
IEC 60050-815 International Electrotechnical Vocabulary - Part 815: Superconductivity


MULTI CHANNEL CURRENT TRANSDUCER SYSTEMS


DANFYSIK

MULTI CHANNEL SYSTEM MCS FROM DANFYSIK


The new multi-channel systems (MCS) from DANFYSIK combine highest accuracy and bandwidth with lowest phase shift and common-mode influence. The systems measure AC signals as well as DC signals with linearity in the ppm range and work up to 1 MHz. Different current transducer heads from 200 A peak to 5000 A peak are available as standard transducers.
Optimised systems for the needs of power electronics and drives applications:
• Modular up to six channels
• High linearity
• Low offset
• High bandwidth up to 1 MHz
• Extremely low phase shift
• High CMR due to galvanic isolation
• Transducer heads from 200 A to 5000 A
• Current and voltage output
• Optimised for power electronics needs

CURRENT ANALYSIS IN POWER ELECTRONICS


APPLICATIONS
To optimise power electronics components like inverters
or drives, the electrical signals current and voltage must be measured with very high accuracy. The quality of the measurement
results depends on linearity, offset, and width and phase shift of
used instruments and connected current and voltage sensors.
Standard current transformers have a limited bandwidth, and fast impulse transducers have very low accuracy. In addition, these transducers are not able to measure DC components in the signal, or do not work at all at DC, since the iron core becomes saturated.
Wideband-MHz-shunts are accurate and fast but do not allow a
galvanic isolated connection of the instrument from the power
electronics circuit. Also, high common mode signals result in
disturbances and inaccuracy.
The measurement of electrical power and the calculation of
losses are even more problematic. At low power factor linearity,
offset and phase shift of the transducers have a much higher influence on the resulting error than the instrument itself. Losses are
normally calculated by the subtraction of the output power from
the input power. The efficiencies of power electronics components
are quite high, and therefore the actual losses compared to the
measured input or output power are very small. Consequently, the
error of measured input or output power can easily be as high as
the loss itself if the sensor is not accurate enough.
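As a rough worked illustration of this point (the numbers are assumed for illustration, not taken from any specific converter): for an inverter with an input power of 100 kW and 97 percent efficiency,

    P_{loss} = P_{in} - P_{out} = 100\ \mathrm{kW} - 97\ \mathrm{kW} = 3\ \mathrm{kW}.

If the input and output power are each measured with an uncertainty of only 0.5 percent, the uncertainty of the subtraction can reach roughly 0.005 x (100 + 97) kW, about 1 kW, a third of the loss being determined; with 1.5 percent sensors the error can be as large as the loss itself.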

The modularity of the MCS systems supports all types of analyses in the power electronics field. Many of the measurements are made on three-phase outputs of frequency inverters or three-phase sinusoidal inputs of electric motors. To analyse a complete frequency inverter, a six-phase system is necessary. Automotive applications normally measure the DC from the battery in addition to the three-phase output to the load. The MCS systems can be ordered with three to six channels, and a three-phase system can easily be upgraded with additional channels.

ZERO-FLUX PRINCIPLE
The transducer consists of a transducer head and an electronic module. In the transducer head there are three iron cores with a common secondary winding but with separate auxiliary windings. The primary current Ip, via the winding Lp, produces a magnetic field in the three iron cores of the transducer. Lp usually consists simply of the primary conductor, which is led through the transducer. The compensation current Ic compensates the magnetic field of the primary current and maintains a steady zero flux in the iron core. This compensation current is driven by an operational amplifier whose inputs are fed with signals proportional to the AC and DC components of the primary conductor current. The AC component is induced in the auxiliary winding Lh1. The DC component and the very low-frequency components come from the so-called Zero-Flux Detector (symmetry detector). Via an oscillator and the auxiliary windings Lh2 and Lh3, the other two iron cores are driven into saturation in different directions.

Both iron cores and the auxiliary windings Lh2 and Lh3 are built identically, so the currents through Lh2 and Lh3 are identical when the main core flux is zero. A direct current in the primary conductor produces a flux in the core; the two Zero-Flux Detector cores can then no longer be driven into saturation identically, and the two currents through Lh2 and Lh3 are no longer equal. The difference between the currents is proportional to the DC component of the current Ip. The Zero-Flux Detector processes this signal and feeds it to the DC input of the operational amplifier that drives the compensation current. In this way the DC component of the primary current is also compensated. The compensation current is an accurate reproduction of the primary current and can be evaluated as a galvanically isolated signal by all types of measuring instruments. The burden resistor is needed only if the measuring instrument has voltage inputs only. The main advantage of this technology is the high accuracy of the transducer: the sensitivity of the Zero-Flux Detector and of the iron cores allows the best possible ppm accuracy, and a transducer bandwidth of a few hundred kilohertz can easily be obtained.
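To make the scaling concrete (the turns ratio, primary current and burden value below are illustrative assumptions, not DANFYSIK specifications), the compensated condition corresponds to an ampere-turn balance:

    N_p I_p = N_s I_c \quad\Rightarrow\quad I_c = \frac{N_p}{N_s}\, I_p,
    \qquad V_{out} = I_c R_b .

For example, a 2000 A primary current through a single-turn primary and a 2000-turn secondary gives a 1 A compensation current, which a 1 ohm burden resistor converts to 1 V for an instrument with voltage inputs.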

Electricity Testing and Measurement Handbook Vol. 7

Electricity Testing and Measurement Handbook Vol. 7

69

FALL-OF-POTENTIAL GROUND TESTING, CLAMP-ON GROUND TESTING COMPARISON
Chauvin Arnoux, Inc.
On April 14, 2002, a ground resistance test was conducted to compare the results obtained from the Fall-of-Potential 3-Point testing method to the clamp-on testing method. The
grounding system consisted of four copper clad rods installed in
an approximate 20 ft square. Three of the rods are 5/8" in diameter
and 10 ft in length. The fourth rod is 1/2" in diameter and 8 ft in
length. All rods were coupled together with 3-gauge aluminum
wire. Figure 1 shows the schematic of the system.

Figure 1. The Grounding System

The tests were conducted with the following equipment manufactured by AEMC Instruments:
Model 4500, 4-Point Ground Resistance Tester
Model 4630, 4-Point Ground Resistance Tester
Model 3731, Clamp-On Ground Resistance Tester
Additionally, we used the AEMC Model 5600, a micro-ohmmeter, to verify the bonding of the aluminum wire to the individual ground rods.
The soil conditions in the test area were predominately loam with some gravel. Conditions on the day of the test were dry and sunny; some light rain had occurred the day before the test, so the soil was somewhat moist at the surface.
The AEMC Model 5600 Micro-Ohmmeter was used to measure bonding resistance at each rod and was the first test completed. Measurements from each conductor to the rod were taken, as well as measurements from conductor to conductor through the rod and clamp. Readings on rod number three ranged from 615 to 733 mΩ at each bonding point, indicating that all connections were good. See Figure 2 for full results.

Measurement Point        Resistance (mΩ)
A to B                   713
C to B                   615
A to C                   733

Figure 2. Bonding resistance measurements
In the first test, the AEMC Model 4500 was used as a 3-Point ground tester. Rod number three was first disconnected from the other rods in the system so that its individual resistance could be measured. The X lead was attached to rod number three (see Figure 3). The Z lead was attached to an auxiliary electrode 100 feet away, and the Y lead was initially connected to the auxiliary electrode 60 feet away. Readings were taken with the Y electrode at 90, 80, 70, 60, 50, 40, 30, 20 and 10 feet. Figure 4 shows the results of this test.

Figure 3. Three-Point test connection


Figure 4. Model 4500 test results
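The readings themselves appear only in the figures, but the way such fall-of-potential data are commonly interpreted can be sketched as follows. The resistance values in this Python snippet are invented for illustration and are not the measured results; the current electrode is taken to be at 100 feet, as in the test described above.

    import numpy as np

    # Hypothetical fall-of-potential readings: Y-electrode distance (feet) vs.
    # indicated resistance (ohms); the Z current electrode is assumed at 100 ft.
    d = np.array([10, 20, 30, 40, 50, 60, 70, 80, 90], dtype=float)
    R = np.array([55.0, 70.0, 78.0, 82.0, 84.0, 85.0, 86.5, 90.0, 101.0])

    # The accepted value is read where the curve flattens, conventionally at
    # 62 percent of the distance to the current electrode (the "62 percent rule").
    d62 = 0.62 * 100.0
    R62 = np.interp(d62, d, R)

    # A simple flatness check around the 62 percent point (plus/minus 10 ft).
    slope = (np.interp(d62 + 10, d, R) - np.interp(d62 - 10, d, R)) / 20.0
    print(f"R at 62% distance: {R62:.1f} ohms (local slope {slope:.2f} ohms/ft)")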

The same test was repeated using the AEMC Model 4630
fall-of-potential ground tester. The results are shown in Figure 5.

Figure 5. Model 4630 test results

Finally, the AEMC Model 3731 was used to measure the resistance at rod number three with all other rods
detached from it. A temporary cable was installed between
rod number three and the municipal grounding system,
thus setting up the required parallel paths necessary for
accurate measurement using a clamp-on ground tester (see
Figure 6). Under these conditions, the reading was 84.5 ohms.
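The reason the temporary tie to the municipal grounding system is needed follows from what a clamp-on tester actually measures: the resistance of the complete loop, that is, the rod under test in series with the parallel combination of all return paths. In symbols (the numerical values below are purely illustrative),

    R_{reading} = R_x + \left(\sum_{i=1}^{N}\frac{1}{R_i}\right)^{-1} \approx R_x
    \quad\text{when the return network is extensive and low in resistance.}

For example, if the rod is about 85 ohms and the municipal ground network presents roughly 1 ohm, the instrument reads about 86 ohms, within a couple of percent of the rod itself; with no parallel path at all, no loop exists and the method cannot be used.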
The results of these tests showed that the clamp-on
ground tester is indeed an effective tool in measuring
ground resistance when used under the proper conditions.
Readings between the clamp-on ground testing and the
fall-of-potential ground testing method correlate. The
advantages of using the clamp-on tester were the ability to test without disconnecting the rod from service and the ability to test without the need for auxiliary ground electrodes. These two points saved a considerable amount of time in conducting the test.

Figure 6. Single rod test using the Model 3731 clamp-on ground resistance tester


AN INTRODUCTION TO ANTENNA TEST RANGES, MEASUREMENTS AND INSTRUMENTATION
The basic principles of antenna test and measurement are discussed along with an
introduction to various range geometries, and instrumentation
Jeffrey A. Fordham, Microwave Instrumentation Technologies, LLC.

INTRODUCTION
By definition, all of today's wireless communication systems contain one key element, an antenna of some form. This antenna serves as the transducer between the controlled energy residing within the system and the radiated energy existing in free space. In designing wireless systems, engineers must choose an antenna that meets the system's requirements to firmly close the link between the remote points of the communications system. While the forms that antennas can take on to meet these system requirements for communications systems are nearly limitless, most antennas can be specified by a common set of performance metrics.

ANTENNA PERFORMANCE METRICS
In order to satisfy the system requirements and choose a suitable antenna, system engineers must evaluate an antenna's performance. Typical metrics used in evaluating an antenna include the input impedance, polarization, radiation efficiency, directivity, gain and radiation pattern.

INPUT IMPEDANCE
Input impedance is the parameter which relates the antenna to its transmission line. It is of primary importance in determining the transfer of power from the transmission line to the antenna and vice versa. The impedance match between the antenna and the transmission line is usually expressed in terms of the standing wave ratio (SWR) or the reflection coefficient of the antenna when connected to a transmission line of a given impedance. The reflection coefficient expressed in decibels is called return loss.

POLARIZATION
The polarization of an antenna is defined as the polarization of the electromagnetic wave radiated by the antenna along a vector originating at the antenna and pointed along the primary direction of propagation. The polarization state of the wave is described by the shape and orientation of an ellipse formed by tracing the extremity of the electromagnetic field vector versus time. Although all antennas are elliptically polarized, most antennas are specified by the ideal polarization conditions of circular or linear polarization.
The ratio of the major axis to the minor axis of the polarization ellipse defines the magnitude of the axial ratio. The tilt angle describes the orientation of the ellipse in space. The sense of polarization is determined by observing the direction of rotation of the electric field vector from a point behind the source. Right-hand and left-hand polarizations correspond to clockwise and counterclockwise rotation respectively.

DIRECTIVITY
It is convenient to express the directive properties of an antenna in terms of the distribution in space of the power radiated by the antenna. The directivity is defined as 4π times the ratio of the maximum radiation intensity (power radiated per unit solid angle) to the total power radiated by the antenna. The directivity of an antenna is independent of its radiation efficiency and its impedance match to the connected transmission line.
GAIN
The gain, or power gain, is a measure of the ability to concentrate in a particular direction the net power accepted by the
antenna from the connected transmitter. When the direction is not
specified, the gain is usually taken to be its maximum value.
Antenna gain is independent of reflection losses resulting from
impedance mismatch.

RADIATION EFFICIENCY
The radiation efficiency of an antenna is the ratio of the
power radiated by the antenna to the net power accepted at its
input terminals. It may also be expressed as the ratio of the maximum gain to the directivity.
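Stated compactly (this simply restates the two definitions above in symbols):

    \eta_{rad} = \frac{P_{rad}}{P_{in}} = \frac{G_{max}}{D},
    \qquad\text{so}\qquad G_{max} = \eta_{rad}\, D ,

where P_in is the net power accepted at the antenna terminals, P_rad is the power radiated, G_max is the maximum gain and D is the directivity.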

RADIATION PATTERN
Antenna radiation patterns are graphical representations
of the distribution of radiated energy as a function of direction
about an antenna. Radiation patterns can be plotted in terms of
field strength, power density, or decibels. They can be absolute
or relative to some reference level, with the peak of the beam
often chosen as the reference. Radiation patterns can be displayed in rectangular or polar format as functions of the spherical
coordinates q and f. A typical antenna pattern in a rectangular
format is shown below1.


ANTENNA RANGE SITING CONSIDERATIONS
The choice of an antenna test range is dependent on many factors, such as the directivity of the antenna under test, frequency range and desired test parameters. Often the mechanical features of the antenna (size, weight and volume) can have as much influence on the selection of an antenna range as do the electrical performance factors. In selecting an antenna range to evaluate antenna performance, care must be taken to ensure the performance metrics are measured with sufficient accuracy.

Commonly used antenna test range geometries: rectangular anechoic chamber, compact antenna test range, outdoor elevated range, ground reflection range, planar near-field, cylindrical near-field and spherical near-field.

A few of the more commonly used antenna test ranges are shown here. Regardless of the chosen test range, three key factors must be addressed and controlled to ensure a successful measurement. These factors are the phase variations of the incident field, the amplitude variations of the incident field and the stray signals created by reflections within the test range.

VARIATIONS OF THE PHASE OF THE INCIDENT FIELD
In order to accurately measure an antenna's far zone performance, the deviation of the phase of the field across its aperture must be restricted. The criterion generally used is that the phase should be constant to within π/8 radians (22.5°). Under normal operating conditions, this criterion is easily achieved since there is usually a large separation between transmitting and receiving antennas. During antenna testing, it is desirable because of various practical considerations to make antenna measurements at as short a range as possible. Since the measurements must simulate the operating situation, it is necessary to determine the minimum separation between the transmitting antenna and the receiving antenna for a reasonable approximation of the far field gain and radiation patterns. At distances from a transmitting antenna which are large compared with the antenna dimensions, the phase front of the emergent wave is nearly spherical in shape. For extreme separations, the radius of curvature is so large that for all practical purposes the phase front can be considered planar over the aperture of a practical antenna. As the antennas are brought closer together, a condition is reached in which, because of the short radius of curvature, there is an appreciable separation Δ between the wavefront and the edges of the antenna aperture.

Spherical phase front tangent to a plane antenna aperture.

A criterion that is commonly employed in determining the minimum permissible value of the range length R is to hold Δ to a maximum of 1/16 wavelength (equivalent to 22.5° of phase variation). If this condition is met, the receiving antenna is said to be in the far field of the transmitting antenna. The mathematical expression for this minimum range is R = 2D²/λ, where D is the largest dimension of the antenna aperture and λ is the wavelength.
The major effect of a small deviation Δ is to produce minor distortions of the sidelobe structure. Larger values of Δ will cause appreciable errors in the measured gain and lobe structure. Conversely, this condition can mask asymmetrical sidelobe structures which are actually present.
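As a quick numerical illustration of the criterion just given (the aperture size and frequency are arbitrary example values, not taken from the article):

    import math

    c = 299_792_458.0          # speed of light, m/s

    def far_field_distance(D_m: float, f_hz: float) -> float:
        """Conventional far-field distance R = 2*D^2/lambda for an aperture of size D."""
        lam = c / f_hz
        return 2.0 * D_m ** 2 / lam

    def path_length_deviation(D_m: float, R_m: float) -> float:
        """Approximate wavefront-to-aperture-edge separation, delta ~ D^2 / (8*R)."""
        return D_m ** 2 / (8.0 * R_m)

    # Example: a 1 m aperture at 10 GHz.
    D, f = 1.0, 10e9
    R = far_field_distance(D, f)
    lam = c / f
    delta = path_length_deviation(D, R)
    print(f"R = {R:.1f} m, delta = {delta*1000:.2f} mm = lambda/{lam/delta:.1f}")

Running this gives a range length of about 67 m, with the edge deviation equal to one sixteenth of a wavelength, which is exactly the 22.5° phase criterion described above.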

VARIATIONS OF THE AMPLITUDE OF THE INCIDENT FIELD


A second and important siting consideration is the variation
of the amplitude of the incident field over the aperture of the test
antenna. Excessive variations in the field will cause significant
errors in the measured maximum gain and sidelobe structure. This
effect can be seen better from the viewpoint of reciprocity.
Variations in the amplitude of the field over the aperture on receiving
are analogous to the transmitting case of a modification of the
aperture illumination by the primary feed. If the variation across
the antenna under test is limited to about 0.5 dB, error in the
measurements will be negligible for most applications. It is essential that the transmitting antenna be accurately directed so that the
peak of its beam is centered on the aperture of the antenna under
test. Improper alignment, which may not cause a noticeable loss of signal level, results in an asymmetrical aperture illumination and error in the measurement of the sidelobe structure.

INTERFERENCE FROM REFLECTIONS


The requirement of providing adequate separation between
antennas to prevent excessive phase error makes it difficult to satisfy a further requirement that the site be free of large reflections
from the ground or other sources of reflection. Addition of reflected
fields at the test antenna can produce erroneous gain and pattern
measurements. For instance, an interfering field which is 30 dB
below the direct path signal can cause a variation of 0.25 dB in
the measured maximum gain and can seriously affect the measured sidelobe structure of the pattern. The usual method of minimizing the effects of fields caused by reflections are to (1) mount
the transmitting antenna and test antenna sites on towers, (2)
employ a directive transmitting antenna, (3) avoid smooth surfaces
which are oriented so that they produce direct reflection into the
test antenna, and (4) erect screens or baffles to intercept the reflected
wave near the reflection point.
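The sensitivity to stray signals quoted above can be checked with a one-line calculation. A reflected signal whose voltage is a fraction r of the direct signal perturbs the measured level by

    \Delta = 20\log_{10}(1 \pm r)\ \mathrm{dB}.

For an interfering field 30 dB below the direct path, r = 10^(-30/20) ≈ 0.032, giving roughly +0.27 dB to -0.28 dB, consistent with the approximately 0.25 dB variation cited above; sidelobes, being much weaker than the main beam, are disturbed far more severely.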
An alternate procedure is to locate the transmitting and
receiving antennas over a flat range and to take into account the
specular reflection from the ground in making measurements.
The heights of the antenna under test and the transmitting antenna
are adjusted for a maximum of the interference pattern between
the direct and ground reflected wave. Generally, it is more convenient to mount the test positioner and antenna on a fixed height
tower or building and vary the height of the transmitting antenna.
This can be accomplished with the transmitting antenna mounted
to a motor driven elevator/carriage assembly that can travel up and
down a tower.
In cases where the antenna range length is reasonably
short, the entire range can be housed indoors in an anechoic
chamber. An indoor far-field anechoic chamber has the same
basic design criteria as an outdoor range except that the surfaces
of the room are covered with RF absorbing material. This
absorber is designed to reduce reflected signal over its design
frequency range. Testing indoors offers many advantages over conventional outdoor ranges, including improved security, avoidance of unwanted surveillance, and improved productivity due to less time lost because of weather and other environmentally related
factors. The advantages of testing indoors are primarily responsible for the trend toward more advanced test ranges such as the
compact range and near-field ranges.

AMPLITUDE VARIATION ELEVATED RANGES


Variations in the amplitude of the field incident over a test aperture must also be restricted for accurate far-zone measurements. For range geometries employing comparatively large transmitting and test tower heights (i.e., elevated range geometries), it is advisable to restrict the amplitude taper over the test aperture to the order of 1/4 dB or less; in practice this is done by limiting the size (directivity) of the transmitting antenna relative to the range length.

From the viewpoint of suppressing range surface reflections, it is also desirable to maintain the test height H at greater than or equal to 6D, where D is the maximum vertical dimension of the test aperture. If one must, for practical reasons, employ
test heights less than approximately 4D, the ground-reflection
technique should be considered.


GROUND-REFLECTION ANTENNA TEST RANGES


Ground-reflection antenna range geometries are often
advantageous when the test situation involves low directivities
and high accuracy requirements or when practical test heights
are less than approximately four times the maximum vertical
dimension of the test aperture. In this technique specular reflection from the range surface is caused to create constructive interference with the direct-path energy in the region of the test aperture, such that the peak of the first interference pattern lobe is
centered on the test aperture. Four basic criteria are applicable to
ground-reflection range geometries.
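One relation commonly used in the antenna-measurement literature for this geometry (stated here as background, not as one of the article's own equations) fixes the transmitting antenna height h_t so that the first interference lobe peaks at the test height h_r:

    h_t \approx \frac{\lambda R}{4\,h_r},

where R is the range length and λ the wavelength. The remaining criteria typically constrain the test height relative to the aperture size and the smoothness of the reflecting surface.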

COMPACT ANTENNA TEST RANGES


Compact ranges are an excellent alternative to traditional
far-field ranges. Any testing that can be accomplished on a farfield range can be accomplished on a compact antenna test range.
This method of testing allows an operator to employ an indoor
anechoic test chamber at a reasonable cost and avoid the problems
associated with weather and security often encountered when
using an outdoor test range. In a research and development situation, the small size of a compact range allows it to be located
convenient to the design engineers. In a manufacturing environment, the compact range can be located near to the final testing
and integration facilities. By placing a compact range in a shielded
chamber, one can eliminate interference from external sources.
This last feature has become more important in the last several
years as the proliferation of cell phone and wireless systems has
created a background noise environment which has made antenna testing in a quiet electromagnetic environment more difficult
The principle of operation of a compact range is based on
the basic concepts of geometrical optics. Diverging spherical waves
from a point source located at the focal point of a paraboloidal surface are collimated into a plane wave. This plane wave is incident
on the test antenna. The resultant plane wave has a very flat phase
front, but the reflector-feed combination introduces a small (but
generally acceptable) amplitude taper across the test zone.
In principle, the operation of a compact range is straightforward; however, its ultimate design, construction, and installation
should be carefully considered.


NEAR-FIELD ANTENNA TEST RANGES

Near-field ranges are used where large antennas are to be


tested indoors in a relatively small space. This type of range uses a
small RF probe antenna that is scanned over a surface surrounding
the test antenna. Typically, separation between the probe and the
antenna structure is on the order of 4 to 10 wavelengths. During the
measurement, near-field phase and amplitude information is collected over a discrete matrix of points. This data is then transformed
to the far-field using Fourier techniques. The resulting far-field data
can then be displayed in the same formats as conventional far-field
antenna measurements.
In addition to obtaining far-field data, Fourier analysis
techniques are used to back-transform the measured electromagnetic field to the antenna's aperture to produce aperture field
distribution information. This offers the ability to perform element
diagnostics on multi-element phased arrays.
In near-field testing, the test antenna is usually aligned to
the scanner's coordinate system and then either the probe or the
test antenna is moved. In practice, it is easier and more cost
effective to scan the RF probe over linear axes or the test antenna over angular axes. But this does not have to be the case. There
are many scanning coordinate systems possible for collecting the
near-field data. Three techniques are in common usage:
Planar Near-Field Method: With planar near-field scanning, the probe usually is scanned in X and Y linear coordinates over the aperture of the test antenna. A large planar scanner is used to move the probe over a very accurate plane located in front of the test antenna's aperture. Once aligned to the scan plane, the test antenna is not moved during the collection of the near-field data. Planar near-field provides limited angular coverage of the test antenna's field due to the truncation caused by the scanner's dimensions. A brief sketch of the planar transformation is given after the three scanning methods described here.
Cylindrical Near-Field Method: For this method the
probe typically is scanned in one linear dimension using a single
axis linear positioner. The test antenna is stepped in angle on a
rotary axis oriented parallel to the linear axis. The resulting scan
describes a cylindrical surface around the test antenna. Cylindrical
near-field scanning can provide complete angular coverage of the
test antenna's field in one plane. The orthogonal plane has limited
angular coverage due to truncation caused by the finite length of
the linear scanner.
Spherical Near-Field Method: Spherical near-field scanning normally involves installing the test antenna on a spherical scanning positioner. The probe antenna is normally fixed in
space. The test antenna is normally scanned in one angular axis
and stepped in an orthogonal angular axis. The resulting data is
collected over a spherical envelope surrounding the test antenna.
Full or nearly-full coverage of the test antenna's radiating field
can be evaluated with this type of near-field system.
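For the planar case, the core of the transformation is a two-dimensional Fourier transform of the sampled aperture-plane field into a plane-wave (angular) spectrum. The sketch below is a bare-bones illustration with a synthetic uniform aperture; it omits probe correction, obliquity factors and the cylindrical and spherical variants, so it shows the idea rather than a measurement-grade implementation.

    import numpy as np

    f = 10e9                       # frequency, Hz (illustrative)
    lam = 3e8 / f
    k = 2 * np.pi / lam

    # Synthetic "measured" data: a uniform 10-lambda square aperture sampled at lambda/2.
    dx = lam / 2
    N = 128
    x = (np.arange(N) - N // 2) * dx
    X, Y = np.meshgrid(x, x)
    E = ((np.abs(X) < 5 * lam) & (np.abs(Y) < 5 * lam)).astype(complex)

    # Plane-wave spectrum via 2-D FFT; kx, ky are the transform variables.
    A = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(E)))
    kx = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(N, d=dx))
    visible = np.abs(kx) <= k                  # propagating (visible) region only

    # Principal-plane relative far-field pattern in dB.
    pattern = 20 * np.log10(np.abs(A[N // 2, :]) / np.abs(A).max())
    print("Principal-plane relative pattern (dB), visible region:")
    print(np.round(pattern[visible][::8], 1))

The first sidelobe of the printed cut falls near -13 dB, as expected for a uniformly illuminated aperture, which is a convenient sanity check on the transform.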

PULSED ANTENNA MEASUREMENTS
Characterizing antennas under pulsed RF conditions is becoming increasingly commonplace. Advanced radar and wireless systems and their enabling technologies such as monolithic microwave integrated circuits (MMICs) require testing methods to verify performance over a wide range of operating parameters. In addition to the pulse parameters, the major factors influencing pulsed RF testing include high transmit power levels, thermal management of the antenna under test (AUT) and its supporting equipment in the test environment, and interfacing to a highly integrated antenna assembly with its associated transmitting and control circuitry. Due to these issues, pulsed RF operation presents an additional set of test problems not often encountered in CW operation. As a result, instrumentation complexity increases and measurement system timing issues become critical.
The basic pulsed antenna test parameters are identical to those encountered in CW measurements. Gain, sidelobe levels, pointing accuracy, beamwidth, null locations and depths, and polarization parameters are essential to fully characterizing an antenna. In addition to the traditional time-invariant antenna performance parameters, some new time-dependent parameters emerge when testing under pulsed conditions. These include transient effects such as beam formation and distortions as a function of time within a pulse or over an ensemble of pulses, power output (i.e., gain) as a function of time within a pulse or pulse burst, etc.
Compounding these measurements is the additional burden of multi-channel, multi-frequency, and multi-state measurements as a function of pulse repetition frequency (PRF), duty factor (DF) and operating frequency. Due to the increasingly integrated nature of antennas with their transmitters, the measurement system must be responsive to external RF pulse generation and timing for both single and multiple pulse measurements.

ANTENNA RANGE INSTRUMENTATION


Regardless of the type of antenna range to be implemented,
the complement of instruments to operate the range is very similar. Differences occur due to the location of the various instruments with respect to the source and test antennas, types of
measurements to be performed and the degree of automation
desired. A description of the basic instrumentation subsystems
and typical applications of different types of antenna ranges, will
be presented here.
The instrumentation for measuring antenna patterns consists of four subsystems, which can be controlled from a central
location. These subsystems are:
1. Positioning and Control
2. Receiving
3. Signal Source
4. Recording and Processing
The test antenna is installed on a positioner and is usually tested in the receive mode. The motion of the positioner (rotation
of the test antenna) is controlled by a positioner control unit located in the control room. The positioner is equipped with synchro
transmitters or high accuracy encoders to provide angle data for
the position indicator and the recording/processing subsystem.
To process the received signal for recording, the RF signal
must be detected. In most cases, microwave receivers are
employed on the antenna range to accept the very low-level signals from the test antenna and to downconvert these signals to
lower frequencies for processing. Microwave receivers offer many
advantages including improved dynamic range, better accuracy,
and rejection of unwanted signals that may be present in the area.
Also phase/amplitude receivers provide the ability to measure
phase characteristics of the received signal. Phase information is
required for many types of antenna measurements.
A signal source provides the RF signal for the remote
source antenna. The signal source can be permanently fixed on
the ground or floor, or located on a tower near the source antenna,
depending on the frequency of operation and mechanical considerations. The signal source is designed for remote operation. The
source control unit is usually located in the control room with the
measurement and control instrumentation.

Electricity Testing and Measurement Handbook Vol. 7

75

Often, a computer subsystem is added to the instrumentation


to automate the entire measurement sequence. This computer subsystem employs a standard bus interface, like the IEEE-488, to
setup and monitor the individual instruments. High-speed data
busses are utilized for the measurement data to maximize data
throughput and productivity.
An automated antenna measurement system offers a high
degree of repeatability, speed, accuracy, and efficiency with minimum operator interaction. Data storage is conveniently handled
by a variety of media including a local hard drive, floppy disk,
removable drives or bulk data storage on a local area network.
After data acquisition is completed, an automated system supports analysis of the measured data such as gain and polarization
plus a wide variety of data plotting formats such as rectangular,
polar, three-dimensional, and contour plots.
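To make the idea of bus-controlled automation concrete, the following Python sketch uses the PyVISA library to step an azimuth positioner and log receiver amplitude over an IEEE-488 bus. The instrument addresses and command strings are placeholders invented for illustration; real positioner controllers and receivers each have their own command sets.

    import numpy as np
    import pyvisa

    rm = pyvisa.ResourceManager()
    positioner = rm.open_resource("GPIB0::5::INSTR")    # assumed address
    receiver = rm.open_resource("GPIB0::7::INSTR")      # assumed address

    angles = np.arange(-180.0, 180.0, 0.5)
    amplitude_db = []
    for az in angles:
        positioner.write(f"AZ {az:.2f}")                # hypothetical move command
        positioner.query("*OPC?")                       # wait until motion completes
        amplitude_db.append(float(receiver.query("MEAS:AMP?")))  # hypothetical query

    np.savetxt("azimuth_cut.csv",
               np.column_stack([angles, amplitude_db]),
               delimiter=",", header="azimuth_deg,amplitude_db")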

The test positioner axes are controlled and read out by the
positioner control and readout units. A typical control system consists of a control unit located in the operators console. It is interfaced to a power amplifier unit located near the test positioner.
This configuration keeps the high power drive signals near the
positioner and away from sensitive measurement instruments
while providing remote control of positioner functions from the
equipment console. The position readout unit is located in the
equipment console to provide real time readout of position axes to
the operator or, in the case of an automated system, to the computer.
The source antenna is normally located at the opposite
end of the range on a tower or other supporting structure. The
signal source is installed near the source antenna to minimize
signal loss. An outdoor enclosure protects the source from the
elements. For some applications a multiplexer can be used
between the signal source and a dual polarized source antenna.
This configuration allows simultaneous co- and cross-polarization
measurements to be performed. Motorized axes to position the
source antennas polarization, height and boresight are controlled
by a positioner control and indicator system.
The signal source and positioner axes are remotely controlled from the operators console via serial digital link(s).
Twisted pair cable, fiber optics or telephone lines can be used to
interface the digital link from the source site to the control console.
One or two positioner control systems may be used on an
outdoor range depending upon the length of the range and the
total number of axes to be controlled. On very long ranges, or in
cases where the control room is not close to either positioner, it
may be advantageous to use a separate control unit for each end
of the range. Also, since outdoor ranges frequently have many
axes due to the source tower axes, multiple controllers may be
required to control all axes.
A block diagram of a typical outdoor range is shown below.

TYPICAL APPLICATIONS OF ANTENNA RANGE


INSTRUMENTATION
OUTDOOR FAR-FIELD RANGE
In an outdoor far-field range configuration, the test antenna is installed on the test positioner located on a tower, roof or
platform outside the instrumentation control room. The receiver
front end (Local Oscillator) is usually located at the base of the
test positioner, with the mixer connected directly to the test
antenna port. This configuration requires only a single RF path
through the positioner, greatly simplifying system design. Use of the remote front end also minimizes local oscillator power loss to the mixer and maximizes system sensitivity. An outdoor enclosure protects the local oscillator from the weather and temperature
extremes. For multi-ported antennas, simultaneous measurements
can be made on all ports through the use of multiplexers installed
in front of the mixer. The receiver front end is remotely controlled
from the control console through interfaces with the receiver.

Outdoor Range with Manual Control [1]


INDOOR FAR-FIELD RANGE


Anechoic chambers are instrumented essentially the same way as outdoor ranges, with range length being the primary difference. The receiver front end
is typically positioned near the test positioner with the mixer connected directly
to the test antenna port. The source is
located near the source antenna. The
control room is generally centrally
located and connected to both ends of
the range via cables or digital links.
Since these systems are located indoors,
special enclosures for the receiver front
end, positioner control, and signal
source subsystems are not required.
Usually, the source antenna
requires only polarization control. This,
as well as the short range length, usually
allows a single positioner control unit to
be used to control all the range axes.
Anechoic chambers can be configured for either manual or automatic
control.

Indoor Range with Automatic Control [1]

COMPACT RANGE
In a point-source compact range,
the feed is usually located just in front of
and below the test antenna. In this configuration, the receiver local oscillator
and signal source can be located very
close together. Special care must be
taken to guard against direct leakage of
the signal source into the test antenna.
High quality RF cables and special
shielding are sometimes used to protect
against this stray leakage. Otherwise,
instrumentation for the compact range is
very similar to an anechoic chamber.

Compact Range with Automatic Control Configured for Multiple Port Measurements [1]


NEAR-FIELD RANGE

CONCLUSIONS

Near-field ranges usually are configured for automatic


control. The large numbers of measurements required, and the
need to transform the near-field data to the far-field, requires the
use of a computer system both for data acquisition and for data
reduction and display.
The configuration of a near-field range is similar to a very
short indoor range. The antenna may be tested in the transmit
mode, receive mode, or both. Consequently, the design of the RF
system and the location of the source and receiver front end must
be considered for each application. The figure below is one
example of a planar near-field application where the test antenna
is to be tested in both transmit and receive modes.

As technology progresses, the requirements placed upon


wireless communication systems and their associated antennas
will continue to become more stringent. For example, the desire
to increase network capacity will result in the requirement to
reduce adjacent channel interference within the system, which
will result in more stringent antenna sidelobe and cross-polarization requirements.
The verification of the performance of antennas selected
to meet these and other requirements will, in turn, require test
ranges with higher accuracy measurement capability.
Fortunately, the technologies used to advance the art of antenna design are also being used to advance the design of antenna test and measurement ranges and instrumentation. Many of the simulation tools available to antenna designers are also used to
design antenna ranges. The increased use of commercial off-theshelf hardware and software, in conjunction with the increased
use of automated test instrumentation networked into the local
area network, will ensure that current state-of-the-art antenna
measurement systems meet the needs of the advanced antennas
and systems coming to the wireless marketplace.

REFERENCES:
[1] Product Catalog, Microwave Measurements Systems and
Product, Microwave Instrumentation Technologies, LLC.
[2] R. Hartman and Jack Berlekamp, Fundamentals of Antenna Test and Evaluation, Microwave Systems News and Communications Tracking, June 1988.
[3] J.S. Hollis, T.J. Lyon, and L. Clayton, eds., Microwave
Antenna Measurements, Scientific-Atlanta, Inc., 1985.
[4] R.C. Johnson and Doren Hess, Conceptual Analysis of
Measurements on Compact Ranges, Antenna Applications
Symposium, September 1979.
[5] R.C. Johnson editor, Antenna Engineering Handbook,
McGraw-Hill Inc., 3rd edition, 1993.

A Typical Planar Near-Field Application [1]


DERIVING MODEL PARAMETERS FROM FIELD TEST MEASUREMENTS
J.W. Feltes, S. Orero, Power Technologies, Inc., B. Fardanesh, E. Uzunovic, S. Zelingher,
the New York Power Authority, N. Abi-Samra, EPRIsolutions, Inc.
A major component of any power system simulation
model is the generating plant which comprises three major subcomponents of interest: the generator, excitation system, and the
turbine/governor. Accuracy of representation is dependent both
on the structure of the component models and the parameter values used within those models.
Since the accuracy of power system stability analysis
depends on the accuracy of the models used to represent the generators, excitation control systems, and speed governing systems, the parameters used in those models could affect the calculated margin of system stability. Use of more accurate models
could result in increases in overall power transfer capability and
associated economic benefits. Alternately, inaccurate simulation
models could result in the system being allowed to operate
beyond safe margins.
To assist in these efforts, Power Technologies, Inc. (PTI), the New York Power Authority (NYPA), and EPRIsolutions, Inc. have developed an automated tool, DeriveAssist, to assist engineering staff in the derivation of model parameters from the recorded results of staged tests. The purpose of DeriveAssist is to speed up the parameter derivation process and to allow engineers less versed in parameter matching and identification to get involved in the process.

NEED FOR BETTER PARAMETERS


Modern power systems are highly dependent on the proper
use of dynamic control. Special-purpose computer programs have
been developed to simulate the dynamics of large complex interconnected power systems. At the same time, power systems have
become more highly stressed through heavier system loading
caused by transfer of low-cost energy and increased use of system
controls to increase transfer limits with existing transmission.
Inaccuracies in equipment modeling can be both due to
inadequate model structures and, more often, due to lack of data
on equipment model parameters. Model parameters currently
used for stability analysis are usually provided by equipment
manufacturers and calculated from design data and, in some
cases, factory tests. Generally, they are not verified by field tests.
Some pieces of equipment are tuned by field personnel with
results of that initial tuning rarely incorporated into simulation
models. Parameters may have changed from initial values due to
retuning, aging, and equipment changes, such as generator
rewinding. The extensive use of computer simulation requires a
high degree of confidence in the computer models. The only way
to assure that study assumptions are accurate is to field test
equipment and validate simulation models by comparing model
responses with those obtained from field tests. Thus, there is a
major industry need to enhance equipment model development
as well as model parameter identification and validation.

This need has been recognized by organizations responsible for system reliability. For example, the Western System
Coordinating Council (WSCC) instituted a program requiring
testing and model validation for all generating units greater than
10 MW. The North American Electric Reliability Council
(NERC) is presently formulating its requirements in this area.

STAGED TESTS
Staged field tests provide sufficient information to identify
the values of the key parameters of the computer simulation models.
Such tests are selected to minimize the effect on plant operation,
allow ease of simulation of the staged tests, and, to the extent possible, reduce the complexity of the parameter derivation problem by
having the response of an individual test significantly affected by
only a few parameters.
The test methodology described is just one testing methodology; other methods are used successfully also. However, the
tools and procedures developed and used in this parameter derivation software could be adapted to these other variants in the
testing process.
The testing process is divided into two phases. One phase
involves collecting steady-state measurements, which are used to
establish base values of quantities and to identify values for
parameters that are associated with steady-state operation. The
second phase involves collecting the dynamic response of the generator, excitation system, and governor/turbine system to staged
disturbances.

STEADY-STATE MEASUREMENTS
The steady-state measurements are divided into two
groups: the open circuit saturation curve measurements and
online measurements. The open circuit saturation curve is measured with the unit operating offline at rated speed. The generator
field excitation is varied, and measurements of terminal voltage,
field voltage, and field current are taken.
The online measurements (also sometimes called V-curve measurements) are performed with the unit connected to the electrical network and placed at a given load. At that load level, the generator field excitation is varied to change the reactive power output. Typical measurement points are given in graphical form in Figure 1.


DYNAMIC TESTS

The gains and time constants of the models can be determined only from tests that excite the dynamic response of the
equipment. The models of concern are those of the generator,
excitation system, and governor. The purpose of the dynamic
tests is to provide a simple and safe disturbance to excite the unit
in order to record its dynamic response. The usual approach is a
series of load rejection tests with the unit initially carrying partial load.
Each of the tests has identification of certain parameters
as its primary goal. The loading of the machine is selected to isolate those parameters as much as possible in order to reduce the
complexity of the derivation process. The initial conditions for a
typical set of load rejections are listed in Table 1.

PROGRAM OVERVIEW
DeriveAssist works through a graphical user interface (GUI) and is written to operate on a personal computer. The program uses MATLAB, Simulink, and the Optimization Toolbox
as the calculation engine for the parameter derivation process.
MATLAB is a high-performance language for technical computing. Simulink is a graphical tool for modeling, simulation, and
analysis of dynamic systems. The Optimization Toolbox is a collection of routines that extend the capability of MATLAB for
such problems as nonlinear minimization, equation solving, and
curve fitting. By combining these tools with the experience
gained through years of testing and parameter derivation, the
process of parameter identification and derivation has been significantly advanced. The program is organized to facilitate the
derivation of the parameters in a logical order, starting with the
steady-state tests and then proceeding to the dynamic tests. The
main entry point into the program is a window with several pull-down menus. Each menu item has several submenu choices.
Each of these submenu choices performs a particular task. The
submenus were also designed to reflect a particular test.
The five menus are:
File I/O
Steady-state tests
Dynamic tests
Control tuning
Help.
The parameter derivation program is organized to facilitate
the derivation of the parameters in a logical order, starting with
the data from the steady-state tests and then proceeding to the
dynamic tests for the generator, excitation system, and governor.
Where a certain sequence of actions must be observed (i.e., read
saturation curve first before trying to calculate the saturation
parameters), the program performs a check and warns if the prerequisite tasks have not been performed.

TRADITIONAL GENERATOR PARAMETER DERIVATION PROCESS AND THE DERIVATION SOFTWARE
The traditional method of determining the model parameters based on the recorded test results forms the basis for the
approach used by the parameter derivation automation software.
The traditional methodology has been highly dependent on skilled engineers applying their knowledge to select initial parameters, perform calculations using those parameters, and, based on the difference between measured and calculated values, adjust the parameters manually to improve the fit between model and real-world response. This iterative approach is quite time consuming
and requires a skilled, experienced engineer to make the adjustments so as to accomplish a good match in a reasonable amount of
time. The purpose of the software described here is to speed up
that process and also allow engineers less versed in parameter
matching and identification to get involved in the process.

DERIVATION OF STEADY-STATE PARAMETERS


The steady-state parameters are derived first. They can be
derived based on a series of measurements of steady-state quantities as described previously. The first task is to analyze the open
circuit data (obtained from the offline open circuit test) to establish the base values for field current and field voltage. This has
been traditionally accomplished by plotting terminal voltage versus field current and drawing the air gap line. The value of field
current corresponding to rated terminal voltage on the air gap
line is identified as the base value. Next, the saturation values
S(1.0) and S(1.2), the parameters used to describe the shape of
the saturation curve, are identified using the open circuit data.
The program automates this task, calculating the base value and
saturation parameters using a least squares fit. Figure 2 shows an
example of the output of the program, showing the close match
achievable between test and calculated values. Tabular output
demonstrating the fit of the measured data and calculated results
is also given.
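The same kind of fit can be sketched in a few lines of numerical code. The snippet below is a minimal illustration only, not the DeriveAssist implementation: it assumes hypothetical open circuit test points, fits the air gap line to the low-voltage points to obtain the field current base, and then performs a least squares fit of a commonly used quadratic saturation characteristic to obtain S(1.0) and S(1.2). The data values, variable names, and the choice of saturation function are assumptions made for illustration.

# Minimal sketch (not the DeriveAssist code): derive the field-current base
# and the saturation factors S(1.0), S(1.2) from open circuit test data.
# Assumptions: terminal voltage Vt in per unit, field current Ifd in amperes,
# and a quadratic saturation form Ifd(Vt) = Vt/slope + B*(Vt - A)**2 for Vt > A.
import numpy as np
from scipy.optimize import least_squares

# Hypothetical open circuit saturation curve measurements.
vt  = np.array([0.2, 0.4, 0.6, 0.8, 0.9, 1.0, 1.1, 1.2])                # terminal voltage, pu
ifd = np.array([31.0, 62.0, 93.5, 127.0, 147.0, 170.0, 198.0, 235.0])   # field current, A

# 1) Air gap line from the unsaturated (low-voltage) points: Vt = slope * Ifd.
slope = np.polyfit(ifd[vt <= 0.6], vt[vt <= 0.6], 1)[0]   # pu volts per ampere
ifd_base = 1.0 / slope                                    # field current at 1.0 pu on the air gap line

def model(params, v):
    a, b = params
    return v / slope + b * np.maximum(v - a, 0.0) ** 2

# 2) Least squares fit of the saturation parameters A and B to the full curve.
a, b = least_squares(lambda p: model(p, vt) - ifd, x0=[0.6, 50.0]).x

def s_factor(v):
    # Saturation factor: extra field current above the air gap line, normalized.
    return (model((a, b), v) - v / slope) / (v / slope)

print(f"Ifd base = {ifd_base:.1f} A, S(1.0) = {s_factor(1.0):.3f}, S(1.2) = {s_factor(1.2):.3f}")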
The online steady-state measurements are used to identify
the values for Xd and Xq. Recordings of voltage, power, reactive
power, field current, field voltage, and power angle are made at
different power levels and reactive power outputs. The user can
select which points to use in the calculation. The program calculates the reactances Xd and Xq that best fit the measured data,
again using a least squares optimization. Each reactance can be
calculated separately or both at the same time. If the user selects


Xd only, then the error function to minimize uses only field current, Ifd. However, if the user selects Xq, the error function will be
based on power angle. The last option is only possible when
rotor angle measurements have been made during the field tests.
Figure 3 shows the program output screen following the derivation of the generator steady-state reactances.
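To illustrate the nature of this least squares problem, the sketch below estimates Xd from a handful of hypothetical online measurement points by minimizing the error between the measured per-unit field current and the value predicted by the classical unsaturated round rotor relation E = Vt + jXd*I, with I = (P - jQ)/Vt and Vt taken as the reference phasor. It is a simplified stand-in for the program's calculation; saturation and armature resistance are ignored and all data values are invented.

# Minimal sketch: least squares estimate of Xd from online (V-curve) points,
# assuming the classical unsaturated round rotor relation |Vt + j*Xd*I| = Ifd (pu).
import numpy as np
from scipy.optimize import least_squares

# Hypothetical measurement points (all in per unit on the machine base).
vt  = np.array([1.00, 1.00, 1.02, 0.98])   # terminal voltage magnitude
p   = np.array([0.50, 0.50, 0.80, 0.80])   # active power
q   = np.array([0.10, 0.30, 0.05, 0.25])   # reactive power
ifd = np.array([1.20, 1.42, 1.45, 1.66])   # measured field current (air gap base)

def ifd_calc(xd):
    i = (p - 1j * q) / vt            # armature current with Vt as the reference phasor
    return np.abs(vt + 1j * xd * i)  # internal EMF magnitude ~ per-unit field current

# Fit Xd by minimizing the field-current error over all selected points.
xd = least_squares(lambda x: ifd_calc(x[0]) - ifd, x0=[1.5]).x[0]
print(f"Estimated Xd = {xd:.2f} pu")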

DERIVATION OF DYNAMIC MODEL PARAMETERS
The traditional parameter derivation process for the parameters of the dynamic models required numerous simulations to see exactly what happens as each model parameter is changed. With each parameter change, a comparison is made between the response of the model and that obtained from the actual tests. Determining which parameters to adjust, and by how much, to get a close match between the model response and the actual performance requires the skills of an experienced engineer. If there are nonlinear dynamic interactions among variables, it is usually very difficult to know how to set each of the parameter values to give the desired performance. The process of choosing the appropriate model parameters can be automated by the use of optimization tools. MATLAB, Simulink, and the Optimization Toolbox provide a suitable advanced programming environment that allows an optimization engine to interact with a dynamic simulation package.
Thus, the optimization phase of the model parameter derivation involves the automatic adjustment of the model parameters until the difference between the model response and the desired response (obtained from field tests) is minimized. The optimization process tries to find the combination of model parameters that most closely matches the measured response.
A large task in the development of the DeriveAssist program was building the generator, exciter, and governor models in Simulink, testing them against a widely used commercial power system simulation program (PTI's PSS/E program) as a benchmark, and making the interface among the three software components (Simulink, MATLAB, and the Optimization Toolbox) as integrated as possible. Tests are performed to record the equipment response with different initial conditions and with disturbances designed to target the derivation of specific generator, excitation, and governor model parameters.
The derivation process is similar for the generator, excitation system, and governor parameters, although the staged tests and measured quantities are quite different. For the generator, the information from the load rejection tests is used to calculate the time constants, the transient reactances, and the subtransient reactances of the generator. For the excitation system, the AVR and exciter gains and time constants can be determined.
The governor models vary significantly depending on the type of prime mover, that is, steam, gas, or hydroelectric turbine. In all cases, however, the process determines the gains and time constants representing the governor and turbine dynamics: a comparison is made between a measured signal and a simulated signal to define an error function, and the program attempts to improve the model performance by adjusting the model parameters in the appropriate direction and repeating the simulation until the error signal is minimized. The process is illustrated below using the derivation of excitation system parameters, but the reader should keep in mind that the general process is similar for the other equipment models.

EXCITATION SYSTEM PARAMETER DERIVATION
The first step is, of course, to choose an appropriate model
structure for the excitation system. The selection is usually guided by the manufacturer's recommendation or by industry standards. In some cases, the schematics of the excitation system may need to be examined to make the proper model selection or, if a standard model structure is not appropriate, to create a new model.
As noted above, the traditional strategy for identifying the values of the excitation parameters involves an iterative hill-climbing technique by the engineer, who changes the value of one parameter at a time until a match between simulation results and recorded measurements is obtained. This process requires good familiarity with the specifics of how the equipment functions and with the effect that a change in a parameter, or a set of parameters, has on the dynamic response; unfortunately, such familiarity is quite rare. The parameter derivation program greatly simplifies the process.
The user must read in the test data, which is easily done through the program GUI. The GUI also facilitates the selection of signals for the derivation process and the manipulation of the test data as necessary, for example, conversion to per unit using the base values derived in the steady-state derivations.
Figure 4 shows the connections of the round rotor generator model (GENROE in PSS/E) with an IEEE Type 1 (IEEET1) excitation system. Inputs to the generator model are field voltage,
generator currents Id and Iq, and mechanical power Pm. Outputs
are terminal voltage magnitude and angle and generator speed.

Inputs to the excitation model are terminal voltage (from
the generator) and reference voltage, while the output is field
voltage Efd, which is fed back as an input to the generator model.
The inputs and outputs allow data to be passed between
MATLAB and Simulink and between the models. The parameters are defined such that they can be changed and passed to
Simulink in the optimization process.
As an example, Figure 5 shows what lies under the excitation system block in Figure 4. The primary input is the voltage
Ecomp and the output is the field voltage Efd. The reference Vref
is calculated from the initial condition of the test. Auxiliary signals, such as those from the power system stabilizer and the under- or over-excitation limiters, are present in the model structure but are deliberately not exercised, through the selection of the tests, so that the derivation concentrates on the excitation parameters.
The optimization phase of the model parameter derivation
involves the automatic adjustment of the model parameters until
the difference between the Simulink model response and the
desired response (measured response) is minimized. The optimization process tries to find the combination of model system
parameters that best provide the desired response, that is, to find
the values of the excitation system model parameters that will
move the initial model response as close as possible to the measured response.
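The general pattern of this optimization phase can be sketched outside of Simulink as well. The example below is only an illustration of the idea, not the DeriveAssist code: a deliberately simplified first-order AVR/exciter model is simulated in Python, and an optimizer adjusts its gain KA and time constant TA until the simulated response to a small voltage-error disturbance matches a synthetic "measured" record. The model structure, signal names, and data are assumptions.

# Minimal sketch of the optimization phase: fit the gain KA and time constant TA
# of a simplified first-order exciter model so that the simulated response matches
# a measured record. This illustrates the pattern only.
import numpy as np
from scipy.optimize import least_squares

dt = 0.01
t = np.arange(0.0, 5.0, dt)
verr = np.where(t >= 0.5, 0.02, 0.0)          # hypothetical voltage-error disturbance (pu)

def simulate(ka, ta):
    # dEfd/dt = (KA*Verr - Efd)/TA, integrated with simple Euler steps.
    efd = np.zeros_like(t)
    for k in range(1, len(t)):
        efd[k] = efd[k - 1] + dt * (ka * verr[k - 1] - efd[k - 1]) / ta
    return efd

# Stand-in for a measured field-voltage record (created here from "true" values
# KA = 200, TA = 0.05 plus noise, purely so the example is self-contained).
rng = np.random.default_rng(0)
measured = simulate(200.0, 0.05) + rng.normal(0.0, 0.02, t.size)

# The optimizer adjusts the parameters until the simulation-vs-measurement error is minimized.
fit = least_squares(lambda p: simulate(p[0], p[1]) - measured,
                    x0=[100.0, 0.2], bounds=([1.0, 0.001], [1000.0, 1.0]))
print(f"Estimated KA = {fit.x[0]:.0f}, TA = {fit.x[1]:.3f} s")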

A comparison of the simulation output and the measured (desired) output is displayed for each successive pass of the optimization process. The user sees the simulation output change every few seconds: as the model parameters are adjusted, a new simulation is performed, and the new output is displayed. The simulation output gradually shifts from the original response to match the desired response very closely. Figure 6 shows a comparison of a simulation using an initial set of parameters and the measured response. Note that the original parameters are not a good approximation of the actual equipment; the simulation, in this case, is quicker and much better damped. The response gets progressively closer to the measured output following each pass of the derivation process until, at last, the two curves lie essentially one on top of the other. The final plot is shown in Figure 7. The whole optimization process to determine the parameters takes only a minute or two on a typical PC.
Generator and governor model parameters are derived in a manner quite similar to that described for the excitation systems. The tests performed on the unit are considerably different, of course, as the determination of the governor response characteristic requires a test resulting in a power imbalance and subsequent movement of generator speed, while the tests for the generator parameters require isolation of the generator dynamics by placing the AVR on manual.

ONGOING EFFORTS
The DeriveAssist parameter derivation software described so far allows the derivation of all the generator steady-state parameters and includes most of the Simulink models for generators, exciters, and governor systems. However, additional
work is required to further develop the software. Some of the
tasks for future work include:
Derivation of excitation system parameters from AVR reference step tests
Extension of the methodology to brushless excitation systems
Development of algorithms to assist the user in the tuning of equipment such as exciters and stabilizers.


There are also a few technical issues that require further
attention, including:
Additional investigations to overcome some problems in
the automatic initialization of Simulink models
Improvements to the model library and the ease of
selecting the model structures for the excitation system
and governor
Improvements to the GUI and data passing between
Simulink, MATLAB, and the Optimization Toolbox
Additional reporting routines.
This ongoing development work will improve the software's functionality and expand its capabilities.

ACKNOWLEDGMENTS
This article describes research sponsored by EPRI and
NYPA. The authors would also like to acknowledge F.P. de
Mello for his contributions to the original ideas behind the
parameter derivation process used in this project and Ricardo J.
Galarza for his contributions to the development of this parameter derivation software. The idea of developing this MATLAB-based tool was originally conceived by Bruce Fardanesh.

FOR FURTHER READING


Synchronous Machine Parameter Derivation Program, EPRI, Palo Alto, CA, Rep. 10006653, 2001.
F.P. de Mello and J.R. Ribeiro, "Derivation of synchronous machine parameters from tests," IEEE Trans. Power App. Syst., vol. PAS-96, no. 4, pp. 1211-1218, Jul./Aug. 1977.
F.P. de Mello and L.N. Hannett, "Determination of synchronous machine electrical characteristics by tests," IEEE Trans. Power App. Syst., vol. PAS-102, no. 12, pp. 3810-3815, Dec. 1983.
J.W. Feltes and L.N. Hannett, "Derivation of generator, excitation system and turbine governor parameters from tests," presented at the Int. Conf. on Large High Voltage Electric Systems, Colloquium on Power System Dynamic Performance, Florianopolis, Brazil, Sept. 1993.
L.N. Hannett, J.W. Feltes, and B. Fardanesh, "Field tests to validate hydro turbine-governor model structure and parameters," IEEE Trans. Power Syst., vol. 9, no. 4, pp. 1744-1751, Nov. 1994.
L.N. Hannett, B. Fardanesh, and G. Jee, "A governor/turbine model for a twin-shaft combustion turbine," IEEE Trans. Power Syst., vol. 10, no. 1, pp. 133-140, Feb. 1995.


TESTING ELECTRIC STREETLIGHT COMPONENTS WITH LABVIEW-CONTROLLED VIRTUAL INSTRUMENTATION
Ahmad Sultan, Computer Solutions, Inc.
The Challenge: Automated testing of magnetic ballasts used in
electric streetlights.
The Solution: Developing a PC-based virtual instrumentation
system using a DAQ board controlled by
LabVIEW.

INTRODUCTION
Our task was to develop an automated test system for
magnetic ballasts used in high-pressure sodium (HPS) streetlights. Our client, who manufactures ballasts for the North
American and international markets, believes that product development and quality assurance require thorough and complete
testing of prototypes and production samples to verify compliance with national and/or international standards.
The test system needed to accommodate the following:
Different types of core and coil ballasts, such as reactor,
autotransformer, constant wattage autotransformer
(CWA), and constant wattage isolated transformer (CWI)
Operating voltages from 120 to 600 V and rated lamp
wattage from 50 to 400 W
Capacitors for wattage control and/or power factor
correction
Different lamp igniters
Open-circuit, short-circuit, lamp-starting, and lamp
running tests
At the ballast input and output ports, we needed to measure true rms values of current and voltage, true power, and the ratio of watts to volt-amperes (the power factor, which equals the cosine of the phase angle only when the voltage and current waveforms are clean sinusoids). Because HPS lamps are nonlinear loads, we also monitor current and voltage peak values and crest factors, along with total harmonic distortion.

Hardware Block Diagram of the Ballast

SYSTEM INTEGRATION APPROACH


With the tight budget of a growing company, establishing a test bench with the functionality we required using conventional test equipment would have been difficult. We implemented a virtual
instrumentation approach to achieve project objectives within
budget while maintaining flexibility for future needs.
Virtual instrumentation consists of using mainstream
computers, off-the-shelf plug-in instrumentation boards, and
software. Because the virtual instruments you create with these
products are user-defined, not vendor-defined, you can tailor
applications to meet your needs exactly. Some of the benefits of
virtual instrumentation are ease of use, flexibility, and savings of
time and money.
We used LabVIEW software as the heart of the instrumentation system. BallastVIEW is the name of the LabVIEW
application we wrote to acquire signals, process data, and present
results to the user on the computer screen.
The instrumentation system hardware consists of:
486 DX2-66 PC (12 MB RAM, 340 MB hard drive) running Windows
Variac (manually adjustable transformer) supplying AC
power to the ballast under test through the system test
fixture
System test fixture containing switches and wiring
required for the different test configurations
Transducers for sensing current and voltage signals
(such as resistive dividers and current shunts)
Antialiasing RC filters, with components selected to
avoid loading the board input amplifiers


5B signal conditioning modules, to amplify and isolate


the filtered signals
National Instruments Lab-PC+, installed in the PC, to
digitize the conditioned signals
The cut-off frequency of the antialiasing filters was set to
half the sampling frequency; the RC filters also serve to protect
the electronics from the high-voltage spikes generated
when the igniter starts the lamp.
We configured the Lab-PC+ board for bipolar differential
input (four channels). We set the sampling frequency to 7680
Hz/channel. Acquisition was software-triggered on the rising
slope of the input voltage.
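The filter arithmetic behind that choice is straightforward. As a hedged illustration (the article does not give the actual component values), the snippet below assumes a capacitor value and solves fc = 1/(2*pi*R*C) for the resistor that places a first-order RC cutoff at half the 7680 Hz per-channel sampling rate.

# Illustrative only: choose R for a first-order RC antialiasing filter whose
# cutoff sits at half the per-channel sampling rate (component values are assumptions).
import math

fs = 7680.0          # samples per second per channel
fc = fs / 2.0        # desired -3 dB cutoff, Hz
c = 10e-9            # assumed capacitor value, farads

r = 1.0 / (2.0 * math.pi * fc * c)
print(f"fc = {fc:.0f} Hz -> R ~= {r:.0f} ohms with C = {c*1e9:.0f} nF")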

BALLASTVIEW PRESENTATION
The LabVIEW screen on the next page is the front panel
of BallastVIEW. It illustrates a stack of VIs representing an input
AC power analyzer, an output AC power analyzer, a waveform
graph, and a harmonic analyzer. The controls at the top of the
screen are switches for controlling acquisition, metering, harmonic analysis, and program execution. The user can capture a
single shot or continuously acquire signals.
For the power analyzers, the indicators (from left to right
in each row) display the rms, maximum, minimum, peak average,
and crest factor of each signal. The active and apparent power,
and their ratios, are displayed in the right column.
The waveform graph displays the signals acquired by the
data acquisition (DAQ) board.
Because both voltage and current waveforms are displayed,
the ordinate is labeled in relative units (PU). To find the true
amplitude of a particular signal, multiply its measured value
from the graph, in PU, by the respective base value from the PU
Base table (to the right of the waveform).
The line spectrum, shown in the bottom right corner, displays harmonic magnitude in either peak volts/amperes or per
unit values normalized to the fundamental component of the
respective signal. Magnitude of harmonics can be checked by
flipping the cursors of the harmonic magnitude indicator (bottom
center). The user can window signals before applying the Fast
Fourier Transform.
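The quantities displayed by these virtual instruments are standard waveform statistics. The sketch below is a generic illustration of how rms values, crest factor, active and apparent power, power factor, and THD can be computed from simultaneously sampled voltage and current arrays; it is not the LabVIEW code, and the synthetic 60 Hz waveforms are assumptions used only to make the example self-contained.

# Generic waveform statistics of the kind BallastVIEW displays (illustrative only).
import numpy as np

fs, f0 = 7680.0, 60.0
t = np.arange(0, 1.0, 1.0 / fs)
# Hypothetical sampled signals: sinusoidal voltage, distorted (flat-topped) current.
v = 170.0 * np.sin(2 * np.pi * f0 * t)
i = np.clip(3.0 * np.sin(2 * np.pi * f0 * t - 0.3), -2.4, 2.4)

def rms(x):
    return np.sqrt(np.mean(x ** 2))

def thd(x):
    # THD from the FFT: harmonic energy (2nd harmonic and above) relative to the fundamental.
    spectrum = np.abs(np.fft.rfft(x))
    k0 = int(round(f0 * len(x) / fs))        # bin of the fundamental
    harmonics = spectrum[2 * k0::k0]         # 2nd, 3rd, ... harmonic bins
    return np.sqrt(np.sum(harmonics ** 2)) / spectrum[k0]

p_active = np.mean(v * i)                    # watts
s_apparent = rms(v) * rms(i)                 # volt-amperes
print(f"Vrms={rms(v):.1f} V  Irms={rms(i):.2f} A  CF={np.max(np.abs(i))/rms(i):.2f}")
print(f"P={p_active:.1f} W  S={s_apparent:.1f} VA  PF={p_active/s_apparent:.3f}  THDi={100*thd(i):.1f} %")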

EXAMPLE RESULTS
The results presented in the BallastVIEW screen are test results for a 200 W CWI ballast. The output power analyzer indicates that the lamp is operating at rated lamp power. Lamp voltage and current are very close to the ANSI reference specifications
(100 V and 2.4 A). Lamp current crest factor (CCF) is 1.6 (1.8 is
the maximum permissible). The input power analyzer indicates
that the ballast draws 2.037 A at rated input voltage. Ballast loss is
approximately 39 W and the power factor is high (0.973 lagging).
The waveform graph shows almost clean input voltage and
current signals. Output (lamp) voltage is the square waveform of
a typical arc in a high-intensity-discharge (HID) lamp, containing
the full odd harmonics spectrum. The magnitude of the lamp voltage third-harmonic component is 39 percent of the fundamental.
Total harmonic distortion (THD) of lamp voltage and lamp current are 33.84 percent and 3.73 percent, respectively.
We verified the credibility of this system by obtaining
agreement with test results from an independent test laboratory,
electric utility companies, and customers of the ballast company.

CONCLUSION
BallastVIEW measures and displays the electrical parameters required to test and develop ballasts and performs on-line
waveform analysis. The result is a flexible, high-performance,
easy-to-use, and cost-effective PC-based measurement system,
which saved time in both product development and production
testing. An advantage of using LabVIEW is our ability to
increase BallastVIEW functionality in the future, for example,
by monitoring the ballast-lamp characteristic curves and compiling results. The core of the BallastVIEW program constitutes the
cornerstone for testing other electrical products, such as transformers, rectifiers, inverters, and UPSs, as well as for power line
monitoring.

LabVIEW Front Panel showing BallastVIEW Test Results


ASSET MANAGEMENT
The Path to Maintenance Excellence
Mike Sondalini, Managing Editor, Feed Forward UP-TIME Publications
This article tells of the Japanese way of doing asset management and maintenance. If you think you already have a good system, then you will enjoy reading this month's newsletter as you
compare yours and theirs. If you have a poor system then you will
get a totally different view of how great maintenance can be done.

OVERVIEW
I spent a week in Japan at the chemical plant of an internationally renowned chemical manufacturer. While there I asked
them about how they do their maintenance. They told me about
their maintenance philosophy. And I want to pass on to you what
I learnt about the Japanese way of doing maintenance on that
trip.
You will read about how this Japanese company determines
its equipment and component criticality. You will learn about a new,
truly effective way of making next year's maintenance plan. We will cover condition monitoring the Japanese way. The Japanese are great maintenance investigators and you will be impressed when you learn how they do their failure analyses. We will also cover their psychology of maintenance: the way they think about maintenance
and how they look at it. You will be astounded at their mind-set.

A JAPANESE WAY TO DECIDE EQUIPMENT CRITICALITY
How do you decide what level and type of maintenance to


use on an individual item of plant and its sub-assemblies? Not all
equipment is equally important to your business. Some are critical
to production and without them the process stops. Others are
important and will eventually affect production if they cannot be
returned to service in time. While other items of plant are not
important at all and can fail and not affect production for a very
long time.
As a maintainer you want to know which equipment in
your plant falls into each of those categories so you can determine your response. Furthermore you want to know which subassemblies in each item of equipment are critical to the operation
of the machine.
From this information you can decide which spares to
hold on-site and which to leave as outside purchases. The equipment criticality also determines what level of preventive maintenance to use, what type and amount of condition monitoring to
use and what type and amount of observation is required from
the operators. You can also use it to justify on-line monitoring
systems to protect against catastrophic failure.
The western approach to determine criticality is often to
use either Reliability Centered Maintenance or Risk Based
Maintenance to determine consequences of failure and then
address the appropriate response to prevent the failure. The
Japanese chemical manufacturing company I visited had a novel
way of determining their equipment criticality. They based the
equipment and component criticality on the knock-on effect of a
failure and the severity of the consequences. It is the same intention as the previously mentioned methods, but they arrive at the rating and the response to it in a unique, quick, four-step process.
They used a simple flow chart that production and maintenance worked through together, equipment by equipment. Those failures that caused safety and environmental risks were not allowed to happen: either the parts were carried as spares and changed out before failure, or the plant item was put on a condition monitoring program. Those failures that caused production loss or affected quality were likewise either not allowed to happen or put into a condition-monitoring program. And those failures that didn't matter were treated as a breakdown.
The flowchart let one arrive at a rating and a corrective action for each piece of equipment and component quickly. There is no need to spend hours and days looking at failure modes and deciding what to do about them. If an equipment or component loss produced dangerous situations, or if the failure stopped production or affected quality, the item was either changed out before the end of its working life or put on a monitoring program.
The maintenance philosophy for every bit of plant could be arrived at in a four-step decision process. It was very easy to use and to decide what action to take.

HOW TO TURN A MAINTENANCE PLAN INTO A STRATEGY
The maintenance plan my Japanese hosts showed me in


August 2002 was on a big spreadsheet. It listed all the equipment
in a plant by tag number covering the period 1994 through to
2003. The maintenance histories of problems on a piece of
equipment for the past eight years were listed. A short note
detailing the month of occurrence and the failure was made in
the column of the year it happened. For this year, 2002, and the
next, 2003, the spreadsheet listed what maintenance and modifications were going to be done on the equipment.
It was a ten-year plan the like of which I had never seen before! But now, as I write, it has become clear why it is worthwhile doing it like that. What I saw was not a plan! What I saw
was a strategy! It was a strategy to reduce the known production
stoppages and to focus the maintenance effort.
Can you see how something like that would work? You
know what has gone wrong with the equipment over the last
eight years; it's listed right there in front of you. You can see how
effective the past practices, methods and solutions have been.
From that you can wisely decide what to do over the next two
years to prevent the recurring problems. Instead of writing the usual "blue sky" 5- or 10-year maintenance plan that no one
believes anyway, you only plan for the believable two years
ahead. You write down exactly what can really be done in the
foreseeable future to reduce or prevent the real problems.
The plan for the next two years would include proposed
modifications, equipment replacements, new condition monitoring plans, etc.
Now that is a great way to make next year's maintenance
plan! It would be one that is totally defendable and fully justifiable to upper management because it is well thought out, rooted
in getting the best return for your money and based on the important business requirements to continue in operation.
My suggestion to cover the period beyond the next two or
three years (and only if it is necessary in your company), is to use
the spreadsheet to make forecasts. Project ahead based on what
you plan to do in the coming two to three years to fix the current
problems. If you aren't going to fix the problems, then don't assume less maintenance in the future. Remember that a forecast is not a plan! A forecast is a best-guess suggestion, often known as "blue sky" dreaming. A plan is a set of action steps that over
time will produce a desired result. They are totally different to
each other.


THINK SYNCHRONIZATION FIRST TO OPTIMIZE AUTOMATED TEST
www.ni.com
OVERVIEW
Latencies and timing uncertainties involved in orchestrating the operation of multiple measurement components present a
significant challenge in building automated test systems. These
issues, often overlooked during the initial system design, limit
the speed and accuracy of the system. However, with a good
understanding of timing and synchronization technologies, you
can address these issues from the onset and deploy a system optimized for throughput and performance.
Before we proceed, first consider that most automated
measurements for test fall into one of two categories. The first
category, often called time-domain measurements, characterizes
the variation of a device under test's (DUT's) output over time. For
these measurements, the accuracy of the measured response
depends not only on the accuracy of its magnitude, but also on
the time at which the signals are measured.

The second type, steady-state measurements, occurs when one or more inputs of known value are applied to the DUT and its outputs are allowed to settle to their steady-state values before you measure the signals. In this case, the measurement process still depends on the time of the measurement: if you measure the signals too early, accuracy suffers because the source output may not have fully settled. Although you can measure the signals
accurately any time after the output has settled, you must minimize the delay to reduce test time. Many test developers insert an
arbitrary delay in their test programs to ensure accurate results.
While this is a simple fix, test time suffers.
Analog electronic component evaluation and manufacturing
test often involves measurements of both transient and steady-state
parameters.

WHAT IS INVOLVED IN SYNCHRONIZATION?


The main objective of synchronization of multiple measurement devices is correlated measurements and/or precise control
of process execution. In most cases, you are interested in correlation in terms of time, but correlation can occasionally be in dif-

ferent terms, such as position. For temporal correlation, you must


synchronize measurements to correlate with the sample clock of
your measurement device. In other words, it is pointless to try to correlate measurements to within nanoseconds if your sample clocks run at only 1 MHz; for such a system, the objective is devices synchronized to sub-microsecond accuracy. Precise timing of measurements is therefore a prerequisite capability of the measurement device.
Let us take a digitizer as an example to elaborate on key
timing technologies. The heart of a digitizer is an ADC, which
samples your signal and converts it to digital data. The sample
clock, which controls the timing of the ADC, is most often derived
from an onboard crystal oscillator. Thus, the synchronization of
measurements across multiple devices, such as a source or other
digitizers, implies that you must synchronize all sample clocks to
within the uncertainty of the period of the sampling clocks.
Another important element to the measurement is collecting
data. This is usually accomplished with trigger signals. External
events or triggers are the main methodologies for initiating an
acquisition. Triggers come in three forms: analog, digital, and software. Analog triggering refers to trigger generation when a monitored analog signal passes the imposed triggering condition. You
can measure the analog signal itself or an auxiliary analog signal.
Digital triggering refers to trigger generation when a digital signal,
such as a TTL level signal, is received. Software trigger refers to
trigger generation on software command. The software trigger can
be as simple as hitting a start button on the soft front panel or
graphical user interface (GUI). Thus, synchronization of measurements requires not only synchronized sample clocks, but also the
distribution of a trigger to all measurement devices to initiate operation at the same time. In synchronization applications, it is common to designate a master measurement device to monitor the
operation of the entire measurement system. When the system
meets triggering conditions on this device, it distributes a common
trigger signal to all other devices that are slaved to the master.
To achieve tight synchronization across multiple devices,
you need to examine the distribution of clocks and triggers.
There are three main schemes for synchronization:

1. START/STOP TRIGGERS CONTROL OPERATION ON ALL DEVICES
This scheme for synchronization is the simplest. It
involves a single start or stop trigger signal to all measurement
devices involved. One device, designated as the master device,
monitors the operation. The master is set to look for an external
trigger (analog or digital), or to generate a synchronizing trigger
on a software command. When triggering conditions are met on
the master device, or the software command is issued, the master
distributes a trigger signal to all slave devices to start operation
as shown in Figure A.


Some examples are:


Rotationally oriented measurements: A master digitizer or oscilloscope, monitoring defects found on rotating
circular or cylindrical devices such as computer hard
drives, industrial cylindrical tubes, and automotive wheel
shafts, passes a digital trigger to a slave counter/timer
device making quadrature encoder measurements (position measurements). The system can correlate defects and
anomalies to angular and radial position rather than time.
High channel count measurements: Multiple digitizers
acquire data on reception of an external digital trigger
from an external triggering module or a master digitizer
in the system.
With the examples above, two issues arise:
The trigger signal should arrive at each slave device with
minimal delay and skew between each other. The delay and skew
are separate issues and need equal consideration. With a significant delay from the master to the slaves, you lose synchronization.
Minimal path length for signal propagation from master to slaves
is crucial for tight synchronization. The other important, but subtle, consideration is the skew between slave devices. So that each
slave triggers at precisely the same time, you need to minimize
the device-to-device skew in time. At the least, the delay and
skew should be identified to some uncertainty. Measurements
that require relatively low sampling rates can tolerate a degree of
slack in the specifications of a system setup. At high sampling
rates, these issues can affect the measurement integrity.

The second issue concerns the intrinsic accuracy of the measurement device: you should identify or calibrate the time from the device's reception of the trigger signal to the first pre-trigger or post-trigger sampled point in each device. You can program many measurement devices, such as digitizers, to continuously acquire samples into a circular onboard memory buffer that continually rewrites until it receives a trigger. After the device receives the trigger, the digitizer continues to acquire post-trigger samples if you specified a post-trigger sample count. The ability to correlate waveforms acquired on the various devices depends on the accuracy of the time stamp of the trigger.

2. TRIGGERS AND A DIRECT SAMPLE CLOCK INITIATE AND CONTROL THE TIMING ON ALL DEVICES
This scheme takes synchronization a step higher. It involves
trigger signals and a sample clock to all the devices involved. One
device, designated as the master device, controls the operation of
the entire measurement system. This device exports its sample
clock to all slave devices. For example, a system comprised of multiple digitizers and analog output sources has a common sample
clock from an appointed master. As illustrated in Figure B, the
master sample clock directly controls ADC and digital to analog
conversion (DAC) timing on all devices.
The master is set to look for an external trigger (analog or
digital), or to generate the trigger on a software command. When
triggering conditions are met on the master, the device distributes
a trigger signal to all of the slave devices to commence operation. The same issues that arose in the previous scenario are also
present in this situation. The trigger and sample clock signals
should arrive at each slave device with minimal delay and skew
between each other. At the least, the delay and skew should be
known to some uncertainty. The significant advantage of this scheme
compared with the previous scheme is that you use a common
sample clock to control all devices. With a common sample clock,
all waveforms are precisely sampled at the same time. This
resolves the central issue of synchronized measurements.
With this technique, you benefit in another important way.
If you employ the clock on each measurement device, you have
to take the jitter and drift inherent in each clock into consideration. On each digitizer, differing clock jitter and drift give rise to slightly different sampling periods, which means you cannot correlate the waveforms with high relative accuracy.
The disadvantage of this scheme is that it is not optimal for high-speed sampling because of the propagation delay of the sample clock. The sample clock simply takes time to get from the master to the slaves. This issue does not arise if the sample clock period is longer than the propagation delay. For example, suppose the propagation delay in a given system is measured to be 10 ns. If the sampling rate is 5 MS/s, the period between each rising edge of the clock is 200 ns, so the sample clock reaches the slave devices well before the delay encumbers the measurements. Additionally, the path lengths from the master to each slave device have to be carefully matched so that the skew is shorter than the sample clock period.
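That feasibility check is simple arithmetic. The helper below is hypothetical (it is not an NI API); it compares a measured master-to-slave propagation delay and slave-to-slave skew against the sample clock period for a given sampling rate.

# Hypothetical helper: is a shared-sample-clock scheme workable at a given rate?
def clock_budget_ok(sample_rate_hz: float, prop_delay_s: float, skew_s: float) -> bool:
    period = 1.0 / sample_rate_hz
    # Both the master-to-slave delay and the slave-to-slave skew must be
    # shorter than one sample clock period.
    return prop_delay_s < period and skew_s < period

# 5 MS/s gives a 200 ns period, so a 10 ns delay and 1 ns skew are fine;
# at 200 MS/s (5 ns period) the same delay would not be.
print(clock_budget_ok(5e6, 10e-9, 1e-9))    # True
print(clock_budget_ok(200e6, 10e-9, 1e-9))  # False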

3. TRIGGERS AND A REFERENCE CLOCK TO INITIATE AND CONTROL THE TIMING ON ALL DEVICES

This scheme of synchronization is usually for high-speed


synchronization. It involves start/stop trigger signals and a reference
clock (typically 10 MHz) to all devices involved. The sampling
clock of each measurement device is derived from the reference
clock, typically by multiplying it up with a phase-locked loop to obtain higher speed sampling clocks. The master is set to look for an external trigger (analog
or digital), or to start acquisition on a software command. When
triggering conditions are met on the master, this device distributes
a trigger signal to all slave devices to start operation.


With the previous scheme, you could have a direct feed of


the sample clock to each device. This is the ideal scenario; however, it is not easy to pass a high-speed sampling clock (such as a 100 MHz clock) across cables and/or trigger buses because of
line integrity and propagation delays. So, this scheme shares a
common reference clock for generation of all sample clocks.
The method usually employed to synchronize and generate sampling clocks is the phase-locked loop (PLL). This method
basically monitors the phase of the reference clock and produces
a high-speed sampling clock that is phase locked to the reference
clock, as shown in Figure C above.
Third-party frequency sources, such as rubidium and
oven-controlled crystal oscillator (OCXO)-based frequency
sources, are ideal for synchronization applications because of
their accuracy. These are frequency sources with accuracies of
better than 100 parts per billion (ppb). Thus, an OCXO source
with 100 ppb accuracy yields a 10 MHz clock with 1 Hz uncertainty. Another important property of your reference clock is
multiple output capability for multiple instrument synchronization. The reference clock from either the master instrument or a
precision frequency source should be capable of being driven to
multiple destinations without any loss of signal integrity. An example of this would be a minimal phase offset between the reference
clock outputs from the frequency source.
The same issues that arose in the previous scenarios are
also relevant in this scheme. The trigger and reference clock signals should arrive at each slave device with minimal delay and
skew between each other. At the least, the delay and skew should
be known to some uncertainty. The issue of minimal skew
between each device is crucial for high-speed digitization. If the
skew is large, the time stamp of the incoming trigger on each
device will not be coincident in time, and you cannot accurately
correlate events captured on separate devices.


SYNCHRONIZATION OPTIONS
Measurement devices come with three main options for connecting synchronization signals: user-supplied cabling, proprietary vendor-defined cabling, and connections integrated with the measurement platform.
User-Supplied Cabling: User-supplied cabling of signals for synchronization is available for both computer-based
and stand-alone measurement devices. For example, you can
often externally synchronize your function generator or digital
storage oscilloscope (DSO) to a reference frequency source.
When you decide to synchronize your instrumentation, you have
to ensure that your cables from your frequency source to the
other components of your measurement system are precisely
matched in length in order to avoid skew. The same criteria need
to apply in distribution of your trigger signal from master to all
slave devices. As noted above, your frequency source should
have the ability to distribute a common reference clock to multiple
destinations. This is the only synchronization option for traditional
stand-alone instruments.
Proprietary Vendor-Defined Cabling: Some vendors
of computer-based measurement devices, such as data acquisition boards, address synchronization by providing a proprietary
bus, which may be external or internal to the computer. Sampling
clocks, reference clocks, and triggers are distributed from master
to slaves through the bus. These dedicated high-speed digital
buses are designed to facilitate systems integration. The physical
bus interface is a multipin connector on the board, and signals

are shared via a ribbon cable. You can serially chain two, three, four, or five boards together, thus achieving synchronization of several I/O channels. Another attractive feature of these trigger buses is built-in switching, so you can route signals to and from the bus on the fly through software programming. This eases the burden of having to manually configure your timing and triggering signal distribution on your boards. You can find examples of these features in National Instruments measurement products in the form of the RTSI bus.
Connections Integrated with the Measurement Platform: Some computer-based measurement devices are implemented in form factors such as VME/VXI and CompactPCI/PXI. VME/VXI, an older industrial form factor, and PXI/CompactPCI, a newer industrial form factor, both address test and measurement, telecommunications, defense, industrial research, and many other markets. VXI and PXI extended VME and CompactPCI by adding timing and triggering buses to the form factors. This greatly simplifies synchronization of multiple devices.

MORE ON VXI AND PXI
VXI and PXI are open standards and many companies
make products for both variations. VXI is traditionally used in
large test and measurement applications. Though relatively new
to the market, PXI is gaining acceptance because of its relatively smaller footprint, portability, high throughput due to the PCI
bus, and lower costs, made possible through use of standard
commercial technologies spawned by the large PC Industry.
Electrically, VXI and PXI add a trigger bus, a star trigger
bus, a 10 MHz reference clock, and local buses. For synchronized measurements, the trigger bus, the 10 MHz reference
clock, and STAR trigger bus are key features. The PXI features
described below broadly apply to VXI as well.
System Reference Clock: The PXI backplane provides a built-in common reference clock for synchronization of multiple modules in a measurement or control system. Each peripheral slot features a 10 MHz TTL clock. Equal-length traces from
the clock to each peripheral slot yield low skews of less than 1
ns between slots. The accuracy of the 10 MHz clock is usually
25 ppm (dependent on individual chassis vendors), making it a
relatively reliable clock for synchronization applications that
rely on PLL methods. If you need a more accurate reference
clock, you can insert a PXI counter/timer device with an OCXO-based clock source into the second slot of the chassis. The slot's OCXO 10 MHz clock can be driven onto the PXI backplane
clock lines in lieu of the PXI backplane clock. Then, the whole
PXI chassis can inherit the OCXO clock stability.
Trigger Bus: The PXI eight-line trigger bus provides
intermodule synchronization and communication. Trigger or clock
transmission can use the trigger bus lines. You can pass triggers
from one module to any number of modules, so you can distribute
digital trigger signals from master to slave measurement devices.
With variable frequency sampling clock transmission, multiple
modules can share a timebase that is not a derivative of the 10
MHz reference clock. For example, four data acquisition modules
using a 44.1 kS/s CD audio sampling rate can share a clock that is
a multiple of the 44.1 kHz clock or the direct 44.1 kS/s clock. For high-speed synchronization, the propagation delay and skew between slots can reach a maximum of 10 ns on a single PXI backplane.
Star Trigger for Ultra High-Speed Synchronization: The Star trigger bus has an independent trigger line for each slot


that is oriented in a star configuration from a special Star trigger


slot (defined as slot 2 in any PXI chassis). The trigger can provide an independent dedicated line for each of up to 13 peripheral slots on a single PXI backplane. The PXI Star line lengths are
matched in propagation delay to within one nanosecond from the
Star trigger slot. This feature addresses ultra high-speed synchronization where you can distribute start/stop trigger signals from
the master measurement module in the Star trigger slot with low
delay and skew.

Platform   Trigger Bus    Reference Clock   Star Bus
VXI        8 TTL, 2 ECL   10 MHz ECL        Yes
PXI        8 TTL          10 MHz TTL        Yes

CONCLUSIONS
Computer-based measurement components are transforming the creation of synchronized measurement systems from the integration of loosely coupled, and often incompatible, instruments into an orderly engineering process that results in tightly integrated,
high-performance systems. For synchronized measurements,
timing and triggering details are critical keys to your automated
measurements. Precise synchronization requires proper distribution
of clocks and triggers. The three main synchronization schemes
and proper knowledge of the pros and cons of each and the capabilities of your measurement devices help you to make the right
decision in choosing your solution.


USING NATIONAL INSTRUMENTS SYSTEM IDENTIFICATION, CONTROL DESIGN AND SIMULATION PRODUCTS FOR DESIGNING AND TESTING A CONTROLLER FOR AN UNIDENTIFIED SYSTEM
www.ni.com
1. INTRODUCTION
This article describes the process of designing a closed-loop control system for a plant using the NI System Identification and Control Design Assistants. A DC motor will be the plant (Figure 1).


Figure 1: The Quanser Engineering Trainer (QET) will be the plant for which we will design a
closed loop controller.

The Quanser Engineering Trainer will be used in velocity


mode. A voltage signal commands the motor to move and the
tachometer output determines the velocity. The motor system is
connected to a National Instruments Data Acquisition (DAQ)
device, where Analog Input 0 (AI0) is connected to the tachometer and Analog Output 0 (AO0) is connected to the motor command input. For demonstration purposes, you can replace the DC
motor with an RC circuit.
This example uses the following LabVIEW add-ons:
NI LabVIEW System Identification Toolkit
NI LabVIEW Control Design Toolkit
NI LabVIEW Simulation Module
You can purchase these products together in the Control
Design and Simulation Bundle.
To use these add-ons, you must install the following
software:
NI LabVIEW 7.1
NI Signal Express 1.0
The closed-loop system acts on the difference between two quantities: the process variable (the voltage output of the tachometer as a function of motor velocity) and the set point (the command voltage you specify). The controller then determines the next voltage level to command to the motor to meet the specifications defined while designing the controller. Figure 2 shows the final closed-loop system.

Figure 2. The final closed loop system. The Plant Model is the QET (Figure 1).

This example describes the process of designing a simple


PI controller for a system with unidentified dynamics. Note that
all functionality described in the Express Workbench environment is also available in LabVIEW. All project scripts and
LabVIEW VIs described in this document are available as
attachments to this document.

2. IDENTIFYING THE SYSTEM


To identify an open loop system we need to excite it with
a signal that has voltage levels and frequency content that corresponds to its actual operating conditions. For more information
about this process, refer to Stimulus and Acquisition
Considerations in the System Identification Process, located at
www.ni.com > NI Developer Zone > Development Library >
Analysis and Signal Processing > PID Control/System
Characterization/Stability.
You can use many different signal types to identify a system, including chirp signals, square waves, square waves overlaid with white noise, and so on. For this example, the stimulus
signal is a 3V p-p triangular wave. You create this signal using
the Create Signal step available in the NI Express Workbench.
Figure 3 shows how you create this signal.



Figure 3. Creating a 3 Volt p-p triangular wave.

Table 1 shows where to find the settings for this particular step and what values to use:

Table 1. Settings for creating the signal
Step: Signal Input/Output -> Create Signal
Settings/Actions: Signal Type = Triangle Wave; Frequency = 1 Hz; Amplitude = 3 V; Sample Rate = 1 kS/s; Block Size = 5000 samples

To display the created signal on the data viewer in Express Workbench, drag the Calculated Signal Output to the Data View (Figure 4).

Next, you must have the DAQ device generate this signal as an analog output. Use the DAQmx Generate step to perform this function, shown in Figure 5.

Figure 5. DAQmx Generate step to output the created signal as an analog signal on the DAQ card.

Table 2 shows the settings for this step:

Table 2. Settings for generating the signal on the appropriate Device and Channel (AO0 in this case).
Step: Signal Input/Output -> Generate Signals -> NI DAQmx Generate
Settings/Actions: Config Tab: Device: Make sure to select the appropriate DAQ Device and Channel
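For readers working outside Express Workbench, the same stimulus can be described numerically. The snippet below is an illustrative stand-in for the Create Signal step, not the NI implementation; it builds a 1 Hz triangular block of 5000 samples at 1 kS/s, using a 3 V peak-to-peak amplitude as described in the text (the amplitude convention of the Create Signal step itself is an assumption here).

# Illustrative stand-in for the Create Signal step: a 1 Hz triangular stimulus,
# 1 kS/s, 5000-sample block (5 s). Amplitude convention is an assumption; the
# text describes the stimulus as 3 V peak-to-peak.
import numpy as np
from scipy.signal import sawtooth

fs = 1000.0                          # sample rate, S/s
n = 5000                             # block size
t = np.arange(n) / fs
stimulus = 1.5 * sawtooth(2 * np.pi * 1.0 * t, width=0.5)   # symmetric triangle, 3 V p-p

print(stimulus.min(), stimulus.max(), len(stimulus))        # roughly -1.5 .. 1.5, 5000 samples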

NOTE: This example does not synchronize the AO and AI


channels of the DAQ device. Typically you should synchronize
these channels, which you can accomplish using the Advanced
Timing page, because any delays caused by the difference in timing
between AI and AO would be described by the transfer function of
the open loop system, resulting in some error in the identification. In
this example, the sample rate for AI and AO is 1 kHz, so the maximum jitter between the two channels is 0.5 ms. This amount of jitter
is negligible compared to the plant dynamics.
Use the DAQmx Acquire step to acquire the response of
the plant to the stimulus signal. Figure 6 shows this acquisition.

Figure 4. The created signal in the data viewer in Express Workbench.

Figure 6. DAQmx Acquire step to acquire the response back from the DC motor plant as an analog signal to the DAQ card.

Table 3 shows where to find this step and the settings to use:

Table 3. Settings for acquiring the signal on the appropriate Device and Channel (AI0 in this case).
Step: Signal Input/Output -> Acquire Signals -> NI DAQmx Acquire
Settings/Actions: Config Tab: Device: Make sure to select the Device and Channel; Config Tab: Acq. Timing: 5000 samples to read; Config Tab: Acq. Timing: 1 kHz sample rate

Next, drag the output of the DAQmx Acquire step to the data viewing window. Express Workbench notifies you that the data from the current step appears unrelated to the data already on the display. A small disconnect symbol, circled in Figure 7, is also displayed between the DAQmx Generate step and the DAQmx Acquire step, which indicates that the steps below the disconnect symbol are not dependent on the steps above. This symbol disappears after you use the Create and Acquire steps in the system identification process.
When this dialog box appears, select the No button to create a new display for the signal.
Next, run the project script once by clicking the green run arrow. This project generates and acquires 5000 data points at 1 kS/s for a total of five seconds of plant response data. This response data appears in the display you added in the previous step. Figure 8 shows the stimulus signal and the plant response data.

Figure 8. The stimulus signal is in the upper display. The plant response to this signal is in the lower display.

You use the stimulus signal and the response data to


define a transfer function for the open loop DC motor system. To
define this model, you will use a parametric estimation of the
motor model. For more information about parametric estimation,
refer to System Identification Model Structure Selection,
located at www.ni.com >> NI Developer Zone >> Development
Library >> Analysis and Signal Processing >> PID Control /
System Characterization / Stability.
In this example, you use the default settings of the
Parametric Estimation step, shown in Figure 9, to create a firstorder transfer function. The model order is based on the plant
dynamics.

Figure 7. Create a new display to view the output from the DAQmx
acquire step. The yellow circle locates the disconnect symbol displayed between the DAQmx Generate and DAQmx Acquire steps. This
symbol indicates that the steps below the disconnect symbol are not
dependent on the steps above.
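The Parametric Estimation step belongs to the NI System Identification Toolkit; the sketch below is not that step, only a rough NumPy illustration of the idea behind a first-order parametric fit. It assumes the stimulus u and response y were captured at the same rate and fits the discrete model y[k] = a*y[k-1] + b*u[k-1] by least squares, with synthetic data standing in for the acquired tachometer signal.

# Sketch: least-squares fit of a first-order ARX model
#   y[k] = a*y[k-1] + b*u[k-1]
# to stimulus (u) and response (y) arrays such as those acquired above.
import numpy as np

def fit_first_order(u, y):
    """Return (a, b) for y[k] ~ a*y[k-1] + b*u[k-1]."""
    targets = y[1:]
    regressors = np.column_stack((y[:-1], u[:-1]))
    (a, b), *_ = np.linalg.lstsq(regressors, targets, rcond=None)
    return a, b

# Synthetic stand-in data (invented numbers, not the QET motor response).
rng = np.random.default_rng(0)
u = rng.uniform(-3, 3, 5000)
a_true, b_true = 0.98, 0.02
y = np.zeros_like(u)
for k in range(1, len(u)):
    y[k] = a_true * y[k - 1] + b_true * u[k - 1] + 0.001 * rng.standard_normal()

print(fit_first_order(u, y))    # close to (0.98, 0.02)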


Figure 9. Identifying a parametric estimation of the DC motor plant system.

Table 4 shows where to find this step and the settings to use:

Table 4. Settings for identifying a parametric estimation of the QET DC motor plant system.
Step: System Identification -> Model Estimation -> Parametric Estimation
Settings/Actions:
  Input Signals and Model Tab:
    Stimulus Signal: Calculated signal
    Response Signal: Device and Channel from the NI DAQmx Acquire step
  Add Display under the DAQmx Acquire Output Display
  Drag Estimated Response to the new Display

Notice that the disconnect symbol, shown in Figure 7, no longer appears. NI Express Workbench removes this icon because you used the Create and Acquire steps in the system identification process. Also, notice that the largest prediction error typically occurs at the beginning of the signal. This error occurs for two reasons: the initiation effects of spinning up the system (which is typically not in perfect mechanical balance), and the fact that the numerical algorithm used to identify the model requires several time steps to initialize itself. For example, the disc that this particular QET DC motor spins has two holes drilled through it, and depending on their location during startup, the motor might start slower or faster. Therefore, the coefficients of the resulting transfer function change slightly every time you run the final Express Workbench Project Script.

After you identify a model, you must save the transfer function for further analysis. Select System Identification -> Import-Export Model -> Save System Identification Model, shown in Figure 10, to save this model.

Figure 10. Saving the System Identification Model.

At this step in the example, the transfer function is discrete. Although you can design a discrete proportional-integral (PI) controller in Express Workbench, this example converts the transfer function model to a continuous one because the motor is a continuous plant. To facilitate this design in the continuous domain, also known as the s-domain, this example transfers the model into a Control Design type function and then converts the model into a continuous representation. Figures 11a and 11b show this process.

Figure 11a. Converting the System ID Model to a Control Design Model Type (Transfer Function).

To display the step shown in Figure 11a, select System Identification -> Import-Export Model -> Convert to Control Design Model.

To display the step shown in Figure 11b, select Control Design -> Model Transformation -> Discretize Model. On the Configuration page of this step, select Make Continuous from the Operation pull-down list.
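Express Workbench performs this conversion internally. Purely as an illustration of the underlying idea, and not of the NI algorithm, the sketch below inverts a zero-order-hold discretization for a first-order model G(z) = b/(z - a) sampled every T = 1 ms, recovering the continuous form K/(s + alpha); the numeric values are assumed.

# Sketch: invert a zero-order-hold discretization of a first-order model.
#   Continuous: G(s) = K / (s + alpha)
#   Discrete  : G(z) = b / (z - a), with a = exp(-alpha*T), b = K*(1 - a)/alpha
import math

def d2c_first_order(a, b, T):
    """Return (K, alpha) of K/(s + alpha) from b/(z - a) sampled every T seconds."""
    alpha = -math.log(a) / T          # requires 0 < a < 1 (stable, non-oscillatory)
    K = b * alpha / (1.0 - a)
    return K, alpha

K, alpha = d2c_first_order(a=0.98, b=0.02, T=0.001)   # illustrative coefficients
print(K, alpha)    # the DC gain K/alpha equals b/(1 - a) = 1.0, as it should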


Figure 11b. Making the discrete model continuous


Next, create a new display for the continuous transfer function. To create this display, right-click the DAQmx Acquire Result display in the data viewer window and select Add Display -> Below from the shortcut menu. Figure 12 shows this creation.

Figure 12. Adding a window to display the transfer function for the open-loop system.

Then, drag the output of the Discretize Model step to the new display, shown in Figure 13. The system has now been identified as a first-order transfer function. Every time the Express Workbench project script is run, the coefficients of the transfer function change slightly. This is due to the spinning wheel and other mechanics of the motor itself (and the tachometer, and the fact that AI and AO are not 100% synchronized), as explained above during the estimation of the parametric model.

Figure 13. Displaying the transfer function of the open-loop system plant in the s-domain.

3. DESIGNING THE CONTROLLER

Now that the transfer function of the plant is available, the next step is to design a controller for this plant. This example describes how to design a controller that meets requirements for rise time, settling time, overshoot, and so on. This controller will complete the closed-loop system.

This example designs a simple proportional-integral (PI) controller using the PID Synthesis step, shown in Figure 14. You also can perform a root locus or interactive Bode design.

Figure 14. Designing a PI controller for the DC motor.

Table 5 lists the settings for designing a PI controller.

Table 5. Settings for designing a PI controller for the DC Motor Plant System.
Step: Control Design -> Controller Design -> PID Synthesis
Settings/Actions:
  Controller Synthesis Tab: Check the "Gain" and "Integral (s)" boxes
  Adjust the P and I gains to obtain the desired step response. Refer to Figure 14 for the recommended settings.

As you adjust the values of the P and I gains, the step response graph changes to show the resulting rise time, overshoot, ringing, settling time, and so on. Adjust the P and I gains so the step response looks similar to the step response shown in Figure 14. This step response has a rise time of approximately 25 ms and an overshoot of less than 50% of the steady-state value. Optionally, you can check these time-domain specifications by adding a Time Domain Analysis step after the PID Synthesis step.

WARNING: Too much overshoot can cause the output of the controller to command a voltage much higher than the analog output board and the motor can handle. However, later on in this example, you will use the Simulation Module to enforce a limit on the valid range of the output.
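A candidate pair of P and I gains can also be sanity-checked outside the PID Synthesis step. The sketch below assumes SciPy and uses made-up plant numbers, so the gains and the resulting rise time and overshoot are illustrative only; it simply computes the unity-feedback closed-loop step response of a PI controller Kp + Ki/s around a first-order plant K/(s + alpha).

# Sketch: step response of the closed loop formed by plant K/(s + alpha)
# and PI controller Kp + Ki/s in unity feedback.
import numpy as np
from scipy import signal

K, alpha = 20.0, 20.0          # assumed identified plant (illustrative values)
Kp, Ki = 0.5, 30.0             # candidate PI gains (illustrative values)

# Closed loop: K*(Kp*s + Ki) / (s^2 + (alpha + K*Kp)*s + K*Ki)
closed_loop = signal.TransferFunction([K * Kp, K * Ki],
                                       [1.0, alpha + K * Kp, K * Ki])

t, y = signal.step(closed_loop, T=np.linspace(0, 0.5, 2000))
rise = t[np.argmax(y >= 0.9 * y[-1])]        # time to first reach 90% of final value
print(f"rise time ~ {rise*1000:.1f} ms, overshoot ~ {100*(y.max()/y[-1] - 1):.1f}%")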
After you have properly adjusted the P and I gains, save the model by using the Save Control Design Model step, located at Control Design -> Import-Export Model. Figure 15 shows this step.

Figure 15. Saving the PI controller.

4. SIMULATING THE CLOSED LOOP SYSTEM

In this example, the previous sections provided information about identifying the plant model and designing a PI controller based on this plant model. Before you use this controller on the actual DC motor, you use the Simulation Module to verify that the controller behaves as you expect. The Simulation Module includes several ordinary differential equation (ODE) solvers you use to integrate the continuous transfer function model over a period of time. For more information about the Simulation Module, refer to www.ni.com >> Products & Services >> Real-Time Measurement and Control >> NI Real-Time Software >> Add-On Toolkits >> Simulation Module.

Figure 16 shows the LabVIEW block diagram, including the Simulation Loop that defines the simulation diagram. Notice the pale yellow color of the simulation diagram, which distinguishes it from the LabVIEW block diagram. Also notice that the Simulation Module allows you to directly implement feedback, completing the closed-loop system.

Notice LoadController.vi and LoadPlant.vi. These subVIs load the models that you created in Express Workbench and transfer the models into the appropriate Simulation functions. The LoadController subVI also converts the discrete transfer function into a continuous one. Recall that you implemented this step in Express Workbench; however, all functionality available in the Express Workbench environment is also available in LabVIEW.

Figure 16. Using the Simulation Module to simulate the behavior of the identified plant and the PI controller in a closed-loop configuration.

Figure 17 shows the response of the closed-loop system to a square wave input. Refer to Figure 14 to verify that this is the expected behavior.

Figure 17. Stimulating the closed-loop system with a square wave input and showing the response.

Notice the knobs on the front panel of Figure 17. You use these knobs to change the type, amplitude, and frequency of the stimulus signal while immediately viewing the response of the closed-loop system.
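The Simulation Module integrates the continuous models with its ODE solvers. As a loose stand-in, and not the Simulation Module itself, a fixed-step forward-Euler loop in Python can reproduce the same kind of closed-loop behaviour for a square-wave set point; the plant and gain values below are assumptions carried over from the earlier sketches.

# Sketch: fixed-step simulation of the closed loop (PI controller plus
# first-order plant) driven by a square-wave set point, loosely mirroring
# Figures 16 and 17.
import numpy as np

K, alpha = 20.0, 20.0            # assumed plant K/(s + alpha)
Kp, Ki = 0.5, 30.0               # assumed PI gains
dt, t_end = 0.001, 4.0           # 1 ms integration step, 4 s run

t = np.arange(0.0, t_end, dt)
setpoint = np.where((t % 2.0) < 1.0, 1.0, -1.0)   # 0.5 Hz square wave

y = 0.0                # plant output
integral = 0.0         # PI integrator state
response = np.empty_like(t)
for k, r in enumerate(setpoint):
    error = r - y
    integral += error * dt
    u = Kp * error + Ki * integral        # controller output
    y += dt * (-alpha * y + K * u)        # forward-Euler step of dy/dt = -alpha*y + K*u
    response[k] = y

print(round(response[999], 3), round(response[1999], 3))   # tracks +1, then -1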

5. DRIVING THE MOTOR WITH THE CLOSED LOOP SYSTEM


Now that you have verified the closed-loop response of the
plant and controller models, the next step is to use this controller
to drive the actual DC motor. First, this example demonstrates an
open-loop system.
NOTE: This example does not synchronize the input and output values of the system, because the short jitter (0.5 ms with 1 kS/s analog input and output) does not introduce any significant difference in the output.


Figure 18 shows the LabVIEW block diagram for driving the DC motor in an open-loop configuration.

Figure 18. Driving the plant (DC Motor) in an open loop configuration.

Figure 19 shows the front panel of this block diagram.

Figure 19. Driving the plant (DC Motor) in an open loop configuration. The response from the motor is slow.

Figure 19 shows how you specify the motor speed in rotations per minute (RPM). This example converts this value to the corresponding analog voltage as directed by the manufacturer of the DC motor. In this situation, the multiplier is 0.0015 volts/RPM. The Analog Output Channel 0 (AO0) of the DAQ device then sends this value to the DC motor.

This example then uses Analog Input Channel 0 (AI0) of the DAQ device to acquire the data from the tachometer of the DC motor. This example then converts the tachometer value to RPM by using the manufacturer-supplied multiplier of 666.6 RPM/volt.

After you press the Stop button, this example stops the motor by sending a value of 0 volts to AO0.

Notice in Figure 19 that the motor is slow to respond to any change in specified RPM. This example also demonstrates steady-state error, which is a permanent difference between the specified and actual motor speeds. This error is due to the calibration uncertainty in the multiplication constants that Figure 18 shows. The steady-state error is particularly noticeable at high speeds, because high speeds increase the relative error that results from not multiplying by the exact conversion factor.

By closing the loop and adding the PI controller to the open-loop system, the response of the motor becomes faster and more accurate with respect to the RPM you specify. The controller compares the actual speed of the motor with the speed you specified and adjusts the motor speed accordingly. Figure 20 shows this increase in response time and accuracy.

Figure 20. Driving the plant (DC Motor) in a closed loop configuration. The response from the motor is fast, with overshoot and settling time characteristics as defined while designing the controller (Figure 14).

The integration term in the PI controller minimizes the steady-state error by taking the history of the error into account. Figure 21 shows the LabVIEW block diagram that corresponds to the front panel shown in Figure 20.

Figure 21. Driving the plant (DC Motor) in a closed loop configuration.

In the block diagram in Figure 21, the speed is converted to and from the corresponding analog voltage using the same multipliers described in Figure 18. The actual speed of the motor is compared with the speed you specify, or the Set Point. The controller is loaded from the file saved in Figure 15.

Figure 21 also shows how the Saturation function limits the output voltage to the motor, and how you can use the SIM Set Diagram Params VI to programmatically change the ODE solver and other parameters of the simulation.

NOTE: In the real world, the Saturation function is not necessary, because the DAQ Analog Output Assistant Express VI has a control that sets a limit on the output voltage. However, this example demonstrates the capabilities of the Simulation Module and how you would place a Saturation function in a closed loop. Also, if you place a LabVIEW data probe before and after the Saturation function, you can observe its effect when a sudden change in the Set Point causes the controller output to overshoot.
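Stripped of the LabVIEW graphics, the closed-loop drive of Figure 21 reduces to: read the tachometer voltage, convert it to RPM (666.6 RPM/volt), compute a PI correction for the RPM set point, saturate the command, and write the voltage to AO0, using the 0.0015 volts/RPM multiplier for the commanded speed. The sketch below is schematic only: read_tachometer_volts() and write_motor_volts() are hypothetical placeholders for whatever DAQ interface is in use, and the gains, loop period, and voltage limit are assumptions.

# Sketch of the closed-loop drive described around Figure 21. The DAQ I/O
# calls are hypothetical placeholders, not a real driver API.
import time

VOLTS_PER_RPM = 0.0015      # manufacturer multiplier for the drive command
RPM_PER_VOLT = 666.6        # manufacturer multiplier for the tachometer
V_LIMIT = 10.0              # assumed saturation limit on the analog output
Kp, Ki = 0.002, 0.02        # illustrative PI gains (RPM error -> volts)
dt = 0.01                   # assumed 10 ms loop period

def read_tachometer_volts():     # placeholder for an AI0 read
    return 0.0

def write_motor_volts(v):        # placeholder for an AO0 write
    pass

def run(setpoint_rpm, seconds=5.0):
    integral = 0.0
    for _ in range(int(seconds / dt)):
        rpm = read_tachometer_volts() * RPM_PER_VOLT
        error = setpoint_rpm - rpm
        integral += error * dt
        # feedforward from the set point plus the PI correction, in volts
        command = setpoint_rpm * VOLTS_PER_RPM + Kp * error + Ki * integral
        command = max(-V_LIMIT, min(V_LIMIT, command))    # saturation block
        write_motor_volts(command)
        time.sleep(dt)
    write_motor_volts(0.0)       # stop the motor by sending 0 volts to the output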

6. CONCLUSIONS
This example described how you can use LabVIEW and related software to identify, control, and simulate a real-world dynamic system. Although this example did not use any real-time (RT) hardware, you can use the LabVIEW Real-Time Module in conjunction with the Simulation Module to deploy a controller to any National Instruments RT Series hardware. Refer to Using CompactRIO, located at http://sine.ni.com/csol/cds/item/vw/p/id/538/nid/124200, for an example that demonstrates how to build a full-authority FPGA-based engine control system for a high-performance motorcycle engine.

NOTE: You also can describe the simulation itself in the Express Workbench Project Script by adding a User-Defined Step. You also can translate an Express Workbench project script into LabVIEW code by launching LabVIEW and selecting Tools -> Express Workbench -> Convert Express Workbench Project from the pull-down menu.
More complex systems, such as the high performance
motorcycle engine described above, may have multiple inputs
and multiple outputs. In these situations, you can use state-space
model identification and control design methods to operate in the
multiple-input multiple-output (MIMO) environment. The
Control Design Toolkit, System Identification Toolkit and
Simulation Module support these design methods.


MAGNETO-MECHANICAL MEASUREMENTS FOR HIGH CURRENT APPLICATIONS
Jack Ekin, NIST Electromagnetics Division
GOALS
This project specializes in measurements of the effect of mechanical strain on superconductor properties such as critical-current density for applications in magnetics, power transmission, and electronics. Recent research has produced the first electromechanical data for the new class of high-temperature coated conductors, one of the few new technologies expected to have an impact on the electric-power industry. The Strain Scaling Law, previously developed by the project for predicting the axial-strain response of low-temperature superconductors in high magnetic fields, is now being generalized to three-dimensional stresses, for use in finite-element design of magnet structures, and to high-temperature superconductors. Recent research includes extending the high-magnetic-field limits of electromechanical measurements for development of nuclear-magnetic-resonance (NMR) spectrometers operating at 23.5 teslas and 1 gigahertz, and the next generation of accelerators for high-energy physics. The project has diversified its research to include magnetoresistance studies on a new class of carbon nanostructures using our high-field superconducting magnet facility and a newly developed, variable-angle, variable-temperature measurement capability.

CUSTOMER NEEDS
The project serves industry primarily in two areas. First is
the need to develop a reliable measurement capability in the
severe environment of superconductor applications: low temperature, high magnetic field, and high stress. The data are being
used, for example, in the design of superconducting magnets for
the magnetic-resonance-imaging (MRI) industry, which provides
invaluable medical data for health care, and contributes 2 billion
dollars per year to the U.S. economy.
The second area is to provide data and feedback to industry
for the development of high-performance superconductors. This
is especially exciting because of the recent deregulation of the
electric power utilities and the attendant large effort being devoted to developing superconductors for power conditioning and
enhanced power-transmission capability. We receive numerous
requests, from both industry and government agencies, for reliable
electromechanical data to help guide their efforts in research and
development in this critical growth period.
The recent success of the second generation of high-temperature superconductors has brought with it new measurement
problems in handling these brittle conductors. We have the
expertise and equipment to address these problems. Stress and
strain management is one of the key parameters needed to move
the second-generation high-temperature coated conductors to the
market place. The project utilizes the expertise and unique
electromechanical measurement facilities at NIST to provide
performance feedback and engineering data to companies and

national laboratories fabricating these conductors in order to


guide their decisions at this critical phase of coated-conductor
development.

TECHNICAL STRATEGY
Our project has a long history of unique measurement
service in the specialized area of electromechanical metrology.
Significant emphasis is placed on an integrated approach. We
provide industry with first measurements of new materials, specializing in cost-effective testing at currents less than 1000
amperes. Consultation is also provided to industry on developing
its own measurements for routine testing. We also provide consultation on metrology to the magnet industry to predict and test
the performance of very large cables with capacities on the order
of 10 000 amperes, based on our tests at smaller scale. In short,
our strategy has consistently been to sustain a small, well connected team approach with industry.
Electromechanical Measurements of Superconductors
We have developed an array of specialized measurement systems
to test the effects of mechanical stresses on the electrical performance of superconducting materials. The objective is to simulate the operating conditions to which a superconductor will be
subjected in magnet applications. In particular, since most technologically important superconductors are brittle, we need to
know the value of strain at which fractures occur in the superconductor. This value is referred to as the irreversible strain limit,
since the damage caused by the formation of cracks is permanent. The effect of cracks is extrinsic. In contrast, below the irreversible strain, there exists an elastic strain regime where the
effect of strain is intrinsic to the superconductor. In this elastic
regime, the variation in the critical-current density (Jc) with
strain, if any, is reversible and is primarily associated with
changes in the superconductor's fundamental properties, such as the critical temperature (Tc) and the upper critical field (Hc2), as well as changes in the superconductor's microstructure due to
the application of strain.
Measurement Facilities
Extensive, advanced measurement facilities are available, including high-field (18.5 teslas) and
split-pair magnets, servohydraulic mechanical testing systems,
and state-of-the-art measurement probes. These probes are used
for research on the effects of axial tensile strain and transverse
compressive strain on critical current; measurement of cryogenic
stress-strain characteristics; composite magnetic coil testing; and
variable-temperature magnetoresistance measurements. Our
electromechanical test capability for superconductors is one of
the few of its kind in the world, and the only one providing specialized measurements for U.S. superconductor manufacturers.
Collaboration with Other Government Agencies
These measurements are an important element of our ongoing
work with the U.S. Department of Energy (DOE). The DOE
Office of High Energy Physics sponsors our research on electro-


mechanical properties of candidate superconductors for particle-accelerator magnets. These materials include low-temperature superconductors (Nb3Sn, Nb3Al, and MgB2), and high-temperature superconductors Bi-Sr-Ca-Cu-O (BSCCO) and Y-Ba-Cu-O (YBCO), including conductors made on rolling-assisted, biaxially textured substrates (RABiTS) and conductors made by ion-beam-assisted deposition (IBAD). The purpose of the database produced from these measurements is to allow the magnet industry to design reliable superconducting magnet systems. Our research is also sponsored by the DOE Office of Electric Transmission and Distribution. Here, we focus on high-temperature superconductors for power applications, including power-conditioning systems, motors and generators, transformers, magnetic energy storage, and transmission lines. In all these applications, the electromechanical properties of these inherently brittle materials play an important role in determining their successful utilization.

Scaling Laws for Magnet Design
In the area of low-temperature superconductors, we have embarked on a fundamental program to generalize the Strain Scaling Law (SSL), a magnet design relationship we discovered two decades ago. Since then, the SSL has been used in the structural design of most large magnets based on superconductors with the A-15 crystal structure. However, this relationship is a one-dimensional law, whereas magnet design is three-dimensional. Current practice is to generalize the SSL by assuming that distortional strain, rather than hydrostatic strain, dominates the effect. Recent measurements in our laboratory suggest, however, that this assumption is invalid. We are now developing a measurement system to carefully determine the three-dimensional strain effects in A-15 superconductors. The importance of these measurements for very large accelerator magnets is considerable. The Strain Scaling Law is now also being developed for high-temperature superconductors, since we recently discovered that practical high-temperature superconductors exhibit an intrinsic axial-strain effect.

ACCOMPLISHMENTS

New Measurement Method for Marginally Stable Superconductor Wires
The next generation of particle accelerators for high-energy physics, and magnet systems for nuclear magnetic resonance (NMR) spectroscopy, will require the development of a new type of superconducting niobium-tin wire able to carry extremely high currents at high magnetic fields. One way to achieve high currents is to push the density of superconductor filaments in composite wires to new limits. Oxford Superconductor Technology (Carteret, NJ) has successfully demonstrated the feasibility of this concept. However, this could significantly reduce the beneficial pre-compressive strain in these conductors upon cooling, an important parameter for magnet design. Our superconductor electromechanical testing system is the only one in the U.S. that utilizes stress-free cooling, which is essential for a direct measurement of pre-compressive strain. Unfortunately, the new niobium-tin wires, owing to their relatively small amount of copper stabilizer, are only marginally stable, which makes electrical characterization extremely challenging. Hence, a new measurement technique was required that did not compromise the stress-free cooling advantage.

The technique consists of measuring critical-current density (the maximum lossless current density that a superconductor can carry) versus axial strain for a number of copper-plated specimens of the same wire with different amounts of copper. We then deduced the strain properties of the virgin (non-copper-plated) wire by an extrapolation technique. Copper plating made the niobium-tin wires electrically stable enough to characterize, but the extra copper also influenced the value of the pre-compressive strain (εmax); hence the need for extrapolation. We confirmed that εmax indeed decreased linearly with increasing niobium fraction. However, we found that other parameters such as the matrix material and wire diameter also influence εmax.

The pre-compressive strain for high-niobium-fraction wires can be reduced to about 0.1 percent, a very small strain window for magnet design. Fortunately, we also found that the use of copper alloys instead of pure copper, along with small wire diameters, substantially mitigates the problem and provides reasonable strain operating margins in these high-performance conductors. The data were used by Oxford Superconductor Technology to make immediate decisions regarding the conductor design for a new NMR system.

Pre-compressive strain εmax versus Nb fraction for several niobium-tin wires with high niobium density. Data were obtained using a new measurement method developed by EEEL researchers for marginally stable superconductor wires.

Copper Stabilizer Improves Coated Superconductors' Strain Tolerance
High-temperature superconductor (HTS) wires are now being fabricated in kilometer lengths, providing the basis for a new generation of electric power devices, including high power-density motors and generators, transmission lines, and power conditioners. The development of HTS technology is expected to play a crucial role in maintaining the reliability of the power grid and upgrading power delivery to core urban areas. The most promising superconductor candidate for replacing ageing utility equipment is the highly textured Y-Ba-Cu-O (YBCO) compound deposited on buffered flexible metallic substrates. These coated conductors have a much higher current-carrying capacity compared to the Bi-Sr-Ca-Cu-O (BSCCO) tapes now commercially available. Whereas BSCCO tapes experience permanent damage when subjected to axial strains less than 0.2 percent, we demonstrated last year that the formation of cracks in the new YBCO system does not commence until subjected to strains higher than 0.38 percent, almost a two-fold increase in strain tolerance. This resilience of YBCO to strain is providing a strong motivation to produce commercial lengths of this second-generation conductor, especially for the design of electric generators, for which strain tolerance requirements have been raised to 0.4 percent.

This year, we found that adding a Cu layer to the YBCO coated-conductor architecture extends the irreversible strain limit (εirr) of this composite even further, from 0.38 percent to more than 0.5 percent. This markedly widens the strain window for coated-conductor applications and takes it beyond even the most demanding benchmark for large-scale superconducting generators. These measurements were undertaken in close collaboration with conductor manufacturers American Superconductor (Westborough, MA) and SuperPower (Schenectady, NY), who are incorporating the stabilizer layers either by Cu-lamination or Cu-plating. The original motivation for adding the Cu layers was to improve the electric and thermal stability of the conductor; the strain-tolerance dividend was unexpected. We can relate this remarkable result to the mismatch of thermal contraction between Cu and the other components of the composite. During sample cooling from processing temperatures to the cryogenic operating temperatures, the Cu layer exerts an additional pre-compressive strain on the YBCO film, and hence extends the irreversible strain εirr where permanent damage occurs. The Cu may also be acting as a crack arrester, which further improves the strain tolerance.
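The extrapolation used for the niobium-tin wires above is, in essence, a straight-line fit of the measured εmax against the amount of added copper, evaluated at zero added copper to estimate the virgin wire. A minimal, generic sketch of that idea is below; it assumes NumPy, and the data points are invented placeholders, not NIST measurements.

# Sketch: linear extrapolation of pre-compressive strain (epsilon_max) measured
# on copper-plated specimens back to zero added copper (the virgin wire).
import numpy as np

added_cu_fraction = np.array([0.10, 0.20, 0.30, 0.40])   # plated Cu / total area (invented)
eps_max_percent   = np.array([0.14, 0.18, 0.22, 0.26])   # measured pre-compression (invented)

slope, intercept = np.polyfit(added_cu_fraction, eps_max_percent, 1)
print(f"virgin-wire epsilon_max ~ {intercept:.2f} %")     # value at zero added Cu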

Textbook on Cryogenic Measurement Apparatus and Methods
A new textbook has been written on experimental techniques for cryogenic measurements, to be published by Oxford University Press. It covers the design of cryogenic measurement probes and provides cryogenic materials data for their construction. Topics include thermal techniques for designing a cryogenic apparatus, selecting materials appropriate for such apparatus, how to make high-quality electrical contacts to a superconductor, and how to make reliable critical-current measurements. The textbook is written for beginning graduate students, industry measurement engineers, and materials scientists interested in learning how to design successful low-temperature measurement systems. The appendices are written for experts in the field of cryogenic measurements and include electrical, thermal, magnetic, and mechanical properties of technical materials for cryostat construction; properties of cryogenic liquids; and temperature measurement tables and thermometer properties. These appendices aim to collect in one place many of the data essential for designing new cryogenic measurement apparatus.

Normalized critical current density as a function of mechanical tensile strain for unlaminated and Cu-laminated YBCO coated conductor. The Cu stabilization layer extends the irreversible strain limit εirr of the composite from 0.38 percent to more than 0.5 percent.

Photograph of a new magnetoresistance probe designed to investigate carbon nanostructures. At the right end of the probe, the photo shows the stepper-motor-controlled worm-gear system and sample stage, which allow precise angle-dependent, high-field measurements.

New Magnetoresistance Apparatus to Probe Carbon Nanostructures
Electronic properties of materials change markedly as their dimensions approach those of a few atomic layers. Carbon nanostructures (including graphite sheets, single-walled carbon nanotubes, and multi-walled carbon nanotubes) are prime examples of such potentially useful materials, although some of their very fundamental properties remain controversial. Characterization of these structures at high magnetic fields is one of the principal methods for determining the existence of ballistic conduction, for example, which could be the foundation for a new generation of nanoelectronic devices.

We have designed and recently commissioned an apparatus to measure magnetoresistance of these highly directional structures in fields up to 18.5 teslas. (For comparison, the Earth's magnetic field is only about 0.05 millitesla.) The apparatus automatically acquires data as a function of magnetic-field magnitude, angle, and temperature. It was designed to also be compatible with the very-high-field magnet facilities at the National High Magnetic Field Laboratory at Florida State University, permitting the extension of EEEL's measurements to fields up to 30 teslas. Magnetic-field mapping has commenced for nanotubes fabricated at NIST and Rice University as well as for graphitic sheet structures manufactured by a nanotechnology research team at Georgia Institute of Technology. Magnetic-field angle can be varied with a resolution of better than 0.1 degree over a range of 130 degrees, and sample temperature can be varied over an extended range of 4.1 to 120 kelvins, with a stability of better than 3 millikelvins at 4.2 kelvins.


A BASIC GUIDE TO THERMOGRAPHY


Land Instruments International Infrared Temperature Measurement

THERMOGRAPHY

Thermography is a method of inspecting electrical and mechanical equipment by obtaining heat distribution pictures. This inspection method is based on the fact that most components in a system show an increase in temperature when malfunctioning. The increase in temperature in an electrical circuit could be due to loose connections or, in the case of mechanical equipment, a worn bearing. By observing the heat patterns in operational system components, faults can be located and their seriousness evaluated.

Figure 1. The thermal image of an electrical connector.

The inspection tool used by Thermographers is the Thermal Imager. These are sophisticated devices which measure the natural emissions of infrared radiation from a heated object and produce a thermal picture. Modern Thermal Imagers are portable with easily operated controls. As physical contact with the system is not required, inspections can be made under full operational conditions, resulting in no loss of production or downtime.

Figure 2. The inspection of electrical equipment using a Thermal Imager.

The Land Cyclops Thermal Imager is a device designed for plant condition monitoring, preventative maintenance and process monitoring applications. Potential applications include:
 Inspection of electrical equipment
 Inspection of mechanical equipment
 Inspection of refractory lined structures

MEASUREMENT OF TEMPERATURE USING INFRARED METHODS

When using a Thermal Imager it is helpful to have a basic knowledge of infrared theory.

BASIC PHYSICS

An object when heated radiates electromagnetic energy. The amount of energy is related to the object's temperature. The Thermal Imager can determine the temperature of the object without physical contact by measuring the emitted energy.

ELECTROMAGNETIC SPECTRUM

The energy from a heated object is radiated at different levels across the electromagnetic spectrum. In most industrial applications, it is the energy radiated at infrared wavelengths which is used to determine the object's temperature. Figure 3 shows various forms of radiated energy in the electromagnetic spectrum including X-rays, Ultra Violet, Infrared and Radio. They are all emitted in the form of a wave and travel at the speed of light. The only difference between them is their wavelength, which is related to frequency.

The human eye responds to visible light in the range 0.4 to 0.75 microns. The vast majority of infrared temperature measurement is made in the range 0.2 to 20 microns. Although these emissions are mostly unable to be detected by a standard camera, the Thermal Imager can focus this energy via an optical system on to a detector in a similar way to visible light. The detector converts infrared energy into an electrical voltage which, after amplification and complex signal processing, is used to build the thermal picture in the operator's viewfinder on board the Thermal Imager.

ENERGY DISTRIBUTION
Figure 4 shows the energy emitted by a target at different temperatures. As can be seen, the higher the target temperature, the higher the peak energy level. The wavelength at which peak energy occurs becomes progressively shorter as temperature increases. At low temperatures the bulk of the energy is at long wavelengths.

Figure 4. Infrared energy and distribution across the electromagnetic spectrum.
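The shift of the emission peak towards shorter wavelengths with rising temperature follows Wien's displacement law, peak wavelength ≈ 2898 µm·K / T. The following one-function check is a generic physics sketch added here for illustration, not part of the Land text.

# Sketch: Wien's displacement law - peak emission wavelength vs. temperature.
WIEN_CONSTANT_UM_K = 2898.0   # micrometre-kelvins

def peak_wavelength_um(temp_kelvin):
    return WIEN_CONSTANT_UM_K / temp_kelvin

for t in (300, 600, 1200):     # kelvin
    print(t, round(peak_wavelength_um(t), 2), "um")   # 9.66, 4.83, 2.42 um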


EMISSIVITY

The amount of energy radiated from an object is dependent on its temperature and its emissivity. An object which has the ability to radiate the maximum possible energy for its temperature is known as a Black Body. In practice, there are no perfect emitters and surfaces tend to radiate somewhat less energy than a Black Body.

Figure 5 shows why objects are not perfect emitters of infrared energy. As energy moves towards the surface a certain amount is reflected back inside and never escapes by radiative means. From this example, it can be seen that only 60% of the available energy is actually emitted. The emissivity of an object is the ratio of the energy radiated to that which the object would emit if it were a Black Body.

Figure 5. The infrared energy reflected at a body surface.

Hence emissivity is expressed as

  Emissivity = energy radiated by the object / energy radiated by a Black Body at the same temperature

Emissivity is therefore an expression of an object's ability to radiate infrared energy.

EMISSIVITY VALUES

The value of emissivity tends to vary from one material to another. With metals, a rough or oxidised surface usually has a higher emissivity than a polished surface. Here are some examples:

It can be shown that there is a relationship between emissivity and reflectivity. For an opaque object this is:

  Emissivity + Reflectivity = 1.0

Hence, a highly reflective material is a poor emitter of infrared energy and will therefore have a low emissivity value.

EFFECTS OF EMISSIVITY
If a material of high emissivity and one of low emissivity
were placed side by side inside a furnace and heated to exactly
the same temperature, the material with low emissivity would
appear to the eye much duller. This is due to the different emissivities of the materials causing them to radiate at different levels,
making the low emissivity material appear cooler than the high
emissivity material, even though they are at exactly the same temperature.
The Thermal Imager would see this in the same way as the eye and produce an error in making the temperature measurement. The temperature of an object cannot be determined by simply measuring its emitted infrared energy; a knowledge of the object's emissivity is also required.
The emissivity of an object can be determined as follows:
1) Consult manufacturers' literature (always ensure the values have been evaluated at the operating wavelength of your Thermal Imager, as emissivity can vary with wavelength).
2) Have the object's emissivity evaluated by a laboratory method.
There are two main ways to overcome the problem of
emissivity.
a) Mathematically correct the temperature measurement
value. This is usually carried out within the signal
processor of the Thermal Imager. Most modern
Thermal Imagers have a compensation setting which
can quickly and easily be set by the operator.
b) It may be possible to paint the surface of a low emissivity target with a high and constant emissivity coating.
This tends to elevate the target to a much higher emissivity level, but this may not be possible on all process
plants.
When carrying out Thermographic inspections, faults are
often identified by comparing heat patterns in similar components operating under similar loads. This is an alternative to very
precisely predicting the emissivity of each individual component
and obtaining absolute temperature values.

Thermal Imager being used to inspect electrical equipment. With equal load and emissivities
the temperature of the three measurement points should be the same.
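Option a) above, the in-imager emissivity compensation, is essentially a rescaling of the measured radiation by the target emissivity before converting back to temperature. The sketch below is a greatly simplified illustration: it assumes total, broadband radiation obeying the Stefan-Boltzmann law and ignores reflected background radiation, which a real imager working in a narrow waveband does not do.

# Sketch: simplified emissivity correction using the Stefan-Boltzmann law.
# A real thermal imager applies a waveband-specific calibration curve instead.
SIGMA = 5.670e-8        # Stefan-Boltzmann constant, W m^-2 K^-4

def apparent_temperature(true_temp_k, emissivity):
    """Temperature a blackbody-calibrated sensor would report for a grey body."""
    radiance = emissivity * SIGMA * true_temp_k ** 4
    return (radiance / SIGMA) ** 0.25

def corrected_temperature(apparent_temp_k, emissivity):
    """Undo the emissivity error once the emissivity setting is known."""
    return apparent_temp_k / emissivity ** 0.25

t_app = apparent_temperature(373.0, 0.60)
print(round(t_app, 1), round(corrected_temperature(t_app, 0.60), 1))  # ~328 K reads low, corrected back to ~373 K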


THERMAL IMAGERS
Thermal Imagers are sophisticated devices which measure
the natural emissions of infrared radiation from a heated object
and produce a thermal picture. Modern Thermal Imagers such as the Land TI814 are usually very flexible, containing many standard and optional features. Here are some of the features of the TI814.


OPTICAL:
A motorised focus is used to obtain a clear image at different distances from the thermal imager. The focus distance is
from 380mm/15 inches to infinity. An electronic zoom function
enables 2X and 4X magnification of the image.

IMAGE DISPLAY:
The real time thermal image is displayed in colour on a
102mm / 4 inch LCD screen.
The image may be colourised by any one of the eight different palettes available.
The real time thermal image is also displayed on the built-in high resolution colour viewfinder.

Figure 7. Measuring the temperature at several points in the scene

DIGITAL MEMORY:
A built-in non-volatile memory system enables the simple
capture of a large number of thermal images. Thermal images are
stored on a removable compact flash memory card. This on board
facility enables stored image recall to the viewfinder and selective
image deletion.
Several seconds of digital voice clip may be stored with
each image and replayed or re-recorded on board the imager. The
sound file can be replayed by the imager or with image processing software.
A 256MB card is capable of storing up to 1000 thermal
images and up to an eight second digital voice clip with each
image. Image file size including voice annotation is 256 KB.
Transfer to image processing software for further image
processing and report generation is via a USB Compact Flash
memory card reader.

Figure 8. Measuring the average temperature within several rectangles in the scene

TEMPERATURE MEASUREMENT:
Temperature measurement at a single point in the scene is
possible.

POST PROCESSING:
This facility enables the generation of further temperature
analysis in the imager viewfinder on stored images. A single
movable point enables spot measurement at any point in the
scene and a movable cursor generates a temperature profile trace.

Figure 9. Measuring the average temperature within several polygons in the scene

IMAGE PROCESSING SOFTWARE


Frames of interest may be stored as an image file for
record purposes, or be subjected to a range of processing functions as follows:
a) File handling: save, delete and directory facility
b) Image colouring: the image may be colourised using
any one of five colour palettes.
c) Temperature measurement: a variety of different modes are available to enable temperature measurement at any point in the scene, calculation of maximum, minimum or mean from within any defined area in the scene, profiles, histograms, and isotherms.
d) Parameter changes: parameters saved with the stored image may be changed within the software. These include emissivity and background temperature.
e) Image enhancements: filtering, and zoom facility.
Figures 7 to 12 show some of the available temperature measurement modes.

Figure 10. Measuring the temperature along several profiles in the scene


Figure 11. Measuring the temperature distribution within a defined area in the scene.

Figure 12. Using Isotherm to highlight areas of the scene within a selected temperature band.

The software system is menu driven, making it extremely easy to use.

Report Writer: The image processing system provides a report writing facility. This may be used to provide a hard copy record of the thermal image accompanied by an imported photograph and any other information for reference purposes.

Figure 13. Typical items page in a report generated by the report writer facility.

THERMAL IMAGERS IN PREDICTIVE MAINTENANCE APPLICATIONS

In today's industrial plants it is essential that unplanned breakdowns and the resultant costly loss of production are kept to an absolute minimum. Predictive maintenance schemes have been introduced to identify potential problems and reduce downtime.

Thermography in maintenance applications is based on the fact that most components show an increase in temperature when malfunctioning, and that faults steadily get worse before failure.

Routine inspection programmes using Thermal Imagers can often offer the following benefits:
 Inspections can be made under full operational conditions, and hence there is no loss of production
 Equipment life can be extended
 Plant downtime may be reduced
 Plant reliability may be increased
 Plant repairs can be scheduled for the most convenient time
 Quality of repair work may be inspected

Thermal Imagers are mainly used for industrial predictive maintenance in the following areas:
 Electrical Installations
 Mechanical Equipment
 Refractory lined Structures

INSPECTING ELECTRICAL INSTALLATIONS

Faults in an electrical installation often appear as hot-spots which can be detected by the Thermal Imager. Hot-spots are often the result of increased resistance in a circuit, overloading, or insulation failure. Figure 14 shows a hot-spot created by a bad connection in a power distribution system.

Figure 14. Inspection of a power system.

Some of the components commonly inspected are as follows:

Connectors: When looking at similar current-carrying connectors, a poor connection shows a higher temperature due to its increased resistance. Hot-spots can be generated as a result of loose, oxidised, or corroded connectors.

Figure 15. Inspection of connectors.



Figure 15 shows the fuses in the control panel of a machine. A faulty connection on the top of a fuse has created the hot-spot, which can easily be seen by the imager.
Three phase motors: Require balanced phases and correct
operating temperatures. It has been shown that if correct operating
temperatures are exceeded, the insulation life can be considerably
shortened.
Other commonly inspected components are:
Relays
Insulators
Capacitors
Switches


INSPECTION OF MECHANICAL EQUIPMENT

The type of mechanical equipment inspected is often rotating machinery. Increased surface temperatures can be the result of internal faults. Excessive heat can be generated by friction in faulty bearings due to wear, misalignment or inadequate lubrication.

As with electrical installations, it is desirable to perform the inspection with the system in operation wherever practically possible. Interpretation of results should be based on comparison between components operating in similar conditions under similar loads, or by trend analysis.

Equipment commonly inspected using Thermal Imagers is as follows:
 Bearings
 Gears
 Drive Belts
 Couplings
 Shafts and Pumps

Figure 16. Inspection of bearing housing.

INSPECTION OF REFRACTORY LINED STRUCTURES

The refractory structures of process plants can often have an increased lifetime if the degree of wear and erosion can be assessed. Thermal patterns produced by viewing the outer walls of a structure can indicate hot-spots caused by worn refractories, which may be corrected by appropriate maintenance.

Figure 17. Inspection of a kiln shell.

Figure 17 shows an abnormal heat pattern on the wall of a cement kiln, which has been caused by erosion of the refractory brick liner.

Equipment commonly inspected using Thermal Imagers is as follows:
 Electric Arc Furnaces
 Ladles, Heat Treatment Furnaces
 Glass Furnaces
 Rotary Kilns and Dryers


BUYERS GUIDE
3M Canada
PO Box 5757
London, Ontario N6A 4T1
Tel: (800) 3M Helps
Fax: (519) 452-6286
E-mail: innovation@ca.mmm.com
Web: www.mmm.com
Description of products/services:
Terminations and splices, using Cold Shrink
Technology,
moulded rubber, resin and heat shrink
Motor lead connection systems
Scotch vinyl insulation tapes, splicing and terminating
tapes, corrosion protection sealing and general use tapes
Scotchloc terminal, wire connectors and
insulation displacement
connectors, lugs, copper and aluminum connectors
Scotchtrak infrared heat tracers and circuit tracers
Fastening products, coatings and lubricants
Duct- , packaging, filament-, and masking tapes
Abrasive products
Personal safety products, sorbents.

B.G. High Voltage Systems Ltd.


1 Select Avenue, Units 15 & 16
Scarborough, ON M1V 5J3
Tel: (416) 754-2666 ext. 202
Fax: (416) 754-4607
E-mail: bert@bg-high-voltage.ca
www.bg-high-voltage.ca
Contact: B. J. (Bert) Berneche, C.E.T., President
Description of products/services: B.G. High Voltage
Systems offers a comprehensive approach to electrical project
management, providing design, construction and engineering
services to meet all your requirements. We team up with our
clients to ensure that all their needs are defined and met at each
stage of the project. Our experts will coordinate with your engineering personnel to ensure minimal disruption to facility operations. As well as complete electrical project management we
offer: material procurement, maintenance and training services,
emergency repair, overhead and underground distribution construction and engineering, street and parking lot lighting installation and maintenance. Now available - Power Quality field survey, monitoring and solutions to power quality problems.

CD Nova Ltd.
5330 Imperial St.
Burnaby, BC V5J 1E6
Tel: (604) 430-5612
Fax: (604) 437-1036
Contact: Don Bealle
E-mail: sales@cdnova.com
Web: www.cdnova.com
CD NOVA companies distribute and service, in Canada, energy and power systems and devices, transducers, test and measurement instruments, batteries, chargers, UPS, wireline and wireless communication systems, SCADA systems, power quality analysers and systems, teleprotection, transformers, breakers, protective relays, gas and chemical analysers, and stack sampling systems.

Duncan Instruments Canada Ltd.


121 Milvan Drive
Toronto, Ontario M9L 1Z8
Tel: 416 742-4448
Fax:416 749-5053
Email: sales@duncaninstr.com
www.duncaninstr.com
Description of products/services: Duncan Instruments
Canada is a leading manufacturers representative and master distributor for a wide range of utility and electrical instrumentation.
We can offer you data loggers, power line analyzers
power/energy/harmonics analyzers, power disturbance monitors
and fused test leads/accessories.
In addition to sales, Duncan Instruments Canada can also
provide: calibration traceable to NRC, technical product support and application training, instrument repair/modifications,
and rental of selected electrical instruments. Registered to ISO
9001:2000

112

Flir Systems
5230 South Service Road #125
Burlington, ON
Tel: (905) 637-5696
Fax: (905) 639-5488
Web: www.flirthermography.com
FLIR Systems Ltd. (Agema Inframetrics) designs, manufactures, calibrates, services, rents and sells many models of infrared imaging cameras and accessories. Complete predictive maintenance solutions include the ThermaCam PM 695 radiometric camera with thermal and visual images, autofocus, voice and text messaging, and Reporter analysis software with "drag-n-drop" image transfer. Levels 1, 2 and 3 Thermography training conducted on site or at the ITC facility.
Camera accessories, such as close-up and telescopic optics,
batteries, etc. can be sourced directly from Canadian
service/sales depot in Burlington, ON. Ask about trade in
allowances.

FLUKE ELECTRONICS Canada LP


400 Britannia Rd. East Unit 1
Mississauga, ON, Canada

Electricity Testing and Measurement Handbook Vol. 7


L4Z 1X9
Toll Free : 1-800-36-FLUKE
Tel : (905) 890-7600
Fax : (905) 890-6866
Contact : Robin Bricker
E-Mail : canada@fluke.com
www.flukecanada.ca
Fluke Electronics Canada (www.flukecanada.ca) offers
complete families of professional test tools, including power
quality, thermography, digital multimeters, clamp meters, insulation resistance testers, portable oscilloscopes, thermometers,
process testing equipment and accessories, as well as educational and training resources. A subsidiary of Fluke Corporation,
Everett, Washington, Fluke Electronics Canada is headquartered
in Ontario with offices across Canada. The Fluke brand has a
reputation for quality, portability, ruggedness, safety and ease of
use and Fluke test tools are used by technical professionals in a
variety of industries throughout the world.

G.T. WOOD CO. LTD.


3354 Mavis Road
Mississauga, ON L5C 1T8
Tel: (905) 272-1696

Fax: (905) 272-1425


E-Mail: lsnow@gtwood.com
Website: www.gtwood.com/flash/splash.html
Specializing in High-Voltage Electrical Testing, inspections, maintenance and repairs. Refurbishing and repair of New
and Reconditioned Transformers, Structures, Switchgear and
Associated Equipment. Infrared Thermography, Engineering
Studies and PCB Management.

High Voltage, Inc.


31 Rt. 7A, P.O. Box 408
Copake, NY 12516
USA
Tel : (518) 329-3275
Fax : (518) 329-3271
Contact : Bob Tighe,
E-Mail : sales@hvinc.com
Manufacturers of High Voltage Test Equipment. Products
include portable AC-VLF, .1Hz, .05 and 0.2Hz Very Low
Frequency hipots with sine wave output, switchgear and bottle
testers up to 100 kVac. Portable DC hipots up to 300 kV DC.
Aerial lift and bucket truck AC test sets up to 300 kVac according to ANSI standards. Controlled energy cable fault locators,
oil test sets and burners also offered.

LIZCO SALES
R.R. #3
Tillsonburg, ON N4G 4G8
Toll Free: 1-877-842-9021
Fax: (519) 842-3775
Contact: Robin Carroll
Website: www.lizcosales.com
We have the energy with Canada's largest on-site directory:
New and Rebuilt Power/Padmount/Dry Transformers
New Oil-Filled TLO Unit Substation Transformers
New HV S&C fuses/loadbreaks/towers
High and low voltage:
- Air Circuit Breakers Molded Case Breakers
- QMQB/fusible switches Combination Starters
Emergency Service and Replacement Systems
Design/Build custom Application Systems

Megger
4271 Bronze Way
Dallas, TX 75237-1088 USA
Tel: 1-800-723-2861 Ext. 7360 (Toll Free)

Tel: 214-331-7360 (Direct)


Fax: 214-331-7379
Email: gary.guthrie@megger.com
www.megger.com
Megger is a leading provider of electrical test and measuring
equipment for power, industrial, building wiring and communication applications. Its wide range of products extends from
equipment to test protective relays and other substation electrical
apparatus, to insulation resistance and ground testers. With three
manufacturing facilities and sales offices located around the
world, Megger is strategically positioned to provide customers
with innovative products, hands-on technical assistance and
superior service. For additional information, visit our web site
www.megger.com.

OPTIMUM ENERGY PRODUCTS LTD.


#333, 11979 - 40 St SE
Calgary, AB T2Z 4M3
Toll Free (877) 766-5412
Main (403) 256-3636
Fax (403) 256-3431
E-mail: info@optimumenergy.com
Optimum Energy Products Ltd are specialists in Power Quality
and Power Metering products. We represent Fluke, AEMC
Instruments, Electro Industries, and many other manufacturers.
We sell portable PQ instruments for engineers and troubleshooters in many industries. From Plug based voltage disturbance
meters to three phase Class A Power Quality instruments. We
also supply permanent power and power quality meters for use
in residential, commercial and industrial applications.
For complete product range and information, please visit our
specialty websites:
www.PQMeterStore.com
www.PowerMeterStore.com
www.ElectricityMetering.com
www.MyMeterStore.com

Raytech USA
90 C Randall Avenue
Woodlyn, PA 19094
Tel: 610-833-3017
Fax: 610-833-3018
email: sales@raytechusa.com
Web: www.raytechusa.com
RAYTECH is an employee owned company that specializes in the design and manufacture of precision test equipment
for the Electrical Industry. With extensive experience in the
design and application of test equipment, RAYTECH offers
products that truly meet the needs of the testing industry. Our
durable products are used by Manufacturers, Rebuild Shops,

Field Test Crews, Utilities, Rural Electrical CO-OP's, Universities


and Research Engineers.

RHCtest.com
610 Ford Drive Suite 248
Oakville
Ontario L6J 7W4
Canada
Tel : (905) 828-6221
Fax : (905) 828 -6408
Contact : John Riddell
E-Mail : jriddell@rhctest.com
RHCtest.com Inc. is a Canadian owned and operated
Distributor of Electrical Test and Measurement Equipment. We carry various product lines such as Kyoritsu, Thurlby Thandar, Dataq Instruments, Topward Instruments, Nidec Shimpo, High Voltage and Midtronics. We distribute products such as:
Multimeters, Voltage Testers, Clamp Meters, Clamp Adapters,
Voltage and Current Loggers, Power Loggers, Power Analyzers,
Insulation Testers, Earth Resistance Testers, Test leads, DC/AC
Hipots, VLF Hipots, TAN Delta Cable Diagnostics, Thumpers,
Cable and Fault locating products, Power Supplies, Spectrum
Analyzers, RF Generators, DDS Generators, Arbitrary Waveform
Generators, Function Generators, LCR Meters, Micro Ohm Meters,
Frequency Counters, DMMs DC Loads, Strobescopes, Hand Held
Tachometers, Panel Mount Tachometers, Data Acquisition Starter
Kits, Stand Alone Data Loggers, Thermocouple Data Acquisition
Systems, DC Connected Data Acquisition Systems and Battery
Testers.

SKM Systems Analysis Inc.


1040 Manhattan Beach Blvd.
Manhattan Beach, CA 90266
USA
Toll Free : 1-800-232-6789
Fax : 1-310-698-4708
E-Mail : sales@skm.com
SKM Power*Tools software helps you design and analyze electrical power systems. Interactive graphics, rigorous calculations and
a powerful database efficiently organize, process and display information. Associate projects with multiple one-line diagrams and
TCC drawings with customized data fields. Generate better design
with 'what if' scenarios by comparing study results in a single table.
Also includes thousands of validated equipment libraries and the
ability to export project data into AutoCAD DXF and XREF format.
Multiple one-line diagrams can be associated with each project for
better systems organization and presentation. Powerful drawing
tools quickly create a structured, interactive one- line diagram system model.

SKM Systems Analysis, Inc. is a California-based corporation


founded in 1972 with a desire to automate electrical design calculations. SKM has been a leader in the electrical engineering software
industry for more than 30 years, providing quality software, training
and support to thousands of satisfied customers throughout the
world. SKM Systems Analysis, Inc. is also chosen by 39 of the top
40 Electrical Engineering firms in the world.

techniCAL Systems 2002 Inc.


436 Jacqueline Blvd.
Hamilton, Ontario L9B 2R3
Canada: 1-86-MEASURE-1 (1-866-327-8731)
Tel: 905-575-1941 Fax: 905-575-0386
E-mail: sales@technical-sys.com
Web-site: www.technical-sys.com
techniCAL provides electrical contractors and utilities with
Test, Measurement, Calibration, Control & Recording
Instrumentation. Representing Best-of-Breed Manufacturers;
techniCAL provides such products as; Power Quality Analyzers,
Micro-Ohmmeters, Megohmmeters, Insulation Testers, Leakage
Current Meters, Ground Resistance Testers, Data Loggers, High
Voltage Ammeters, Power Transducers, Panel Meters, CTs, PTs,
Shunts, etc
