
Implementation and Simulation of

Communication Network for


Wide Area Monitoring
and Control Systems in
OPNET

Elias Karam

Master of Science Thesis
Stockholm, Sweden 2008





Abstract

The electricity market is, unlike other markets, unique in the sense that power has to
be balanced at all times. An emerging technology, the Phasor Measurement Unit (PMU),
offers accurate and timely data on the state of the power system, providing the
possibility to manage the system at a more efficient and responsive level.
PMU systems have been researched extensively for power system management, in terms
of their contribution to the collected measurements on the state of the power system.
On the other hand, little research has been done on the ability of the current IT
infrastructure to meet the demands of the PMUs, and vice versa. One way to contribute
to this much needed research is to implement models of the PMU system in a simulator
and observe the behaviour of these models under different network parameters.
In this thesis, the PMU requirements are analyzed and implemented in OPNET, a network
simulator. Metrics collected from the simulations are presented and evaluated.















Acknowledgements

First, I would like to thank my supervisors, Lars Wallin at Svenska Kraftnät and
Moustafa Chenine at KTH, for their solid support. Moustafa, I appreciate your
patience and your insightful comments. Moreover, I would like to thank both my
families: Karam in Lebanon, who supported me throughout these years, and Sköld in
Sweden, who made me feel like a member of their family from the first day. My
sincere thanks and love to my partner Evelina Sköld. Last but not least, I would like to
thank Sweden for giving us the opportunity to receive a high-quality education.














Contents
1. Introduction ................................................................................................................ 10
1.1 Problem statement............................................................................................. 10
1.2 Aim and objective............................................................................................... 10
1.3 Chapter Overview............................................................................................... 11
2. Background ................................................................................................................. 12
2.1 The electric power system.................................................................................. 12
2.1.1. Power system automation.............................................................................. 12
2.1.2. Control center ................................................................................................ 12
2.1.3. SCADA............................................................................................................ 13
2.2 Phasor Measurement ......................................................................................... 14
2.2.1. Phasor technology.......................................................................................... 14
2.2.2. Time synchronization ..................................................................................... 15
2.2.3. Phasor measurement unit .............................................................................. 16
2.2.4. Phasor data concentrator ............................................................................... 17
2.2.5. Typical network of phasor measurement units ............................................... 17
2.2.6. Standards for phasor measurement communication ...................................... 17
2.3 Wide Area Monitoring and Control systems........................................................ 18
2.4 Communication.................................................................................................. 19
2.5 Network Protocol ............................................................................................... 21
2.6 Reference Architecture....................................................................................... 21
2.7 Routing Protocol................................................................................................. 21
2.8 OPNET................................................................................................................ 21
2.9 North American SynchroPhasor Initiative (NASPI)............................................... 22
2.9.1. Phasor network experience............................................................................ 22
2.9.2. Phasor network response time estimates....................................................... 24
3. Methodology............................................................................................................... 25
3.1 Literature review................................................................................................ 26
3.2 Network Characterization................................................................................... 26
3.3 Implementation.................................................................................................. 26
3.4 Selection of metrics............................................................................................ 27
3.5 Simulation.......................................................................................................... 27
4. Implementation........................................................................................................... 28


4.1 Data flow............................................................................................................ 30
4.2 Common architecture for the dedicated and shared models............................... 30
4.2.1. Network model .............................................................................................. 30
4.2.2. Subnets .......................................................................................................... 31
4.2.3. Core subnets .................................................................................................. 32
4.3 Shared model implementation ........................................................................... 32
4.4 Dedicated model implementations..................................................................... 36
5. Simulation results........................................................................................................ 41
5.1 End to End delays ............................................................................................... 41
5.1.1. End to End delays from PMUs to PDC in shared model ................................... 41
5.1.2. End to End delays from PMUs to PDC in dedicated model .............................. 42
5.1.3. End to End delays from PDC to WAMC for dedicated and shared models ....... 43
5.1.4. End to End delays from WAMC to substation switch 3, substation switch 6 and
substation switch 8...................................................................................................... 44
5.2 Link throughput and utilization........................................................................... 45
5.2.1. Shared model ................................................................................................. 45
5.2.2. Dedicated model ............................................................................................ 48
5.3 Response time of the designed models............................................................... 51
5.3.1. In shared environment ................................................................................... 51
5.3.2. In dedicated environment .............................................................................. 52
6. Conclusion and future work......................................................................................... 55
References.......................................................................................................................... 56
Appendix............................................................................................................................. 59
SvK's Network characteristics .......................................................................... 59










List of figures:
Figure 1: Functional structure of Power System Automation. [6]............................. 12
Figure 2: SCADA system architecture [13].............................................................. 13
Figure 3: Phase and Magnitude representation from sinusoidal to phasor
representation [17]................................................................................................... 14
Figure 4: Phasor measurements with respect to a reference [17] .............................. 15
Figure 5: Comparison of SCADA and phasor capabilities [17] ................................ 15
Figure 6: Satellites location for GPS [18] ................................................................ 16
Figure 7: Phasor Measurement Unit block diagram [20] .......................................... 16
Figure 8: PDC placement in a network [20]............................................................. 17
Figure 9: Phasor network architecture [17] .............................................................. 17
Figure 10: Communication network architecture for electric system automation [28]
................................................................................................................................ 19
Figure 11: OPNET hierarchical structure................................................................. 22
Figure 12: Generic architecture of a wide area monitoring, control and protection
system [22].............................................................................................................. 23
Figure 13: Phases of work ....................................................................................... 26
Figure 14: Iterative form sequence........................................................................... 27
Figure 15: PMUs location chosen for implementation [35] ...................................... 29
Figure 16: Network model used in both designs....................................................... 31
Figure 17: Internal representation of subnet_1......................................................... 32
Figure 18: Core_2 internal components ................................................................... 32
Figure 19: Background traffic configuration between regional router 6 and control
center router ............................................................................................................ 33
Figure 20: Shared network link capacity.................................................................. 34
Figure 21: PMU node model ................................................................................... 34
Figure 22: PMU_1 traffic loaded from TCP layer ..................................................... 35
Figure 23: PMU_1 traffic sent from the IP layer ....................................................... 35


Figure 24: Configuring the destination address of the transferred data ..................... 36
Figure 25: Link capacities of the dedicated model 64Kb scenario ............................ 37
Figure 26: PMU node model ................................................................................... 38
Figure 27: Traffic sent from PMU_1 IP layer............................................................. 38
Figure 28: Control center subnet configuration in the 128Kb scenarios...................... 39
Figure 29: Control commands sent from WAMC toward substation switches.............. 39
Figure 30: Control command received by substation switch 6................................... 40
Figure 31: ETE delay from PDC to WAMC for the shared model in the 50% and 70%
scenarios. ................................................................................................................ 43
Figure 32: Link throughput and utilization between substation switch to substation
router....................................................................................................................... 46
Figure 33: Link throughput and utilization between the substation router and the
regional router......................................................................................................... 46
Figure 34: Link throughput and utilization between regional router 6 and the control
center router ............................................................................................................ 47
Figure 35: Link throughput and utilization between regional router and substation
router 3.................................................................................................................... 48
Figure 36: Link throughput and utilization between the substation switch and the
substation router...................................................................................................... 48
Figure 37: Link utilization between the substation router and the regional router ..... 49
Figure 38: Link throughput and utilization between the regional router and the core
subnet...................................................................................................................... 49
Figure 39: Link throughput and utilization between regional_router_6 and the control
center router ............................................................................................................ 50
Figure 40: Link utilization between the control center router and the control center
switch...................................................................................................................... 50






List of Tables:
Table 1: ETE delays from PMUs to PDC in the shared model ................................. 41
Table 2: ETE delays from PMUs to PDC in the dedicated model............................. 42
Table 3: ETE delay from PDC to WAMC for the shared model............................... 43
Table 4: ETE delay from PDC to WAMC for the dedicated model .......................... 44
Table 5: ETE delays of the control commands for the 50% and 70% scenarios........ 44
Table 6: ETE delays of the control commands for the 64Kb and 128Kb scenarios ... 45
Table 7: Response time of the shared model in 50% scenario .................................. 51
Table 8: Response time of the shared model in 70% scenario .................................. 52
Table 9: Response time of the dedicated model in 64Kb scenario ........................... 53
Table 10: Response time of the dedicated model in 128Kb scenario ....................... 53
















1. Introduction
The continuing growth in electricity consumption without a parallel increase in
transmission capacity has led to narrower operational margins for many power systems.
As a result, they are operating under unexpected power flow patterns and close to their
stability limits. In order to avoid the consequences, power system efficiency and
reliability must be increased [1]. One way to do so is by employing Wide Area
Monitoring and Control (WAMC) systems based on Phasor Measurement Units (PMUs),
which provide dynamic coverage of the network. The functions of WAMC systems are
generally grouped into two categories: first, functions based on PMUs located in a few
key locations of the network; second, functions based on a sufficient number of PMUs
covering the whole network.
The PMU is considered the most promising measurement technology for monitoring
power systems. The unit is composed of a number of phasor channels that capture
measurements of analog voltage and current waveforms as well as the line frequency.
The Phasor Data Concentrator (PDC), which is located in the control center, gathers
all the phasor data received from the geographically distributed PMUs via the
communication network.
The communication network is one of the components that can be investigated and
upgraded to provide more accurate real-time information between the PMUs and the
control center. In this way, observation of the power balance can be facilitated [2].
Without communication technology, it is difficult, if not impossible, to efficiently
automate and control the power system.
The communication network is an important component of the IT infrastructure that
supports wide area monitoring and control systems. The communication networks
utilized in power system automation are extremely heterogeneous, ranging from
different communication media to different properties. Demands for faster and more
accurate real-time communication for various critical and non-critical operations over
wide geographical areas are generally increasing.
1.1 Problem statement
The PMU is considered a promising technology for monitoring electric power systems;
however, extensive research is lacking on the communication network requirements of
PMUs, a gap that forms the foundation for this thesis and future research. In general,
communication network delays are regarded as a challenging factor affecting the speed
and accuracy of the data that PMUs transfer towards the wide area monitoring and
control systems, and this is what is examined in this thesis.
1.2 Aim and objective
The main aim and objective of this master thesis is to consolidate the performance
requirements of PMUs and to evaluate them, in order to improve the performance and
reliability needed for wide area monitoring and for applying control functions.
The collected data are implemented in dedicated and shared network models and then
simulated with OPNET Modeler, with the aim of observing the communication delays
from the PMUs to the control center and vice versa.


The workflow is as follows:
- Derive the requirements for PMU systems and correlate these requirements with the
possible monitoring and control functions that the PMUs are part of.
- Implement communication networks in OPNET.
- Simulate the communication networks in OPNET.
- Evaluate the outcomes.
1.3 Chapter Overview
Chapter 1: Introduction
This chapter presents the focus area, the problem that this thesis tries to solve, and its
aim and objective.
Chapter 2: Background
The background chapter introduces a theoretical overview of electric power systems,
communication links, protocols and experiences within phasor technology.
Chapter 3: Methodology
This chapter describes the methodology applied to attain the aim of the thesis.
Chapter 4: Implementation
This chapter lists the parameters and presents the OPNET implementations of the
dedicated and shared models.
Chapter 5: Simulation results
This chapter lists the End to End delay, throughput and utilization results. In addition,
the results regarding the time estimates introduced in the North American SynchroPhasor
Initiative (NASPI) section are discussed.
Chapter 6: Conclusion and future work
This chapter concludes all the work done and draws attention to areas that can be
further researched to achieve more accurate times.












2. Background
In this chapter, a description of the electric power system components that are directly
related to implementing phasor measurement unit networks is presented. In addition, a
brief introduction to the protocols and communication links used for the design and
implementation is included. At the end of the chapter, the experience gained with
phasor network components is described.
2.1 The electric power system
Power systems nowadays are more than just a single generating plant. They play an
essential role in the world by providing a highly reliable energy source. However,
they occasionally face outages that affect residential areas and industries around the
world. The number of outages has been increasing, due to increasing consumption and
the lack of upgrades to generation, transmission and distribution lines for economic
and environmental reasons. The frequently asked questions are: Are these the real root
causes of the blackouts [1]? Can we prevent outages from happening by upgrading
the generation and the grid? Investigations of these questions showed that the
reason behind numerous blackouts in the U.S. and Italy [4] was the lack of
information. If power system operators had been able to monitor a wide area of the grid,
the number of outages would have been reduced, since recent network events showed
that anomalies indicating power system instability started to occur days before the
instability became critical [4] [5].
2.1.1. Power system automation
Power system automation refers to the various measurement devices connected to the
network, intelligent electronic devices and the use of computers. Power system
automation offers utilities a set of benefits: monitoring, remote control, automated
power delivery, and reduced operation and maintenance costs. Moreover, the same
gathered information can provide better planning, system design enhancements and an
increase in customer satisfaction. These benefits rely on data acquisition, power system
supervision and power system control working together. More reading on the subject
of power system automation, its architecture and components can be found in [3].

Figure 1: Functional structure of Power System Automation. [6]
2.1.2. Control center
In the 1950s, analogue communications were employed to collect real-time data on
MW power outputs from power plants, as well as tie-line flows, for power companies,
whose operators used analogue computers to conduct Load Frequency Control
(LFC) and Economic Dispatch (ED) [7]. When digital computers were introduced in
the 1960s, Remote Terminal Units (RTUs) were developed to collect real-time
measurements of voltage, real/reactive power and the status of circuit breakers at
transmission substations. This was done through dedicated transmission channels to a
central computer equipped with the capability to perform the calculations necessary for
Automatic Generation Control (AGC), a combination of LFC and ED [8]. The
capability of control centers was pushed to a new level in the 1970s with the
introduction of the concept of system security, covering both generation and
transmission systems [9]. The purpose of security was to resist disturbances or
contingencies. In the second half of the 1990s, a trend began that fundamentally changed
the electric power industry. This came to be known as industry restructuring or
deregulation [10]; the restructuring was carried out so that the industry would become
distributed, fully decentralized, integrated, flexible and open. Details about the functions
and architectures of control centers can be found in [8].
2.1.3. SCADA
Nowadays, the Supervisory Control and Data Acquisition (SCADA) system is the
major communication technology used for monitoring and controlling the electric
power network. SCADA is characterized by star-connected, point-to-point
links connecting substations to control centers. Compared with today's networks,
SCADA links are slow. The control center asks each substation for updated data once
every 2 to 4 seconds. As a result, the control center's picture of the operational status
of the power network is not sufficient to give warning of disruptive events [11]. SCADA
assembles data from various locations through sensors at a factory, plant or other
remote location and then forwards the data to a central computer which runs different
applications. The SCADA system architecture is shown in Figure 2. SCADA systems have
the following base functions [12]:
- Data acquisition
- Monitoring and event processing
- Control
- Data storage archiving and analysis
- Reporting
Each of these functions is described in detail in [12]. The SCADA functions are listed
here in order to compare them with the benefits of a system based on phasor measurements.

Figure 2: SCADA system architecture [13]


2.2 Phasor Measurement
Phasor technology is considered the most promising measurement technology
for power systems. Although SCADA has been the essential component for
monitoring and control in power systems for approximately 50 years, phasor
technology is considered the next generation needed to increase the
reliability of wide area monitoring and control in the network [14].
2.2.1. Phasor technology
A phasor is the representation of a sinusoidal signal in the form of magnitude and
phase with respect to a reference. The phase is the distance between the signal's
sinusoidal peak and a specified fixed point in time used as a reference. In Figure 3, the
reference time is equal to zero and the distance is expressed in angular measure. The
magnitude is correlated to the amplitude of the sinusoidal signal. The use of phasor
technology simplifies the mathematics and electronics required for power systems. This
simplification facilitates PMU monitoring over a wide grid [15]. Moreover, phasors
can also be represented in the complex plane by real and imaginary components.
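To make the magnitude/phase and complex-plane descriptions above concrete, the following
small Python sketch converts an illustrative, arbitrarily chosen sinusoid into its phasor
representation; it is an explanatory example only and is not part of the thesis implementation.

    import cmath
    import math

    # Example sinusoid x(t) = Xm * cos(2*pi*f*t + phi); the numbers are arbitrary.
    Xm = 230.0 * math.sqrt(2)      # peak amplitude of a 230 V (RMS) signal
    phi = math.radians(30.0)       # phase angle relative to the chosen time reference

    # Phasor: RMS magnitude and phase angle, written as one complex number.
    X = (Xm / math.sqrt(2)) * cmath.exp(1j * phi)

    print("magnitude (RMS):", abs(X))                          # ~230.0
    print("phase (degrees):", math.degrees(cmath.phase(X)))    # ~30.0
    print("real / imaginary parts:", X.real, X.imag)

    # Phase angles are only meaningful relative to a common reference phasor
    # (cf. Figure 4); the angle of X with respect to a reference X_ref is:
    X_ref = 230.0 * cmath.exp(1j * math.radians(10.0))
    print("angle relative to reference (degrees):",
          math.degrees(cmath.phase(X / X_ref)))                # ~20.0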


Figure 3: Phase and Magnitude representation from sinusoidal to phasor representation [17]
The expectation behind the deployment of phasor technology is to have a dynamic
view of the system behavior. This reduces the problems that have arisen in most of the
major blackouts that have occurred around the world. Examples of major
blackouts include the August 2003 Eastern Interconnection blackout in the US, the
August 1996 Western Interconnection blackout in the US, and the summer 2003 and
2004 blackouts in Europe (in Italy specifically) [4] [8].
When calculating the phase angles of two or more phasor measurements, one of the
phasors is chosen as a reference and all the other phase angles are computed with
respect to that reference [16]. Figure 4 illustrates phasor measurements with respect
to a reference.



Figure 4: Phasor measurements with respect to a reference [17]
Figure 5 below shows a comparison between the SCADA system and phasor technology,
presenting the parameters that each attribute can hold in the two systems. It is
important to note that phasor technology was not developed to replace the SCADA
system, but to complement it.

Figure 5: Comparison of SCADA and phasor capabilities [17]
2.2.2. Time synchronization
Signal synchronization is needed to supply a common timing reference for the phasor
measurements in an electric power network. It can be provided by two kinds of sources,
local or global. The reference signal should follow Coordinated Universal Time
(UTC), with a repetition of 1 pulse per second at all measurement sites and a
synchronization accuracy within 1 microsecond of UTC.
Local signal synchronization is broadcast from a central station through
transmission systems such as AM radio broadcast, microwave and fiber optics. Details
about local broadcasting are not covered in this thesis, since it is not deployed in
PMUs; these transmission systems either have low accuracy or incur high installation
costs [10].
Global signal synchronization is broadcast from satellites, more precisely by the
Global Positioning System (GPS). GPS is a U.S. Department of Defense
(DoD) satellite-based radio-navigation system. It consists of 24 satellites arrayed to
provide a minimum worldwide visibility of four satellites at all times. Each satellite
transmits a timed navigation signal from which a receiver can decode time
synchronized to within 0.2 ms of UTC. The inherent availability, redundancy,
reliability and accuracy make it a system well suited for synchronized phasor
measurement systems [18]. Figure 6 illustrates the locations of the satellites around
the earth.

Figure 6: Satellites location for GPS [18]
More details about PMU synchronization, whether PMUs receive the signal directly or
from an independent receiver, and PMUs that use fixed-frequency synchronized
sampling can be found in [18].
2.2.3. Phasor measurement unit
Installation of this unit is still at an experimental stage in many power systems. The unit
is composed of a number of phasor channels that capture measurements of analog
voltage and current waveforms as well as the line frequency. The phasor measurements
are digitized by an analog-to-digital converter and stamped with the creation time
provided by a GPS clock. GPS clocks are used for the synchronization of multiple PMUs
with a precision of at most 1 microsecond difference. Afterwards, the data are
transferred to a phasor data concentrator, which is explained in the next section. The
PMU provides a dynamic observation of the network, because the measurements are
taken at a high sampling rate from geographically distant locations and are then
grouped together according to the time stamp provided by the GPS. A phasor
measurement unit transmits samples of different sizes; the sample size depends on
the number of phasors in a unit. Sample and packet have the same meaning when
talking about the PMU transfer rate. The required transfer rate differs between a 50 Hz
system and a 60 Hz system: a 60 Hz system has a rate of up to 60 samples
per second, while a 50 Hz system has up to 50 samples per second [19]. Figure 7
shows the block diagram of a PMU.

Figure 7: Phasor Measurement Unit block diagram [20]


2.2.4. Phasor data concentrator
It is at the PDC that collection, concentration, correlation and synchronization of phasor
data samples take place. Samples with the same creation time are encapsulated in one
packet, which is then transmitted as a single stream to the phasor data server. In addition,
the PDC performs a number of quality checks, such as inserting the appropriate flags
in the correlated data, checking for disturbance flags and recording the data for
offline analysis. The PDC information can also be an input to the SCADA system.
A super PDC collects data from all PDCs in the network and then treats the collected
data the same way as a normal PDC does [17]. Figure 8 shows the placement of a
PDC in a phasor network.
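As a simplified illustration of the concentration step described above (the actual PDC
logic is vendor specific and operates on IEEE 1344/C37.118 frames), grouping samples by
their GPS creation time could be sketched in Python as follows; the sample data and
structures are invented for the example.

    from collections import defaultdict

    # Illustrative PMU samples: (pmu_id, gps_timestamp_s, payload_bytes).
    samples = [
        ("PMU_1", 0.000, b"\x00" * 76),
        ("PMU_2", 0.000, b"\x00" * 76),
        ("PMU_1", 0.033, b"\x00" * 76),   # next sample at roughly 30 samples/s
        ("PMU_2", 0.033, b"\x00" * 76),
    ]

    def concentrate(samples):
        """Group samples that share a creation time into one outgoing packet."""
        by_time = defaultdict(list)
        for pmu_id, t, payload in samples:
            by_time[t].append(payload)
        # One aggregated packet per time stamp, streamed on to the phasor data server.
        return {t: b"".join(group) for t, group in sorted(by_time.items())}

    for t, packet in concentrate(samples).items():
        print(f"t = {t:.3f} s -> aggregated packet of {len(packet)} bytes")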

Figure 8: PDC placement in a network [20]
2.2.5. Typical network of phasor measurement units
PMUs located at various locations of the electric system network gather real-time data
and transmit them. A PDC at the control center receives the data and aggregates them.
A computer which is connected to the output of the PDC provides the users with
software applications that display measured frequencies, primary voltages, currents
and MWs for the operators [17]. Figure 9 shows phasor network architecture.

Figure 9: Phasor network architecture [17]
2.2.6. Standards for phasor measurement communication
To be able to interchange phasor measurements between a variety of devices and
users that utilize PMUs bought from different vendors and using different communication
protocols, standards need to be set up for all vendors to follow. The standard for
phasor network messaging is called the synchrophasor standard for power systems.


Synchrophasors address issues like [18]:
- Synchronization of data sampling
- Data to phasor conversions
- Formats for timing input and phasor data output

The synchrophasor standards for power systems provide the means to synchronize
input and output data for phasor measurements, as well as guidance for developers and
users of digital computer-based substation systems. Two synchrophasor standards are
available, IEEE 1344 and IEEE C37.118. Both standards share a common base, such as
time synchronization and phasor calculation, but the latter is an updated version
intended to enhance the accuracy of measurements. The message format has been
modified in IEEE C37.118 in order to improve information exchange with other
systems and to add more value to the total process.
IEEE C37.118 improves PMU interoperability with the following three major
contributions [16]:
- Refined definition of an Absolute Phasor referring to GPS-based and nominal
frequency phasors, as well as time-stamping rule;
- Introduction of the TVE (Total Vector Error) to quantify the phasor measurement
errors;
- Introduction of the PMU compliance test procedure.

For more reading about these topics, refer to IEEE publications concerning standards
for synchrophasors (can be found in [18] [21]).
2.3 Wide Area Monitoring and Control systems
The utilization of the electric power system is a major concern for utilities and grid
operators, due to continuous load growth without an increase in transmission
resources. Many power systems are operating close to their stability limits [22]
and under unexpected power flow patterns. Implementation of advanced control and
supervision systems could prevent such events, which is where the idea of WAMC
came from. WAMC functions depend on different application requirements, and these
applications in turn depend on the number and locations of the PMUs installed in the
system.
WAMC system networks are unique to each electric power system company. The
WAMC system technology platform enables the development of new applications for
the enhancement of power system control and operation. A WAMC system is considered
unique because it is designed based on the needs of the company, taking into
consideration the available technology, the economic constraints, the number of
PMUs installed in the power system and which applications have priority for
improvement [23].
WAMC applications are divided into two different types:
First, applications implemented based on PMUs installed in a few key locations of the
network, giving a partial observation of the power system state. Examples of such
applications are:
- Voltage stability monitoring for transmission corridors [25]
- Oscillatory stability monitoring [26]
- Coordination of FACTS control using feedback from remote PMU
measurements, to enhance transmission capacity restricted by, for instance,
voltage stability [27].
Each application has its own requirements in terms of the number of required
measurement samples. However, the same measurements can often be used for more
than one application.
Second, a range of more advanced applications based on a detailed network model
view, given that a sufficient number of PMUs have been placed so that the network
state can be completely calculated. For example:
- Loadability calculation using OPF or other optimization techniques [27]
- Topology detection and state calculation [24]
2.4 Communication
The main component providing monitoring and control of the electric grid today is the
SCADA communication technology. It connects the substations to the
control center, with a link capacity of thousands of bits per second and an update rate
of once every 2 to 4 seconds. This is a very slow rate of data exchange
compared with modern communication networks. Due to this limited
communication, control of the power grid is handled locally, i.e. on a substation basis.
Making the relation between the WAMC applications and the PMUs more accurate
therefore depends on the use of new technologies: increasing storage capacities in
computers, deploying faster communication transport, and making the software more
flexible. These three factors, together with the Internet's already existing
communication infrastructure, point towards real-time monitoring and control of the
electric power system. From an IT point of view, this can be realized with a hybrid
network. Figure 10 shows the communication network architecture for an electric power
system. The hybrid network is divided into two parts [28]:

Figure 10: Communication network architecture for electric system automation [28]
High-speed communication core network: Depending on industrial needs, this can be a
private or a public network. For example, an Internet-based Virtual Private
Network (VPN) can be considered a cost-effective high-speed communication core
network for electric system automation. A detailed explanation of VPN performance
can be found in [28].
Last mile connectivity: This includes the communication media between the
substations and the high-speed communication core network. The most suitable
communication media for last mile connectivity are I-Power line communication, II-
Satellite communication, III-Optical fiber communication and IV-Wireless
communication. In the following subsections, the advantages and disadvantages of
each medium are introduced [28].
I-Power Line Communication: With PLC, there is no need to construct a
communication infrastructure, since the principle of this technology is to transfer
data, at a rate of 4 Mbps, and electricity at medium voltage (15/50 kV) simultaneously.
PLC can also be deployed on low-voltage power lines (110/220 V). This provides users
with wide coverage and reduced installation cost, since there is no need to build a
communication infrastructure where power lines already exist. On the other
hand, a wide range of drawbacks is also present in PLC, ranging from low security,
small capacity, limited energy and frequency usage, open-circuit problems, and signal
attenuation and distortion, to high noise levels over power lines, which result
in high bit error rates during communication [29].
II-Satellite communication: Satellite communication also has the advantage of not
requiring a wired network to be installed. A substation can profit from the high-speed
service and global coverage provided by satellite communication once the necessary
technical equipment is in place. As for drawbacks: first, the long round-trip delay;
second, the weather affects the characteristics of the communication; and last, satellite
communication has a high usage cost compared with other communication options [29].
III-Optical fiber communication: In electric system automation, optical fiber is
one of the technically most attractive communication infrastructures, because of the high
performance provided by its extremely high bandwidth capacity. A single
wavelength offers transmission rates up to 10 Gbps, while multiple wavelengths, known
as wavelength division multiplexing (WDM), offer transmission rates from 40 Gbps to
1600 Gbps. In addition, optical fiber communication systems require fewer repeaters
than other wired infrastructures, with a repeater needed only every 100 to 1000 km [30].
Further advantages of optical fiber communication systems, compared with other
communication infrastructures, are their low bit error rates (BER = 10^-15) and their
immunity against electromagnetic interference (EMI) and radio frequency
interference (RFI). This makes optical fiber an ideal communication medium for the
high-voltage operating environment in substations [31]. The disadvantage of this medium
is its expensive installation cost. However, as a result of its high performance and
capabilities, once this infrastructure is in place it can be shared among a number of
users and networks, which in turn opens a discussion of whether the cost is really a
disadvantage.
IV-Wireless Communication: In wireless communication, two options can be
deployed: first, using the existing communication infrastructure of a public
network, e.g. a cellular network; recently, the Short Message Service (SMS) has been
applied to remotely control and monitor substations [32]. Second, installing a private
wireless network, which allows electric utilities to have more control over their
communication networks. The relatively low cost and rapid installation are considered
the advantages of wireless communication, while the disadvantages lie in
three factors: limited coverage, capacity and security [28].
2.5 Network Protocol
The Transmission Control Protocol/Internet Protocol (TCP/IP) is known as a low-level
protocol and is used mainly over Ethernet. TCP/IP is used to transport data because it
provides a highly reliable connection, using checksums, congestion control and automatic
resending of corrupted or missing data. TCP/IP can have problems with streaming
continuous data, because an error will cause the data stream to be held back for a
period of time while TCP attempts retransmission of the missing data [36].
A detailed explanation of the TCP/IP packet structure, connection establishment, and its
layers can be found in [37].
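The following minimal, self-contained Python sketch streams fixed-size measurement frames
over a loopback TCP connection, illustrating the reliable, connection-oriented transport
described above; the 76-byte frame, port number and rate are assumptions for the example,
not values taken from the thesis models.

    import socket
    import threading

    HOST, PORT = "127.0.0.1", 50007    # loopback address and an arbitrary port
    FRAME = b"\x00" * 76               # placeholder for a C37.118-sized data frame

    srv = socket.create_server((HOST, PORT))   # listening socket opened up front

    def receiver():
        # TCP itself provides checksums, ordering and retransmission of lost segments.
        conn, _ = srv.accept()
        with conn:
            received = 0
            while True:
                data = conn.recv(4096)
                if not data:
                    break
                received += len(data)
        print("receiver got", received, "bytes")

    t = threading.Thread(target=receiver)
    t.start()

    # "PMU" side: one long-lived TCP connection over which frames are streamed.
    with socket.create_connection((HOST, PORT)) as sock:
        for _ in range(30):            # e.g. one second of data at 30 frames/s
            sock.sendall(FRAME)

    t.join()
    srv.close()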
2.6 Reference Architecture
The Enhanced Performance Architecture (EPA) describes a method in which only the
application, data link and physical layers are used for data communication. EPA is
used when devices are equipped with identical or very similar operating systems, and
data coding is either nonexistent or very simple [40]. EPA is designed to provide
faster message response times by skipping the upper layers and connecting the
application directly to the data link layer, in an effort to streamline critical
communications [39].
2.7 Routing Protocol
A routing protocol defines the way routers communicate with each other. Open
Shortest Path First (OSPF) is a routing protocol developed by the Internet
Engineering Task Force (IETF). OSPF is a link-state, hierarchical routing protocol. It
is capable of performing neighbor discovery on different types of networks with minimal
need for configuration. OSPF's simplest interface type is the point-to-point interface,
which is used when there is only one neighbor on the other side of the link [41].
2.8 OPNET
OPNET Modeler is a large and powerful software package specialized for network
research and development. OPNET offers the possibility to design and study
communication networks, devices, protocols and applications with great flexibility.
OPNET contains a large library of accurate models of commercially available fixed
network hardware and protocols. Its hierarchical modeling structure is divided into three
main domains, which are illustrated in Figure 11:
- Network model (highest level): the entire network, sub-networks, network
topologies and geographical coordinates.
- Node model: a single network node, e.g. routers, workstations, switches,
servers and mobile devices.
- Process model: single modules and source code inside network nodes,
e.g. data traffic source models, MAC, IP and TCP.



Figure 11: OPNET hierarchical structure
OPNET will never provide perfect accuracy, as simplifying assumptions
are required in order to implement and simulate a network in a reasonable amount of
time. More about OPNET can be found in [42].
2.9 North American SynchroPhasor Initiative (NASPI)
Throughout the study, research related to WAMC systems was examined. Valuable
knowledge was obtained from online and published resources provided by NASPI
[33]. These resources describe guides and test results concerning PMU installation
procedures, as well as application developments in wide area monitoring, analysis,
control and protection.
NASPI is an extension of the Eastern Interconnect Phasor Project (EIPP) of North
America. The vision was to improve power system reliability through wide-area
measurement, monitoring and control. The organization is divided into task teams,
including members from the Western Electricity Coordinating Council (WECC). The
NASPI task team titles are business management, data & network management,
equipment placement, operations implementation, performance and standards, planning
implementation and research initiatives. The listed titles are accessible on the
organization's website [33], where a number of documents about each task and reports
describing practices within the field are available.
2.9.1. Phasor network experience
The first monitor developed to display phasor measurements was presented in the mid-
1980s and was funded by the Department of Energy (DOE). After launching several
projects to enhance the system, the researchers were able in 1996 to record real-time
measurements of the power system breakups and blackouts that occurred in the western
U.S. with a wide area monitoring system based on GPS-synchronized measurements.
Currently, more than twenty North American utilities have PMUs installed in their
substations, but their levels of experience are not the same. The Eastern
Interconnection utilities are still in the initial stage of implementing and networking
PMUs, while the utilities in the Western Interconnection, motivated by the wide area
monitoring project, have already developed a wide area phasor network in
combination with monitoring and post-disturbance tools. Moreover, plans have been
established to identify and set out prototypes for wide area real-time control and
protection systems using phasor technology infrastructures [22]. Figure 12 shows a
generic architecture of a Wide Area Monitoring, Control and Protection (WAMCP)
system.

Figure 12: Generic architecture of a wide area monitoring, control and protection system [22]
Layer 1, Phasor Data Acquisition: the PMUs and Digital Fault Recorders (DFRs) that are
located in substations to measure voltage, current and frequency.
Layer 2, Phasor Data Management: the PDC collects the data sent from PMUs and other
PDCs and correlates them into a single data set, which is then streamed to applications
via the application data buffer.
Layer 3, Data Services: data are served to the different applications. Some of the
services include supplying the data in the proper format required by the applications,
and fast execution to leave sufficient time for running the applications within the
sampling period. Moreover, system management takes place in the data services layer
by monitoring all the input data for loss, errors and synchronization.
Layer 4, Applications: phasor data applications are divided into three parts [22]:
1. Monitoring and Analysis: real-time wide-area load generation balance,
ACE-Frequency, wide-area real-time grid dynamics monitoring.
2. Real Time Control: wide-area remedial action, emergency frequency
control, oscillation damping.
3. Adaptive Protection: coordinated adaptive protection, dynamic settings for
local protection using phasor measurements.
2.9.2. Phasor network response time estimates
Having presented the generic architecture of the WAMCP system, it can be seen that
the communication infrastructure and the delays of the different hardware/software
platforms play a crucial role in the response time of a phasor network. This is because
effective protection depends on the speed at which the control center can identify
and analyze an emergency, in addition to the time needed before a control action takes
effect. It has been researched and documented in [20] that the total process of
obtaining a consistent system involves the following six activities, with time
estimates for each:
1. Sensor Processing Time: 5 ms
2. Transmission Time of Information: 10 ms
3. Processing Incoming Message Queue: 10 ms
4. Computing Time for Decision: 100 ms
5. Transmission of Control Signal: 10 ms
6. Operating Time of Local Device: 50 ms

TOTAL Time: 185 ms [20]
Sensor processing time represents the time taken for a signal to be captured, digitized
and ready to be transferred in the IEEE 1344 or C37.118 packet. This activity is
hardware dependent and processing time differs according to the PMU manufacturers.
Transmission time of information represents the delay time from when data are
transferred by PMUs until they are received by the PDC. This activity is network
dependent and transmission time differs according to network communication media,
communication protocols and network usages.
Processing incoming message queue represents the time that the PDC needs to sort
the received data and be ready to transfer them again. This activity is hardware
dependent and processing time also differs according to the developer of the PDC.
The computing time for decision is software dependent. The software, known as the
WAMC system, is either bought or developed in-house. The computing time for
decision differs between applications and also depends on the workstation speed.
Transmission of control signal represents the delay time from when a monitoring and
control system sends a command through the network to when the command is
received by a device in the substation. This activity is network dependent and
transmission of the control signal differs according to network communication media,
communication protocols and network usages.
Operating time of local device represents the time needed for a device to take action
when it receives a command. This activity is hardware dependent and the operating
time is different for each device.


The presented time estimations are based on the assumption that the utilities have a
complete fiber optic network with dedicated channels, which is necessary for high
priority communication and control signals [20].
In the implemented OPNET models, it was possible to investigate the network-dependent
times. The times that depend on software and hardware, however, were hard to obtain
within the time limitations of this work. The network-dependent times captured from the
simulations were used to replace the estimated times listed for the transmission time of
information activity as well as for the transmission of control signal activity, in order
to estimate the consistency of the designed models.
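A small Python sketch of this substitution is shown below: the 185 ms budget from [20] is
kept as a dictionary and the two network-dependent entries are overridden by simulated
end-to-end delays. The simulated values used here are placeholders, not results from
Chapter 5.

    # Response-time budget from [20]; all values in milliseconds.
    budget = {
        "sensor_processing":         5,
        "transmission_pmu_to_pdc":  10,   # network dependent
        "pdc_message_queue":        10,
        "decision_computation":    100,
        "control_signal_transfer":  10,   # network dependent
        "local_device_operation":   50,
    }

    def response_time(budget, simulated=None):
        """Total response time; 'simulated' overrides network-dependent entries."""
        values = dict(budget, **(simulated or {}))
        return sum(values.values())

    print("estimated total:", response_time(budget), "ms")        # 185 ms

    # Placeholder ETE delays taken from a simulation run (illustrative numbers only).
    simulated = {"transmission_pmu_to_pdc": 25.0, "control_signal_transfer": 12.0}
    print("with simulated delays:", response_time(budget, simulated), "ms")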




















3. Methodology
In this chapter, the method of working is presented.


Figure 13 shows the phases followed to achieve the desired goals. These phases are
described in detail in the following sections.

Figure 13: Phases of work
3.1 Literature review
The review was first carried out to understand the architecture and components of
electric power systems as well as its behavior, most of the material is included in
chapter 2 as background material. The resources for the literature review were
collected from internet through published technical papers, and experiences related to
such systems, especially by NASPI [33].
3.2 Network Characterization
After the literature review, draft communication network models were formulated.
The models were then verified by contacting Svenska Kraftnät (SvK) to understand
the general configuration of a utility communication network, such as
traffic levels, communication capacities, geographical distances, etc. This was an
important phase, since it led to more specific insight into the architecture of typical
utility networks, which could then be generalized into simulation models.
3.3 Implementation
In the implementation phase, the simulation models were implemented in OPNET, and
the network characterization collected in the previous phase, as well as general
information from the reviewed literature, was applied to the models.


The implementation phase was iterative; enhancement and expansion of the models
were done gradually. The iterative sub-phases are illustrated in Figure 14.

Figure 14: Iterative form sequence
The first part of the implementation was to implement a prototype with initial
specifications. This was followed by simulating the prototype and analyzing the
collected data. A re-specification of the prototype was then done by adding more
advanced characteristics to the models, followed by the simulation and analysis part.
3.4 Selection of metrics
The metrics selected for this work were:
- End to End Delay: Delay measures the time difference in the transmission of
information across a system [38]. End to End (ETE) delay is the time taken for a
packet to reach its destination. This metric was collected separately for
each source and destination pair.
- Throughput: The throughput is the average rate of delivered data. This metric is a
good indicator of the efficiency of the data transfer [39].
- Utilization: The utilization is the percentage of a link's capacity that is consumed
(see the sketch following this list).
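The following sketch shows, with invented per-packet records, how the three metrics are
computed and how they relate to each other; it is illustrative only and is not how OPNET
computes its built-in statistics.

    # Each record: (send_time_s, receive_time_s, size_bytes); the values are invented.
    packets = [(0.000, 0.021, 76), (0.033, 0.055, 76), (0.066, 0.090, 76)]
    link_capacity_bps = 64_000                    # e.g. a 64 kbit/s link

    ete_delays = [rx - tx for tx, rx, _ in packets]
    duration = max(rx for _, rx, _ in packets) - min(tx for tx, _, _ in packets)
    throughput_bps = sum(size * 8 for _, _, size in packets) / duration
    utilization_pct = 100.0 * throughput_bps / link_capacity_bps

    print("mean ETE delay:", sum(ete_delays) / len(ete_delays), "s")
    print("throughput:", throughput_bps, "bit/s")
    print("utilization:", utilization_pct, "%")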
3.5 Simulation
There are several techniques to evaluate the performance of a network architecture,
such as statistical analysis, network monitoring or simulation. In this work, computer
simulation was chosen as the evaluation tool, since it has the great advantage of being
less expensive than building a network and performing monitoring on it. Different
simulators are available, such as ns2/ns3, NetSim, OPNET and QualNet, and it is
usually up to the designer to choose the modeler that best fulfills his or her specific
needs. In this thesis, OPNET Modeler version 14.5 was used as the simulator for the
following reasons. Firstly, OPNET is a user-friendly tool with a library that contains a
wide choice of components, which helps to make the implementation phase easier for
the designer. Secondly, with OPNET, different simulations can be run at the same
time, which makes the analysis of the results more comparable and easier to understand.
The simulation was needed for the following reasons:
- PMUs have not yet been installed at all sites, so there was a need to investigate
their behavior in a shared environment. In addition, it was important to
observe the capacity consumed by the PMUs' data in a dedicated environment.
- The simulation helped in understanding networking and modeling.




















4. Implementation
This chapter presents the implementation of the networks. Two network models were
implemented, a dedicated model and a shared model. The implementation followed
the iterative sequence described in Section 3.3 of the methodology chapter.
To obtain a representative architecture for the network models and to fulfill the
objectives of the thesis, the requirements of the PMU were collected from the literature
review. They included the PMU transfer rate, connection to the network and packet
size. For topologies, protocols and communication between the PMUs and the control
center, the requirements were obtained from both the network characterization and the
literature review. The former was performed to examine the problems faced by existing
networks, while the literature review was done to compare new technologies with those
already used in electric system networks, in order to conclude whether the new
technologies add any enhancement to the networks.

Figure 15: PMUs location chosen for implementation [35]
Figure 15 shows the locations of installed PMUs and PMUs planned to be installed in
the Nordic region [35]. In this work, ten PMU locations were chosen for the
implementation. The selection criterion was the geographical location of the PMUs,
such that they covered all electric transmission paths between Sweden and the
neighboring countries. These locations were implemented using the distance-based delay
option in OPNET, which calculates the delay based on the geographical distance
between two nodes. The control center location was chosen to be in Stockholm.
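The kind of calculation performed by the distance-based delay option can be sketched as
follows; the great-circle formula, the propagation speed factor and the example
coordinates are assumptions made for illustration and are not the exact values or
locations used in the models.

    import math

    SPEED_OF_LIGHT = 299_792_458.0     # m/s
    PROPAGATION_FACTOR = 2.0 / 3.0     # assumed signal speed relative to c in the medium

    def great_circle_km(lat1, lon1, lat2, lon2):
        """Approximate surface distance between two coordinates (haversine), in km."""
        r = 6371.0
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
        a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
        return 2 * r * math.asin(math.sqrt(a))

    def propagation_delay_ms(distance_km):
        return distance_km * 1000.0 / (SPEED_OF_LIGHT * PROPAGATION_FACTOR) * 1000.0

    # Example: a site in northern Sweden and Stockholm (coordinates are approximate).
    d = great_circle_km(65.6, 22.2, 59.3, 18.1)
    print(f"distance ~{d:.0f} km, propagation delay ~{propagation_delay_ms(d):.2f} ms")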
Two network models were implemented, the first using a shared communication link and
the second using dedicated communication channels. In the shared model, two different
percentages of background traffic were introduced. In the dedicated model, two different
communication channel capacities were used. The transfer rate of the PMUs was set to
30 samples/second, chosen because the literature review showed that the maximum rate
needed for wide area monitoring functions was 30 samples/second. The OSPF routing
protocol was used for both implemented models, with static routing tables in the
dedicated model and dynamic routing tables in the shared model. The TCP/IP protocol
was chosen following the idea that switched Ethernet using TCP/IP could fulfill the
real-time requirements of the implemented applications. An analysis showing that
switched Ethernet using TCP/IP is able to fulfill the real-time requirements of electric
power systems can be found in [34]. In some implementations, the parameters were
simplified or modified; the simplification was due to time limitations, while the
modifications were made to examine the differences between different network
configurations.
4.1 Data flow
The data flow section refers for getting, or collecting data of the implemented models
using OPNET. The data was generated from computer components named as PMU.
These computers were configured to generate samples rate similar to a real PMU,
including the same packet size. Created data was transferred to a substation switch,
and then to the substation router that in turn transferred the received data to the
regional router. From regional router, the data were transferred to the core network.
From core network, the data were transferred to the PDC. The PDC was located in the
control center subnet. The implemented models had a sampling rate of 30 samples per
second transferred from each PMU with a packet size of 76 bytes. The 76 bytes
represented the size of C37.118 packet, containing the data of 10 phasor
measurements, six analogue measurements and one digital measurement. The PDC
was implemented as a computer component and was configured to generate the same
rate as a PMU, 30 samples per second with a packet size equal to 760 bytes
containing the measurements taken from 10 PMUs. The PDC transferred the data
towards the WAMC. The WAMC was implemented as a computer component and
was configured to send control commands to three different PMUs at three different
times. The WAMC configuration was done to make it possible to capture the time
taken for a control command to be received by a substation.
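As a rough consistency check of the 76-byte figure, the sketch below adds up the fields
of a C37.118 data frame under the assumption of 16-bit integer phasor, frequency and
analogue formats; the field widths are an assumption for illustration, since other format
choices in the standard give different sizes.

# Rough sketch of the assumed C37.118 data frame size and the resulting
# PMU/PDC data rates used in the models. Field widths assume 16-bit integer
# formats; other format choices give different sizes.

HEADER = 14        # SYNC, FRAMESIZE, IDCODE, SOC, FRACSEC (assumed widths)
STAT = 2
PHASOR = 4         # one phasor, 16-bit integer rectangular (assumed)
FREQ_DFREQ = 4
ANALOG = 2         # one analogue value, 16-bit integer (assumed)
DIGITAL = 2        # one digital status word
CHECKSUM = 2

def frame_size(n_phasors: int, n_analogs: int, n_digitals: int) -> int:
    return (HEADER + STAT + n_phasors * PHASOR + FREQ_DFREQ
            + n_analogs * ANALOG + n_digitals * DIGITAL + CHECKSUM)

if __name__ == "__main__":
    pmu_bytes = frame_size(10, 6, 1)          # -> 76 bytes, as used in the models
    rate = 30                                  # samples per second
    print(f"PMU frame: {pmu_bytes} bytes, {pmu_bytes * rate * 8} bit/s")
    pdc_bytes = 10 * pmu_bytes                 # PDC packet aggregating 10 PMUs
    print(f"PDC frame: {pdc_bytes} bytes, {pdc_bytes * rate * 8} bit/s")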
4.2 Common architecture for the dedicated and shared models
This section lists the common architecture that was implemented for the dedicated and
shared models. It included the network model, subnets and core subnets
implementations.
4.2.1. Network model
The geographical locations of the PMUs and routers inside the subnets, the subnets
themselves, the core subnets and the control center subnet were common for both the
dedicated and the shared models. Figure 16 shows the OPNET implemented locations
of the subnets, core subnets and control center subnet. The contents of the subnets are
described in the subsection 4.2.2. The dedicated and shared models shared the same
locations and distances for the communication links connecting the subnets. The link
capacity, however, was different and is described in detail in sections 4.3 and 4.4.



Figure 16: Network model used in both designs
4.2.2. Subnets
The locations of the PMUs were chosen from Figure 15, which lists the already
installed PMUs and the PMUs planned to be installed in the near future. Ten PMU
locations were chosen to be implemented for the dedicated and shared models. Each
PMU was connected to a substation switch, which in turn was connected to a substation
router. The substation router was connected to a regional router. Figure 17 shows the
internal representation of subnet_1. All subnets were implemented in the same form
except subnet_5, which was implemented with one PMU, one substation switch and
one substation router. The links between the PMUs and the substation switch, and
between the substation switch and the substation router, were 100BaseT links in both
the dedicated and the shared models. The link between the substation router and the
regional router was unique for each model and is described in detail in sections 4.3
and 4.4. This link is nevertheless mentioned in the common architecture section because
the locations and distances between those two components were the same for both models.



Figure 17: Internal representation of subnet_1
4.2.3. Core subnets
An additional common design implemented for the dedicated and shared models was
the core subnets. Figure 18 shows the architecture of the core subnets implemented in
OPNET. They are small compared to a real core network, which is composed of
hundreds of routers; this simplification was one of the limitations of the thesis work,
due to the time limitation. The core subnets were designed with a mesh topology. The
core_2 subnet is shown in Figure 18 and is composed of routers and communication
links. The routers were common for the dedicated and shared models, while the links
were unique according to each model's specifications.

Figure 18: Core_2 internal components
4.3 Shared model implementation
The goal of implementing the shared model in OPNET was to create a scenario that
resembles a real network, in which the existing traffic shares the communication links
with the traffic introduced by a phasor network. To fulfill this goal, the


background traffic was introduced in the communication links. This background
traffic represented the existing traffic of a network, and it was configured within the
attributes of the communication link as shown in Figure 19. The background traffic
flow was present from the substation switch of each PMU toward the control center
switch. The shared model was simulated in two different scenarios. In the first
scenario, the percentage of background traffic was 50%, chosen to
check how the model would behave under moderate traffic; in the second scenario, 70%
background traffic was chosen, to check the performance of the system when the network
was heavily used.

Figure 19: Background traffic configuration between regional router 6 and control center router
As for the background traffic flowing from the control center toward the substation
switches, it was set to 20% in both scenarios. This was because it was
assumed that the traffic sent from the control center would be much less than the
traffic the control center received from the remote devices and components.
The shared model was implemented in OPNET as follows. A
2Mb communication link was used between the subnets, the core subnets and
the control center. The 2Mb capacity was chosen because, in fiber optical networks
using standards such as Synchronous Digital Hierarchy (SDH), the fiber optic link can
be channelized into multiple fixed rate channels, and a 2Mb channel
is a frequent size.
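To make the traffic proportions concrete, here is a small sketch, an illustrative
calculation rather than part of the OPNET configuration, of how much of a 2Mb link the
background traffic levels used in the scenarios correspond to:

# Illustrative calculation of the background traffic load on a 2 Mbit/s link
# for the shared-model scenarios. The percentages are taken from the text;
# the conversion itself is straightforward arithmetic.

LINK_CAPACITY_BPS = 2_000_000   # 2 Mbit/s shared link

def background_load_bps(fraction: float) -> float:
    """Background traffic in bit/s for a given utilization fraction."""
    return LINK_CAPACITY_BPS * fraction

if __name__ == "__main__":
    for pct in (0.50, 0.70, 0.20):
        print(f"{pct:.0%} background traffic -> {background_load_bps(pct):,.0f} bit/s")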
Figure 20 shows the 2Mb link capacity between all subnets implemented in OPNET.



Figure 20: Shared network link capacity
For the communication protocol configuration of the shared model, TCP/IP was used.
A detailed TCP/IP configuration was beyond the focus of this thesis, so the default
configuration was kept. TCP/IP was adopted in order to detect whether the flow of data
through the TCP layers would affect the ETE delay.
Once a PMU packet is created, it is transferred according to the TCP/IP
protocol, as shown in the node model in Figure 21. The blue line shows the path of
the packets, which are created in the application module and transmitted from
ip_tx_0_0.
OSPF was used as the routing protocol in the shared model.

Figure 21: PMU node model


The transfer rate of the PMUs was 30 packets/second. In the application definition,
the transfer rate, the destination address and the packet size (assumed to be 76 bytes in
this thesis) were configured. Each packet was encapsulated in a TCP packet. If the
PMU packet size exceeded the size of the data field of a TCP packet, the PMU
data would be divided into two or more TCP packets. Figure 22 shows the data
rate of the packets leaving the TCP module node in PMU_1. The same rate was used
for all ten PMUs.

Figure 22: PMU_1 traffic loaded from the TCP layer
Arriving at the IP module node, the number of packets increased to 34
packets/second, as shown in Figure 23. This increase in the number of packets was
caused by fragmentation in the IP layer: the IP packet data field was not large
enough to encapsulate every TCP packet, so the IP layer fragmented the 30 TCP
packets per second into 34 IP packets per second.
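The following sketch shows the generic relationship between a segment size, the
maximum transmission unit (MTU) and the number of IP fragments; it is a simplified
illustration with assumed header sizes, and it does not reproduce the exact behaviour of
the OPNET node model.

import math

# Simplified sketch of IP fragmentation: how many IP packets are needed to
# carry one upper-layer segment of a given size. Header sizes are assumptions
# for illustration; they are not taken from the OPNET node model.

IP_HEADER = 20          # assumed IPv4 header size in bytes

def ip_fragments(segment_bytes: int, mtu_bytes: int) -> int:
    """Number of IP packets needed for one segment given the link MTU."""
    payload_per_fragment = mtu_bytes - IP_HEADER
    return math.ceil(segment_bytes / payload_per_fragment)

if __name__ == "__main__":
    # A segment larger than the MTU is split into several IP packets, which is
    # why the packet rate seen at the IP layer can exceed the rate at which
    # the application generates samples.
    print(ip_fragments(segment_bytes=1460, mtu_bytes=576))   # -> 3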

Figure 23: PMU_1 traffic sent from the IP layer
The destination address of all PMUs was the PDC, which was located in the
control center subnet. The destination address was configured within the PMU
attributes, as shown in Figure 24. The symbolic name shown in the figure can be
replaced by the PDC's IP address.



Figure 24: Configuring the destination address of the transferred data
The PDC had the same configuration as the PMUs, but its transfer packet
size was ten times larger than that of a PMU. The destination address of the data
transferred from the PDC was the WAMC, located in the control center subnet.
Three control commands were configured in the application, called profile control,
used by the WAMC component. The WAMC communication protocol was similar to that
of the PMU and PDC. It received packets from the PDC and, in turn, sent packets at a
defined rate to different destinations. In a real network, the WAMC is supposed to
send commands to the substations whenever there is a disturbance in the electric
system.
The WAMC was configured to send 10 packets at three different times, each time to a
different PMU. The first 10 packets were sent to substation switch 3, the second 10 to
substation switch 6 and the third 10 to substation switch 8. The commands were
implemented to make it possible to measure the time taken for a control command to
reach a substation. The control command packet size was 10 bytes.
Switches and routers in the shared model were kept with their default configurations.
Dynamic routing tables were used for the routers.
4.4 Dedicated model implementations
Two scenarios were implemented for the dedicated model. In the first scenario, the
transmission capacity of the communication links in the network was built from
multiples of 64Kb channels, while in the second scenario it was built from multiples
of 128Kb channels. The two scenarios were implemented in order to compare the ETE
delays for different transmission capacities and to observe the link utilizations. No
background traffic was introduced in the dedicated model in either scenario.


The implementation in OPNET was done as follows: a dedicated channel was
assigned between each PMU and the PDC. The dedicated channels were implemented at
the core network level. At the substation level, where 100BaseT was used, each PMU
had, in both the dedicated and shared models, its own link connecting it to the
substation router. Comparing the dedicated model with the shared model, the 2Mb
link capacity between the substation router and the control center router in the shared
model was replaced by multiples of 64Kb or 128Kb channels in the dedicated model.
Figure 25 shows the link capacities between the subnets and the core subnets of the
dedicated model for a channel capacity of 64Kb. The data rate shown in
Figure 25 is in bytes, but throughout the explanations it is referred to in kilobytes for
simplicity. The differences in link capacities reflect how many PMUs' data were
passing through each communication link. The link capacity between subnet_5 and
core_1 was equal to 64Kb, which means that only one PMU's data was passing through
this link. The link capacity between core_1 and the control center was equal to 192Kb,
which means that three PMUs' data were passing through the link, and the link capacity
between core_2 and the control center was equal to 384Kb, which means that six PMUs'
data were passing through the link.

Figure 25: Link capacities of the dedicated model 64Kb scenario
In general, the dedicated model works as follows: when the data of two or more PMUs
meet on a given path, the transmission capacity of the link on that path is the channel
capacity multiplied by the number of PMUs, until the data arrives at the PDC over a
single path with a capacity equal to 10*(64Kb or 128Kb), where 10 is the number of
implemented PMUs.
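A minimal sketch of this capacity rule is given below; the function simply multiplies
the per-PMU channel capacity by the number of PMU flows sharing a link, as described
above.

# Sketch of the dedicated-model capacity rule: each link carries a whole
# number of fixed-rate channels, one per PMU whose data crosses that link.

def link_capacity_kb(n_pmus_on_link: int, channel_kb: int = 64) -> int:
    """Capacity (in Kb) of a dedicated link carrying n PMU flows."""
    return n_pmus_on_link * channel_kb

if __name__ == "__main__":
    # Values from the 64Kb scenario described in the text:
    print(link_capacity_kb(1))    # subnet_5 to core_1       -> 64
    print(link_capacity_kb(3))    # core_1 to control center -> 192
    print(link_capacity_kb(6))    # core_2 to control center -> 384
    print(link_capacity_kb(10))   # final path to the PDC    -> 640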
The routing protocol used in the dedicated model was OSPF. The stack used
in the dedicated network implementation was EPA, which is mostly used in
industrial networks [39].


The PMU transfer rate and packet size were implemented in the traffic
generator node model. Once a packet left the traffic generator, it went to the IP
encapsulation module, where it was encapsulated in an IP packet and sent out to
the network. The blue arrows in Figure 26 show the path that the data follows
when transferred from the PMUs.

Figure 26: PMU node model
The packet size was set to 76 bytes in order to represent a C37.118 [21] packet
containing the measurements of 10 phasors. The transfer rate was 30 samples/second,
as this rate was considered sufficient for most of the already developed applications,
as mentioned in section 2.3.
Since the size of the PMU packet was less than the maximum size of the data field of
an IP packet, the number of packets leaving the IP layer was 30 packets/second, as
shown in Figure 27.

Figure 27: Traffic sent from the PMU_1 IP layer
The PDC and the WAMC nodes were implemented the same way as a PMU in the
dedicated model. PDC received the data from the PMUs, and then transferred them
towards the WAMC. The PDC transfer rate was 30 packets/second.


Figure 28 shows the link capacity between regional router 6 and the control center
router. The data rate shown in the figure is in bytes, representing 10*128Kb. On this
path, all the link channels converge towards the PDC.

Figure 28: Control center subnet configuration in the 128Kb scenario
The role of the WAMC in the dedicated model was similar to its role in the shared
model. Three control commands, each composed of 10 packets, were sent to three
PMUs at three different times. The WAMC packet size and sample rate were configured
in the process model of the traffic generator. The path that the data followed towards
the substation switches was defined in the static routing tables of each router. As in the
shared model, the commands were sent to substation switch 3, substation switch 6 and
substation switch 8.

Figure 29: Control commands sent from the WAMC towards the substation switches


Figure 29 shows the three control commands sent from the WAMC. The first
command is sent towards substation switch 3, the second towards substation switch 6
and the third towards substation switch 8. Each control command was composed of 10
packets. Figure 30 shows the control command received by substation switch 6.

Figure 30: Control command received by substation switch 6
The default configuration was kept for the switches in the dedicated model, while the
routing tables in the routers were configured manually, which is known as static routing.
Static routing was used in order to control the data flow from the PMUs toward the
PDC. When the data flow was known, it became possible to increase the link
capacity wherever the data of more than one PMU met on the same path.














5. Simulation results
A summary of the main points of the models' implementation and configuration has
been presented in the implementation chapter. In this chapter the results of the
simulations for the shared and dedicated models are presented and discussed. The
collected results were ETE delays, link throughput and utilization. The simulations
were run for a duration of ten hours.
5.1 End to End delays
The ETE delay represents the time (in seconds) taken for a packet to reach its
destination. In other words, it is the difference between the time a packet arrives at its
destination and the time when the packet is created. The statistics were collected
separately for each source and destination pair. Each simulated scenario had fourteen
ETE delays. Among them, ten were captured from the links from PMUs to PDC; one
was captured from the link from PDC to WAMC; and three were captured from the
control commands.
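As a simple illustration of this definition, the following sketch computes an ETE delay
from a creation timestamp and an arrival timestamp; the timestamps are illustrative
values, not simulation output.

# Sketch of the ETE delay definition: arrival time minus creation time,
# collected per source-destination pair. Timestamps are illustrative only.

from dataclasses import dataclass

@dataclass
class PacketRecord:
    source: str
    destination: str
    created_s: float   # simulation time when the packet was created
    arrived_s: float   # simulation time when it reached its destination

    @property
    def ete_delay_s(self) -> float:
        return self.arrived_s - self.created_s

if __name__ == "__main__":
    rec = PacketRecord("PMU_5", "PDC", created_s=100.000, arrived_s=100.019)
    print(f"{rec.source} -> {rec.destination}: {rec.ete_delay_s * 1000:.1f} ms")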
5.1.1. End to End delays from PMUs to PDC in shared model
The shared model ETE delay results were collected from two scenario simulations. In
the first scenario, 50% background traffic was introduced in the path from PMU to
PDC; and in the second scenario, the background traffic was increased to 70%. The
path from PDC to PMU had 20% background traffic for both simulations. During the
simulations, all PMUs were generating constant traffic. In addition, the
background traffic added to the network was kept constant, so that differences between
the delays could be attributed to the distances between the PMUs and the PDC. In
a real network, the PMUs generate constant traffic in all cases, but the background traffic
may vary.
Table 1: ETE delays from PMUs to PDC in the shared model

ETE delays from PMUs to PDC (sec)   50% background traffic   70% background traffic
PMU_1                               0.016                    0.028
PMU_2                               0.016                    0.028
PMU_3                               0.012                    0.021
PMU_4                               0.012                    0.021
PMU_5                               0.019                    0.033
PMU_6                               0.019                    0.033
PMU_7                               0.018                    0.031
PMU_8                               0.017                    0.029
PMU_9                               0.013                    0.023
PMU_10                              0.005                    0.011


Table 1 shows the ETE delays collected on the PMU-to-PDC links for the 50% and
70% scenarios. The most important of these measurements is the highest delay in each
scenario, because the PDC waits for the data from all PMUs before aggregating them,
as mentioned in section 2.2.4. In the 50% scenario, the highest delay occurred for
PMU_5 and PMU_6, with ETE delays equal to 19ms; in the 70% scenario, the highest
delay was also found for PMU_5 and PMU_6, but with ETE delays equal to 33ms.
5.1.2. End to End delays from PMUs to PDC in dedicated model
The dedicated model ETE delay results were collected from two scenario
simulations. In the first scenario, a 64Kb channel capacity was established on
each PMU-to-PDC path, and in the second scenario the channel capacity was 128Kb.
The ten PMUs generated the same packet rate and size in the 64Kb and 128Kb
scenarios. Table 2 shows the collected ETE delays from PMUs to PDC in the 64Kb
and 128Kb scenarios. The differences between the ETE delays within the same scenario
are related to the geographical distances between the PMUs and the PDC. When
comparing the ETE delays collected in the 64Kb and 128Kb scenarios, it can be seen
that doubling the channel capacity reduced the ETE delays by half.
Table 2: ETE delays from PMUs to PDC in the dedicated model

ETE delays from PMUs to PDC (sec)   Channel capacity 64Kb   Channel capacity 128Kb
PMU_1                               0.041                   0.021
PMU_2                               0.045                   0.024
PMU_3                               0.037                   0.020
PMU_4                               0.031                   0.016
PMU_5                               0.065                   0.035
PMU_6                               0.072                   0.039
PMU_7                               0.042                   0.022
PMU_8                               0.046                   0.025
PMU_9                               0.039                   0.020
PMU_10                              0.015                   0.008

In both scenarios, the highest delay, which is also the most important one, was
found between PMU_6 and the PDC. The highest delay represents the PDC's
waiting time until the packets transferred by all ten PMUs have reached the PDC. The
PDC's method of working was explained in section 2.2.4. In the first scenario the highest
delay is equal to 72ms, and in the second scenario it is equal to 39ms.
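A small sketch of why the largest per-PMU delay matters: if the PDC must align the
samples with the same timestamp from all PMUs before forwarding them, its waiting
time is bounded by the slowest arrival. The function below is an illustration of that
reasoning, not the PDC implementation used in the models.

# Sketch: the PDC's waiting time for a given timestamp is governed by the
# slowest PMU-to-PDC delivery. Delays below are the 64Kb-scenario values
# from Table 2, in seconds.

pmu_to_pdc_delay_s = {
    "PMU_1": 0.041, "PMU_2": 0.045, "PMU_3": 0.037, "PMU_4": 0.031,
    "PMU_5": 0.065, "PMU_6": 0.072, "PMU_7": 0.042, "PMU_8": 0.046,
    "PMU_9": 0.039, "PMU_10": 0.015,
}

def pdc_waiting_time_s(delays: dict[str, float]) -> float:
    """Time until the last sample of a given timestamp arrives at the PDC."""
    return max(delays.values())

if __name__ == "__main__":
    slowest = max(pmu_to_pdc_delay_s, key=pmu_to_pdc_delay_s.get)
    print(f"Waiting time: {pdc_waiting_time_s(pmu_to_pdc_delay_s) * 1000:.0f} ms "
          f"(set by {slowest})")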


5.1.3. End to End delays from PDC to WAMC for dedicated and shared
models
The purpose of presenting the PDC to WAMC link is to observe the impact of the
throughput capacity on the resulting delays. Since the PDC to WAMC link was
implemented with the same communication medium and distance in the dedicated and
shared models, the only difference between the two models was that in the shared
model the PMU data was transferred over TCP/IP, while in the dedicated model it was
transferred according to EPA. The ETE delay on this path was affected neither by the
background traffic added in the shared model, nor by the dedicated channel capacities
used in the dedicated model.
Table 3: ETE delay from PDC to WAMC for the shared model

ETE delays from PDC to WAMC (sec)   50% background traffic   70% background traffic
WAMC                                0.00056                  0.00056

Table 3 shows the ETE delay from PDC to WAMC for the shared model simulations.
The ETE delay is equal in the 50% and 70% scenarios, which is explained by
the absence of background traffic on this link. The PDC and WAMC were located in the
control center subnet and were connected to the same LAN. The background traffic
extended only as far as the control center switch, because after the switch the data
diverge towards their respective destinations. The ETE delay is equal to 0.56ms, as shown
in Figure 31. The link was 100BaseT, and it carried only the PDC data going towards
the WAMC at a rate of 30 samples/sec.

Figure 31: ETE delay from PDC to WAMC for the shared model in the 50% and 70% scenarios.
Table 4 shows the ETE delays from PDC to WAMC for the dedicated model
simulations. The ETE delays for the 64Kb and the 128Kb scenarios are equal to 0.54ms.
Once again, the reason for this equality is that the PDC and WAMC shared
the same LAN in the control center subnet. The LAN capacity was 100BaseT, and was


not affected by the dedicated channels which were used to connect the substation routers
to the control center router.
Table 4: ETE delay from PDC to WAMC for the dedicated model

ETE delays from PDC to WAMC (sec)   Channel capacity 64Kb   Channel capacity 128Kb
WAMC                                0.00054                 0.00054

In the dedicated and shared models, the same communication medium and geographical
locations were used for the PDC to WAMC link. The difference between the delays of
the dedicated and shared models is therefore related to the stack used in each model: the
stack in the shared model needed more time to transfer and receive data between the
components than the stack used in the dedicated model did.
5.1.4. End to End delays from WAMC to substation switch 3, substation
switch 6 and substation switch 8
The ETE delays of the control commands for the dedicated and shared models are
presented in this section. Three substation switches (substation switch 3, substation
switch 6 and substation switch 8) were configured to receive commands from the
WAMC. The control commands were composed of ten packets each and were sent
from the WAMC at three different times during the simulations.
In the shared model
For the simulations of the 50% and 70% scenarios, 20% background traffic was
introduced in the direction from the control center to the substations. The background
traffic was introduced on the 2Mb links and on the 100BaseT links located between the
substation routers and switches.
Table 5 shows the ETE delays of the control commands for the 50% and 70%
scenarios. The collected data is shown in one row, because the configurations and
results were the same in the two scenarios. The ETE delay from the WAMC is equal to
7.1ms to substation switch 3, 10.7ms to substation switch 6 and 9.3ms to substation
switch 8.
Table 5: ETE delays of the control commands for the 50% and 70% scenarios

ETE delay (sec)     substation switch 3   substation switch 6   substation switch 8
WAMC (50% & 70%)    0.0071                0.0107                0.0093

Since the same amount of data was transferred through the same communication links,
and the same constant background traffic was introduced, the differences between the
delays can be attributed to the geographical distances between the WAMC and the
substation switches.


In the dedicated model
For the simulations of the 64Kb and 128Kb scenarios, the path from the WAMC to the
substation switches used the reverse direction of the links connecting the PMUs to the
PDC. The control command paths were implemented using static routing tables. The
communication links from the substation switches to the substation routers, and from
the WAMC to the control center router, were 100BaseT in both scenarios.
Table 6 shows the ETE delays of the control commands for the 64Kb and 128Kb
scenarios. In the 64Kb scenario the ETE delay is equal to 42ms for the path from the
WAMC to substation switch 3, 43ms for the path to substation switch 6 and 42ms for
the path to substation switch 8. In the 128Kb scenario the ETE delay is 21ms for the
path to substation switch 3, 22ms for the path to substation switch 6 and 21ms for the
path to substation switch 8.
Table 6: ETE delays of the control commands for the 64Kb and 128Kb scenarios

ETE delay (sec)   substation switch 3   substation switch 6   substation switch 8
64Kb              0.042                 0.043                 0.042
128Kb             0.021                 0.022                 0.021

The distances separating the substation switches from the WAMC did not noticeably
affect the ETE delays within the same scenario, because only a small amount of data
was transferred through the links. When the channel capacity was increased to 128Kb,
the ETE delays were reduced by half compared with the 64Kb scenario.
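The halving of the delays when the channel capacity is doubled is what one would
expect if the serialization (transmission) delay on the dedicated channels dominates the
ETE delay. The sketch below illustrates that relationship; the on-wire packet size is an
assumption for illustration, not a value taken from the simulations.

# Sketch: serialization delay of one packet over a dedicated channel.
# Doubling the channel rate halves this component of the ETE delay.
# The on-wire packet size (payload plus headers) is an illustrative assumption.

def serialization_delay_ms(packet_bytes: int, channel_kbps: int) -> float:
    """Time in ms to clock one packet onto a channel of the given rate."""
    return packet_bytes * 8 / (channel_kbps * 1000) * 1000

if __name__ == "__main__":
    assumed_wire_bytes = 100   # 76-byte payload plus assumed protocol overhead
    for rate in (64, 128):
        d = serialization_delay_ms(assumed_wire_bytes, rate)
        print(f"{rate} kbit/s channel: {d:.1f} ms per hop")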
5.2 Link throughput and utilization
In this section, the throughput and utilization of the communication links used in the
dedicated and shared models are presented. The throughput represents the number of
packets transmitted or received over a link, expressed in packets per second. The
utilization represents the percentage of a link's capacity that is consumed. The statistics
were collected separately for each scenario. In the figures in this section, the blue line
represents the 50% scenario of the shared model, while the red line represents the 70%
scenario of the shared model. For the dedicated model, the blue line represents the 64Kb
scenario and the red line represents the 128Kb scenario. When a result figure shows only
a blue line, the two scenarios of that model gave equal results. Since the ten PMUs
generated the same amount of data, and the PMU configurations and links were the
same, it was sufficient to show the measurements of one PMU-to-PDC path.
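As an illustration of how a utilization figure relates to a packet rate and link capacity,
the sketch below converts a packet rate and an assumed on-wire packet size into a
utilization percentage; the packet size is an assumption for illustration only.

# Sketch: link utilization as offered load divided by link capacity.
# The on-wire packet size is an illustrative assumption, not a simulation value.

def utilization_percent(packets_per_s: float, packet_bytes: int,
                        link_bps: float) -> float:
    offered_bps = packets_per_s * packet_bytes * 8
    return offered_bps / link_bps * 100

if __name__ == "__main__":
    # Example: one PMU stream (30 packets/s, ~100 bytes assumed on the wire)
    # on a 100BaseT link versus a 2 Mbit/s shared link.
    print(f"{utilization_percent(30, 100, 100_000_000):.3f} % of 100BaseT")
    print(f"{utilization_percent(30, 100, 2_000_000):.2f} % of a 2 Mb link")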
5.2.1. Shared model
The shared model throughput and utilization presented in this section were taken from
four locations, at each point where the communication link capacity changed from
100BaseT to 2Mb or vice versa. The locations were between the substation switch and
the substation router, between the substation router and the regional router, between
regional router 6 and the control center router, and between regional router 6 and the
substation router; the last location was used to measure the throughput and utilization
of the control commands.
The connection between a PMU and the regional router was composed of two
communication links. The first was a 100BaseT link from the PMU to the
substation switch and from the substation switch to the substation router. The second
was the 2Mb link located between the substation router and regional router 6.

Figure 32: Link throughput and utilization between substation switch to substation router
Figure 32 shows the throughput and utilization between the substation switch and the
substation router. The throughput shown on the left hand side of the figure is equal to
265 packets/sec for the 50% scenario and 355 packets/sec for the 70% scenario. The
throughput is the sum of the two traffic sources: the transfer rate of a PMU, equal to
34 packets/sec as discussed in the implementation chapter section 4.3, and the
background traffic, which represents 50% and 70% of the 2Mb link respectively. The
right hand side of Figure 32 shows the utilization of the 100BaseT link between the
substation switch and the substation router. The utilization is equal to 1.1% in the
50% scenario and 1.5% in the 70% scenario.

Figure 33: Link throughput and utilization between the substation router and the regional router


Figure 33 shows the throughput and utilization between the substation router and the
regional router. The left hand side of the figure shows the throughput, which is equal to
the throughput shown in Figure 32. The utilization, shown on the right hand side of
Figure 33, is equal to 50% in the 50% scenario and 69% in the 70% scenario. The
utilization was measured on the 2Mb link.
The throughput increased each time an additional PMU's data joined the flow of
data towards the PDC, which in turn increased the utilization of the links. On the
2Mb link between regional router 6 and the control center router, shown on the left
hand side of Figure 34, the throughput is 571 packets/sec for the 50%
scenario and 661 packets/sec for the 70% scenario. These numbers represent the
introduced background traffic plus the data of the 10 PMUs. The utilization between
regional router 6 and the control center router, shown on the right hand side of
Figure 34, is equal to 64% in the 50% scenario and 84% in the 70% scenario.

Figure 34: Link throughput and utilization between regional router 6 and the control center
router
The traffic added to the network by the 10 PMUs was 340 packets/sec, which was
equal to 14% of the 2Mb link. The PMUs can therefore share a 2Mb link carrying up to
86% background traffic, but sharing a link with a high percentage of background traffic
increases the ETE delay, as shown in Table 1 (the ETE delay increased when the
background traffic was increased from 50% to 70%). An increase in the ETE delay
leads to an increase in the processing time of the PDC, which in turn increases the
response time of the control commands; this is undesirable, because it could be too late
for applications to act in urgent situations.
For the path between the WAMC and the substation switches, 20% background traffic
on the 2Mb link was present during the whole simulation time, while the 10 packets
representing a control command were sent at three different times. This path is
important because the data sent on it is critical and has to arrive within minimal time
so that action can be taken in an emergency. In real networks, control command data is
not encouraged to share a network with other traffic, because of its high sensitivity and
security requirements.



Figure 35: Link throughput and utilization between regional router and substation router 3
Figure 35 shows the throughput and utilization between the regional router and
substation router 3. The throughput, shown on the left hand side of Figure 35,
represents the 20% background traffic; together with the 10 packets/sec of a control
command, this results in a total of 110 packets/sec for the 50% and 70% scenarios.
The utilization, shown on the right hand side of Figure 35, is 20% of the 2Mb link.
5.2.2. Dedicated model
The dedicated model throughput and utilization presented in this section were taken at
the locations where the communication link capacity changed between 100BaseT and
the dedicated channels. The locations were between the substation switch and the
substation router, the substation router and the regional router, the regional router and
the core subnet, regional router 6 and the control center router, and between the control
center router and the control center switch.
The connection from a PMU to the regional router was composed of two
communication links. The first was a 100BaseT link from the PMU to the
substation switch and from the substation switch to the substation router. The second
link was 64Kb in the first scenario and 128Kb in the second scenario.

Figure 36: Link throughput and utilization between the substation switch and the substation
router


Figure 36 shows the throughput and utilization between the substation switch and the
substation router. The throughput, shown on the left hand side of Figure 36, is equal to
30 packets/sec for both the 64Kb and 128Kb scenarios, since the transfer rate of a PMU
was 30 packets/sec, as discussed in the implementation chapter section 4.4. The right
hand side of Figure 36 shows the utilization of one PMU's data in the 64Kb and 128Kb
scenarios; the utilization is equal to 0.029% of the 100BaseT link.

Figure 37: Link utilization between the substation router and the regional router
Figure 37 shows the utilization between the substation router and the regional router.
The throughput between them was equal to 30 packets/sec. The utilization shown in
Figure 37 is equal to 38% for the 64Kb scenario, and 19% for the 128Kb scenario.

Figure 38: Link throughput and utilization between the regional router and the core subnet
Figure 38 shows the throughput and utilization between the regional router and the
core subnet. On the left hand side of Figure 38, the throughput is 60 packets/sec,
because two PMUs data were passing through this link. On the right hand side of
Figure 38, the utilization is 38% for the 64Kb scenario and 19% for the 128Kb
scenario. The throughput of the links increased each time an additional PMU's data
joined the flow towards the PDC, while the utilization of the links stayed the same,
because the link capacity was increased each time a PMU's data was added, as
explained in the implementation chapter section 4.4.

Figure 39: Link throughput and utilization between regional_router_6 and the control center
router
Figure 39 shows the throughput and utilization between regional router 6 and the
control center router. The throughput, shown on the left hand side of Figure 39, is
equal to 300 packets/sec, which represents the traffic of the 10 PMUs. The utilization,
shown on the right hand side of Figure 39, is equal to 38% for the 64Kb scenario and
19% for the 128Kb scenario. The link capacity between regional router 6 and the
control center router was equal to 640Kb in the 64Kb scenario and 1280Kb in the
128Kb scenario.

Figure 40: Link utilization between the control center router and the control center switch
Figure 40 shows the utilization between the control center router and the control
center switch. The utilization is equal to 0.29% of a 100BaseT link. The throughput
from the control center router to the control center switch is equal to the throughput
shown on the left hand side of Figure 39.


5.3 Response time of the designed models
Communication infrastructures and the delays introduced by different hardware and
software platforms play a crucial role in the total response time of a network, because
effective protection depends on the speed at which the control center can identify and
analyse an emergency. As mentioned in section 2.9, the total process of keeping the
system consistent involves six activities. Through the ETE delays captured in our
simulations, we were able to collect results related to two of those activities: the
transmission time of information and the transmission of the control signal. For the
transmission time of information, the largest ETE delay found from the PMUs to the
PDC was used. For the transmission of the control signal, the ETE delays collected from
the WAMC towards substation switch 3, substation switch 6 and substation switch 8
were used. For the other activities, the time estimates stated in section 2.9.1 were used,
because these activities are machine and middleware related.
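The total response time is obtained by summing the six activity times; the sketch below
reproduces that summation using the 50% scenario values from Table 7 (the fixed
activity times are the section 2.9.1 estimates quoted in the tables).

# Sketch of the response-time summation used in Tables 7-10.
# Fixed activity times (ms) are the estimates quoted in the tables;
# the two measured activities come from the simulated ETE delays.

FIXED_MS = {
    "sensor processing time": 5,
    "processing incoming message queue": 10,
    "computing time for decision": 100,
    "operating time of local device": 50,
}

def total_response_ms(info_transmission_ms: float, control_signal_ms: float) -> float:
    return sum(FIXED_MS.values()) + info_transmission_ms + control_signal_ms

if __name__ == "__main__":
    # 50% background traffic scenario, substation 6 (values from Table 7):
    print(total_response_ms(info_transmission_ms=19, control_signal_ms=10.7))  # 194.7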
5.3.1. In shared environment
This section lists the response times of a phasor network implemented in a shared
environment. Table 7 shows the response times of the network in the 50% background
traffic scenario, and Table 8 shows the response times in the 70% background traffic
scenario. Since the control command ETE delays were the same in the 50% and 70%
scenarios, as shown in section 5.1.4, the transmission of control signal activity has the
same values in Table 7 and Table 8: 7.1ms towards substation switch 3, 10.7ms towards
substation switch 6 and 9.3ms towards substation switch 8.
The transmission time of information activity shown in Table 7 is the largest ETE
delay found from the PMUs to the PDC in the 50% scenario, equal to 19ms.
Table 7: Response time of the shared model in the 50% scenario

For 50% background traffic (ms)     Substation_3   Substation_6   Substation_8
Sensor processing time              5              5              5
Transmission time of information    19             19             19
Processing incoming message queue   10             10             10
Computing time for decision         100            100            100
Transmission of control signal      7.1            10.7           9.3
Operating time of local device      50             50             50
Total                               191.1          194.7          193.3



The response time for a phasor network composed of ten PMUs with a transfer rate of
30 packets/sec and in the presence of 50% background traffic is equal to 191.1ms for
substation 3, 194.7ms for substation 6 and 193.3ms for substation 8.
For the 70% background traffic scenario, the largest ETE delay found from the PMUs
to the PDC was equal to 33ms; this delay was used for the transmission time of
information activity shown in Table 8.
Table 8: Response time of the shared model in the 70% scenario

For 70% background traffic (ms)     Substation_3   Substation_6   Substation_8
Sensor processing time              5              5              5
Transmission time of information    33             33             33
Processing incoming message queue   10             10             10
Computing time for decision         100            100            100
Transmission of control signal      7.1            10.7           9.3
Operating time of local device      50             50             50
Total                               205.1          208.7          207.3

The response time for a phasor network composed of ten PMUs and sharing 70%
background traffic is equal to 205.1ms for substation 3, 208.7ms for substation 6 and
207.3ms for substation 8.
5.3.2. In dedicated environment
This section lists the response time of a phasor network implemented in a dedicated
environment. Table 9 shows the response time of the network with 64Kb dedicated
channels, and Table 10 shows the response time of the network with 128Kb dedicated
channels.
The transmission time of information activity shown in Table 9 was equal to the
largest ETE delay (72ms) found from PMUs to PDC in the 64Kb scenario. For the
transmission of control signal activity shown in Table 9, the ETE delays from WAMC
to substation switches found in the 64Kb scenario were used. These ETE delays were
equal to 42ms towards substation switch 3, 43ms towards substation switch 6 and
42ms towards substation switch 8.




Table 9: Response time of the dedicated model in the 64Kb scenario

For 64Kb channel (ms)               Substation_3   Substation_6   Substation_8
Sensor processing time              5              5              5
Transmission time of information    72             72             72
Processing incoming message queue   10             10             10
Computing time for decision         100            100            100
Transmission of control signal      42             43             42
Operating time of local device      50             50             50
Total                               279            280            279

The response time for a phasor network composed of 10 PMUs with a transfer rate of
30 packets/sec and using 64Kb dedicated channels is equal to 279ms for substation 3,
280ms for substation 6, and 279ms for substation 8.
For Table 10, the transmission time of information activity was equal to 39ms, which
represents the largest ETE delay found from the PMUs to the PDC in the 128Kb
scenario. The transmission of control signal activity was equal to the ETE delays from
the WAMC to the substation switches in the 128Kb scenario: 21ms towards substation
switch 3, 22ms towards substation switch 6 and 21ms towards substation switch 8.
Table 10: Response time of the dedicated model in the 128Kb scenario

For 128Kb channel (ms)              Substation_3   Substation_6   Substation_8
Sensor processing time              5              5              5
Transmission time of information    39             39             39
Processing incoming message queue   10             10             10
Computing time for decision         100            100            100
Transmission of control signal      21             22             21
Operating time of local device      50             50             50
Total                               225            226            225



The total response time for a phasor network composed of 10 PMUs with a transfer
rate of 30 packets/sec and using 128Kb dedicated channel is equal to 225ms for
substation 3, 226ms for substation 6 and 225ms for substation 8.




























6. Conclusion and future work
This work is a contribution to the ongoing project of installing PMUs in the electric
power system industry for wide area monitoring and control purposes. The results
obtained from the simulations have contributed to a preliminary understanding of the
performance and requirements of PMUs in shared and dedicated environments.
A large fraction of the project was dedicated to the study of PMUs in wide area
monitoring and control systems. A relevant outcome was the PMU transfer rate that is
sufficient to fulfill the needs of the majority of wide area monitoring and control
applications, namely 30 packets/sec.
The results in chapter 5 showed the performance of PMU data in shared and
dedicated network environments. The shared model simulations illustrated the effect of
the background traffic on the ETE delays, while the dedicated model simulations
illustrated the effect of the channel capacity on the ETE delays. As for the estimates of
the response time of the total process, the simulations of the designed models showed
that an action against an unbalanced system can be initiated within a few hundred
milliseconds.
In conclusion, the simulations showed satisfactory results, even though the
implementation relied on some simplifying assumptions, as mentioned in chapter 4,
due to the theoretical composition of the thesis and the time limitation. The author is
fully aware that this is only a single-case simulation of a complex reality.
From the hardware aspect, the models were simplified by using the ready-built
workstation models provided by OPNET, configured according to the needs of the
models. Concerning the protocol configurations, the models were simplified by using
the default configurations of the TCP/IP protocol and the OSPF routing protocol.
The simulation running time was 10 hours; with the chosen configurations and
metrics, no additional behaviour would be revealed by a longer simulation time.
Consequently, further research can be conducted by studying the Quality of Service
(QoS) mechanism, which controls the real-time stack processing time.
This thesis work can also be extended through implementations of the TCP/IP and
OSPF protocols with more advanced configurations.
Another interesting aspect for further research is to re-estimate the response time of
the total process as communication networks for wide area monitoring and control
systems continue to develop.






References
[1] S. Daniel, Do Investments Prevent Blackouts?, IEEE Power Engineering
Society General Meeting, 24-28 June 2007, Page(s): 1-5, 2007.
[2] S. H. Horowitz, A. G. Phadke, Boosting immunity to blackouts, IEEE Power and
Energy Magazine, Volume 1, Issue 5, Page(s): 47-53, Sept.-Oct. 2003.
[3] D. J. Dolezilek, Power System Automation, Schweitzer Engineering
Laboratories, Pullman, WA, USA, 1999.
[4] G. Andersson, P. Donalek, R. Farmer, N. Hatziargyriou, I. Kamwa, P. Kundur,
N. Martins, J. Paserba, P. Pourbeik, J. Sanchez-Gasca, R. Schulz, A. Stankovic, C.
Taylor, V. Vittal, Causes of the 2003 Major Grid Blackouts in North America and
Europe, and Recommended Means to Improve System Dynamic Performance, IEEE
Power Systems, Volume 20, Issue 4, Page(s): 1922-1928, Nov. 2005.
[5] R. Moxley, C. Petras, C. Anderson, K. Fodero II, Display and Analysis of
Transcontinental Synchrophasors, Schweitzer Engineering Laboratories, Inc.,
Pullman, WA, USA, 2004.
[6] K. S. Swarup, P. Uma Mahesh, Computerized data acquisition for power system
automation, IEEE Power India Conference, Page(s): 7, April 2006.
[7] T. E. Dy-Liacco, Control centers are here to stay, IEEE Comput. App. Power,
vol. 15, no. 4, pp. 18-23, Oct. 2002.
[8] F. F. Wu, A. Bose, and K. Moslehi, Power System Control Centers: Past, Present
and Future, Proc. IEEE, vol. 93, pp. 1890, Nov. 2005.
[9] F. F. Wu, Real-time network security monitoring, assessment and
optimization, Elect. Power Energy Syst., vol. 10, pp. 83-100, Apr. 1988.
[10] P. Joskow, Restructuring, competition and regulatory reform in the U.S.
electricity sector, J. Econ. Perspectives, vol. 11, no. 3, pp. 119-138, 1997.
[11] K. Tomsovic, D. E. Bakken, V. Venkatasubramanian, A. Bose, Designing the
Next Generation of Real-Time Control, Communication, and Computations for Large
Power Systems, Sch. of Electr. Eng. & Comput. Sci., Washington State Univ.,
Pullman, WA, USA.
[12] J. Northcote-Green, R. Wilson, Control and Automation of Electrical Power
Distribution Systems, Taylor & Francis Group, 2007.
[13] www.aclaratech.com/twacs/support/specsheets/SCE.pdf , Retrieve June 23, 2008.
[14] M. Parashar, J. Dyer, T. Bilke Real-Time Dynamics Monitoring System, EIPP
Performance Requirements Task Team,


[15] G. H. David, U. David, G. Vasudev, N. Damir, K. Daniel, K. Mehmet, PMUs -
a new approach to power network monitoring, ABB Review 1, 2001.
[16] J. Y. Cai, Z. Huang, J. Hauer, K. Martin, Current Status and Experience of
WAMS Implementation in North America, IEEE/PES Transmission and Distribution
Conference and Exhibition: Asia and Pacific, Page(s): 1-7, 2005.
[17] The Consortium for Electric Reliability Technology Solutions (CERTS)
Homepage: www.phasor-rtdms.com/phaserconcepts/phasor_overview.html
[18] IEEE Standards for Synchrophasors for Power Systems, IEEE std 1344-1995 ed.,
IEEE Reaffirmed March 2001.
[19] A.G. Phadke, Synchronized Phasor Measurements in Power Systems," IEEE
Computer Applications in Power, April 1993.
[20] B. Naduvathuparambil, M. C. Valenti, and A. Feliachi, Communication Delays
in Wide Area Measurement Systems, System theory, 2002. Proceedings of the
Thirty-Fourth Southeastern Symposium on, pp. 118-122, March 2002.
[21] IEEE Standards for Synchrophasors for Power Systems, IEEE std C37.118-2005
ed., IEEE March 2006.
[22] M. Larsson, P. Korba, M. Zima, Implementation and Applications of Wide-area
monitoring systems, IEEE Power Engineering Society General Meeting,
Page(s): 1-6, 24-28 June 2007.
[23] J. Bertsch, C. Carnal, D. Karlson, J. McDaniel, K Vu, Wide-Area Protection
and Power System Utilization IEEE Power Technol. Syst., ABB Autom., Baden,
Switzerland, Volume: 93, Issue: 5 On page(s): 997-1003, May 2005.
[24] M. Larsson, R. Gardner, and C. Rehtanz, Interactive simulation and visualization
of wide-area monitoring and control applications, in Proc. Power Systems
Computation Conf., Liège, Belgium, 2005, submitted for publication.
[25] M. Larsson, C. Rehtanz, J. Bertsch, Real-time Voltage Stability Assessment for
Transmission Corridors, Proceedings of IFAC Power Plants and Power Systems
Control Conference, Seoul, Korea, 2003.
[26] P. Korba, M. Larsson, C. Rehtanz, Detection of Oscillations in Power Systems
using Kalman Filtering Techniques, IEEE Conference on Control Applications,
Istanbul, Turkey, 2003.
[27] M. Larsson, C. Rehtanz, D. Westermann, Improvement of Cross-border Trading
Capabilities through Wide-area Control of FACTS, Proc. of Bulk Power System
Dynamics and Control VI, 22-27 August, Cortina d'Ampezzo, Italy, 2004.


[28] V.C. Gungor, F.C. Lambert, A survey on Communication networks for electric
system automation, Computer Networks 50, pp. 877-897, 2006.
[29] D.J. Marihart, Communications Technology Guidelines for EMS/SCADA
Systems, IEEE Transactions on Power Delivery, Vol. 16, No. 2, April 2001.
[30] A.L. Garcia, I. Widjaja, Communication Networks: Fundamental Concepts and
Key architectures, McGraw-Hill, 2004.
[31] F. Goodman et al, Technical and system requirements for advanced distribution
automation, Electric Power Research Institute Technical Report 1010915, June 2004.
[32] T. Tommila, O. Venta, K. Koskinen, Next generation industrial automation
needs and opportunities", Automation Technology Review 2001.
[33] The North American SynchroPhasor Initiative Homepage: www.naspi.org
[34] T. Skeie, S. Johannessen, O. Holmeide, Timeliness of real-time IP
communication in switched industrial Ethernet networks, IEEE Industrial
Informatics, Volume 2, Issue 1, Page(s): 25-39, Feb. 2006.
[35] Nordic WAMS, FINGRID, PMU workshop, Stockholm 2008.
[36] M. Ken, C. Ritchie, H. Henry, C. Virgilio, N.Damir, H. Yi, A guide for PMU
Installation, Commissioning and maintenance, Eastern Interconnection Phasor
Project, Part I, May 2006.
[37] D. Comer, Internetworking with TCP/IP, Prentice Hall, 2006.
[38] J.D. McCabe, Network analysis, Architecture, and Design. Third edition, Morgan
Kaufmann publisher, 2007.
[39] W.T. Stayer, A.C. Weaver, Performance measurement of data service in MAP,
IEEE Network, Volume 2, issue 3, Page(s): 75-81, May 1988.
[40] B. G. Liptak, Process Software and Digital Networks, ISA - The
Instrumentation, Systems, and Automation Society, 2002.
[41] C. Jeker, Design and Implementation of OpenOSPFD, Internet Business
Solutions AG.
[42] The OPNET homepage: www.opnet.com





Appendix
SvK's network characteristics
The data listed in this appendix is a summary of the relevant characteristics of
SvK's networks; it was provided by the IT department in order to make it possible to
implement a realistic network.
- The PMU measurements are expected to be transferred through the following
  networks: the Ethernet LAN inside the substation, coaxial cables in the WAN,
  and optical fiber in the SDH. The longest distance from a PMU to the control
  center is 1000 km to 1200 km.
- The topology used in the WAN and SDH is a meshed topology.
- Network protocols for the Ethernet network:
  o Transport, internet, network layer: IP, OSPF
  o Data layer: PPP
  o Physical layer: 100BaseT
- Network protocols for the Wide Area Network:
  o Transport, internet, network layer: IP, OSPF
  o Data layer: PPP
  o Physical layer: coaxial cable
- Network protocols for SDH:
  o STM-1, STM-4
  o Dual node architecture, multiple rings
  o The SDH node is considered as a repeater
- All components in a substation are connected to a switch and then to a router.
- The response time for a control function already deployed in the system is 2
  seconds.
- Critical traffic in a shared network is prioritized through QoS, but the ideal case
  is to have a dedicated network.
