
Small-Signal Stability Assessment for Large Power Systems Using Computational Intelligence
S. P. Teeuwsen, Student Member, IEEE, I. Erlich, Member, IEEE, and M. A. El-Sharkawi, Fellow, IEEE
Abstract--This paper introduces newly developed methods for on-line oscillatory stability assessment (OSA) in large interconnected power systems. Special interest is focused on the fast prediction of critical inter-area oscillatory modes. Instead of eigenvalue computation using the complete power system model, the proposed OSA methods are based on computational intelligence techniques such as neural networks, neuro-fuzzy methods, and decision trees. Computational intelligence needs only a small set of selected system information and can easily be implemented as a fast on-line assessment tool.

Index Terms--Oscillatory Stability Assessment, Interconnected Power Systems, Computational Intelligence Methods, On-Line Tool

S. P. Teeuwsen, University of Duisburg-Essen, Germany (teeuwsen@uni-duisburg.de); I. Erlich, University of Duisburg-Essen, Germany (erlich@uni-duisburg.de); M. A. El-Sharkawi, University of Washington, Seattle, USA (elsharkawi@ee.washington.edu)

Authorized licensed use limited to: UNIVERSITY TECNOLOGI MARA. Downloaded on August 27, 2009 at 21:41 from IEEE Xplore. Restrictions apply.

I. INTRODUCTION

The European power system has grown very fast in a short period of time due to its recent eastward expansions. This extensive interconnection alters the stable operating region of the system, and the power network experiences inter-area oscillations associated with the swinging of many machines in one part of the system against machines in other parts. These inter-area oscillations are poorly damped and have quite low frequencies. In the European system, stability is largely a problem of insufficient damping; the problem is termed small-signal stability or oscillatory stability. Inter-area oscillations in large-scale power systems are becoming more common, and in the European interconnected power system UCTE/CENTREL in particular they have been observed many times [1], [2]. Because of the increasing number of long-distance transmissions, the system operates closer to its stability limits. Thus, operators need real-time computational tools for enhancing system stability. Of main interest in the European power system is oscillatory stability assessment (OSA).

The use of on-line tools is complicated further because Transmission System Operators (TSOs) exchange only a restricted subset of the needed information. Each TSO controls a particular part of the power system, but the exchange of data between the different parts is limited because of the competition between the utilities. Because the focus is on inter-area modes, there is no need to track all system eigenvalues, but only a small number of dominant eigenvalues, which are characterized by poor damping below 4% and frequencies below 1 Hz. The eigenvalue positions in the complex plane depend strongly on the load flow and the operating point of the power system: a changing power flow results in shifting eigenvalues. Since the underlying relationships are highly complex and nonlinear, it is not practicable to draw conclusions from, for example, the power flow directions or the voltage levels alone [2], [3]. In a continuously changing liberalized power market environment, more advanced methods are needed for OSA.

II. COMPUTATIONAL INTELLIGENCE APPLICATIONS

Usually, the complete system model is not available for on-line OSA. Besides, the computations are time consuming and require expert knowledge. Therefore, the use of robust Computational Intelligence (CI) methods is highly recommended for fast on-line OSA. Since CI methods are based only on a small set of data, they do not require the complete system information. Furthermore, the CI outputs are directly related to the stability problem. Suitable CI methods for OSA are Neural Networks (NN), Neuro-Fuzzy methods (NF), and Decision Trees (DT).

Neural Networks are trained by a set of training patterns and store the input-output relationships in their weights. Once a multilayer feed-forward NN is properly trained, it is able to approximate highly nonlinear functions; it is continuous over the complete instance space and shows good interpolation behavior. Another type of NN is the Probabilistic NN (PNN), which is used for classification. The PNN is trained to assign presented inputs to a given class. Neuro-Fuzzy methods are based on Fuzzy Logic systems using rules and membership functions to describe input-output relationships. However, since one might have no exact knowledge about the rules in advance, the NF systems are trained similarly to NNs by a training algorithm in order to find the rules. Since both the NN and the NF methods are based on training, they may require very long training times depending on the given problem. The most common NF method is the Adaptive Neural Network based Fuzzy Inference System (ANFIS). In contrast to trained systems, the Decision Tree method is based on a learning or growing process, which ends once the full DT is set up. Therefore, the DT method is much faster to apply to given data. The DT method is rule based and splits the data into subsets depending on their characteristics. DTs can be grown for both classification and regression.

When CI-based OSA is used in real time, it has to be designed as a robust tool that is not influenced by the time of day, the season, the topology, or the quality of the data (missing or bad data). CI methods always need feature extraction or feature selection before implementation. Aspects of feature selection/extraction for OSA concern not only the information content of the features but also their availability, due to the fact that they must be exchanged between utilities. Some examples of stability assessment based on CI methods are given in [4] and [5]. Moreover, CI methods can also be used off-line for fast stability studies instead of time-consuming eigenvalue computations. In power system planning, the detailed dynamic system model necessary for analytical OSA may not always be available. Under such circumstances, CI gives fast results and benefits from its accurate interpolation ability without detailed knowledge of all system parameters.

III. FEATURE SELECTION

Large power systems provide many variables describing the system state. This includes load flow information such as voltages, real and reactive power line flows, voltage angles, generated powers, and demands.
The information may also include parametric (topological) data such as transformer tap settings, switch positions, and system topology. For large interconnected power systems, the complete state information is too large for any effective CI method [6]. Therefore, the data must be reduced to a smaller set of variables that can be used as CI inputs. With too many inputs, CI methods lead to long processing times and may not provide reliable results. In the CI literature, the input variables are referred to as attributes or features. In general, the reduced set of features must represent the entire system, since a loss of information in the reduced set results in a loss of both performance and accuracy of the CI methods. The feature selection process proposed in this study is shown in Figure 1.

Fig. 1 Basic Concept of the Feature Selection Procedure

In large interconnected power systems, it is difficult to develop exact relationships between features and the targeted oscillatory stability, because the system is highly nonlinear and complex. For this reason, feature reduction cannot be performed by engineering judgment or physical knowledge only; rather, it must be implemented according to the statistical properties of the various features and the dependencies among them. Before an advanced mathematical selection method is applied, the initial feature set is pre-selected by engineering judgment, and the data are normalized. The pre-selection at the beginning is necessary to collect as much data from the power system as is assumed to be of physical interest for OSA. The focus is hereby on those features that are both measurable in the real power system and available from the power utilities. Besides typical power system features such as real and reactive power or voltages, the rotating generator energy is used as a feature as well. The power flow through transformers is not taken into account in this study, since the complete transmission line data are computed and the transformer power is redundant information. The generator-related features used are the generated real and reactive power of individual machines and their sums per area. The rotating generator energy is defined as the installed MVA of the running blocks multiplied by the inertia constant. Since the number of running blocks is adjusted during the computation depending on the generated power, this feature provides information about the rotating mass in the system. The rotating generator energy is computed for both individual machines and all machines in the same area.
Moreover, the real and reactive power on all transmission lines in the system and the voltages as well as the voltage angles on all bus nodes are used as features because they contain important information about the load flow in the system. The features used are listed in Table I.


TABLE I
FEATURES USED FOR CI TRAINING

 #   Feature Description                        Symbol
 1   Sum Generation per Area                    P, Q
 2   Individual Generator Power                 P, Q
 3   Rotating Generator Energy                  E
 4   Sum Rotating Generator Energy per Area     E
 5   Real Power on Transmission Lines           P
 6   Reactive Power on Transmission Lines       Q
 7   Real Power exchanged between Areas         P
 8   Reactive Power exchanged between Areas     Q
 9   Bus Voltages                               V
10   Bus Voltage Angles                         δ

This study uses the PST 16-Machine Test System shown in Figure 2 for stability analysis [7].

[Figure 2: one-line diagram of the test system, with synchronous machines (SM) in three areas (A, B, C) and voltage levels of 380 kV, 220 kV, 110 kV, and 15.75 kV.]

Fig. 2 PST 16-Machine Dynamic Test System

The system consists of 3 areas with 16 generators. The total number of features listed in Table I is 301. Considering the fact that the voltages at the 16 generator bus nodes (PV) and the voltage angle at the slack bus are constant, 284 features remain. Furthermore, the 110 kV voltage level is only represented in one part of the power system (Area C), and therefore 32 features related to the 110 kV voltage level are also excluded from the feature selection process. Thus, 252 features remain for the selection. This number is then reduced to a small set of features, which can be managed well by any CI technique.

A. Selection by PCA and Clustering

In the first step, the data are reduced in dimension by Principal Component Analysis (PCA), which is characterized by a high reduction rate and a minimal loss of information. The PCA technique is fast and can be applied to large data sets [8]. However, a simple projection onto the lower-dimensional space transforms the original features into new ones without physical meaning. Because feature selection techniques do not have this disadvantage, the PCA is not used for transformation but for dimensional reduction only, followed by a feature selection method.

Let F be a feature matrix of dimension p × n, where n is the number of original feature vectors and p is the number of patterns. The empirical covariance matrix C of the normalized F is computed by

C = \frac{1}{p-1} F^T F    (1)

Let T be the n × n matrix of the eigenvectors of C; then the diagonal variance matrix \Sigma^2 is given by

\Sigma^2 = T^T C T    (2)

Notice that \Sigma^2 contains the variances \sigma_x^2: the eigenvalues \lambda_k of the covariance matrix C are equal to the elements of the variance matrix \Sigma^2, and the standard deviation \sigma_k is also the singular value of F:

\sigma_k^2 = \lambda_k, \qquad 1 \le k \le n    (3)

The n eigenvalues of C can be determined and sorted in descending order \lambda_1 \ge \lambda_2 \ge \dots \ge \lambda_n. While T is an n-dimensional matrix whose columns are the eigenvectors of C, T_q is an n × q matrix containing the q eigenvectors of C that correspond to the q largest eigenvalues. The value of q determines the new dimension of the features and is smaller than n. It also determines the retained variability of the features, which is the ratio between the sum of the first q eigenvalues and the sum of all n eigenvalues [8]:

\text{retained variability} = \frac{\sum_{i=1}^{q} \lambda_i}{\sum_{i=1}^{n} \lambda_i}    (4)

From the basic equation of the PC transformation

F^{(PC)} = F T    (5)

follows

F \approx F_q^{(PC)} T_q^T    (6)

where F_q^{(PC)} = F T_q contains the selected PC feature vectors.

According to Equation (6), T_q can be interpreted as a loadings matrix and F^{(PC)} as a non-normalized factor matrix. The idea now is to use the columns of T_q^T instead of the high-dimensional original feature vectors F for clustering. The value of q is chosen depending on the required variability. The n columns of T_q^T are vectors that represent the projections of the individual features of F onto the lower q-dimensional space. Therefore, the k-Means cluster algorithm shows better performance and accuracy on T_q^T than when applied to the original features directly. This is why the two techniques, principal component analysis and clustering, are used in combination. Nevertheless, it is possible to skip the PCA computation altogether and cluster the original features directly from the beginning; this approach is preferable for feature sets with only few patterns. Once the columns of T_q^T are computed, the k-Means algorithm clusters them into k groups, whereby k is independent of q. Because of the similarity between the features within a cluster, one can be selected and the others


can be treated as redundant information. The feature in one cluster that is closest to the center of this cluster will be chosen. Thus, a group of k features is maintained. The process of data clustering must be followed by a final selection to obtain the input features for the OSA methods. The k-Means cluster algorithm provides a feature ranking based on the distance between the cluster center and the features within the cluster. When used automatically, the algorithm selects from each cluster the feature that is closest to the center of the cluster. This way of selection is based on the mathematical relationship only and does not include any physical or engineering intention. To include engineering knowledge in the selection process, the features inside a cluster are treated as physical measurements from the power system and judged by criteria such as technical measurability, availability from the utilities, and expected usefulness for CI-based OSA. For example, the transmitted real power between two network areas might be much more useful as an input feature than a voltage angle on a bus. The voltage on a bus might be more accessible than the generated real power of a particular generator because the voltage is both measurable and shared.

B. Selection by a Decision Tree

In the previous section, a cluster algorithm for the selection of CI input features was introduced. However, the clustering process is deficient in the sense that it is not concerned with the CI target: the clustering algorithm leads to a result that depends on the natural structure of the presented input data but is not influenced by the power system stability assessment problem. In other words, a certain feature showing no correlation to other features may be selected in a single cluster, but that does not necessarily imply that this feature is important for OSA. In order to improve the result, another way of feature selection is introduced in this section.
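As a brief aside, the PCA-plus-k-Means selection described above can be sketched in a few lines. This is an illustrative sketch only, assuming NumPy and scikit-learn are available; the synthetic data, the matrix sizes, the 90 % variability target, and k = 8 are assumptions, not values from the paper.

```python
# Sketch of Section III.A: PCA for dimension reduction, then k-Means on the
# columns of Tq^T, keeping one representative feature per cluster.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
F = rng.standard_normal((200, 30))            # p = 200 patterns, n = 30 features
F = (F - F.mean(axis=0)) / F.std(axis=0)      # normalize the features

C = F.T @ F / (F.shape[0] - 1)                # empirical covariance, Eq. (1)
lam, T = np.linalg.eigh(C)                    # eigenvalues/eigenvectors of C
order = np.argsort(lam)[::-1]                 # sort descending: lam_1 >= ... >= lam_n
lam, T = lam[order], T[:, order]

# choose q for 90 % retained variability, Eq. (4)
q = int(np.searchsorted(np.cumsum(lam) / lam.sum(), 0.90)) + 1
Tq = T[:, :q]                                 # n x q loadings matrix

# cluster the n rows of Tq (i.e., the columns of Tq^T) into k groups
k = 8
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(Tq)

# from each cluster, keep the feature closest to the cluster center
selected = []
for c in range(k):
    members = np.where(km.labels_ == c)[0]
    d = np.linalg.norm(Tq[members] - km.cluster_centers_[c], axis=1)
    selected.append(int(members[d.argmin()]))
print(sorted(selected))
```

In the paper, the representative chosen from each cluster is additionally judged by the engineering criteria listed above (measurability, availability), so the automatic center-distance rule is only the starting point.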
It is based on a DT, and the key idea is to grow a DT with the full input data set. The tree is grown by an algorithm that determines the tree nodes of highest data separability and places them at the top of the tree. In other words, the top nodes of the tree give the most important information about how to separate the data set, and therefore they can be used as input features. The DT is grown with the entire data set as input, and the features corresponding to the k topmost nodes are selected as input features [6]. The tree output is assigned to the minimum-damping coefficient.

DT techniques belong to the CI methods and have become highly popular in the age of modern computers. They are based on a sequence of questions that can be answered by either yes or no. Each question queries whether a predictor satisfies a given condition, whereby the condition can be either continuous or discrete. Depending on the answer to each question, one either proceeds to another question or arrives at a response value. DTs can be used for nonlinear regression (Regression Tree) when using continuous variables, or for classification (Classification Tree) when using discrete classes. When used for feature selection, the DT is grown as a regression tree [9]. Without prior knowledge of the nonlinearity, the regression tree is capable of approximating any nonlinear relationship by a set of linear models. Although regression trees are interpretable representations of a nonlinear input-output relationship, the discontinuity at the decision boundaries is unnatural and brings undesired effects to the overall regression and generalization of the problem [10].

To construct a tree, the data are divided into two sets: one set is used to learn the tree, and the other is used to test it afterwards. For tree growing, different algorithms are available depending on the kind of tree desired. Regardless of the algorithm, the first task is to find the root node of the tree. The root node is the first node splitting the entire data set into two parts; therefore, the root must do the best job of separating the data. The initial split at the root creates two new nodes, called branch nodes. The algorithm searches at both branch nodes again for the best split to separate the subsets, and following this recursive procedure, it continues to split all branch nodes by exhaustive search until either a branch node contains only patterns of one kind or the diversity cannot be increased by splitting the node. The nodes where the tree is not split further are labeled leaf nodes. When the entire tree is split until only leaf nodes remain, the final tree is obtained [9].

For selection purposes, the DT is grown with the entire data set as input. There is no need for a testing set, because the focus is on the branch splitting criteria and not on the tree accuracy. The DT is pruned to a level where only the desired number of features remains; hereby, the repeated selection of the same feature is prevented. The features corresponding to the k topmost nodes, counting each feature only once, are selected as input features. Finally, both selection methods are applied to the features listed in Table I.
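The top-node idea can be illustrated roughly as follows (this is a sketch, not the authors' implementation): grow a regression tree on synthetic patterns whose target mimics a minimum-damping coefficient, then read off the first k distinct split features in breadth-first order. The data, the dependence of the target on features 7 and 3, the tree depth, and k = 5 are assumptions.

```python
# Sketch of Section III.B: grow a regression tree on the full data set and
# select the features used at the topmost split nodes.
from collections import deque

import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(1)
X = rng.standard_normal((300, 20))                      # 300 patterns, 20 features
# synthetic "minimum damping" target, driven mainly by features 7 and 3
y = 0.04 - 0.02 * X[:, 7] + 0.01 * X[:, 3] + 0.001 * rng.standard_normal(300)

tree = DecisionTreeRegressor(max_depth=6, random_state=0).fit(X, y)

# breadth-first walk; collect the first k distinct split features from the top
feat = tree.tree_.feature                               # negative values mark leaves
left, right = tree.tree_.children_left, tree.tree_.children_right
k, selected, queue = 5, [], deque([0])                  # node 0 is the root
while queue and len(selected) < k:
    node = queue.popleft()
    if feat[node] >= 0:
        if feat[node] not in selected:
            selected.append(int(feat[node]))
        queue.append(left[node])
        queue.append(right[node])
print(selected)
```

Because the target is dominated by feature 7, the root split lands on it, mirroring the paper's argument that the top nodes carry the most separating information.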
Then, feature sets of varying size are selected to show the impact of the number of selected features on the OSA methods.

IV. PROPOSED ASSESSMENT METHODS

In principle, the coordinates of the eigenvalues can be calculated by CI directly. This method is applicable with the restrictions that the loci of the different eigenvalues may not overlap and that the number of eigenvalues must always remain constant. These restrictions follow from the fact that the eigenvalues are assigned directly to CI outputs. Another approach discussed in the following sections considers fixed rectangular regions, which are classified according to whether eigenvalues exist inside them. A more generalized method considers the activations of sampling points without any fixed partitioning of the complex plane into regions. Generally, the advantage of the methods using classification or activation values is that the eigenvalues do not need to be separable; therefore, the number of eigenvalues can vary, and overlapping does not lead to problems. In the next sections, the following CI-based OSA methods will be discussed in detail:


- Classification of System States
- Estimation of Power System Minimum Damping
- Direct Eigenvalue Prediction
- Eigenvalue Region Classification
- Eigenvalue Region Prediction

A. Classification of System States

The simplest step in OSA is the classification of a load flow scenario into sufficiently damped and insufficiently damped situations. According to [2], the minimum acceptable level of damping is not clearly known, but a damping ratio of less than 3% must be treated with caution, while a damping ratio of at least 5% for all modes is considered adequate system damping. In this study, the decision boundary between sufficient and insufficient damping is at 4%. To generate training data for the proposed CI methods, different load flow conditions under 5 operating points (Winter, Spring, Summer, Winter S1, Winter S2) are considered in the PST 16-Machine Test System, resulting in a set of 5,360 patterns [7]. The dominant eigenvalues for all cases are shown in Figure 3.

Fig. 3 Eigenvalues for Different Operating Conditions of the PST 16-Machine Test System; Classification Border at 4% Damping

The slant lines in the figure are lines of constant damping from 0% to 10%. As seen in the figure, most of the cases correspond to well-damped conditions, but in some cases the eigenvalues shift into the low-damped region and can cause system instability. When a load flow scenario includes no eigenvalues with corresponding damping coefficients below 4%, the load flow is considered sufficiently damped. When at least one of the modes is damped below 4%, the load flow is considered insufficiently damped. When the classification of the system state is implemented together with other OSA methods, it may be used as a first assessment step to detect insufficiently damped situations. However, the only information obtained is that these situations are insufficiently damped; for further investigation, another OSA method may be used. The classification of the system states is performed by common classifier techniques. The patterns are divided into sufficiently damped and insufficiently damped cases. Table II shows the classification errors when a PNN is used, and Table III shows the errors for classification by a DT.

TABLE II
CLASSIFICATION ERROR BY PNN FOR DIFFERENT INPUT DIMENSIONS

Inputs   Training   Testing
 5       0.60 %     1.12 %
10       0.44 %     1.31 %
20       0.15 %     1.49 %
30       0.04 %     1.31 %
40       0.04 %     1.31 %
50       0.00 %     1.31 %

TABLE III
CLASSIFICATION ERROR BY DT FOR DIFFERENT INPUT DIMENSIONS

Inputs   Learning   Testing
 5       0.25 %     0.93 %
10       0.31 %     0.93 %
20       0.17 %     0.93 %
30       0.25 %     1.12 %
40       0.21 %     0.93 %
50       0.25 %     0.75 %

B. Estimation of Power System Minimum Damping

The classification of the power system state is based on the computation of the damping coefficients \zeta. For a given scenario, the damping coefficients corresponding to the n dominant eigenvalues are computed, and the minimum-damping coefficient is given by

\zeta_{min} = \min_{1 \le i \le n} (\zeta_i)    (7)

In classification, \zeta_{min} is compared to the classification border, and the situation is determined to be sufficiently or insufficiently damped. However, instead of separating the scenarios into sufficiently and insufficiently damped cases, a CI method can be applied to estimate the value of \zeta_{min} directly. The advantage is that there is no crisp classification border, whose discontinuity automatically leads to errors for load flow scenarios near the border. In addition, the exact minimum-damping value can be used as a stability index that provides the TSO with much more information about the distance to blackouts. The power system minimum damping can be estimated using an ANFIS. The standard deviation (STD) of the difference between computed and predicted minimum damping is shown in Table IV; the mean is always zero.

TABLE IV
STD OF DIFFERENCES BETWEEN PREDICTED AND COMPUTED DAMPING COEFFICIENTS FOR ANFIS UNDER DIFFERENT INPUT DIMENSIONS

Inputs   Training   Testing
 5       0.20 %     0.32 %
 6       0.17 %     0.17 %
 7       0.16 %     0.17 %
 8       0.14 %     0.16 %
 9       0.18 %     0.18 %
10       0.17 %     0.17 %
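The damping coefficient behind Eq. (7) follows directly from an eigenvalue \lambda = \sigma \pm j\omega as \zeta = -\sigma / |\lambda|. A minimal sketch, with three assumed example modes in place of computed system eigenvalues:

```python
# Damping ratio of a mode lambda = sigma + j*omega and the minimum-damping
# coefficient of Eq. (7); the three modes are assumed example values.
import numpy as np

def damping_ratio(modes):
    return -modes.real / np.abs(modes)

modes = np.array([-0.15 + 3.8j, -0.30 + 5.1j, -0.08 + 2.9j])
zeta_min = damping_ratio(modes).min()        # Eq. (7)
print(f"zeta_min = {zeta_min:.2%}")
print("insufficiently damped" if zeta_min < 0.04 else "sufficiently damped")
```

Here the mode at -0.08 + 2.9j dominates, its damping ratio falls below the 4% border, and the scenario is flagged as insufficiently damped.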


However, an ANFIS cannot be designed for a large number of inputs; the DT method is therefore more suitable for high input dimensions. The DT results are shown in Table V.
TABLE V
STD OF DIFFERENCES BETWEEN PREDICTED AND COMPUTED DAMPING COEFFICIENTS FOR DT UNDER DIFFERENT INPUT DIMENSIONS

Inputs   Learning   Testing
 5       0.11 %     0.20 %
10       0.12 %     0.38 %
20       0.10 %     0.23 %
30       0.09 %     0.22 %
40       0.09 %     0.23 %
50       0.09 %     0.23 %
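A regression-tree estimator of \zeta_{min} in the spirit of Table V can be sketched as follows; the synthetic feature-to-damping mapping, the train/test split, and the leaf-size setting are assumptions, and the resulting error STD is merely illustrative.

```python
# Sketch: regression tree estimating the minimum-damping coefficient from
# load-flow features, judged by the STD of the prediction error.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(3)
X = rng.standard_normal((2000, 10))                         # patterns x features
# assumed smooth nonlinear target around the 4% border
zeta_min = 0.04 + 0.01 * np.tanh(X[:, 0] - 0.5 * X[:, 1])

X_train, X_test = X[:1500], X[1500:]
y_train, y_test = zeta_min[:1500], zeta_min[1500:]

dt = DecisionTreeRegressor(min_samples_leaf=20, random_state=0).fit(X_train, y_train)
err = y_test - dt.predict(X_test)
print(f"test error STD = {err.std():.4%}")
```

As in the paper, the quality measure is the spread of the residuals on unseen patterns rather than a classification error.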

C. Direct Eigenvalue Prediction

Instead of classification and damping estimation, the positions of the dominant eigenvalues within the complex plane can be predicted. An OSA application for direct eigenvalue prediction was first introduced in [11] and [12]. In this work, a NN is used to predict the eigenvalue positions; the NN outputs are directly assigned to the coordinates of the dominant eigenvalues.

Fig. 4 Comparison of Eigenvalues of the European Power System UCTE/CENTREL under various Load Flow Conditions. The accurate Eigenvalues are marked with x, the Eigenvalues predicted by the NN are circled.

Figure 4 shows a comparison of the eigenvalues obtained from detailed eigenvalue analysis and from the NN approach, respectively. This example has been calculated for the European interconnected power system UCTE/CENTREL under various load flow situations. The figure shows two dominant eigenvalues with damping around and below 3%; a third eigenvalue with damping above 10% becomes dominant only for certain load flow conditions. When the CI outputs are directly assigned to the coordinates of dominant eigenvalues, these eigenvalues must remain dominant under all possible load flow conditions. However, the number of dominant eigenvalues is not necessarily constant for all load flow scenarios, whereas a direct assignment of the CI outputs to the coordinates requires a constant number of coordinates. Moreover, it is required that the loci of the dominant eigenvalues do not overlap. For CI training, the eigenvalues need to be computed analytically to generate training data, and in analytical eigenvalue computation the order of the computed eigenvalues may vary; therefore, it is not possible to track the coordinates of particular eigenvalues, but only the complete vector of all computed eigenvalues. The direct eigenvalue prediction leads to highly accurate results, as shown in Figure 4, but it is only applicable under some constraints, for example in small power systems with few dominant eigenvalues that do not overlap. CI methods applied to large power systems must be designed for a variable number of dominant eigenvalues.

D. Eigenvalue Region Classification

The eigenvalue region classification method is an expansion of the previously introduced 2-class classification to a multi-class classification and was proposed in [7]. First, the area within the complex plane where dominant eigenvalues typically occur is defined as the observation area. Then, the observation area is split into smaller regions. The borders of these regions are determined by certain values of the damping and the frequency. For each region in the observation area, classification is performed independently; the classification is based on the existence of eigenvalues inside these regions. Thus, a more detailed picture of the power system is obtained. The dominant eigenvalues from the PST 16-Machine Test System under 5 operating conditions and the observation area, divided into 12 regions, are shown in Figure 5. The regions overlap slightly to avoid a high false-dismissal error at the classification borders. When the number of regions is increased, the accuracy of the eigenvalue region classification method is higher, because smaller regions provide more exact information about the location of the eigenvalues. But a large number of regions will cause high misclassification errors at the boundaries.


Fig. 5 Eigenvalues for Different Operating Conditions of the PST 16-Machine Test System and 12 Overlapping Regions for Eigenvalue Region Classification
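The classification target of this method reduces, per region, to a membership test on the damping ratio and frequency of each dominant mode. A sketch follows; the region borders below are assumptions and, unlike the paper's regions, do not overlap.

```python
# Sketch of Section IV.D: flag each damping/frequency region that contains
# at least one dominant eigenvalue. Borders are assumed example values.
import numpy as np

damping_edges = np.array([-0.01, 0.01, 0.02, 0.03, 0.04])   # damping-ratio borders
freq_edges = np.array([0.2, 0.5, 0.8, 1.1])                 # frequency borders in Hz

def region_flags(modes):
    """Return a 0/1 flag per region: 1 if any dominant mode falls inside."""
    zeta = -modes.real / np.abs(modes)
    freq = modes.imag / (2 * np.pi)
    flags = np.zeros((len(damping_edges) - 1, len(freq_edges) - 1), dtype=int)
    for z, f in zip(zeta, freq):
        i = np.searchsorted(damping_edges, z) - 1
        j = np.searchsorted(freq_edges, f) - 1
        if 0 <= i < flags.shape[0] and 0 <= j < flags.shape[1]:
            flags[i, j] = 1
    return flags

modes = np.array([-0.10 + 4.0j, -0.05 + 2.5j])
print(region_flags(modes))
```

In the paper, one classifier per region learns exactly this 0/1 target from the load-flow features, so a trained bank of classifiers reproduces the flag matrix without any eigenvalue computation.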


The classification errors for eigenvalue region classification using 12 PNNs are listed in Table VI.
TABLE VI
EIGENVALUE REGION CLASSIFICATION ERRORS BY PNN FOR DIFFERENT INPUT DIMENSIONS

Inputs   Training   Testing
 5       3.87 %     4.83 %
10       2.01 %     4.37 %
20       1.07 %     3.12 %
30       0.75 %     3.59 %
40       0.59 %     3.58 %
50       0.41 %     3.58 %
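Each of the PNN classifiers behind Tables II and VI is, at its core, a Parzen-window kernel vote. The following sketch shows these mechanics with an assumed smoothing parameter and synthetic two-class patterns; it is not the paper's trained network.

```python
# Minimal probabilistic-neural-network (Parzen-window) classifier sketch.
# Class score = mean Gaussian kernel activation over that class's patterns.
import numpy as np

def pnn_classify(X_train, y_train, X_test, sigma=0.5):
    labels = np.unique(y_train)
    preds = []
    for x in X_test:
        d2 = ((X_train - x) ** 2).sum(axis=1)       # squared distances
        k = np.exp(-d2 / (2.0 * sigma ** 2))        # kernel activations
        scores = [k[y_train == c].mean() for c in labels]
        preds.append(labels[int(np.argmax(scores))])
    return np.array(preds)

rng = np.random.default_rng(2)
# synthetic patterns: class 0 = sufficiently damped, class 1 = insufficiently damped
X = np.vstack([rng.normal(0.0, 0.3, (40, 4)), rng.normal(1.5, 0.3, (40, 4))])
y = np.array([0] * 40 + [1] * 40)
pred = pnn_classify(X, y, np.array([[0.1, 0.0, 0.0, 0.1], [1.4, 1.6, 1.5, 1.4]]))
print(pred)
```

Because the PNN stores the training patterns themselves, it needs no iterative training, which is one reason it suits the fast classification tasks described here.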

E. Eigenvalue Region Prediction

All methods described above have some disadvantages. The direct eigenvalue prediction is highly accurate, but it depends on a fixed number of eigenvalues to predict, since it is directly tied to the eigenvalue coordinates. Large power systems experience more than one or two dominant eigenvalues, the number of dominant eigenvalues may vary for different load flow situations, and the eigenvalue loci may overlap. Every classification method, in turn, suffers at its crisp classification boundaries, where a high number of misclassifications usually occurs. The eigenvalue region prediction method is the logical consequence of combining the direct eigenvalue prediction with the eigenvalue region classification: it shows none of the mentioned drawbacks and benefits from both techniques. The method is independent of the number of dominant eigenvalues, and the predicted eigenvalue regions, in contrast to those of the eigenvalue region classification, are not fixed and predefined; they are flexible in size and shape and are given by the prediction tool. This method was first proposed in [13] and [14].

For eigenvalue region prediction, the observation area is defined within the complex eigenvalue plane, similar to the region classification method. Then, the entire observation area is sampled in the directions of the real axis and the imaginary axis. The distances between the sampling points and the dominant eigenvalues are computed, and the sampling points are activated depending on these distances: the closer an eigenvalue is to a sampling point, the higher the corresponding activation. Once this is computed for the entire set of patterns, a CI method, e.g. a NN, is trained using these activations. After proper training, the CI method can be used in real time to compute the sampling point activations. Then, the activations are transformed into predicted eigenvalue regions, in which the eigenvalues are expected to be located. These predicted eigenvalue regions are characterized by activations higher than a given limit. The dominant eigenvalues from the PST 16-Machine Test System under 5 operating conditions and the sampled observation area are shown in Figure 6. The sampling points are marked by circles.

Fig. 6 Eigenvalues for Different Operating Conditions of the PST 16-Machine Test System and Sampling Point Locations in the Observation Area

When the observation area is sampled as shown in Figure 6, the activations of the sampling points are used to set up the predicted eigenvalue regions. An example of eigenvalue region prediction is shown in Figure 7.

Eigenvalue Sampling Point Predicted Region

Fig. 7 Real Eigenvalue Positions inside the Predicted Eigenvalue Regions

Eigenvalues outside a corresponding region are counted as errors. The errors for eigenvalue region prediction by NN are shown in Table VII. The errors for a set of DTs are shown in Table VIII. The errors for eigenvalue region prediction are worse than the errors for classification. The reason is that a simple classification is much easier to perform than a prediction of an eigenvalue region. However, most errors arise due to the fact that eigenvalues occur very close to the predicted eigenvalue regions and therefore the actual number of errors is much smaller. Note that the error percentage for classification, eigenvalue region classification, and eigenvalue region prediction errors is not to be confused with the STD of the damping percentage for computed and predicted minimum-dampings.

Authorized licensed use limited to: UNIVERSITY TECNOLOGI MARA. Downloaded on August 27, 2009 at 21:41 from IEEE Xplore. Restrictions apply.

8 TABLE VII EIGENVALUE REGION PREDICTION ERRORS BY NN FOR DIFFERENT INPUT DIMENSIONS [5] Y. Mansour, E. Vaahedi, M.A. El-Sharkawi, Large Scale Dynamic Security Screening and Ranking using Neural Networks, IEEE Trans. on Power Systems, vol. 12, no. 2, May 1997 L.A. Wehenkel, Automatic Learning Techniques in Power Systems, Kluwer Academic Publishers, Boston, 1998 S.P. Teeuwsen, I. Erlich, M.A. El-Sharkawi, Neural Network based Classification Method for Small-Signal Stability Assessment, IEEE PowerTech, Bologna, Italy, June, 2003 I.T. Jolliffe, Principal Component Analysis, Springer, New York, 1986 Breiman, et al., Classification and Regression Trees, Wadsworth Int., Calif., 1984 R. Jang, Neuro-Fuzzy and Soft Computing, Prentice Hall, NJ, 1997 S.P. Teeuwsen, A. Fischer, I. Erlich, M.A. El-Sharkawi, Assessment of the Small Signal Stability of the European Interconnected Electric Power System Using Neural Networks, LESCOPE 2001, Halifax, Canada, June 2001 S.P. Teeuwsen, I. Erlich, M.A. El-Sharkawi, Feature Reduction for Neural Network based Small-Signal Stability Assessment, PSCC 2002, Sevilla, Spain, June 2002 S.P. Teeuwsen, I. Erlich, M.A. El-Sharkawi, Small-Signal Stability Assessment based on Advanced Neural Network Methods, IEEE PES General Meeting, Toronto, Canada, July, 2003 S.P. Teeuwsen, I. Erlich, U. Bachmann, Small-Signal Stability Assessment of the European Power System based on Advanced Neural Network Method, IFAC 2003, Seoul, Korea, September, 2003

Inputs 5 10 20 30 40 50

Training 11.28 % 5.96 % 3.87 % 3.41 % 3.01 % 3.34 %

Testing 10.95 % 7.10 % 3.55 % 2.66 % 3.25 % 2.96 %

[6] [7]

[8] [9] [10] [11]

TABLE VIII EIGENVALUE REGION PREDICTION ERRORS BY DT FOR DIFFERENT INPUT DIMENSIONS [12]

Inputs 5 10 20 30 40 50

Learning 1.65 % 1.46 % 1.29 % 1.36 % 1.39 % 1.32 % V. CONCLUSION

Testing 4.14 % 5.03 % 5.03 % 5.62 % 4.73 % 3.85 %

[13]

[14]

VII. BIOGRAPHIES The classification of the system states gives an overview over the stability situation. But in the case of insufficient damping, the TSO will need more detailed information. This might be given by the estimation of the power system damping. However, there is still no information about the number of dominant eigenvalues and the frequency of oscillation. The direct eigenvalue prediction provides both, but it lacks the dependency on the number of eigenvalues. It is therefore mostly applicable in small power systems with few dominant eigenvalues. The eigenvalue region classification method is independent of the number of dominant eigenvalues and provides information about the approximate position of eigenvalues. However, each classification method shows high errors on the decision boundaries. Therefore, the most flexible method is the eigenvalue region prediction method. It does not depend on any predefined regions and provides the TSO with detailed and accurate information about number and position or dominant eigenvalues within the complex plain. For a small number of input features, the errors usually increase. However, in the investigated test system, a number of 2030 features lead in combination with effective CI implementation to accurate and applicable results. VI. REFERENCES
[1] U. Bachmann, I. Erlich and E. Grebe, Analysis of interarea oscillations in the European electric power system in synchronous parallel operation with the Central-European networks, IEEE PowerTech, Budapest 1999 H. Breulmann, E. Grebe, et al., Analysis and Damping of Inter-Area Oscillations in the UCTE/CENTREL Power System, CIGRE 38-113, Session 2000 M. Kurth, E. Welfonder, Oscillation Behaviour of the Enlarged European Power System under Deregulated Energy Market Conditions, IFAC 2003, Seoul, Korea, September 15-18, 2003 M.A. El-Sharkawi, Neural network and its ancillary techniques as applied to power systems, IEE Colloquium on Artificial Intelligence Applications in Power Systems, pp. 3/1 - 3/6, 1995 Simon P. Teeuwsen (1976) is presently PhD student in the Department of Electrical Power Systems at the University of Duisburg-Essen/Germany. He started his studies at the University of Duisburg in 1995. In 2000, he went as exchange student to the Unversity of Washington, Seattle, where he performed his Diploma Thesis. After his return to Germany in 2001, he received his Dipl.-Ing. degree at the University of Duisburg. He is a member of VDE, VDI, and IEEE. Istvan Erlich (1953) received his Dipl.-Ing. degree in electrical engineering from the University of Dresden/Germany in 1976. After his studies, he worked in Hungary in the field of electrical distribution networks. From 1979 to 1991, he joined the Department of Electrical Power Systems of the University of Dresden again, where he received his PhD degree in 1983. In the period of 1991 to 1998, he worked with the consulting company EAB in Berlin and the Fraunhofer Institute IITB Dresden respectively. During this time, he also had a teaching assignment at the University of Dresden. Since 1998, he is Professor and head of the Institute of Electrical Power Systems at the University of Duisburg-Essen/Germany. 
His major scientific interest is focused on power system stability and control, modelling and simulation of power system dynamics including intelligent system applications. He is a member of VDE and IEEE. Mohammed A. El-Sharkawi received the B.Sc. degree in electrical engineering in 1971 from Cairo High Institute of Technology, Egypt, and the M.A.Sc. and Ph.D. degrees in electrical engineering from the University of British Columbia, Vancouver, B.C., Canada, in 1977 and 1980, respectively. In 1980, he joined the University of Washington, Seattle, as a Faculty Member. He served as the Chairman of Graduate Studies and Research and is presently a Professor of Electrical Engineering. He is the Vice President for Technical Activities of the Neural Networks Society. He organized and taught several international tutorials on intelligent systems applications, power quality and power systems, and he organized and chaired numerous sessions in IEEE and other international conferences. He is a member of the editorial board and Associate Editor of several journals, including the IEEE TRANSACTIONS ON NEURAL NETWORKS.

[2]

[3]

[4]

Authorized licensed use limited to: UNIVERSITY TECNOLOGI MARA. Downloaded on August 27, 2009 at 21:41 from IEEE Xplore. Restrictions apply.