

A Self-Organizing Learning Array System for Power Quality Classification Based on Wavelet Transform
Haibo He, Student Member, IEEE, and Janusz A. Starzyk, Senior Member, IEEE
Abstract—This paper proposes a novel approach to power quality (PQ) disturbance classification based on the wavelet transform and the self-organizing learning array (SOLAR) system. The wavelet transform is utilized to extract feature vectors for various PQ disturbances based on multiresolution analysis (MRA). These feature vectors are then applied to a SOLAR system for training and testing. SOLAR has three advantages over a typical neural network: data driven learning, local interconnections, and entropy based self-organization. Several typical PQ disturbances are taken into consideration in this paper. Comparisons among the proposed method, the support vector machine (SVM) method, and existing literature reports show that the proposed method can provide accurate classification results. By a hypothesis test of the average values, it is shown that there is no statistically significant difference in the performance of the proposed method for PQ classification when different wavelets are chosen. This means one can choose a wavelet with a short filter length to achieve good classification results as well as a small computational cost. Gaussian white noise is considered, and the Monte Carlo method is used to evaluate the performance of the proposed method under different noise conditions.

Index Terms—Noise, power quality (PQ), self-organizing learning array (SOLAR), support vector machine (SVM), wavelet transform.

Fig. 1. Two-stage wavelet analysis filter bank.

I. INTRODUCTION

POWER QUALITY (PQ) has become an issue of critical importance for the power industry in recent years. The fast expansion in the use of power electronics devices has led to a wide diffusion of nonlinear, time-variant loads in the power distribution network, which causes serious power quality problems. At the same time, the wide use of sensitive electronic devices requires extremely high quality power supplies. According to data provided by the Electric Power Research Institute (EPRI), the US economy is losing between $104 billion and $164 billion a year to outages, and another $15 billion to $24 billion to other PQ phenomena [1]. Therefore, research on power quality issues has captured rapidly increasing attention in the power engineering community in the past decade.

This paper is focused on the PQ disturbance classification problem. Artificial intelligence (AI) and machine learning are among the most powerful tools to deal with this issue. Ibrahim et al. provided an excellent survey of advanced AI techniques for PQ applications [2].
Manuscript received May 24, 2004; revised December 16, 2004. This work was supported in part by the School of Electrical Engineering and Computer Science, Ohio University, Athens, OH. Paper no. TPWRD-00314-2004. The authors are with the School of Electrical Engineering and Computer Science, Ohio University, Athens, OH 45701 USA (e-mail: haibohe@bobcat.ent.ohiou.edu; starzyk@bobcat.ent.ohiou.edu). Digital Object Identifier 10.1109/TPWRD.2005.852392


The most interesting AI tools for PQ problems include Expert Systems (ES) [3], Fuzzy Logic (FL) [4]–[7], Artificial Neural Networks (ANNs) [5], [8]–[10], and Genetic Algorithms (GAs) [11]. Recent advances in wavelet transforms provide another powerful tool for PQ classification. Unlike the Short-Time Fourier Transform (STFT), which uses a fixed window function, the wavelet transform employs a varying time-frequency window, which yields good performance in PQ classification. A two-stage system that employs the wavelet transform and adaptive neuro-fuzzy networks for power quality identification is proposed in [5]. Gaouda et al. proposed an effective wavelet multiresolution signal decomposition method for analyzing power quality transient events based on the standard deviation [12] and the root mean square value [13]. Several typical power quality disturbances are correctly localized and classified in these papers. Wavelet-based online disturbance detection for power quality applications is discussed in [14] and [15]. It is shown that these methods are faster and more precise in discriminating the type of transient event than conventional approaches. Huang et al. proposed an arithmetic coding approach based on the wavelet packet transform to compress power quality disturbance data in [16]. Since noise is omnipresent in a real electrical power distribution network, Yang and Liao presented a de-noising scheme for enhancing wavelet-based power quality monitoring systems [17]. In this scheme, Gaussian white noise is considered and a threshold to eliminate the noise influence is determined adaptively according to the background noise.

Although many research results have been reported, the objective of classifying different kinds of power quality disturbances is still both difficult and challenging. Motivated by recent research in the areas of intelligent systems and wavelet analysis, this paper aims to propose an effective classification method for power quality disturbances based on the wavelet transform and the self-organizing learning array (SOLAR) system. Although wavelet transforms show good performance in PQ analysis, their direct use involves the problem of a too large data set size [4].


In the proposed method, the energy of the detail and approximation information at each decomposition level, obtained after using multiresolution analysis on the original sampled power waveform, is calculated to construct the feature vector for training and testing. In this way, the feature size of each sampled waveform is dramatically reduced and is decided only by the number of decomposition levels. This not only dramatically reduces the feature size, but also keeps the necessary PQ characteristics for classification. It is shown in this paper that the combination of the wavelet transform and the SOLAR system can achieve good PQ classification performance.

The rest of this paper is organized as follows. In Section II, a wavelet based feature extraction scheme is proposed. Section III discusses the self-organizing learning array system for classification. SOLAR is a sparsely connected, multilayer, information theory based learning system. Based on entropy estimation, neuron parameters and connections that correspond to minimum entropy are adaptively set for each neuron. By choosing connections for each neuron, the system sets up its wiring and completes its self-organization. Section IV presents simulation results based on the method proposed in this paper. Comparisons between the proposed method, the support vector machine (SVM), and the latest literature reports are discussed in detail in this section. Further research on the relationship between the classification performance and the wavelet decomposition levels is also presented in this part. By using the hypothesis test of the average values, we show that there is no statistically significant difference in the performance of the proposed method for PQ classification when different wavelets are chosen. This means we can choose a wavelet with a short filter length to achieve good classification results as well as small computational cost. In Section V, noise is taken into consideration and the Monte Carlo method is used to show the effectiveness of the proposed method in a noisy environment. Finally, a conclusion is given in Section VI.

II. WAVELET BASED FEATURE EXTRACTION

The mathematics of the wavelet transform has been extensively studied and can be found in [18] and [19]. Basically, a wavelet is a function psi(t) with a zero average

\int_{-\infty}^{+\infty} \psi(t)\,dt = 0.    (1)

The continuous wavelet transform (CWT) of a signal x(t) is then defined as

\mathrm{CWT}(a,b) = \frac{1}{\sqrt{a}} \int_{-\infty}^{+\infty} x(t)\, \psi^{*}\!\left(\frac{t-b}{a}\right) dt    (2)

where psi is called the mother wavelet, the asterisk denotes the complex conjugate, and a and b are the scaling (dilation) and translation parameters, with a indicating frequency localization and b indicating time localization. The scale parameter decides the oscillatory frequency and the length of the wavelet, and the translation parameter decides its shifting position.

In a practical application, we will use the discrete wavelet transform (DWT) instead of the CWT. To do so, discrete values of the scaling parameter a and the translation parameter b are used. Generally, we can choose a = 2^m and b = n 2^m; then we get the wavelet set

\psi_{m,n}(t) = 2^{-m/2}\, \psi\!\left(2^{-m} t - n\right), \qquad m, n \in \mathbb{Z}.    (3)

This choice provides a dyadic-orthonormal wavelet transform and provides the basis for multiresolution analysis.

Multiresolution analysis was introduced by Mallat, and a detailed study of MRA can be found in [20]. In MRA, any time series x(t) can be completely decomposed in terms of the approximations, provided by the scaling functions phi_{j,k}(t), and the details, provided by the wavelets psi_{j,k}(t), where phi_{j,k} and psi_{j,k} are defined as the following:

\phi_{j,k}(t) = 2^{-j/2}\, \phi\!\left(2^{-j} t - k\right), \qquad \psi_{j,k}(t) = 2^{-j/2}\, \psi\!\left(2^{-j} t - k\right).    (4)

For an N-level wavelet multiresolution analysis (MRA), an (N+1)-dimensional feature vector is constructed, as described in the remainder of this section.
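As a concrete illustration of the dyadic decomposition in (3), the following minimal sketch applies an N-level DWT to a sampled power-frequency waveform. It assumes the PyWavelets and NumPy packages as stand-ins for the authors' MATLAB implementation; the filter-bank mechanics behind pywt.wavedec are exactly the scheme of Fig. 1 described next.

```python
# Minimal sketch (not the authors' code): N-level dyadic DWT of a sampled
# waveform, assuming the PyWavelets package is available.
import numpy as np
import pywt

POINTS_PER_CYCLE, CYCLES = 256, 10          # sampling used in Section IV-A
n = np.arange(POINTS_PER_CYCLE * CYCLES)    # 2560 samples of one 50-Hz test case
signal = np.sin(2 * np.pi * n / POINTS_PER_CYCLE)   # undisturbed sinusoid

# coeffs = [a_N, d_N, d_{N-1}, ..., d_1]: one approximation band, N detail bands
N = 6
coeffs = pywt.wavedec(signal, wavelet="db4", level=N)
print(f"{N} detail bands + 1 approximation band, "
      f"{sum(c.size for c in coeffs)} coefficients in total")
```

With six levels the 2560-point waveform is reduced to seven coefficient bands; the energy of each band is what the feature vector of this section actually retains.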

The scaling function phi(t) is associated with the low-pass filter with filter coefficients h(n), and the wavelet function psi(t) is associated with the high-pass filter with filter coefficients g(n). The so-called Two Scale Equations (TSE) give rise to these filters:

\phi(t) = \sqrt{2} \sum_{n} h(n)\, \phi(2t - n)    (5)

\psi(t) = \sqrt{2} \sum_{n} g(n)\, \phi(2t - n).    (6)

There are some important properties of these filters:
• \sum_{n} h(n) = \sqrt{2} and \sum_{n} g(n) = 0;
• both the high-pass filter and the low-pass filter are half band;
• the filter g is an alternating flip of the filter h, which means that there is an odd integer N_f such that

g(n) = (-1)^{n}\, h(N_f - n)    (7)

which, in the frequency domain, corresponds to the half-band condition

|H(\omega)|^{2} + |H(\omega + \pi)|^{2} = 2.    (8)

Daubechies [18] gives a detailed discussion about the characteristics of these filters and how to construct them.

Multiresolution analysis leads to a hierarchical and fast decomposition scheme. The decomposition procedure starts with passing a signal through these filters, sampling with a factor of 2. This can be implemented by a set of successive filter banks as shown in Fig. 1, where the down arrow denotes down-sampling with a factor of 2 and h and g are the low-pass and high-pass filters as defined in (5)–(8). The relationship of the approximation coefficients and detail coefficients between two adjacent levels is given as

a_{j+1}(k) = \sum_{n} h(n - 2k)\, a_{j}(n)    (9)

d_{j+1}(k) = \sum_{n} g(n - 2k)\, a_{j}(n)    (10)

where a_j and d_j represent the approximation coefficients and detail coefficients of the signal at level j; a_j is called the approximation at level j and d_j is called the detail at level j. The approximations are the low-frequency components of the time series and the details are the high-frequency components. Since both the high-pass filter and the low-pass filter are half band, the MRA decomposition in the frequency domain for a signal sampled with sample frequency f_s can be demonstrated as in Fig. 2.

In this way, the decomposition coefficients of the MRA analysis can be expressed as

C = [a_N, d_N, d_{N-1}, \ldots, d_1]    (11)

which corresponds to the decomposition of the signal as

x(t) = \sum_{k} a_N(k)\, \phi_{N,k}(t) + \sum_{j=1}^{N} \sum_{k} d_j(k)\, \psi_{j,k}(t)    (12)

where j is the wavelet decomposition level, from level 1 to level N, and k is the coefficient index at each decomposition level.

In order to reduce the feature dimension, we will not directly use the detail and approximation information for training and testing. Instead, we propose to use the energy at each decomposition level as a new input variable for SOLAR classification. The energy at each decomposition level is calculated using the following equations:

E_{d_j} = \sum_{k} |d_j(k)|^{2}, \qquad j = 1, \ldots, N    (13)

E_{a_N} = \sum_{k} |a_N(k)|^{2}    (14)

where E_{d_j} is the energy of the detail at decomposition level j and E_{a_N} is the energy of the approximation at the last decomposition level N. In this way, for an N-level wavelet decomposition, we construct an (N+1)-dimensional feature vector for further analysis. Fig. 3 shows the data flow in the proposed wavelet feature extraction.

Fig. 3. Wavelet based feature extraction: MRA analysis and feature construction.
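A minimal sketch of the energy-feature construction of (13) and (14), again assuming PyWavelets rather than the authors' MATLAB code. The helper returns the (N+1)-dimensional vector [E_d1, ..., E_dN, E_aN] used as the classifier input in Section IV.

```python
import numpy as np
import pywt

def wavelet_energy_features(signal, wavelet="db4", level=10):
    """(level+1)-dimensional energy feature vector of (13)-(14):
    one energy per detail band d_1..d_N plus the last approximation a_N.
    Note: PyWavelets warns when `level` exceeds its recommended maximum for
    short signals, but it still returns the coefficients."""
    coeffs = pywt.wavedec(np.asarray(signal, dtype=float), wavelet, level=level)
    approx, details = coeffs[0], coeffs[1:]          # [a_N, d_N, ..., d_1]
    e_detail = [float(np.sum(d ** 2)) for d in reversed(details)]  # E_d1..E_dN
    e_approx = float(np.sum(approx ** 2))                          # E_aN
    return np.array(e_detail + [e_approx])

# e.g. a 10-level decomposition of a 2560-sample waveform gives an
# 11-dimensional feature vector, matching Section IV-B.
```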

III. SOLAR FOR CLASSIFICATION

A self-organizing learning array (SOLAR) system was proposed in [21], [22]. As a parallel learning structure, SOLAR has several advantages over a typical neural network.

1) Data driven learning. Every neuron of the system is prewired to its local neighbors and the final wiring configuration is decided during the learning stage. Each neuron performs its specific arithmetic or logic function concurrently. Each neuron learns the data distribution in its input space and adjusts its connections and function to maximize the local space information index. In this way, it is directed toward a specific problem solution.

2) Local interconnections. SOLAR neurons are event-driven processors responding to their selected data and control inputs. A neuron only reacts to data from a selected part of the entire input space, which forms a local input space for the neuron. Each neuron performs a simple operation like adding, subtracting, shifting, or a simple approximation of the logarithm and exponent functions. Neurons exchange information during learning and in operation. A control input plays the role of an inhibitory connection in a biological neuron: with the control input high, a controlled neuron is prevented from firing.

3) Entropy based self-organization. In SOLAR, neurons self-organize by adapting to the information included in their input data. Each neuron selects a specific arithmetic (linear or nonlinear) or logic function and applies it to its input data to reach the maximal information index in its learning space based on the data entropy. Different combinations of data inputs and transformation operations result in different information index values. The information index (INI) is maximized in each neuron based on the probabilities of data from various classes and subspaces in the neuron's input space. The INI is locally defined in (15) in terms of the probability of an input satisfying the threshold, the probability of a class satisfying the threshold, the probability of a class not satisfying the threshold, and the class probability. Here these probabilities are estimated from the number of input data that belong to a class and satisfy the threshold, those that do not satisfy the threshold, the number of input data that satisfy the threshold, the number of input data that belong to the class, and the total number of input data. The information index defined in (15) is normalized to the (0, 1) interval. The physical meaning of the information index is that of a normalized measure of completeness of the problem solution; it is an opposite of entropy, which measures disorder. The value of the information index measures the quality of a neuron's subspace separation: INI = 0 indicates that there was no reduction in data entropy, while INI = 1 indicates that the data entropy in the neuron's input space was reduced to 0, that is to say, the data is already well classified in the subspace.

The INI is also related to the definition of information deficiency (IND) introduced in (16), which quantifies the amount of information left in the subspaces. Information deficiency helps to self-organize the learning process. At the first layer of neurons, it is assumed that the input information deficiency is one. As subsequent neurons extract information from the input data, there is less and less independent information left in the data. A subspace with zero information deficiency does not require any learning. The learning array grows by adding more neurons until the information deficiency in the subsequent neurons falls below a set threshold value.

The optimized configuration information is stored inside each neuron in order to provide the best separation of the various classes in the neuron's input training data. Based on the entropy estimation, neuron parameters and connections that correspond to minimum entropy are adaptively set for each neuron. As a result of training, neurons internally save the correct recognition probabilities of all the classes. If an input data point falls in a voting neuron's input subspace, it is going to vote for this data using its estimated probabilities for that class. The voting scheme combines all the information and classifies the input signal using a weight function designed after the Maximum Ratio Combination (MRC) technique as presented in [23], given in (17), where each vote's correct classification probability is one of the class probabilities defined in (15), the sum runs over the voting neurons, and a small number prevents division by zero. This weight function provides a statistically robust fusion of the individual neurons' votes. The classifier chooses the class with the maximum weight.

The SOLAR implemented in this paper employs a feed forward (FF) structure with all neurons arranged in multiple layers. A detailed discussion about the structure and operation of the SOLAR system can be found in [21] and [22].
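The exact expressions for (15)–(17) are not reproduced here, so the following is only a simplified, assumed illustration of the entropy-based selection idea: for one scalar input and one candidate threshold, the index below measures the relative reduction of class entropy obtained by splitting the neuron's input subspace, normalized to (0, 1) as described above. The function names and the precise formula are illustrative, not the SOLAR definitions.

```python
import numpy as np

def class_entropy(labels):
    """Shannon entropy of the class distribution in a subspace (labels: 1-D array)."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))

def information_index(feature, labels, threshold):
    """Simplified stand-in for (15): relative entropy reduction achieved by
    thresholding one input; 0 = no reduction, 1 = subspaces fully separated."""
    below, above = labels[feature <= threshold], labels[feature > threshold]
    h0 = class_entropy(labels)
    if h0 == 0.0 or below.size == 0 or above.size == 0:
        return 0.0
    h_split = (below.size * class_entropy(below) +
               above.size * class_entropy(above)) / labels.size
    return 1.0 - h_split / h0

def select_threshold(feature, labels):
    """Pick the threshold with maximal index for one candidate neuron input."""
    candidates = np.unique(feature)
    scores = [information_index(feature, labels, t) for t in candidates]
    best = int(np.argmax(scores))
    return float(candidates[best]), scores[best]
```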
IV. SIMULATION AND ANALYSIS

A. Data Generation

The simulation data were generated in MATLAB based on the model in [24]. Seven classes (C1–C7) of different PQ disturbances were considered, namely undisturbed sinusoid (normal), sag, swell, outage, harmonics, sag with harmonics, and swell with harmonics. Table I gives the signal generation models and their control parameters. Two hundred cases of each class with different parameters were generated for training and another 200 cases were generated for testing. Both the training and testing signals are sampled at 256 points/cycle (the same as in [24], for results comparison) and the nominal frequency is 50 Hz. Ten power frequency cycles containing the disturbance are used, for a total of 2560 points.

TABLE I POWER QUALITY DISTURBANCE MODEL
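The parameter ranges of Table I are not legible in this copy, so the sketch below only illustrates the style of parametric model used in Section IV-A for a single class (voltage sag), with hypothetical depth and timing ranges; the sampling settings match the text (256 points/cycle, 50 Hz, ten cycles).

```python
import numpy as np

rng = np.random.default_rng(0)
POINTS_PER_CYCLE, CYCLES, F0 = 256, 10, 50.0
t = np.arange(POINTS_PER_CYCLE * CYCLES) / (POINTS_PER_CYCLE * F0)   # 2560 samples

def voltage_sag(depth, t_start, t_end):
    """One sag waveform: the amplitude drops by `depth` between t_start and
    t_end (illustrative model; the Table I ranges are assumed, not reproduced)."""
    envelope = 1.0 - depth * ((t >= t_start) & (t <= t_end))
    return envelope * np.sin(2 * np.pi * F0 * t)

# 200 training cases with randomly drawn control parameters (hypothetical ranges)
training_cases = [voltage_sag(depth=rng.uniform(0.1, 0.9),
                              t_start=rng.uniform(0.02, 0.08),
                              t_end=rng.uniform(0.10, 0.18))
                  for _ in range(200)]
```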

B. Simulation Results

Daubechies 4 (Db4) wavelets with ten levels of decomposition were used for the analysis. Based on the feature extraction shown in Fig. 3, 11-dimensional feature sets for the training and testing data were constructed; that is to say, the total size of the training data or testing data set is 1400 x 11, where 1400 comes from 200 cases per class multiplied by 7 classes and 11 is the dimension of the feature vector of each case. All data sets were scaled to the range of (1–255) before being applied to SOLAR for training and testing.

Figs. 4(a) and 4(b) show two two-dimensional projections of the training set. The dimensions here describe different features resulting from the wavelet transform. As we can see from Fig. 4, in some dimensions the data sets are mixed up, such as dimension 8 (x axis) and dimension 10 (y axis) shown in Fig. 4(a), while in some other dimensions the data are better separated, such as dimension 8 (x axis) and dimension 6 (y axis) shown in Fig. 4(b). Since SOLAR dynamically chooses its functionality and connection structure in its learning stage, it can easily handle this kind of data classification problem.

Fig. 4. Two-dimensional projections of the feature vector. (a) Dimension 8 and dimension 10. (b) Dimension 8 and dimension 6.

In order to evaluate the performance of the proposed method, we compared the classification result with the recent results reported in the literature [24] and with classification using the Support Vector Machine (SVM) method. Simulation of the SVM for classification is based on a modification of the Ohio State University (OSU) SVM Classifier Matlab Toolbox [25]. Two types of SVM were implemented in our simulation: C-Support Vector Classification and nu-Support Vector Classification. For each type of SVM, the following four kernel functions are taken into account:

Linear:  K(x_i, x_j) = x_i^{T} x_j    (18)

Polynomial:  K(x_i, x_j) = (\gamma\, x_i^{T} x_j + r)^{d}, \quad \gamma > 0    (19)

Radial Basis Function (RBF):  K(x_i, x_j) = \exp(-\gamma \|x_i - x_j\|^{2}), \quad \gamma > 0    (20)

Sigmoid:  K(x_i, x_j) = \tanh(\gamma\, x_i^{T} x_j + r)    (21)

where x_i and x_j are the feature vectors in the input space (i and j denote the indices of the feature vectors), the kernel corresponds to an inner product of the mapping function applied to the two vectors, and gamma, r, and d are kernel parameters. A detailed discussion about these types of SVMs and the kernel functions can be found in [26]–[28].
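A hedged sketch of this comparison setup using scikit-learn as a stand-in for the OSU SVM MATLAB toolbox: it scales the 1400 x 11 feature matrices to (1, 255) as described above and reports accuracy plus the 7 x 7 confusion matrix for a C-SVC and a nu-SVC (here with the RBF kernel of (20)). The variable names and the choice of library are assumptions, not the authors' setup.

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC, NuSVC
from sklearn.metrics import accuracy_score, confusion_matrix

def evaluate_svms(X_train, y_train, X_test, y_test):
    """X_*: 1400 x 11 energy-feature matrices, y_*: class labels 1..7."""
    scaler = MinMaxScaler(feature_range=(1, 255)).fit(X_train)  # scale to (1, 255)
    Xtr, Xte = scaler.transform(X_train), scaler.transform(X_test)
    results = {}
    for name, model in (("C-SVC", SVC(kernel="rbf")),
                        ("nu-SVC", NuSVC(kernel="rbf"))):
        pred = model.fit(Xtr, y_train).predict(Xte)
        results[name] = (accuracy_score(y_test, pred),
                         confusion_matrix(y_test, pred))   # 7 x 7 matrix
    return results
```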

In order to get a detailed performance evaluation, a 7 x 7 confusion matrix is constructed to show the classification performance of each method. The diagonal elements represent the correctly classified PQ types, and the off-diagonal elements represent misclassifications. Tables II–IV give the simulation results for this seven-class PQ disturbance problem based on the proposed method (SOLAR based on wavelet feature extraction), the method reported in paper [24] (the inductive inference approach), and SVM classification, respectively.

TABLE II CLASSIFICATION RESULT FOR THE PROPOSED METHOD IN THIS PAPER (SOLAR BASED ON WAVELET FEATURE EXTRACTION)

TABLE III CLASSIFICATION RESULTS AS REPORTED IN PAPER [24]: INDUCTIVE INFERENCE APPROACH

TABLE IV CLASSIFICATION RESULTS FOR SVM METHOD

As the results in Tables II–IV show, the method proposed in this paper can effectively classify different kinds of PQ disturbances. One thing that should be noted here is that it is not easy to choose the optimum SVM kernel functions and parameters in advance; it is recognized that an optimum selection of the SVM type, kernel function, and parameter setting may give better results than those listed in Table IV. Further discussion about the selection of SVM kernel functions and parameters can be found in [26]–[28]. The proposed method, in contrast, can automatically provide statistically stable results.

Based on these results, two more questions should be investigated: 1) What is the relationship between the classification performance and the choice of the decomposition levels? 2) How should a suitable wavelet be chosen for analysis in the proposed method? The following parts (C) and (D) discuss these two issues.

C. Discussion About the Classification Performance and Wavelet Decomposition Levels

As discussed in Section II, an N-level wavelet decomposition provides an (N+1)-dimensional feature vector for further analysis. Obviously, decreasing the number of decomposition levels will decrease the feature dimension, while more levels of decomposition will increase the computational cost. So, how should a reasonable number of decomposition levels be chosen? In this simulation, we choose the Db4 wavelet and scan the decomposition levels from 1 to 10. For each type of PQ class (C1 to C7 as in Table I), 200 cases were generated with different parameters for training and another 200 cases were generated for testing. Fig. 5 gives the simulation results.

Fig. 5. Relationship between the wavelet decomposition levels and overall classification accuracy.

As we can see from Fig. 5, when the wavelet decomposition levels are relatively small, the overall classification accuracy is about 60%. When the decomposition level is greater than 6, we can achieve about 90% classification accuracy. Further investigation shows that when the number of decomposition levels is small, each additional level brings a large increase of the classification accuracy, while beyond six levels the improvement becomes small. Fig. 5 therefore provides a reference for the trade-off between the decomposition levels and the classification performance.
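A sketch of the level sweep just described, under stated assumptions: SOLAR itself is not available as a packaged library, so an RBF-kernel SVC stands in as the classifier, and the wavelet_energy_features helper from the Section II sketch is reused; train_waves/test_waves are the raw waveforms of Section IV-A.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

def level_sweep(train_waves, y_train, test_waves, y_test, wavelet="db4"):
    """Overall test accuracy versus decomposition level (1..10), Fig. 5 style."""
    accuracy = {}
    for level in range(1, 11):
        Xtr = np.array([wavelet_energy_features(w, wavelet, level) for w in train_waves])
        Xte = np.array([wavelet_energy_features(w, wavelet, level) for w in test_waves])
        clf = SVC(kernel="rbf").fit(Xtr, y_train)
        accuracy[level] = accuracy_score(y_test, clf.predict(Xte))
    return accuracy   # expect a knee around six levels, as in Fig. 5
```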

D. Choice of a Suitable Wavelet

Another important issue related to the proposed method is the choice of a suitable wavelet. In this part, we investigate the influence of different kinds of wavelets on the classification accuracy of the proposed method. We formulate the following conjecture.

Conjecture 1: No single wavelet transform has a statistically significant advantage over other wavelets in the performance of the proposed method for PQ classification.

To verify Conjecture 1, four commonly used wavelet families, namely the Haar wavelet, Daubechies wavelets, Symlets, and Coiflets, are taken into account. Table V shows their corresponding characteristics.

TABLE V WAVELET CHARACTERISTICS

Here, orthogonal means that the inner products of the wavelet basis functions are zero. Compact support means that the wavelet function has nonzero values only for a finite duration, and the support width measures how long this nonzero duration is. A wavelet with a small support width is fast to compute, but the narrowness in the time domain implies a large width in the frequency domain. Conversely, wavelets with a large compact support are smoother and have finer frequency resolution, but the larger the support, the larger the computational cost. Filter length is the number of filter coefficients as discussed in Section II. This parameter affects the computational cost of the wavelet transform: the longer the wavelet filter length, the higher the computational cost. Symmetry is another desired property for wavelets; however, symmetry prevents orthogonality, and the only filters with the properties of both orthogonality and symmetry are the Haar filters (with 2 filter coefficients). A detailed discussion about these wavelet characteristics can be found in [18] and [19].

In order to evaluate the performance of different kinds of wavelets, we fix the decomposition level at 6 and, for each choice of wavelet, take ten simulation runs. The reason we choose six levels of decomposition is based on the results in Fig. 5, in which we can see that a six-level decomposition provides good classification accuracy (about 90%) as well as a relatively small computational cost compared to higher levels of decomposition, such as nine or ten levels. The trade-off between the decomposition levels and the classification performance was discussed in part C of this section. In each run, 200 cases of each class (seven classes in total) of PQ disturbances based on the models in Table I were generated for training and another 200 cases of each class were generated for testing. The sample frequency was 256 points/cycle, and the seven classes of PQ disturbances defined in Section IV part A were considered. The averaged accuracy over the ten runs for each choice of wavelet is presented in Table VI.

TABLE VI AVERAGED CLASSIFICATION RESULTS FOR DIFFERENT WAVELETS
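The characteristics summarized in Table V can be queried directly when PyWavelets is available (an assumption; the original comparison was done in MATLAB): filter length, orthogonality, and the symmetry attribute of representative members of the four families.

```python
import pywt

# Representative members of the four families compared in Tables V and VI.
for name in ("haar", "db2", "db4", "sym4", "coif2"):
    w = pywt.Wavelet(name)
    print(f"{name:6s} filter length = {w.dec_len:2d}  "
          f"orthogonal = {w.orthogonal}  symmetry = {w.symmetry}")
```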

To test whether there is any significant difference among different wavelet families, or among different wavelets within one wavelet family, we use hypothesis testing of the average values [29]. Here we show the case of testing among different wavelet families. The mean and the standard deviation of the population are calculated using the following equations:

\mu = \frac{1}{n} \sum_{i=1}^{n} x_i    (22)

\sigma = \sqrt{\frac{1}{n} \sum_{i=1}^{n} (x_i - \mu)^{2}}    (23)

where x_i is the classification accuracy of the different wavelets in each wavelet family in Table VI and n is the number of wavelets in each family. Table VII gives the mean and standard deviation results for each family of wavelets as presented in Table VI.

TABLE VII PERFORMANCE OF WAVELETS

We now formulate the hypotheses:

Null hypothesis:  H_0: \mu_1 = \mu_2    (24)

Alternative hypothesis:  H_1: \mu_1 \neq \mu_2.    (25)

The test statistic is calculated as follows:

z = \frac{\mu_1 - \mu_2}{\sqrt{\sigma_1^{2}/n_1 + \sigma_2^{2}/n_2}}.    (26)

For a two-tailed test where the results are significant at a level of 0.05, we will reject H_0 if |z| > 1.96; otherwise, we will accept the null hypothesis H_0, which means there is no difference in the mean values. Table VIII shows the hypothesis testing result.

TABLE VIII WAVELET FAMILY HYPOTHESIS TEST

From the analysis results in Table VIII we can see that there is no statistically significant difference in the classification performance when different wavelet families are chosen. The same test can be performed for the case of different wavelets within one wavelet family, and we get the same result. Fig. 6 shows the analysis results for different wavelets within one wavelet family, namely the Daubechies wavelets, and their corresponding wavelet filter lengths.

Fig. 6. Classification results and their corresponding wavelet filter length for different wavelets within one family (Daubechies wavelet).

So far, we have shown that there is no statistically significant difference in the performance of the proposed method for PQ classification when different wavelets are chosen. This means the proposed method has good robustness characteristics and has no strict requirements for the choice of a particular wavelet. However, different wavelets have different filter lengths, and the longer the wavelet filter length, the larger the computational cost. Based on the results in this part, we can choose a wavelet with a short filter length, such as the Haar wavelet or the Db2 wavelet, to achieve both good classification results and small computational cost.
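A worked sketch of the test in (22)–(26). The population standard deviation of (23) is used (ddof=0), matching the wording above; the accuracy lists passed in are illustrative placeholders, not the Table VI values.

```python
import numpy as np

def wavelet_family_z_test(acc_family_a, acc_family_b, z_crit=1.96):
    """Two-tailed test of (24)-(26) at the 0.05 level: returns (z, accept_H0).
    accept_H0 = True means no statistically significant difference in means."""
    a = np.asarray(acc_family_a, dtype=float)
    b = np.asarray(acc_family_b, dtype=float)
    mu_a, mu_b = a.mean(), b.mean()                     # (22)
    sd_a, sd_b = a.std(ddof=0), b.std(ddof=0)           # (23), population form
    z = (mu_a - mu_b) / np.sqrt(sd_a**2 / a.size + sd_b**2 / b.size)   # (26)
    return z, abs(z) <= z_crit

# Hypothetical per-wavelet accuracies for two families:
z, same_mean = wavelet_family_z_test([0.91, 0.90, 0.92, 0.89], [0.90, 0.91, 0.90])
```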

V. NOISE ANALYSIS FOR THE PROPOSED METHOD

Since noise is omnipresent in an electrical power distribution network, we analyze whether the proposed method is still effective in a noisy environment. Gaussian white noise is widely considered in the research of power quality issues [17], [30], [31]. To test the performance of the proposed method in different noise environments, different levels of noise with signal-to-noise ratio (SNR) values ranging from 10 to 50 dB are considered here. The value of the SNR is defined as

\mathrm{SNR} = 10 \log_{10}\!\left(\frac{P_s}{P_n}\right)    (27)

where P_s is the power (variance) of the signal and P_n is that of the noise. The Monte Carlo method is used to generate the simulation data sets with different parameters as shown in Table I. For each class of PQ disturbance (C1 to C7 in Table I), 200 cases were generated for training and another 200 cases were generated for testing. We choose the Db4 wavelet for analysis, with six and ten levels of decomposition, respectively. Fig. 7 shows the simulation results.

Fig. 7. Classification results under different SNR conditions.

As we can see from Fig. 7, the proposed method has robust anti-noise performance and we can still achieve high overall classification accuracy in a noisy environment. The ten-level decomposition classification result is slightly better than that of the six-level decomposition, but the improvement is not very big. This is consistent with our previous result as shown in Fig. 5.

Fig. 8 shows the overall classification performance for different wavelets in a noisy environment. We choose ten levels of wavelet decomposition here. As we can see, although the Db4 wavelet shows slightly better classification results among the chosen wavelets, there is no statistically significant difference in the performance of the proposed method with different wavelets, even in very low SNR conditions. This is also consistent with our previous results in Section IV part D. It means that the simplest wavelet will do as good a job as any other wavelet in the proposed method, at least for the PQ disturbance classification problem.

Fig. 8. Different wavelet classification results under different SNR conditions.
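A small sketch of the noise injection used in this Monte Carlo experiment, following the SNR definition of (27); NumPy is assumed, and training_cases refers to the waveform list from the Section IV-A sketch.

```python
import numpy as np

rng = np.random.default_rng(1)

def add_awgn(signal, snr_db):
    """Add zero-mean Gaussian white noise so that 10*log10(P_s/P_n) = snr_db,
    with P_s taken as the signal power (variance) as in (27)."""
    p_signal = np.var(signal)
    p_noise = p_signal / (10.0 ** (snr_db / 10.0))
    return signal + rng.normal(0.0, np.sqrt(p_noise), size=signal.shape)

# Monte Carlo style evaluation: regenerate and re-classify the data set at each
# SNR level from 10 to 50 dB, as in Figs. 7 and 8.
noisy_sets = {snr: [add_awgn(w, snr) for w in training_cases]
              for snr in (10, 20, 30, 40, 50)}
```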
VI. CONCLUSION

A novel PQ classification approach based on the wavelet transform and the self-organizing learning array system is proposed in this paper. The wavelet transform is utilized to construct the feature vector based on the multiresolution analysis method. These feature vectors are then applied to a SOLAR system for training and testing. Several typical PQ disturbances are taken into consideration in this paper. Comparison between the proposed method, the support vector machine (SVM), and existing literature reports shows that the proposed method can effectively classify different kinds of PQ disturbances. We have shown that there is no statistically significant difference in the performance of the proposed method when different wavelets are chosen. In addition, simulation results under different noise levels show that the proposed method works well in a noisy environment. The idea of combining the wavelet transform with the self-organizing learning array system could potentially be used in other problem domains, such as financial data analysis, automatic target recognition, etc.

ACKNOWLEDGMENT

The authors would like to thank the reviewers for their helpful comments and suggestions.

REFERENCES

[1] "The cost of power disturbance to industrial and digital economy companies," Primen, Consortium for Electrical Infrastructure to Support a Digital Society, an initiative by EPRI and the Electrical Innovation Institute, 2001.
[2] W. R. A. Ibrahim and M. M. Morcos, "Artificial intelligence and advanced mathematical tools for power quality applications: A survey," IEEE Trans. Power Del., vol. 17, no. 2, pp. 668–673, Apr. 2002.
[3] S. Santoso, J. D. Lamoree, W. M. Grady, E. J. Powers, and S. C. Bhatt, "A scalable PQ event identification system," IEEE Trans. Power Del., vol. 15, no. 2, pp. 738–743, Apr. 2000.
[4] J. Huang, M. Negnevitsky, and D. T. Nguyen, "A neural-fuzzy classifier for recognition of power quality disturbances," IEEE Trans. Power Del., vol. 17, no. 2, pp. 609–616, Apr. 2002.
[5] A. Elmitwally, S. Farghal, M. Kandil, S. Abdelkader, and M. Elkateb, "Proposed wavelet-neurofuzzy combined system for power quality violations detection and diagnosis," Proc. Inst. Electr. Eng., Gener. Transm. Distrib., vol. 148, no. 1, pp. 15–20, Jan. 2001.
[6] "Universal power quality manager with a new control scheme," Proc. Inst. Electr. Eng., Gener. Transm. Distrib., vol. 147, pp. 183–189, May 2000.
[7] "Quantifying electric power quality via fuzzy modeling and analytic hierarchy processing," Proc. Inst. Electr. Eng., Gener. Transm. Distrib., vol. 147, pp. 44–49, 2000.
[8] S. Santoso, E. J. Powers, W. M. Grady, and A. C. Parsons, "Power quality disturbance waveform recognition using wavelet-based neural classifier, Part I: Theoretical foundation," IEEE Trans. Power Del., vol. 15, no. 1, pp. 222–228, Jan. 2000.

[9] S. Santoso, E. J. Powers, W. M. Grady, and A. C. Parsons, "Power quality disturbance waveform recognition using wavelet-based neural classifier, Part II: Application," IEEE Trans. Power Del., vol. 15, no. 1, pp. 229–235, Jan. 2000.
[10] J. V. Wijayakulasooriya, G. A. Putrus, and P. D. Minns, "Electric power quality disturbance classification using self-adapting artificial neural networks," Proc. Inst. Electr. Eng., Gener. Transm. Distrib., vol. 149, no. 1, pp. 98–101, Jan. 2002.
[11] "Fuzzy-based adaptive digital power metering using a genetic algorithm," IEEE Trans. Instrum. Meas., vol. 47, 1998.
[12] A. M. Gaouda, M. M. A. Salama, M. R. Sultan, and A. Y. Chikhani, "Power quality detection and classification using wavelet-multiresolution signal decomposition," IEEE Trans. Power Del., vol. 14, no. 4, pp. 1469–1476, Oct. 1999.
[13] A. M. Gaouda, S. H. Kanoun, M. M. A. Salama, and A. Y. Chikhani, "Wavelet-based signal processing for disturbance classification and measurement," Proc. Inst. Electr. Eng., Gener. Transm. Distrib., vol. 149, no. 3, pp. 310–318, May 2002.
[14] M. Karimi, H. Mokhtari, and M. R. Iravani, "Wavelet based on-line disturbance detection for power quality applications," IEEE Trans. Power Del., vol. 15, no. 4, pp. 1212–1220, Oct. 2000.
[15] H. Mokhtari, M. Karimi-Ghartemani, and M. R. Iravani, "Experimental performance evaluation of a wavelet-based on-line voltage detection method for power quality applications," IEEE Trans. Power Del., vol. 17, no. 1, pp. 161–172, Jan. 2002.
[16] S.-J. Huang and M.-J. Jou, "Application of arithmetic coding for electric power disturbance data compression with wavelet packet enhancement," IEEE Trans. Power Syst., vol. 19, no. 3, pp. 1334–1341, Aug. 2004.
[17] H.-T. Yang and C.-C. Liao, "A de-noising scheme for enhancing wavelet-based power quality monitoring system," IEEE Trans. Power Del., vol. 16, no. 3, pp. 353–360, Jul. 2001.
[18] I. Daubechies, Ten Lectures on Wavelets. Philadelphia, PA: Society for Industrial and Applied Mathematics, 1992.
[19] C. S. Burrus, R. A. Gopinath, and H. Guo, Introduction to Wavelets and Wavelet Transforms: A Primer. Englewood Cliffs, NJ: Prentice-Hall, 1998.
[20] S. Mallat, "A theory for multiresolution signal decomposition: The wavelet representation," IEEE Trans. Pattern Anal. Mach. Intell., vol. 11, no. 7, pp. 674–693, Jul. 1989.
[21] J. A. Starzyk, Z. Zhu, and T.-H. Liu, "Self-organizing learning array," IEEE Trans. Neural Netw., vol. 16, no. 2, pp. 355–363, Mar. 2005.
[22] J. A. Starzyk and T.-H. Liu, "Design of a self-organizing learning array system," in Proc. IEEE Int. Symp. Circuits and Systems, Bangkok, Thailand, May 2003.
[23] J. G. Proakis, Digital Communications, 3rd ed. New York: McGraw-Hill, 1995.
[24] T. K. Abdel-Galil, M. Kamel, A. M. Youssef, E. F. El-Saadany, and M. M. A. Salama, "Power quality disturbance classification using the inductive inference approach," IEEE Trans. Power Del., vol. 19, no. 4, pp. 1812–1818, Oct. 2004.
[25] Ohio State University SVM Classifier Matlab Toolbox. [Online]. Available: http://www.ece.osu.edu/-maj/osu_svm/
[26] B. Scholkopf, A. J. Smola, R. C. Williamson, and P. L. Bartlett, "New support vector algorithms," Neural Computation, vol. 12, pp. 1207–1245, 2000.
[27] V. Vapnik, Statistical Learning Theory. New York: Wiley, 1998.
[28] C. Cortes and V. Vapnik, "Support-vector networks," Machine Learning, vol. 20, pp. 273–297, 1995.
[29] I. Miller and J. E. Freund, Probability and Statistics for Engineers. Englewood Cliffs, NJ: Prentice-Hall, 1965.
[30] "Detection and classification of power quality disturbances in noisy conditions," Proc. Inst. Electr. Eng., Gener. Transm. Distrib., vol. 150, no. 3, pp. 567–572, May 2003.
[31] P. O'Shea, "A high-resolution spectral analysis algorithm for power-system disturbance monitoring," IEEE Trans. Power Syst., vol. 17, no. 3, pp. 676–680, Aug. 2002.

Haibo He (S'04) received the B.S. and M.S. degrees in electrical engineering from Huazhong University of Science and Technology (HUST), Wuhan, China, in 1999 and 2002, respectively. He is currently pursuing the Ph.D. degree in electrical engineering at Ohio University, Athens. His research interests include machine learning, power quality and powerline communication, VLSI design and test of mixed-signal CMOS circuits, low power VLSI, and reconfigurable design for wireless communication.

Janusz A. Starzyk (SM'83) received the M.S. degree in applied mathematics and the Ph.D. degree in electrical engineering from Warsaw University of Technology, Warsaw, Poland, in 1971 and 1976, respectively. He is currently a Professor with the School of Electrical Engineering and Computer Science (EECS), Ohio University, Athens. He has been a consultant to AT&T Bell Laboratories, Sverdrup Technology, Magnolia Broadband, Sarnoff Research, and Magnetek Corporation. His current research is in the areas of self-organizing learning machines, neural networks, rough sets, VLSI design and test of mixed-signal CMOS circuits, and reconfigurable design for wireless communication.