A neural network can also learn unsupervised [9]. Compared with supervised learning, an unsupervised system provides a significantly different neural network approach to the identification of abnormal operation, and the learning can be more protracted. The system can proceed through the learning stage without provision of correct classifications for each input set of data, whereas the connection weight adaptation in the multilayered perceptron is driven precisely by just such knowledge. The possibility of training a network on a set of data with no labelled inputs, or only a fraction of which are labelled, clearly offers significant advantages. However, to interpret the outputs so formed and to use them for classification, knowledge of the training data set is still required. Nevertheless, unsupervised networks require fewer iterations in training, since they do not require exact optimisation for their decision-making.

The unsupervised network implementation described here is based on the well-known Kohonen feature map. This feature map approach, presented by Kohonen in the 1980s, derives from the observation that the fine structure of the cerebral cortex in the human brain may be created during learning by algorithms that promote self-organisation. Kohonen's algorithm mimics such processes. It creates a vector quantiser by adjusting the weights from common input nodes to the output nodes, which are arranged in a two-dimensional grid-like network, as shown in Figure 11.8. Such a mapping is usually from a higher-dimensional input space into a low-dimensional output space.
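
To make this structure concrete, the following is a minimal sketch in Python of how such a map can be represented: a grid of output nodes, each holding a weight vector of the same dimension as the input. The grid size, input dimension, NumPy representation and random initialisation are illustrative assumptions, not details from the text.

```python
import numpy as np

# Minimal self-organising map structure (illustrative sketch).
# N-dimensional inputs map onto an M x M grid of output nodes;
# each output node j holds a weight vector W_j of length N.
rng = np.random.default_rng(0)

N = 8    # input dimension (illustrative assumption)
M = 20   # grid side length, as in the 20 x 20 map described below

# Weight matrix: one row per output node, initialised randomly
# and normalised, as the text requires for distance comparison.
W = rng.standard_normal((M * M, N))
W /= np.linalg.norm(W, axis=1, keepdims=True)

# Grid coordinates of each output node, used later to define
# the neighbourhood around a winning node.
grid = np.array([(r, c) for r in range(M) for c in range(M)])
```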

Table 11.1  Weight matrix for a trained multi-level perceptron network [10]

Input-to-hidden connection weights (rows: hidden-layer neurons; columns: input-layer neurons):

Hidden    1 (s)       2 (s/sr)    3 (Nlcomp/NmI)   4 (I/Ir)    5 (Pa/Pr)   6 (Pr/Pb)    7 (bias)
 1         0.129764    0.28964      1.15775        −0.40374    −0.75320    −0.02838     −0.348574
 2        −0.02786     0.30341      1.03738        −0.36965    −0.72451    −0.18457     −0.27801
 3         1.74813     2.50970     −1.87589         2.51257     4.86890     1.47383     −0.17406
 4         0.99559     2.99486      5.98608         1.95823     1.19316     0.75494      0.37961
 5        −1.62955    −2.80110      1.83440        −2.33895    −3.92445    −1.828498    −0.11627
 6         0.43743     0.39590     −1.39143        −0.47084     0.26666     0.26027     −0.60810
 7        −0.49268     0.27708     −0.04326        −0.15611    −0.25054     0.21176     −0.42608
 8         0.35087    −0.26896     −0.47892        −0.44522     0.50462     0.53946      0.21018
 9         0.34187     0.116279    −0.52417         0.27934     0.28161     0.31263     −0.65641
10         0.20322    −0.04357     −0.22008         0.38878    −0.02762    −0.49045     −0.32304
11        −0.35480    −0.13310     −1.33146        −0.71986     0.80925     0.19947     −0.23927
12         0.00613     0.64995      1.24199        −0.65056    −1.71787     0.05540     −0.39384
13         0.33766     0.47155      1.57529         0.70208     0.04751    −0.18017     −0.22157
14        −0.64187     0.68858    −14.79206         0.96879    −1.58881    −0.54323     −1.64841
15        −0.02807    −0.09264     −1.25613        −0.30073     0.29854     0.27405     −0.39073

Hidden-to-output connection weights (output neuron: n/nmax):

Hidden neuron    Weight
 1                1.00545
 2                0.97695
 3               −3.99733
 4                4.47211
 5                3.44327
 6               −1.39635
 7               −0.06189
 8               −0.89739
 9               −0.32928
10               −0.03285
11               −1.51690
12                1.77122
13                1.47977
14               −9.19390
15               −1.26909
16 (bias)        −0.85704
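
As an illustration of how the weights of Table 11.1 would be used, the sketch below runs one forward pass through the 7-input (the seventh being a bias), 15-hidden-neuron, single-output network. The sigmoid activation and the example input values are assumptions for illustration; the source does not state the activation function used.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One forward pass through the network of Table 11.1:
# 7 inputs (the 7th a bias fixed at 1), 15 hidden neurons
# (a 16th bias term feeds the output), 1 output (n/nmax).
def mlp_forward(x, w_ih, w_ho):
    x = np.append(x, 1.0)            # append the bias input
    hidden = sigmoid(w_ih @ x)       # 15 hidden activations
    hidden = np.append(hidden, 1.0)  # append the hidden bias term
    return sigmoid(w_ho @ hidden)    # scalar output n/nmax

# Example call with randomly filled weights of the right shape;
# substitute the actual values from Table 11.1 for real use.
rng = np.random.default_rng(1)
w_ih = rng.standard_normal((15, 7))
w_ho = rng.standard_normal(16)
x = np.array([0.1, 0.2, 0.5, 1.0, 0.8, 0.9])  # hypothetical feature values
print(mlp_forward(x, w_ih, w_ho))
```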


Table 11.2  Test of a trained multi-level perceptron network [10]

Slip (%)   Number of broken    Targeted output,   Obtained output,
           rotor bars, n       n/nmax             n/nmax
0.1        0                   0.1                0.10031
0.4        0                   0.1                0.10017
0.6        0                   0.1                0.10020
0.4        1                   0.2                0.20200
0.6        1                   0.2                0.20000
0.4        2                   0.3                0.30500
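
The targeted outputs in Table 11.2 appear to follow a simple encoding, target = (n + 1)/10, so a healthy rotor maps to 0.1 and each broken bar adds 0.1. A one-line sketch of this apparent encoding:

```python
def target_output(n_broken_bars: int) -> float:
    # Apparent encoding of Table 11.2: healthy rotor -> 0.1,
    # each additional broken bar adds 0.1 to the target.
    return (n_broken_bars + 1) / 10.0

assert abs(target_output(0) - 0.1) < 1e-12
assert abs(target_output(2) - 0.3) < 1e-12
```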

[Figure: N input nodes x_1, x_2, ..., x_N, each connected to the output nodes j of a two-dimensional grid through connection weights w_ij]

Figure 11.8  Structure of a self-organising feature map

As shown in Figure 11.8, N inputs X_1, X_2, ..., X_N are mapped onto M output nodes Y_1, Y_2, ..., Y_M, which are arranged in a matrix; for example, a map dimensioned 20 × 20 is shown as 5 × 5 only in Figure 11.8. Each input unit i is connected to an output unit j through weight W_ij. The continuous-valued input patterns and connection weights give output node j a continuous-valued activation value A_j:

A_j = \sum_{i=1}^{N} W_{ij} X_i = W_j^T X    (11.3)

where X = [X_1, X_2, ..., X_N]^T is the input vector and W_j = [W_{1j}, W_{2j}, ..., W_{Nj}]^T is the connection vector for output node j.
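
In code, (11.3) is a single matrix-vector product over the illustrative representation sketched earlier (one row of W per output node):

```python
import numpy as np

def activations(W, x):
    # Equation (11.3): A_j = sum_i W_ij * x_i for every output node j.
    # W has one row per output node; x is the input pattern.
    return W @ x
```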
A competitive learning rule is used, choosing the winner c as the output node with the weight vector closest to the present input pattern vector:

\| X - W_c \| = \min_j \| X - W_j \|    (11.4)

where \| \cdot \| denotes the Euclidean distance, or norm, between the two vectors. Both the input and connection vectors are normalised.
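
A minimal sketch of the winner selection in (11.4), with the input and the rows of W assumed already normalised as the text requires:

```python
import numpy as np

def winner(W, x):
    # Equation (11.4): c = argmin_j ||x - W_j||.
    distances = np.linalg.norm(W - x, axis=1)
    return int(np.argmin(distances))
```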
It is noted that the data presented for the learning purpose consist of only inputs, not outputs. When an input pattern is presented to the neural network without specifying the desired output, the learning process will find a winning output node c according to (11.4), and subsequently update the weights associated with the winning node and its adjacent nodes. The weights are updated in such a way that, if the same input is presented to the network again immediately, the winning output node will still fall into the same neighbourhood of the output node picked last time, and with an even greater winning margin when (11.4) is used to select the winning node this time. In other words, the feature of the input pattern is enhanced by updating the corresponding weights. A typical learning rule is

\Delta W_{ij}^{(k)} = \alpha^{(k)} \left( X_i^{(k)} - W_{ij}^{(k)} \right)

W_{ij}^{(k+1)} = W_{ij}^{(k)} + \Delta W_{ij}^{(k)}, \quad j \in N_c    (11.5)

where the superscript k denotes the number of the input pattern presented for learning, N_c denotes the neighbourhood around the winning node c with respect to the present input pattern, and \alpha^{(k)} is the learning rate used in this step of learning.
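
The update rule (11.5) can be sketched as follows; the square neighbourhood used to select the nodes around the winner is an illustrative choice, since the text does not specify the neighbourhood shape:

```python
import numpy as np

def update_weights(W, grid, x, c, alpha, radius):
    # Equation (11.5): move the winner and its grid neighbours
    # towards the input pattern x by a fraction alpha.
    # Neighbourhood N_c: nodes within `radius` grid steps of the
    # winner (a square neighbourhood, chosen for illustration).
    in_nc = np.max(np.abs(grid - grid[c]), axis=1) <= radius
    W[in_nc] += alpha * (x - W[in_nc])
    return W
```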
As more input patterns are presented for the network to learn, the learning rate gradually decreases to guarantee convergence. The initial weights can be randomly set, but normalised, and the input patterns can be used iteratively, with several repetitions, until convergence is reached [12]. After the training, each output node is associated with a fixed set of weights; that is, W_j = [W_{1j}, W_{2j}, ..., W_{Nj}]^T. Each input pattern can then be measured for its distance to all the output nodes. If an input pattern is similar to those that contributed to the determination of the weights associated with a particular output node, then the distance between this input pattern and that output node will be small. The feature of this input pattern is consequently identified. The structure and the learning scheme of the neural network give rise to its capability of self-organising mapping (SOM).
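
Putting the pieces together, an illustrative training loop combining the winner and update_weights sketches above might look like this; the linear decay schedules for the learning rate and neighbourhood radius are common choices assumed here, not prescriptions from the text:

```python
def train_som(patterns, W, grid, epochs=50, alpha0=0.5, radius0=5):
    # Iterate over the training patterns several times, shrinking
    # the learning rate and neighbourhood to aid convergence [12].
    for epoch in range(epochs):
        frac = epoch / epochs
        alpha = alpha0 * (1.0 - frac)                 # decaying learning rate
        radius = max(1, int(radius0 * (1.0 - frac)))  # shrinking neighbourhood
        for x in patterns:
            c = winner(W, x)                                      # (11.4)
            W = update_weights(W, grid, x, c, alpha, radius)      # (11.5)
    return W
```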
Penman and Yin [9] describe the details of the construction and learning process of such networks for condition monitoring of induction motors. Figure 11.9 shows the vibration signal feature maps for three machine conditions:

• normal,
• unbalanced supply,
• mechanical looseness of frame mounting.
In each case, the input to the neural network is a large number of vibration components obtained from a fast Fourier transform of an accelerometer output. These components are then used to calculate the distances from this input pattern to all of the 20 × 20 output nodes, and the results are shown in Figure 11.9. The ground plane in Figure 11.9 represents the 20 × 20 output nodes (space), and the vertical axis indicates the distance calculated. It can be seen that the three conditions give distinctively different feature maps. Of course, the physical meanings of such maps depend on the further knowledge that has been embedded in the training process.
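
A sketch of how such a feature-map surface could be produced from a trained map is shown below; the FFT front end is reduced to a plain magnitude spectrum as a simplified stand-in for the vibration feature extraction described above:

```python
import numpy as np

def feature_map(W, x, M=20):
    # Distance from one input pattern to all M x M output nodes;
    # reshaping to the grid gives the surface plotted in Figure 11.9.
    return np.linalg.norm(W - x, axis=1).reshape(M, M)

# Illustrative front end: vibration components from an FFT of an
# accelerometer signal (a simplified stand-in for the real feature
# extraction), normalised before presentation to the map.
def vibration_features(signal, n_components):
    spectrum = np.abs(np.fft.rfft(signal))[:n_components]
    return spectrum / np.linalg.norm(spectrum)
```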


[Figure: three feature-map surfaces plotted over the output node grid: (a) Normal, (b) Unbalanced supply, (c) Mechanical looseness]

Figure 11.9  Kohonen feature map of machine conditions
