
Genetic Algorithms in Induction Motor

Efficiency Determination

Written by
Nadeeka Yapa

Advised by
Dr. Pragasen Pillay
Electrical Engineering
Clarkson University

Genetic Algorithms in Induction Motor Efficiency Determination

A Thesis by

Nadeeka Yapa

Department of Electrical Engineering

Submitted in partial fulfillment of the requirements for a

Bachelor of Science Degree with

University Honors

May 2004

Accepted by the Clarkson University Honors Program

____________________________
Advisor Date

____________________________
Honors Reader Date

____________________________
Honors Director Date
Executive Summary
Many current techniques of calculating induction motor efficiency are difficult,
expensive, or inaccurate in the field. Induction motors consume a large percentage of the
electricity used in the US. Accurate calculations of the efficiency of these motors would
allow savings in both energy and cost. One major obstacle in the calculation of
efficiency is that it is often difficult to measure the output power accurately and safely
while the motor is running, say in a factory. It would be of interest to find a way to
estimate the output power using only easily measured quantities, such as input current
and voltage.
Genetic algorithms (GAs) are often used to estimate quantities from limited
information. They belong to a class of weak search procedures, that is, they do not
provide the best solution, but one close to it. It is a randomized process that
follows the principles of evolution. Possible solutions to the problem at hand are
encoded in a string of numbers or symbols, analogous to DNA. Strings that show
desirable characteristics are chosen to copy themselves into a new pool of strings, called
the child generation. The children are also formed by mixing two different strings
(crossover) and random change (mutation). This eventually results in an artificial
evolution into a population of solutions that have desirable characteristics and may be
considered reasonable solutions to the problem. GAs are versatile because they use
payoff information (some rule of how to evaluate how suitable a string is for the
environment) rather than direct knowledge of the model itself, such as derivatives. Thus,
lack of continuity or undefined derivatives are not problems.
This thesis shows the results of 25 GAs formulated to solve the induction motor
problem. There is a progression from extremely simple GAs to ones more complicated as
various faults with the original algorithms were uncovered. Early GAs suffered from
premature convergence and lack of constraints, leading to nearly random results that were
of no use. Later GAs show much better behavior. However, even the best GA does not
give precise enough results or show enough robustness for practical use.
Acknowledgements
Thank you to Dr. Pillay, who helped me through my ignorance and patiently corrected me
when I stubbornly insisted on doing things wrong.

Thank you, Dr. Craig, for showing me the flexibility of the Honors Program and giving
me the chance to do things on my own schedule.

This research was made possible by the funding of the Office of Naval Research.

Table of Contents

Chapter 1. Introduction 1

Chapter 2. Background 2
2.1 A simple genetic algorithm 2
2.2 GAs and power 5
2.2.1 Arroyo and Conejo: A parallel repair algorithm to solve
the unit commitment problem 5
2.2.2 Park et al.: An improved genetic algorithm for
generation expansion planning 6
2.2.3 Burke and Smith: Hybrid evolutionary techniques for
the maintenance scheduling problem 7
2.2.4 Damousis, Bakirtzis, and Dokopoulos: Network-
constrained economic dispatch using real-coded
genetic algorithm 7
2.2.5 Milosevic and Begovic: Nondominated sorting genetic
algorithm for optimal phasor measurement placement 8
2.2.6 Bakirtzis et al.: Optimal power flow by enhanced
genetic algorithm 9
2.2.7 Tippayachai, Ongsakul, and Ngamroo: Parallel micro
genetic algorithm for constrained economic dispatch 9
2.2.8 Wu, Ho, and Wang: A diploid genetic approach to
short-term scheduling of hydro-thermal system 10

Chapter 3. Methodology 11

Chapter 4. Results and Discussion 15

Chapter 5. Conclusions 23

Appendix A

Appendix B

References

Lists of Tables and Figures

Tables

Table 1: Relationship between x and binary representation 3


Table 2: A possible starting population 3
Table 3: Experimental GAs 15

Figures

Figure 1: Parabola y=x(1-x) 2


Figure 2: Flow diagram of a simple GA 4
Figure 3: Induction motor equivalent circuit 11
Figure 4: X1 of GA 6 convergence 18
Figure 5: Convergence values 18

Figure A-1: Pie charts for GA 1 2
Figure A-2: Pie charts for GA 2 3
Figure A-3: Pie charts for GA 3 4
Figure A-4: Pie charts for GA 4 5
Figure A-5: Pie charts for GA 5 6
Figure A-6: Pie charts for GA 6 7
Figure A-7: Pie charts for GA 7 8
Figure A-8: Pie charts for GA 8 9
Figure A-9: Pie charts for GA 9 10
Figure A-10: Pie charts for GA 10 11
Figure A-11: Pie charts for GA 11 12
Figure A-12: Pie charts for GA 12 13
Figure A-13: Pie charts for GA 13 14
Figure A-14: Pie charts for GA 14 15
Figure A-15: Pie charts for GA 15 16
Figure A-16: Pie charts for GA 16 17
Figure A-17: Pie charts for GA 17 18
Figure A-18: Pie charts for GA 18 19
Figure A-19: Pie charts for GA 19 20
Figure A-20: Pie charts for GA 20 21
Figure A-21: Pie charts for GA 21 22
Figure A-22: Pie charts for GA 22 23
Figure A-23: Pie charts for GA 23 24
Figure A-24: Pie charts for GA 24 25
Figure A-25: Pie charts for GA 25 26

Chapter 1. Introduction

Electric motors, and in particular, induction motors, consume a large amount of
the electricity used in the US. Accurate calculations of the efficiency of such
motors are of interest in saving both energy and costs. Many of the current techniques of
calculating efficiency suffer from being difficult, expensive, or inaccurate in the field.
The efficiency is defined as the ratio of the output power to the input power, but the
output power is often difficult to measure accurately and safely. Therefore, it is of
interest to find a way to determine efficiency that only requires values of the inputs to the
motor, not the outputs. Genetic algorithms offer a way to do this.
Genetic algorithms (GAs) are a class of evolutionary algorithms. They are
algorithms based on the principles of evolution found in nature. Possible solutions to a
problem are encoded in a string (for example, according to parameter values), which are
analogous to DNA. The fittest strings are chosen to reproduce, mate, and occasionally
mutate, eventually resulting in evolution into a population of solutions that are highly
adapted to the desired environment.
GAs are different from normal search and optimization procedures in several
ways. First, they work with some coding of parameters, not the actual values of the
parameters. Different codings lead to different results. Second, GAs work with a
population of points, not single points. This means that there will be a population of
solutions to choose from. GAs are also very versatile because they use payoff information
(some rule of how to evaluate how suitable a string is for the environment) rather than
direct knowledge of the model itself, such as derivatives. Thus, lack of continuity or
undefined derivatives are not problems. Lastly, GAs use probabilistic rules in each step,
not deterministic rules. Different runs of the same GA will give different results.
A simple GA consists of three basic operations: reproduction, crossover, and
mutation. Reproduction is the process of producing a new generation of population
strings by replicating the members of the old generation. The more fit the member of the
old generation, the more copies of itself will appear in the new generation. Thus, over
time, more fit strings will tend to dominate the population and less fit strings will die out.
Crossover is the mixing of two strings (mating) to produce a completely new string.
Unlike simple reproduction, crossover can produce new individuals that may be more fit
than any in the original population. However, even with crossover, genetic information
from supposedly unfit strings may be lost. Mutation is the random (with a specified, and
usually very low, probability) change of a coding value in a string to another value. This
way, new genetic information is always entering the population.
This thesis proposes a variety of GAs, as applied to the induction motor problem,
and compares the results.

Chapter 2. Background

Genetic algorithms (GAs) are currently used to solve a wide variety of problems.
They were developed with the goal of better understanding natural processes such as
evolution/adaptation, and allowing machines to mimic these processes to learn like living
organisms do. Therefore, the natural use of a GA is as an optimization tool.

2.1 A Simple Genetic Algorithm

This example shows how a GA can be used to solve for the maximum point on a
parabola and illustrates many of the basics of choosing a GA. Figure 1 shows the
parabola y = x(1-x). Simple symmetry shows that the maximum point is at x=0.5, so this
is the expected result of the GA.

Figure 1. Parabola y = x(1-x)

At this point, the programmer has many choices about how to formulate the GA.
First, a search area must be defined. For this problem, one might intuitively guess that
the optimal solution lies between x = 0 and x = 1, and define 0 ≤ x ≤ 1 as the search space.
Next, a choice must be made about how to encode the search parameter (in this case, x).
The simplest choice, which is used here, is to use binary representations (a string of zeros
and ones). The next choice is how long to make the representative string. A longer string
means that there will be more values for the algorithm to search through. Longer strings
necessarily mean more computation time, but a correspondingly higher chance of having
the algorithm search through values that are close to the desired one. For illustrative
purposes, a relatively short string (5 bits), which corresponds to 32 values, was chosen.
Finally, a relationship between the encoded forms and the actual values of the parameter
x must be defined. One possible relationship is shown in Table 1.

Table 1. Relationship between x and binary representation
Binary Integer value x
00000 0 0/31 = 0
00001 1 1/31 = 0.03226
00010 2 2/31 = 0.06452
. . .
. . .
. . .
11101 29 29/31 = 0.93548
11110 30 30/31 = 0.96774
11111 31 31/31 = 1.00000

Notice that with this representation, the number 0.5 (the solution) is not an option.
The two values closest to 0.5 are 15/31 = 0.48387 and 16/31 = 0.516129. Therefore, the
solution will probably consist of a mix of these two answers. This is an example of how
the coding can change how effective the algorithm is. If the coding had included 0.5, it
would have been possible for the GA to spit out the correct answer. Of course, the GA
could be changed to do this. However, in a practical problem, the answer is not known.
In those cases, one can only increase the length of the string to have a higher chance of
hitting close to the optimal solution.
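
As a concrete illustration of this encoding, the following MATLAB fragment (a minimal sketch, not part of the thesis code in Appendix B) decodes a 5-bit string into its x value using the mapping of Table 1:

    % Decode a 5-bit binary chromosome into x in [0, 1] using the mapping of Table 1
    bits   = [0 1 1 1 1];                         % example chromosome "01111"
    nBits  = numel(bits);
    intVal = bits * (2 .^ (nBits-1:-1:0))';       % binary to integer: 15
    x      = intVal / (2^nBits - 1)               % 15/31 = 0.48387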
To start the GA, a population size is chosen and the individuals of the population
are initialized randomly. The coded individuals are decoded into their equivalent values
(x). Next, a fitness evaluation function is used to decide which individuals in the
population are best fitted to the criteria. In this simple example, a reasonable fitness
function is the actual function of the parabola, y = x(1-x). Since the objective is to find
the max value, strings that give a higher y value will be deemed more fit than ones with
lower y values. An often used quantity is the proportional fitness, shown in (1).

proportional fitness = (fitness value)/(total of fitness values) (1)

Table 2 shows a possible starting population (size 10) with each string's corresponding
value, fitness value, rank, and proportional fitness.

Table 2. A possible starting population


Individual   Code    Binary Value   Equivalent Value   Fitness Value   Rank   Proportional Fitness (%)
1            11101   29             0.93548            0.06035          9      4.24597
2            10100   20             0.64516            0.22893          1     16.10542
3            10100   20             0.64516            0.22893          1     16.10542
4            11100   28             0.90323            0.08741          8      6.14934
5            00101    5             0.16129            0.13528          6      9.51683
6            11011   27             0.87097            0.11238          7      7.90630
7            00110    6             0.19355            0.15609          5     10.98097
8            01001    9             0.29032            0.20604          3     14.49488
9            11110   30             0.96774            0.03122         10      2.19619
10           00111    7             0.22581            0.17482          4     12.29868

A method must be chosen for the first step in a GA, reproduction. Like natural
evolution, strings that are more fit should be better represented in the child population
than they were in the parent population. One very simple scheme would be to simply
take the best 5 out of the 10 strings and make two copies of each of them. Alternatively,
the proportional fitness scheme would reproduce each parent string the appropriate
number of times to make it appear in the child generation the percentage of the time that
is equal to its proportional fitness. For example, 7.91% of the child generation will
consist of parent number 6. Of course, these numbers must be rounded to the nearest
appropriate value. In general, the child population will be the same size as the parent
population (in this case, 10 strings).
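
A minimal MATLAB sketch of this proportional reproduction scheme, using the fitness values of Table 2 (illustrative only; in general the rounded counts must also be adjusted so that they sum exactly to the population size):

    % Proportional reproduction: expected copies of each parent, rounded
    fitness  = [0.06035 0.22893 0.22893 0.08741 0.13528 ...
                0.11238 0.15609 0.20604 0.03122 0.17482];   % fitness values from Table 2
    popSize  = numel(fitness);
    propFit  = fitness / sum(fitness);           % proportional fitness, eq. (1)
    nCopies  = round(propFit * popSize);         % copies of each parent in the child pool
    children = repelem(1:popSize, nCopies)       % parent indices copied into the new generation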
Reproduction is an effective way of making fitter strings more prominent, but
reproduction alone cannot introduce better solutions into the population, as it is always
limited to making copies of combinations that already exist. Therefore, it is necessary to
introduce a secondary operator, called crossover. The simplest crossover scheme is one-
point crossover, in which two of the strings resulting from reproduction are chosen at
random, and a position on the string is chosen at random (say, the point between the 3rd
and 4th digit) to be the crossover point. The parts of the strings following the crossover
point are exchanged, resulting in two children strings that are similar but not quite the
same as the parents. Another simple crossover scheme is uniform crossover, in which
each position may be exchanged with some probability and does not affect the digit next
to it. In longer strings, two-point crossover may be used, in which a middle segment of
each string is exchanged with the corresponding segment of the other.
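
A minimal sketch of one-point crossover on two 5-bit strings (illustrative MATLAB with a randomly chosen cut point):

    % One-point crossover between two parent strings
    parentA = [1 0 1 0 0];                  % "10100"
    parentB = [0 0 1 1 1];                  % "00111"
    cut     = randi(numel(parentA) - 1);    % crossover point (between bit cut and cut+1)
    childA  = [parentA(1:cut), parentB(cut+1:end)];
    childB  = [parentB(1:cut), parentA(cut+1:end)];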
Even with crossover, certain combinations may not be possible to achieve in the
child population. For example, if every parent happened to have 1 in the fifth position,
no amount of crossover or reproduction will ever yield a child that has a 0 in the fifth
position. In order to allow the GA to search all possible combinations before choosing
which one is best, a new operator, called mutation, must be introduced. Mutation is the
random toggling of bit values. This occurs throughout the population with a relatively
low probability, usually on the order of 1% or lower.
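
Bit-flip mutation can be sketched as follows (assumed per-bit mutation probability of 1%, applied to a small random population for illustration):

    % Bit-flip mutation applied across a whole population
    pMut  = 0.01;                           % mutation rate (per bit)
    pop   = randi([0 1], 10, 5);            % ten 5-bit strings
    flips = rand(size(pop)) < pMut;         % which bits mutate this generation
    pop   = double(xor(pop, flips));        % toggle the selected bits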
Once a parent population has undergone reproduction, crossover, and mutation, it
is said to be a new generation. The number of generations needed before the population
will converge to an acceptable value is different for every problem. Once a number is
decided upon, the GA performs as shown in the flow diagram in Figure 2.
[Flow diagram: Initialize Population -> Reproduce -> Crossover -> Mutation -> Enough Generations? If no, return to Reproduce; if yes, stop.]

Figure 2. Flow diagram of a simple GA
After performing this procedure, the population should consist almost entirely of
the string that represents the optimal solution. In the case of the example, there is no
string that represents exactly 0.5, so the population should be equally split between the
strings 01111 (0.48387) and 10000 (0.516129). An approximation of the maximum point
of the parabola has thus been obtained.
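
The whole procedure can be collected into a short program. The following MATLAB sketch (my own illustration, not the thesis code of Appendix B) maximizes y = x(1-x) with 5-bit strings, roulette-wheel reproduction, one-point crossover, and bit-flip mutation; the population size, generation count, and probabilities are arbitrary choices:

    % Minimal GA for the parabola example
    nBits = 5;  popSize = 10;  nGen = 50;  pCross = 0.8;  pMut = 0.01;
    pop     = randi([0 1], popSize, nBits);          % random initial population
    weights = 2 .^ (nBits-1:-1:0);                   % binary place values

    for gen = 1:nGen
        x   = (pop * weights') / (2^nBits - 1);      % decode each string to x in [0, 1]
        fit = x .* (1 - x);                          % fitness y = x(1 - x)

        % Reproduction: roulette-wheel selection proportional to fitness
        cumFit = cumsum(fit);
        sel    = zeros(popSize, 1);
        for k = 1:popSize
            sel(k) = find(cumFit >= rand * cumFit(end), 1);
        end
        newPop = pop(sel, :);

        % Crossover: pair adjacent strings and swap the tails after a random point
        for k = 1:2:popSize-1
            if rand < pCross
                cut  = randi(nBits - 1);
                tail = newPop(k, cut+1:end);
                newPop(k,   cut+1:end) = newPop(k+1, cut+1:end);
                newPop(k+1, cut+1:end) = tail;
            end
        end

        % Mutation: toggle bits with low probability
        pop = double(xor(newPop, rand(popSize, nBits) < pMut));
    end

    x = (pop * weights') / (2^nBits - 1);
    [bestY, idx] = max(x .* (1 - x));
    fprintf('Best x = %.5f, y = %.5f\n', x(idx), bestY);

A typical run of such a sketch finishes with the population concentrated on 01111 and 10000, the two codes closest to x = 0.5.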

2.2 GAs and Power

GAs have been used extensively within the power field, especially in complex
problems such as unit commitment and generation expansion planning, which are highly
constrained, nonlinear, and discrete. Currently, there are no deterministic techniques
capable of picking out the optimal solution in these problems. Each problem has its own
characteristics that make certain modifications to a GA implementation desirable. The
following sections outline some of the progress that has been made on different problems.

2.2.1 Arroyo and Conejo: A parallel repair algorithm to solve the unit
commitment problem

Arroyo and Conejo [1] studied the unit commitment problem for thermal units.
The unit commitment problem is the task of minimizing cost of fulfilling customer
demand while taking into account start up costs, shut down costs, start up and shut down
times, operating costs, and crew costs. The solution will give information about which
units to activate when and in what order. Examples of complications are the fact that
start up costs are modeled as a nonlinear function of how long the unit has been off, and
that operating costs are nonlinear, non-differentiable functions of power output.
Some constraints on this problem are that feasible solutions must be within start-
up and shut down time limits, each unit has a minimum and maximum down time, a
minimum and maximum output, and the desired performance must be possible with the
amount of crew available.
Arroyo and Conejo investigate the use of a repair algorithm to deal with the
constraints. Repair algorithms are sometimes used when the coding allows strings to
evolve that are not physically realizable. There are several advantages to using a repair
algorithm, such as the fact that all proposed strings are guaranteed to be feasible, and
there is no need to come up with penalty functions to make sure that unfeasible strings
have a low fitness function value.
The disadvantage of the repair algorithm is that it takes a substantial amount of
time to repair strings that are unfeasible to convert them into ones that are feasible. In
order to overcome this problem, Arroyo and Conejo implement a parallel structure.
There are three schemes for parallelization proposed and implemented. The first
is global parallelization, in which simultaneous operations are carried out on different
processors. In this scheme, all unfeasible strings are repaired at the same time, which
reduces time of computation since repairing unfeasible strings takes up the majority of
the computation time. The second scheme is coarse-grained parallelization, in which
many populations evolve separately and exchange their best solutions. The idea behind
this is that it gives a broad search space but allows the best individuals to be exploited.
Different populations may get stuck at local optima, but with trading between

populations, the global optimum will have higher chance of being discovered. The third
scheme is a hybridized version of the first and second scheme, in which the global
parallelization is carried out in each of the subpopulations. The authors claim that these
schemes have been successfully applied to realistic case studies.

2.2.2 Park et al.: An improved genetic algorithm for generation expansion
planning

Park et al. [2] address the generation expansion planning problem, which is the
problem of how to minimize the cost of adding a plant to an existing structure,
accounting for the type and number of plants, and still meet forecasted demand with a
specified reliability. This problem is highly constrained, nonlinear, discrete, and
dynamic. The authors cite previous works on similar problems that have used GAs but
have displayed problems within the GAs such as premature convergence and duplications
among strings. Premature convergence is a common problem with simple GAs in which
there is one string in the initial random population that is so much more fit than all of the
others that it duplicates over and over and takes over the whole population without ever
giving the algorithm a chance to search for better possibilities. Three proposed
improvements are a stochastic crossover scheme, elitism, and artificial initialization of
the population.
In order to counter the problems, the authors propose a stochastic crossover
scheme. They investigate the merits of three different crossover methods: one-point
crossover, two-point crossover, and one-point substring crossover. One-point crossover
switches the bits sequence between two parent strings at a randomly chosen point on the
strings. Two-point crossover essentially does the same thing except that two points on
the string are chosen at random so that a segment in the middle of each parent string is
exchanged with the other parent. In one-point substring crossover, the string is divided at
given intervals into substrings and each substring undergoes one-point crossover. The
one-point substring crossover has the advantage of being able to mix together the parents
sequences well and promote diversity, but it has a high chance of destroying good bit
structures that already exist. The one-point and two-point crossovers, on the other hand,
are not very good at mixing up the bits but they are fairly harmless to already existing bit
structures. In the stochastic crossover scheme, each of three different methods for
performing crossover is given a different probability, and one of the three is chosen at
random. The probabilities were experimentally determined.
Elitism is used to make sure that highly fit strings that occur in early generations
do not get eradicated by simple chance through the reproduction process. Thus, an elitist
scheme would simply copy the most fit string from the parent population to the child
population with no crossover or mutation performed on it. This way, the most fit string is
always preserved. Variations would include preserving several of the most fit strings
instead of just one.
The artificial population initialization algorithm is presented in the paper. It
initializes a portion of the population deterministically, and a portion randomly, with the
intention of making sure that the GA is searching a sufficiently large area of the search
space but without robbing it completely of the random processes on which it is so
dependent.

The authors conclude that for the cases they studied, stochastic crossover was
more successful than artificial initial population, but hybridizing the two techniques
yields even more impressive results.

2.2.3 Burke and Smith: Hybrid evolutionary techniques for the maintenance
scheduling problem

Burke and Smith [3] worked with yet another scheduling problem, this time in
thermal generator maintenance. They claim that using an integer representation instead
of a binary representation for encoding possible solutions reduces the execution time of
the GA since the strings are more compact. Unlike Park et. al., they use penalty functions
for unfeasible solutions.
The primary purpose of the paper is to discuss different hybrid techniques of GAs
and other search methods. The search methods were first tried individually on the
problem described. The simple GA performed relatively badly, with simulated annealing
performing better, and tabu-search performing the best. The hybrid tabu and simulated
annealing worked better than simulated annealing on its own but still worse than the
simple tabu. Hybrid hill-climbing and GA improved speed but produced relatively poor
quality. The hybrid simulated annealing and GA algorithm was the worst as it both
increased execution time and produced poor results. Finally, the most effective
combination was found to be the GA with the tabu-search operator. This combination
increased speed significantly. The tabu search was performed in two stages. In the first
stage, penalty factors were made high to enforce feasibility. In the second stage, lower
penalties were used to allow expanding searching.
These authors also used artificial population initialization, which used knowledge
about the problem to seed the population with solutions that would have lower penalty
factors than a randomly generated population. However, they discovered that the initial
population fitness was not as significant in the hybrid methods as in the simple GA.
They also found that penalty factors induced large differences in fitness values so that
probabilistic reproduction caused premature convergence. For this reason, a tournament
selection was used, in which only the individual's fitness ranking in its population is taken
into account, not the actual fitness value.
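
A minimal sketch of a binary tournament selection of this kind (my MATLAB illustration, not the authors' code; the fitness vector is a placeholder):

    % Binary tournament selection: only the relative ranking of fitness matters
    popFit  = rand(20, 1);                  % placeholder fitness values
    popSize = numel(popFit);
    parents = zeros(popSize, 1);
    for k = 1:popSize
        c = randi(popSize, 1, 2);           % draw two competitors at random
        [~, winner] = max(popFit(c));       % the fitter of the two wins
        parents(k)  = c(winner);            % index of the selected parent
    end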
Crossover and mutation were accounted for as follows. Since different parts of
the coding were fairly irrelevant to each other, the string was divided into subsections and
crossover only took place between the subsections. Also, there were two mutation
operators used. The first was a light mutation, in which any unit's properties could be
randomly changed. The second was a heavy mutation, which was applied to high penalty
areas of the string in order to increase the chance of making it feasible.

2.2.4 Damousis, Bakirtzis, and Dokopoulos: Network-constrained economic


dispatch using real-coded genetic algorithm

Damousis et al. [4] investigated the network-constrained economic dispatch
problem, which schedules online generating unit outputs to meet demand while at the
same time operating at a minimum cost and within safety limits. Like Burke and Smith,
they claimed that the binary representation could be improved upon. They went even

further than using an integer representation and used a real-coded GA instead. Each
place held a real number instead of a simple binary digit. This formulation allowed more
precision and was found to produce more accurate results and complete faster than the
traditional GA. Also, the quality did not decrease much with decreases in population
size.
Since the coding used real numbers, new crossover schemes had to be
implemented. The paper suggested four different crossover methods that take different
weighted averages of the real values in the parent strings. For each pair of parents, each
of the four crossover methods is tried, and the two that yield the higher fitness are chosen.
The mutation operator was designed to make a uniform search near the beginning of the
run and make progressively more minor modifications as the run progresses.
These authors also used penalty functions in their initial algorithm. Since
calculation of network violations was very time consuming, a further enhancement was
tried. In the child population, only those that had less than 110% of the parent
population's minimum operating cost were evaluated and all remaining children were
tagged with a large, fixed penalty. On average, this yielded 20% faster results and
slightly better solutions.

2.2.5 Milosevic and Begovic: Nondominated sorting genetic algorithm for optimal
phasor measurement placement

Milosevic and Begovic [5] investigated a phasor measurement unit (PMU)
problem. They used a nondominated sorting algorithm to balance the two main
objectives of minimizing the number of PMUs and maximizing the redundancy. These
objectives conflicted with each other since improving one decreased the quality of the
other. Nondominated algorithms are used when there are two objectives that conflict and
it is not clear how to weight them against each other. The algorithm suggests several
optimal solutions with different weightings and the user decides between these solutions.
In this algorithm, there is a random initial population of feasible individuals. Any
infeasible solutions will be repaired. The individuals are evaluated based on how
nondominant (diverse) they are. The population is divided into groups, called fronts,
that are close to each other in nondominant characteristics. Diversity is maintained by
using a sharing function. Sharing functions are based on the idea that there is only a
limited quantity of resources for similar individuals. The fitness evaluation for an
individual will be lower if there are many copies of that individual present.
Reproduction was decided by how diverse each individual is. Crossover was
single-point with a fixed probability. Most results of crossover were infeasible, and were
repaired by a deterministic algorithm (excluding the case when two options are equally
desirable; in that case, one was chosen at random). Individuals sometimes remained
unfeasible after undergoing repair, in which case they were discarded and replaced by a
random sequence. The mutation operator was modified so that it could only increase
redundancy. This meant that it would never disrupt a feasible solution.
Throughout the run, elitism was applied so that the group with the highest
nondominance characteristics was left unchanged. At the same time, the population size
was kept constant and all the lowest fitness (least diverse) strings were dropped as
necessary.

The case study results showed that the algorithm gave good performance for high
population sizes and high values of crossover. Also, if crossover was high, the mutation
rate needed to be inversely correlated with the size of the system.

2.2.6 Bakirtzis et al.: Optimal power flow by enhanced genetic algorithm

Bakirtzis et al. [6] attack another network-constrained economic dispatch
problem (see 2.2.4), this time using the terminology optimal power flow. This is a
nonlinear, nonconvex, large-scale, static problem with both continuous and discrete
variables. Several enhancements were made.
Three general enhancements were fitness scaling, elitism, and hill climbing. The
first enhancement was fitness scaling by linear transformation. This is a scheme that
artificially compresses the differences between fitness values at the beginning of the run
and magnifies them toward the end of the run. This is often desirable because fitness values are
usually very different in value in the initial population and may cause premature
convergence. On the other hand, fitness values at the end of the run tend to be very close
together so if the differences are not magnified there may be lack of convergence. The
elitism is carried out in its simplest form by just preserving the most fit string from
generation to generation. Hill climbing, due to its time consuming nature, is applied only
to the best of each generation. Each bit of the individual is toggled one by one and the
result is retained if it yields a higher fitness.
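
A sketch of one common form of linear fitness scaling (Goldberg-style scaling with a fixed best-to-average ratio; this is my illustration of the general technique, not necessarily the exact transformation used in [6]):

    % Linear fitness scaling f' = a*f + b: the average fitness is preserved and
    % the best individual is mapped to Cmult times the average
    f     = [0.2 0.5 0.9 1.4 4.0];          % raw fitness values (example)
    Cmult = 1.5;                            % target (best)/(average) ratio after scaling
    fAvg = mean(f);  fMax = max(f);  fMin = min(f);
    a = (Cmult - 1) * fAvg / (fMax - fAvg);
    b = fAvg * (1 - a);
    fScaled = a * f + b;
    if min(fScaled) < 0                     % guard: rescale so the worst maps to zero
        a = fAvg / (fAvg - fMin);
        b = -a * fMin;
        fScaled = a * f + b;
    end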
There were several problem specific gene (subsection) operators introduced. The
first was a gene-swap, in which similar sections of a given individual could be randomly
exchanged. For example, if there were two subsections of the string that represented
voltages, the voltage value at the first and second occurrence could be interchanged.
Switching between different types of genes was not allowed. The second operator was a
gene cross-swap, which was similar to the gene-swap except the genes were exchanged
between different parents. The third operator was the gene copy, in which a gene was
replaced by a copy of the predecessor or successor of that gene, on the same individual,
that was the same type. The fourth operator was the gene inverse, which selected a
random gene and reversed the order of all the bits. The final operator was the gene max-
min, which selected a random gene and set (with equal probability) all bits to zeros or all
bits to ones.
The algorithm used dynamic crossover and mutation probabilities based on
population statistics. If premature convergence was detected, the mutation rate was increased and
the crossover rate decreased. If high diversity was detected, the mutation rate was decreased and the
crossover rate increased.

2.2.7 Tippayachai, Ongsakul, and Ngamroo: Parallel micro genetic algorithm for
constrained economic dispatch

Tippayachai et al. [7] provided yet another attack on the constrained economic
dispatch problem. The authors suggested a parallel algorithm in which subpopulations
were allowed to evolve and exchange solutions every epoch (a specified number of
generations). Each subpopulation was randomly initialized with a certain bias to increase

diversity. Elitism was used. Also, there was convergence checking with re-initialization.
In this scheme, if the best string in a generation had a total number of bit differences with
each other individual that was less than 5% of the total number of bits in the population,
the population was said to have converged and the best individual was maintained while
the rest were re-initialized. This also removed the need for a mutation operator.
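
The convergence check can be sketched as follows (my MATLAB illustration of the scheme described, with a placeholder population and fitness vector):

    % Convergence checking with re-initialization, keeping the best string
    pop    = randi([0 1], 20, 10);          % placeholder population (20 ten-bit strings)
    popFit = rand(20, 1);                   % placeholder fitness values
    [~, bestIdx] = max(popFit);
    best     = pop(bestIdx, :);
    bitDiffs = sum(sum(xor(pop, repmat(best, size(pop, 1), 1))));   % differing bits vs. the best
    if bitDiffs < 0.05 * numel(pop)         % converged: less than 5% of all bits differ
        pop = randi([0 1], size(pop));      % re-initialize the population
        pop(bestIdx, :) = best;             % but keep the best individual
    end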
The parallel micro GA produced favorable results compared to GA, SA, and hybrid
GA-SA approaches. The results were of higher quality at lower epoch values, at the cost of more
computation time.

2.2.8 Wu, Ho, and Wang: A diploid genetic approach to short-term scheduling of
hydro-thermal system

Wu et al. [8] used a diploid GA to solve a short-term scheduling problem. The
concept of a diploid GA was based upon natural genetic structure. A human, for example,
carries not just one description of its gene structure, but two (one from each parent).
Each description is called a chromosome. Therefore, for each gene, there are two
opposing commands (alleles) on what form the gene should express. The decision of
which one to actually implement is decided by dominance. For example, if a child
inherits a blue-eye gene from one parent and a brown-eye gene from the other parent, it
will express brown eyes because brown eyes are dominant over blue eyes.
This GA implemented diploidy by having two strings (chromosomes) associated
with each individual. In order to decide dominance, a dominance map was created at
random for the population. The dominance map was of the same length as the
chromosome. If the dominance map had a 0 in a given location, then 0 was dominant
for that location. Likewise, if the dominance map had a 1 in a given location, then 1
was dominant for that location. The expressed alleles of the string with the highest
fitness were used as the next generation's dominance map.
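
Expressing a diploid individual through such a dominance map can be sketched as follows (illustrative MATLAB with made-up chromosomes):

    % Resolving two chromosomes into one expressed string via a dominance map
    chromA = [1 0 1 1 0 0 1];               % first chromosome of an individual
    chromB = [0 0 1 0 1 0 1];               % second chromosome
    domMap = [1 0 1 0 0 1 1];               % dominant allele at each bit location
    expressed = chromA;                     % where the alleles agree, use that value
    clash = chromA ~= chromB;               % where they disagree ...
    expressed(clash) = domMap(clash);       % ... the dominance map decides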
Reproduction and mutation were done by simple correspondence of the fitness
value to a probability and bit flipping, respectively. Crossover was implemented by a
uniform crossover (any bit could cross with the other string with a given probability)
between the two strings that made up the individual. These strings were then separated
and paired up with two strings from another individual that underwent the same uniform
crossover.
In order to test diversity, the number of 0s and 1s at each bit location was
recorded. As a general rule, the diversity is best when there are approximately the same
number of 0s and 1s at each bit location throughout the population.
The results showed that the diploid method retained greater diversity and showed
more robustness than the simple GA. Theory dictates that a diploid scheme allows a
population to try out combinations that may not ultimately work, because those
combinations can be stored as recessive alleles. Also, combinations that are not currently
desirable can be stored for a later time when the requirements may change.

Chapter 3. Methodology

The induction motor can be modeled by an equivalent circuit. If the parameter
values are known, the efficiency of the motor can be calculated easily. However, in many
real-life situations, such as when the motor is running in a factory, it is not practical to
remove the motor from its environment and perform tests on it to determine the values of
the parameters. Therefore, it would be beneficial to have a method which takes quantities
that are easily measured, such as terminal voltage, input current, and input power, and
estimates the parameter values. The genetic algorithm offers a way to do this.
Figure 3 shows a standard equivalent circuit to model the induction motor.

[Figure: per-phase equivalent circuit showing the stator impedance r1 + jx1 carrying the stator current I1, the shunt branches rm and jxm, and the rotor branch with r2, jx2, and the load resistance r2(1-s)/s, all driven by the phase voltage V1.]

Figure 3. Induction motor equivalent circuit

The voltage, V1, the current, I1, the input power, Pinp, and the power factor, pf, are
easily measured quantities. The stator phase resistance, r1, and the slip, s, are also easily
measured quantities. The ratio of the stator reactance to the rotor reactance, x1/x2, is a known
quantity provided with the motor's specifications.
For the test problem, the values were taken from example 9.2 of [9]. In this
problem, a friction and windage loss, P_loss, of 150 W was assumed. It was assumed that
in a real-life problem, there would exist some reasonable estimate of the friction and
windage loss. This left the rotor phase resistance, r2, the equivalent resistance to estimate
both core and mechanical losses, rm, the stator reactance, x1, and the mutual reactance, xm,
as the unknown quantities (where the value of x2 can be calculated from the value of x1).
Therefore, there were four unknown parameter values. Once these parameter values were
known, an approximation of the output power (and therefore the efficiency) could be
made. Hence, the genetic algorithm would determine the most likely parameter values
for r2, rm, x1, and xm using the known inputs V1, I1, Pinp, and pf. Then, these parameter
values could be used to estimate the efficiency.
A number of different GAs were run and their results compared. Each GA
encoded each parameter with a 14-bit unsigned binary number, so that each possible
solution could be represented by a 56-bit string. Different population sizes were studied.
The following equations were needed to calculate the fitness function:

The per-phase applied voltage (from the measured line voltage V_L):
    V_1 = V_L / \sqrt{3}                                            (2)
The effective rotor impedance as referred to the stator:
    Z_2 = r_2 / s + j x_2                                           (3)
The stator winding impedance:
    Z_1 = r_1 + j x_1                                               (4)
The equivalent impedance Z_e of r_m, j x_m, and Z_2:
    1 / Z_e = 1 / r_m + 1 / (j x_m) + 1 / Z_2                       (5)
The total input impedance:
    Z_{in} = Z_1 + Z_e                                              (6)
The stator current:
    I_{1est} = V_1 / Z_{in}                                         (7)
The power factor:
    pf_{est} = Re(I_{1est}) / |I_{1est}|                            (8)
The power input:
    P_{inpest} = 3 |V_1| |I_{1est}| pf_{est}                        (9)
The stator copper loss:
    P_{scl} = 3 |I_{1est}|^2 r_1                                    (10)
The induced (air-gap) voltage:
    E_1 = V_1 - I_{1est} Z_1                                        (11)
The core-loss current:
    I_c = E_1 / r_m                                                 (12)
The magnetization current:
    I_m = E_1 / (j x_m)                                             (13)
The excitation current:
    I_\phi = I_c + I_m                                              (14)
The rotor current:
    I_2 = I_{1est} - I_\phi                                         (15)
The core loss:
    P_m = 3 |I_c|^2 r_m                                             (16)
The air-gap power:
    P_{ag} = P_{inp} - P_{scl} - P_m                                (17)
The rotor copper loss:
    P_{rcl} = 3 |I_2|^2 r_2                                         (18)
The power developed:
    P_d = P_{ag} - P_{rcl}                                          (19)
The power output:
    P_o = P_d - P_{loss}                                            (20)
The shaft torque:
    T = P_o / \omega_m = P_o / ((1 - s) \omega_s)                   (21)
The efficiency:
    \eta = P_o / P_{inp}                                            (22)

A variety of fitness functions (FFs) were used. Each of these fitness functions utilized a
different combination of comparison parameters. For example, the simplest FF compared
the current and input power that would be produced for a given string solution to the
known quantities as shown in (23)-(25).

    f_1 = (I_{1est} - I_1) \times 100 / I_1                         (23)

    f_2 = (P_{inpest} - P_{inp}) \times 100 / P_{inp}               (24)

    ff_1 = 1 / (f_1^2 + f_2^2)                                      (25)

I1est and Pinpest were estimated quantities from the string currently being processed. I1 and
Pinp were the known values. The fitness function would be large if the estimated and
known values were close.
The power factor terms, pf and pfest, could be calculated in a similar manner to the
current and power. The rated output power and rated torque should be in the spec sheet
and therefore could also be used as comparison terms. Thus, the following quantities
were also defined:

    f_3 = (pf_{est} - pf) \times 100 / pf                           (26)

    f_4 = (P_{outest} - P_{out}) \times 100 / P_{out}               (27)

    f_5 = (T_{est} - T) \times 100 / T                              (28)

A variety of fitness functions could be formed by using

    ff = 1 / \sum_i f_i^2                                           (29)

for any combination of f_1, f_2, f_3, f_4, and f_5.
Many algorithms only calculated values at the rated slip, which in this case is s =
0.025. However, in order to add constraints, some of the GAs use quantities calculated at
more than one slip value. In these cases, the FF was calculated as shown in (30) and (31).

    ff_{Sj} = 1 / \sum_i f_i^2 ,   s = s_j ,   j = 1, 2, ..., n     (30)

    ff = 1 / \sum_j ( 1 / ff_{Sj} )                                 (31)
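
A minimal MATLAB sketch of how (30) and (31) combine the per-slip fitness values (the error values below are placeholders for the f terms computed at each slip):

    % Combining per-slip fitness values as in (30) and (31)
    fAtSlip = {[2.1 0.8 1.5], [3.0 1.2 0.4]};      % percentage errors at s1 and s2 (placeholders)
    ffS = zeros(1, numel(fAtSlip));
    for jj = 1:numel(fAtSlip)
        ffS(jj) = 1 / sum(fAtSlip{jj}.^2);         % per-slip fitness, eq. (30)
    end
    ff = 1 / sum(1 ./ ffS);                        % combined fitness, eq. (31)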

Once the fitness of each string in the population was determined, deterministic
sampling was used to decide how many copies of each string would be in the next
generation. In deterministic sampling, each string is assigned a number of expected
copies in the next generation proportional to its fitness value. The integer part of
the expected number of copies is used to determine how many copies of the string get put
in the next generation. If there are extra spaces, the fractional parts are ranked and the
strings from the top of the list are copied into the next generation.
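
The core of deterministic sampling can be condensed into a few lines (a sketch; the full version used in this work appears as determSample.m in Appendix B):

    % Deterministic sampling: integer parts first, then the largest fractional parts
    popFit   = rand(1, 250);                       % placeholder fitness vector
    popSize  = numel(popFit);
    expected = popFit / sum(popFit) * popSize;     % expected copies of each string
    nCopies  = floor(expected);                    % integer contributions
    slotsLeft = popSize - sum(nCopies);
    [~, order] = sort(expected - nCopies, 'descend');
    nCopies(order(1:slotsLeft)) = nCopies(order(1:slotsLeft)) + 1;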
All GAs use one one-point crossover per parameter. This scheme involves
choosing two strings at random and choosing one random crossover point within the
encoding section of each parameter. For each parameter, all the bits following the
crossover point are switched. This procedure is performed throughout the whole
population. Mutation is accomplished by bit toggling. Different mutation rates were
investigated.
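
A sketch of this per-parameter one-point crossover on a 56-bit string (my MATLAB illustration, assuming four 14-bit parameters as described above):

    % One one-point crossover per 14-bit parameter block
    parmLen = 14;  noParm = 4;
    parentA = randi([0 1], 1, parmLen * noParm);
    parentB = randi([0 1], 1, parmLen * noParm);
    childA = parentA;  childB = parentB;
    for p = 1:noParm
        first = (p - 1) * parmLen + 1;             % first bit of this parameter
        last  = p * parmLen;                       % last bit of this parameter
        cut   = first + randi(parmLen - 1) - 1;    % crossover point inside the block
        childA(cut+1:last) = parentB(cut+1:last);  % swap the tails of the block
        childB(cut+1:last) = parentA(cut+1:last);
    end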

Chapter 4. Results and Discussion

All GAs differ only in fitness function, mutation rate, and population size. Table
3 shows the GAs in chronological order of formulation. The algorithm number is for
convenience of reference only.

Table 3. Experimental GAs

GA   Mutation   Population   Fitness Function
 1   1/1000     250          ff = 1/(f_1^2 + f_2^2),  s = 0.025
 2   1/1000     250          ff = 1/(f_1^2 + f_2^2 + f_3^2),  s = 0.025
 3   1/1000     250          ff = 1/(f_1^2 + f_2^2 + f_3^2 + f_4^2),  s = 0.025
 4   1/1000     250          ff_Si = 1/(f_1^2 + f_2^2 + f_3^2) at s_1 = 0.015, s_2 = 0.025;
                             ff = 1/(1/ff_S1 + 1/ff_S2)
 5   1/1000     250          ff_Si = 1/(f_1^2 + f_2^2 + f_3^2) at s_1 = 0.015, s_2 = 0.02, s_3 = 0.025;
                             ff = 1/(1/ff_S1 + 1/ff_S2 + 1/ff_S3)
 6   1/1000     250          ff_Si = 1/(f_1^2 + f_2^2 + f_3^2) at s_1 = 0.01, s_2 = 0.015, s_3 = 0.02, s_4 = 0.025;
                             ff = 1/(1/ff_S1 + 1/ff_S2 + 1/ff_S3 + 1/ff_S4)
 7   1/500      250          ff = 1/(f_1^2 + f_2^2),  s = 0.025
 8   1/100      250          ff = 1/(f_1^2 + f_2^2),  s = 0.025
 9   1/50       250          ff = 1/(f_1^2 + f_2^2),  s = 0.025
10   1/10       250          ff = 1/(f_1^2 + f_2^2),  s = 0.025
11   1/100      250          ff = 1/(f_1^2 + f_2^2 + f_3^2),  s = 0.025
12   1/100      250          ff_Si = 1/(f_1^2 + f_2^2 + f_3^2) at s_1 = 0.015, s_2 = 0.025;
                             ff = 1/(1/ff_S1 + 1/ff_S2)
13   1/100      1000         ff = 1/(f_1^2 + f_2^2 + f_3^2),  s = 0.025
14   1/100      2000         ff = 1/(f_1^2 + f_2^2 + f_3^2),  s = 0.025
15   1/100      2000         ff = 1/(f_1^2 + f_2^2),  s = 0.025
16   1/100      2000         ff = 1/(f_1^2 + f_2^2 + f_3^2 + f_4^2),  s = 0.025
17   1/100      2000         ff_Si = 1/(f_1^2 + f_2^2 + f_3^2) at s_1 = 0.015, s_2 = 0.025;
                             ff = 1/(1/ff_S1 + 1/ff_S2)
18   1/100      2000         ff_Si = 1/(f_1^2 + f_2^2 + f_3^2) at s_1 = 0.015, s_2 = 0.02, s_3 = 0.025;
                             ff = 1/(1/ff_S1 + 1/ff_S2 + 1/ff_S3)
19   1/100      2000         ff_Si = 1/(f_1^2 + f_2^2 + f_3^2) at s_1 = 0.01, s_2 = 0.015, s_3 = 0.02, s_4 = 0.025;
                             ff = 1/(1/ff_S1 + 1/ff_S2 + 1/ff_S3 + 1/ff_S4)
20   1/100      2000         ff = 1/(f_1^2 + f_2^2 + f_5^2),  s = 0.025
21   1/100      2000         ff = 1/(f_1^2 + f_2^2 + f_3^2 + f_5^2),  s = 0.025
22   1/100      2000         ff_Si = 1/(f_1^2 + f_2^2 + f_5^2) at s_1 = 0.015, s_2 = 0.025;
                             ff = 1/(1/ff_S1 + 1/ff_S2)
23   1/100      2000         ff_Si = 1/(f_1^2 + f_2^2 + f_3^2 + f_5^2) at s_1 = 0.015, s_2 = 0.025;
                             ff = 1/(1/ff_S1 + 1/ff_S2)
24   1/100      2000         An approximated version of GA 22, described in the GA 24 discussion below
25   1/100      2000         ff_S1 = 1/(f_1^2 + f_2^2) at s_1 = 0.015;
                             ff_S2 = 1/(f_1^2 + f_2^2 + f_5^2) at s_2 = 0.025;
                             ff = 1/(1/ff_S1 + 1/ff_S2)

One way to quantitatively show how well a GA converges is by finding what
percentages of the individuals in the final population fall within a certain percentage of
the convergent value. The convergent value was found by taking the mode. As an
example, the pie chart in Figure 4 shows what percentage of the individuals of the final
population fell within certain ranges away from the convergent value for GA 6, variable
X1. See appendix A for pie charts of variables from different GAs.

Figure 4. X1 of GA 6 convergence

Because of the random nature of GAs, each GA was run 10 times to get an
approximation of its behavior. The 10 convergence values were then plotted
against algorithm number, as shown in Figure 5.

Figure 5. Convergence values: a) R2, b) Rm, c) X1, d) Xm

GAs 1-3
GAs 1 and 2 were based upon the algorithms used for a similar problem in [10]. The
fitness functions, mutation rate, and population size are based upon results of that article.
It became apparent that the GAs did not consistently converge to a single value. One
possibility was that the GA was suffering from lack of constraints. Only the current,
power, and power factor were being used as comparison values for the fitness evaluation.
Power factor was not an independent parameter, so the algorithm was using two
comparison values to determine four unknowns. In an effort to counter this problem, GA
3 was introduced, which used the added constraint of rated output power. However,
instead of performing better, GA 3 performed terribly. In fact, it was one of the two worst
performing algorithms. The other was GA 16, which also used rated output power as a
constraint. These are the only two GAs that use rated output power.

GAs 4-6
GAs 4 to 6 were formulated as an alternative way of adding constraints. GAs 4, 5, and 6
take the fitness evaluation of GA 2 and evaluate at 2, 3, and 4 slip values, respectively.
GA 2 was chosen because it had more constraints than GA 1. These GAs did not show
any improvement over the previous ones.

GAs 7-10
Since the GAs still did not converge to consistent values, premature convergence, rather
than lack of constraints, was considered as a potential problem. Premature convergence
is a common problem in genetic algorithms, and many techniques are being developed to
counter it. In premature convergence, one solution in the initial population is so much
more fit than any of the other solutions that it copies itself into the next generation many
more times than any other string. The result is that it takes over the entire population
without giving the algorithm a chance to search for better solutions. There are several
ways to counter premature convergence. One method is to increase the mutation rate.
Increasing the mutation rate allows a constant flow of new combinations into the
population and keeps the algorithm from getting stuck at one solution. However, the
mutation rate cannot be increased too much or the algorithm becomes nothing more than
a random search.
GAs 7 to 10 explore a range of mutation rates. Each of the GAs 7,8,9, and 10
perform the same fitness function as GA 1 but with mutation rates of 1/500, 1/100, 1/50,
and 1/10, respectively. No improvement was noted. However, as the GAs kept
converging even with a mutation of 1/100, a mutation of 1/100 was used in later GAs to
keep the diversity as high as possible.

GAs 11 and 12
GAs 11 and 12 are based off of GA 2 and GA 4 (which is in turn based off of GA 2), but
with mutation rates of 1/100. No significant improvements were noted, although GA 12
appeared to be slightly better in approximating Xm than any previous GAs.

GAs 13 and 14
GAs 13 and 14 use another method to counter premature convergence: increasing the
population size. A larger population size allows more potential solutions to be considered
at once, reducing the chance of one relatively super-fit string. GAs 13 and 14 are based
off of GA 2 with mutation 1/100 and population sizes 1000 and 2000, respectively. The
results appeared to be very slightly better. The population size of 2000 was maintained
for later GAs.

GAs 15-19
For the sake of completeness, GAs 15, 16, 17, 18, and 19 were run. They are based off of
GAs 1, 3, 4, 5, and 6, respectively. Each of these GAs was run with a mutation rate of
1/100 and population size 2000. GA 16 was exceptionally bad, but the others showed
significant improvement in Xm and marginal improvement in X1.

GAs 20 - 23
Since the algorithms were still not performing acceptably, the constraint of shaft torque
was tried. GA 20 uses current, power, and shaft torque as constraints. GA 21 uses the
power factor in addition. These GAs showed tremendous improvement in Rm. GAs 22
and 23 are the equivalents of GAs 20 and 21, respectively, but evaluated at two slip values.
GAs 22 and 23 were by far the two best GAs. However, they still were not accurate
enough for a real application. They consistently gave results within ±25%. Results
within ±10% would be reasonable for a real application.

These GAs are also not practical for another reason. The GA assumes that the
user would know the shaft torque at two slip values. In practice, the user would know the
torque at rated slip, which could be used as one of the slip values, but the user would not know
torque at other slip values. Therefore, these GAs assume knowledge that is not usually
available, which limits the usefulness significantly.

GA 24
In order to counter the impracticality of the assumption that the user would know two
torque values, a linear approximation was used for the torque not at rated slip. This is a
common model used for the relationship between torque and slip near the rated slip. This
simple model assumes that the torque and slip are simply proportional to each other.
Since the rated slip and rated torque are known, any torque at a nearby slip value can be
approximated. This was tried for srated = 0.025 and sother = 0.015. Unfortunately, the GA
showed extreme sensitivity to the errors due to the linear approximation and gave useless
results.
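
As a concrete illustration of the approximation, with the slip values quoted above and rated torque T_{rated}, the assumed proportionality gives

    T_{est}(s_{other}) \approx T_{rated} \cdot \frac{s_{other}}{s_{rated}} = T_{rated} \cdot \frac{0.015}{0.025} = 0.6 \, T_{rated}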

GA 25
It was thought that the comparison of torque values at the slip value other than rated slip
might be a redundant condition and could be removed. If this constraint could be
removed, it would also eliminate errors due to linear approximations or any other
approximation of the true value. This is implemented in GA 25. However, this algorithm
shows considerable lack of consistency in different runs. It appears that the torque value
at the second slip value is essential in this formulation.

Chapter 5. Conclusions

This thesis has presented the steps taken in the formulation of a GA to extract
parameter values from the equivalent circuit of an induction motor while only using
quantities easily measured in an environment where the motor is already running and
cannot be removed from its setting. The ultimate goal was to use these parameter values
to calculate the efficiency of the motor. A GA that gives a rough estimate of 25% of the
real values of the parameters was successfully formulated. However, an precision of
10% or less is needed for practical use. Also, the GA is extremely sensitive to errors in
input data.
In order to make a GA that can be applied to this problem in practice, the
precision and robustness of the GA would need to be improved. This might be
accomplished by studying the shape of the various fitness functions employed. Fitness
function shape has a great impact on how sensitive the GA is to erroneous input data and
how easy it is for the GA to find the correct solution.

Appendix A

Pie Charts of Convergence

Each algorithm has four pie charts. R2 is located upper left, Rm
lower left, X1 upper right, and Xm lower right. Each pie chart
shows how many of the algorithms fall within certain percentage
ranges away from the convergent value.

[Legend and pie chart graphics omitted; only the figure captions are listed below.]

Figure A.1: Pie charts for GA 1
Figure A.2: Pie charts for GA 2
Figure A.3: Pie charts for GA 3
Figure A.4: Pie charts for GA 4
Figure A.5: Pie charts for GA 5
Figure A.6: Pie charts for GA 6
Figure A.7: Pie charts for GA 7
Figure A.8: Pie charts for GA 8
Figure A.9: Pie charts for GA 9
Figure A.10: Pie charts for GA 10
Figure A.11: Pie charts for GA 11
Figure A.12: Pie charts for GA 12
Figure A.13: Pie charts for GA 13
Figure A.14: Pie charts for GA 14
Figure A.15: Pie charts for GA 15
Figure A.16: Pie charts for GA 16
Figure A.17: Pie charts for GA 17
Figure A.18: Pie charts for GA 18
Figure A.19: Pie charts for GA 19
Figure A.20: Pie charts for GA 20
Figure A.21: Pie charts for GA 21
Figure A.22: Pie charts for GA 22
Figure A.23: Pie charts for GA 23
Figure A.24: Pie charts for GA 24
Figure A.25: Pie charts for GA 25
Appendix B

Source code for GAs (MATLAB)

Source Code decode.m
function popReal = decode(currentPop, parmLen, noParm, rangeR2, rangeRm, rangeX1, rangeXm)
% function: decode
%
% This function takes currentPop and converts the binary
% representations to floating point
% representations using the ranges specified by rangeR2, rangeRm,
% rangeX1, and rangeXm and
% parmLen number of bits for each of the noParm parameters. popReal is a popSize by
% noParm matrix, where popSize is the number of rows in matrix
% currentPop.
%
% By: Nadeeka Yapa
% Date: June 13, 2003

% set population size


[popSize, dummy] = size(currentPop);

% weighting corresponding to each position in currentPop


% for example, if each chromosome in currentPop had 3 parameters of
% length 3 each,
% binWeight would be: [4 2 1 4 2 1 4 2 1; 4 2 1....]
column = 1;
for power = (parmLen-1):(-1):0
binWeight(1:popSize, column) = 2^power;
column = column+1;
end
for currParmNo = 2:noParm
startColumn = (currParmNo-1)*parmLen + 1;
endColumn = currParmNo*parmLen;
binWeight(:,startColumn:endColumn) = binWeight(:,1:parmLen);
end

% multiplies '1' positions in currentPop by corresponding weight


weightedCurrPop = binWeight.*currentPop;

% adds up weights of every parameter to give each parameter the


% appropriate weight
% on scale of zero to (2^(parmLength) - 1)
for currParmNo = 1:noParm
startColumn = (currParmNo-1)*parmLen+1;
endColumn = currParmNo*parmLen;
weightedPopReal(:,currParmNo) = sum(weightedCurrPop(:,startColumn:endColumn),2);
end

% ranges is a vector of the range (difference between upper and lower


% limit) for each parameter
% startRange is a vector of lower end of range for each parameter
ranges = [rangeR2(2)-rangeR2(1), rangeRm(2)-rangeRm(1), rangeX1(2)-rangeX1(1), rangeXm(2)-rangeXm(1)];
startRange = [rangeR2(1), rangeRm(1), rangeX1(1), rangeXm(1)];

% fractPopReal puts weightedPopReal on a scale of 0 to 1


% popReal is the output
fractPopReal = weightedPopReal/(2^(parmLen)-1);
for row = 1:popSize
popReal(row,:) = fractPopReal(row,:).*ranges + startRange;
end

Source Code determSample.m
function noCopies = determSample(popFit, popSize);
% function: determSample
% This function takes a vector of fitness values and uses the
% deterministic sampling
% method to output a vector (noCopies) of how many copies of each
% chromosome (corresponding
% to each fitness value) should be in the next generation (with popSize
% individuals)
% By: Nadeeka Yapa
% Date: June 12, 2003

fitFract = popFit./(sum(popFit)); % fractional fitness (total is one)


weightFract = fitFract*popSize; % fractional fitness scaled so total is popSize
intContr = floor(weightFract); % number of copies from integer part
fractRem = weightFract - intContr; % fraction left after integer part is taken away
[dummy, ranking] = sort(fractRem); % ranking gives the indices from least value to highest value

nextPopSize = sum(intContr); % current size of new population (less than or equal to max)
fractIndex = popSize; % this index corresponds to highest ranking individual
noCopies = intContr; % number of copies vector (at this point only taking into account integer contributions)

for n = (nextPopSize + 1):popSize


noCopies(ranking(fractIndex)) = noCopies(ranking(fractIndex)) + 1;
fractIndex = fractIndex - 1;
end

Source Code ff123.m
function [popFitOne, popFitTwo, popFitThree, effVec] = ff123(popReal, parmVecEx, paraOrSeri);
% function: ff123
% This function gives three vectors of fitness values for the floating point
% population representation of matrix popReal, given parameter values for motor
% (at whatever load level the user has decided to use).
%
% popFitOne is the fitness vector using ff1
% popFitTwo is the fitness vector using ff2
% popFitThree is the fitness vector using ff3
%
% paraOrSeri determines if the parallel or series representation is being used.
% 1 means parallel, 2 means series
%
% By: Nadeeka Yapa
% Date: June 17, 2003

% extract parameter values from parameter vector


I0 = parmVecEx(1);
Ifl = parmVecEx(2);
ka = parmVecEx(3);
kc = parmVecEx(4);
r1 = parmVecEx(5);
sfl = parmVecEx(6);
Tr = parmVecEx(7);
Ts = parmVecEx(8);
xRat = parmVecEx(9);
Pout = parmVecEx(10);
I1 = parmVecEx(11);
Pinp = parmVecEx(12);
pf = parmVecEx(13);
s = parmVecEx(14);
voltage = parmVecEx(15);
[popSize,dummy] = size(popReal);

for n = 1:popSize
% variable values for the whole population (all column vectors)
if (paraOrSeri == 1)
r2 = popReal(n,1);
rm = popReal(n,2);
x1 = popReal(n,3);
xm = popReal(n,4);
end
if (paraOrSeri == 2)
r2 = popReal(n,1);
rmPrime = popReal(n,2);
x1 = popReal(n,3);
xmPrime = popReal(n,4);
end
x2 = x1/xRat;

% calculate fitness
rst = 0.018*r2*(1-sfl)/sfl;
Tt = (I1-I0)/(Ifl-I0)*(Tr-Ts)+Ts;
r1c = r1*(Tt+kc)/(Ts+kc);
r2c = r2*(Tt+ka)/(Ts+ka);
Y2 = 1/(r2c/s + rst + j*x2);

if (paraOrSeri == 1)
Ym = -j/xm + 1/rm;
end

if (paraOrSeri == 2)
Ym = 1/(rmPrime + j*xmPrime);
end

Y1 = 1/(r1c + j*x1);
V1 = voltage/(sqrt(3)+j*0);
I1_til = V1*Y1*(Y2+Ym)/(Y1+Y2+Ym);
I1est = abs(I1_til);
pfest = real(I1_til)/I1est;
I2 = abs(V1*Y1*Y2/(Y1+Y2+Ym));

if (paraOrSeri == 1)
Im = abs(V1*Y1/(rm*(Y1+Y2+Ym)));
Pinpest = 3*((I1^2)*r1c+(I2^2)*(r2c/s+rst)+(Im^2)*rm);
end
if (paraOrSeri == 2)
Im = abs(V1*Y1*Ym/(Y1+Y2+Ym));
Pinpest = 3*((I1^2)*r1c+(I2^2)*(r2c/s+rst)+(Im^2)*rmPrime);
end

Poutest = 3*(I2^2)*r2c*(1-s)/s;

f1 = (I1est - I1)*100/I1;
f2 = (Pinpest - Pinp)*100/Pinp;
f3 = (pfest - pf)*100/pf;
f4 = (Pout - Poutest)*100/Pout;

popFitOne(n) = 1/(f1^2 + f2^2);


popFitTwo(n) = 1/(f1^2 + f2^2 + f3^2);
popFitThree(n) = 1/(f1^2 + f2^2 + f3^2 + f4^2);
effVec(n) = Poutest/Pinpest*100;

end
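
A hedged usage sketch (not part of the original files): ff123 can be driven from the example-motor measurements returned by parmGuru, which is listed later in this appendix. The population bounds below are arbitrary assumptions made only for illustration.

% sketch: evaluate a random parallel-model population at 2% slip
popSize = 20;
parmVecEx = parmGuru(0.02);            % example-motor measurements (listed below)
% columns of popReal: r2, rm, x1, xm -- the bounds are illustrative assumptions
popReal = [rand(popSize,1)*2, 100 + rand(popSize,1)*400, ...
rand(popSize,1)*2, 10 + rand(popSize,1)*40];
[fit1, fit2, fit3, eff] = ff123(popReal, parmVecEx, 1);
[bestFit, bestIndex] = max(fit2);      % pick the fittest individual under FF2
bestEff = eff(bestIndex)               % its efficiency estimate (%)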

Source Code ffPara.m
function [popFitOne, popFitTwo, eff] = ffPara(popReal, parm);
% function: ffPara
% (parallel circuit representation)
%
% This function gives two vectors of fitness function (FF) values for the floating point
% population representation of matrix popReal, given parameter values for motor
% (at whatever load level the user has decided to use), using the formulation in
% example 9.2 in:
% "Electric Machinery & Transformers", by Bhag S. Guru and Huseyin R.
% Hiziroglu, 2nd ed., Saunders College Publishing, 1995
%
% FF1: Compares values for I1 and Pinp
% FF2: Compares values for I1, Pinp, and pf
%
% By: Nadeeka Yapa
% Date: July 9, 2003

% extract parameter values from parameter vector


r1 = parm(1);
xRat = parm(2);
I1 = parm(3);
Pinp = parm(4);
pf = parm(5);
s = parm(6);
voltage = parm(7);
frictWindLoss = parm(8);
[popSize,dummy] = size(popReal);
for n = 1:popSize
% variable values for the whole population (all column vectors)
r2 = popReal(n,1);
rm = popReal(n,2);
x1 = popReal(n,3);
xm = popReal(n,4);
x2 = x1/xRat;

% for fitness calculation


V1 = voltage/sqrt(3);               % per-phase voltage phasor
Z2 = r2/s + j*x2;                   % effective rotor impedance as referred to the stator
Z1 = r1 + j*x1;                     % stator winding impedance
Ze = 1/(1/rm + 1/(j*xm) + 1/Z2);    % equivalent impedance of rm, j*xm, Z2
Zin = Z1 + Ze;                      % total input impedance
I1_til = V1/Zin;                    % stator current
I1est = abs(I1_til);                % magnitude of stator current estimate
pfest = cos(angle(I1_til));         % power factor estimate
Pinpest = 3*abs(V1)*I1est*pfest;    % power input estimate

% efficiency calculation
Pscl = 3*I1est^2*r1; % stator copper loss (W)
E1 = V1 - I1_til*Z1; % air-gap (magnetizing branch) voltage (V)
Ic_til = E1/rm; % core-loss current (A)
Im_til = E1/(j*xm); % magnetization current (A)
ITheta_til = Ic_til + Im_til; % excitation current (A)
I2_til = I1_til - ITheta_til; % rotor current (A)
Pm = 3*(abs(Ic_til))^2*rm; % core loss (W)
Pag = Pinpest - Pscl - Pm; % air-gap power (W)
Prcl = 3*(abs(I2_til))^2*r2; % rotor copper loss (W)

Pd = Pag - Prcl; % power developed (W)
Po = Pd - frictWindLoss; % power output (W)

f1 = (I1est - I1)*100/I1;
f2 = (Pinpest - Pinp)*100/Pinp;
f3 = (pfest - pf)*100/pf;

eff(n) = Po/Pinpest*100;
popFitOne(n) = 1/(f1^2 + f2^2);
popFitTwo(n) = 1/(f1^2 + f2^2 + f3^2);
end
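
A similar usage sketch for ffPara (again an illustration under assumed bounds, not part of the thesis code), using the parameter vectors produced by parmGuruShort, listed later in this appendix:

% sketch: evaluate a random population against the book example at 1.5% slip
[parm01, parm015, parm02, parm025] = parmGuruShort(0);
popSize = 20;
popReal = [rand(popSize,1)*2, 100 + rand(popSize,1)*400, ...
rand(popSize,1)*2, 10 + rand(popSize,1)*40];   % arbitrary bounds: r2, rm, x1, xm
[fit1, fit2, eff] = ffPara(popReal, parm015);
[bestFit, bestIndex] = max(fit1);
bestEff = eff(bestIndex)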

Source Code ffParaAlt02.m
function [popFitOne, popFitTwo, eff] = ffParaAlt02(popReal, parm);
% function: ffParaAlt02 (same as ffPara, but with an absolute-error fitness)
% (parallel circuit representation)
%
% This function gives two vectors of fitness function (FF) values for the floating point
% population representation of matrix popReal, given parameter values for motor
% (at whatever load level the user has decided to use), using the formulation in
% example 9.2 in:
% "Electric Machinery & Transformers", by Bhag S. Guru and Huseyin R.
% Hiziroglu, 2nd ed., Saunders College Publishing, 1995
%
% FF1: Compares values for I1 and Pinp
% FF2: Compares values for I1, Pinp, and pf
%
% By: Nadeeka Yapa
% Date: July 9, 2003

% extract parameter values from parameter vector


r1 = parm(1);
xRat = parm(2);
I1 = parm(3);
Pinp = parm(4);
pf = parm(5);
s = parm(6);
voltage = parm(7);
frictWindLoss = parm(8);
[popSize,dummy] = size(popReal);
for n = 1:popSize
% variable values for the whole population (all column vectors)
r2 = popReal(n,1);
rm = popReal(n,2);
x1 = popReal(n,3);
xm = popReal(n,4);
x2 = x1/xRat;

% for fitness calculation


V1 = voltage/sqrt(3);               % per-phase voltage phasor
Z2 = r2/s + j*x2;                   % effective rotor impedance as referred to the stator
Z1 = r1 + j*x1;                     % stator winding impedance
Ze = 1/(1/rm + 1/(j*xm) + 1/Z2);    % equivalent impedance of rm, j*xm, Z2
Zin = Z1 + Ze;                      % total input impedance
I1_til = V1/Zin;                    % stator current
I1est = abs(I1_til);                % magnitude of stator current estimate
pfest = cos(angle(I1_til));         % power factor estimate
Pinpest = 3*abs(V1)*I1est*pfest;    % power input estimate

% efficiency calculation
Pscl = 3*I1est^2*r1; % stator copper loss (W)
E1 = V1 - I1_til*Z1; % air-gap (magnetizing branch) voltage (V)
Ic_til = E1/rm; % core-loss current (A)
Im_til = E1/(j*xm); % magnetization current (A)
ITheta_til = Ic_til + Im_til; % excitation current (A)
I2_til = I1_til - ITheta_til; % rotor current (A)
Pm = 3*(abs(Ic_til))^2*rm; % core loss (W)
Pag = Pinpest - Pscl - Pm; % air-gap power (W)
Prcl = 3*(abs(I2_til))^2*r2; % rotor copper loss (W)

Pd = Pag - Prcl; % power developed (W)
Po = Pd - frictWindLoss; % power output (W)

f1 = (I1est - I1)*100/I1;
f2 = (Pinpest - Pinp)*100/Pinp;
f3 = (pfest - pf)*100/pf;

eff(n) = Po/Pinpest*100;
popFitOne(n) = 1/(abs(f1) + abs(f2));
popFitTwo(n) = 1/(abs(f1) + abs(f2) + abs(f3));
end
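
The only difference from ffPara.m is the fitness formulation: a sum of absolute percentage errors instead of a sum of squared errors. The short illustration below (not part of the thesis code) shows how the two formulations treat the same pair of errors:

% illustration: squared-error vs. absolute-error fitness for errors of 1% and 5%
f1 = 1; f2 = 5;
fitSquared = 1/(f1^2 + f2^2)          % ffPara form:       1/26, about 0.038
fitAbsolute = 1/(abs(f1) + abs(f2))   % ffParaAlt02 form:   1/6, about 0.167
% the squared form penalizes the larger error much more heavily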

Source Code GAPieChartMaker.m
function convVec = GAPieChartMaker(RealNumberPop);

% function: GAPieChartMaker
% This function shows distribution of values in a GA population, centered
% around the convergent value and with percentage deviations
%
% By: Nadeeka Yapa
% Date: Feb 18, 2004

% Real Number Pop is a matrix with each row corresponding to one individual
% and each column corresponding to a parameter being searched for

% Test for convergence value


% Column 1: R2: 4 decimal places
R2orig = RealNumberPop(:,1);
R2 = R2orig*1e4;
R2 = round(R2);
i = 1;
for k = min(R2):max(R2)
R2Lengths(i) = length(find(R2==k));
i = i + 1;
end
[dummy, R2MaxIndex] = max(R2Lengths);
R2Conv = (R2MaxIndex + min(R2)-1)/1e4;

% Column 2: Rm: 1 decimal place


Rmorig = RealNumberPop(:,2);
Rm = Rmorig*10;
Rm = round(Rm);
i = 1;
for k = min(Rm):max(Rm)
RmLengths(i) = length(find(Rm==k));
i = i + 1;
end
[dummy, RmMaxIndex] = max(RmLengths);
RmConv = (RmMaxIndex + min(Rm)-1)/10;

% Column 3: X1: 4 decimal places


X1orig = RealNumberPop(:,3);
X1 = X1orig*1e4;
X1 = round(X1);
i = 1;
for k = min(X1):max(X1)
X1Lengths(i) = length(find(X1==k));
i = i + 1;
end
[dummy, X1MaxIndex] = max(X1Lengths);
X1Conv = (X1MaxIndex + min(X1)-1)/1e4;

% Column 4: Xm: 1 decimal place


Xmorig = RealNumberPop(:,4);
Xm = Xmorig*10;
Xm = round(Xm);
i = 1;
for k = min(Xm):max(Xm)
XmLengths(i) = length(find(Xm==k));
i = i + 1;
end
[dummy, XmMaxIndex] = max(XmLengths);
XmConv = (XmMaxIndex + min(Xm)-1)/10;

% Converged Values for R2,Rm,X1,Xm
convVec = [R2Conv, RmConv, X1Conv, XmConv];

% R2: Make Pie Chart: 1,5,10,50%


R2per01 = length(find((0.99*R2Conv<=R2orig)&(1.01*R2Conv>=R2orig)));
% within 1%
R2per05 = length(find((0.95*R2Conv<=R2orig)&(1.05*R2Conv>=R2orig))) - R2per01;
% 1 to 5%
R2per10 = length(find((0.90*R2Conv<=R2orig)&(1.1*R2Conv>=R2orig))) - R2per05 - R2per01;  % 5 to 10%
R2per50 = length(find((0.50*R2Conv<=R2orig)&(1.5*R2Conv>=R2orig))) - R2per10 - R2per05 - R2per01;
R2over = length(R2orig) - R2per50 - R2per10 - R2per05 - R2per01;
% other

figure
pie([R2per01,R2per05,R2per10,R2per50,R2over]);
set(gcf,'Color','white');
legend('within 1%','1% to 5%','5% to 10%','10% to 50%','over 50%');
title('R2 convergence');

% Rm: Make Pie Chart: 1,5,10,50%


Rmper01 = length(find((0.99*RmConv<=Rmorig)&(1.01*RmConv>=Rmorig)));
% within 1%
Rmper05 = length(find((0.95*RmConv<=Rmorig)&(1.05*RmConv>=Rmorig))) - Rmper01;
% 1 to 5%
Rmper10 = length(find((0.90*RmConv<=Rmorig)&(1.1*RmConv>=Rmorig))) - Rmper05 - Rmper01;  % 5 to 10%
Rmper50 = length(find((0.50*RmConv<=Rmorig)&(1.5*RmConv>=Rmorig))) - Rmper10 - Rmper05 - Rmper01;
Rmover = length(Rmorig) - Rmper50 - Rmper10 - Rmper05 - Rmper01;
% other

figure
pie([Rmper01,Rmper05,Rmper10,Rmper50,Rmover]);
set(gcf,'Color','white');
legend('within 1%','1% to 5%','5% to 10%','10% to 50%','over 50%');
title('Rm convergence');

% X1: Make Pie Chart: 1,5,10,50%


X1per01 = length(find((0.99*X1Conv<=X1orig)&(1.01*X1Conv>=X1orig)));
% within 1%
X1per05 = length(find((0.95*X1Conv<=X1orig)&(1.05*X1Conv>=X1orig))) - X1per01;
% 1 to 5%
X1per10 = length(find((0.90*X1Conv<=X1orig)&(1.1*X1Conv>=X1orig))) - X1per05 - X1per01;  % 5 to 10%
X1per50 = length(find((0.50*X1Conv<=X1orig)&(1.5*X1Conv>=X1orig))) - X1per10 - X1per05 - X1per01;
X1over = length(X1orig) - X1per50 - X1per10 - X1per05 - X1per01;
% other

figure
pie([X1per01,X1per05,X1per10,X1per50,X1over]);
set(gcf,'Color','white');
legend('within 1%','1% to 5%','5% to 10%','10% to 50%','over 50%');
title('X1 convergence');

% Xm: Make Pie Chart: 1,5,10,50%


Xmper01 = length(find((0.99*XmConv<=Xmorig)&(1.01*XmConv>=Xmorig)));
% within 1%
Xmper05 = length(find((0.95*XmConv<=Xmorig)&(1.05*XmConv>=Xmorig))) - Xmper01;
% 1 to 5%

Xmper10 = length(find((0.90*XmConv<=Xmorig)&(1.1*XmConv>=Xmorig))) - Xmper05 - Xmper01;  % 5 to 10%
Xmper50 = length(find((0.50*XmConv<=Xmorig)&(1.5*XmConv>=Xmorig))) - Xmper10 - Xmper05 - Xmper01;
Xmover = length(Xmorig) - Xmper50 - Xmper10 - Xmper05 - Xmper01;
% other

figure
pie([Xmper01,Xmper05,Xmper10,Xmper50,Xmover]);
set(gcf,'Color','white');
legend('within 1%','1% to 5%','5% to 10%','10% to 50%','over 50%');
title('Xm convergence');
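
A usage sketch (not part of the original files): any real-valued population matrix can be summarized this way; the values below are artificial.

% sketch: summarize an artificially generated, nearly converged population
popReal = repmat([0.55 250.0 0.75 35.0], 50, 1);   % 50 identical individuals
popReal = popReal .* (1 + 0.02*randn(50,4));       % add a small random spread
convVec = GAPieChartMaker(popReal)                 % modal values of R2, Rm, X1, Xm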

Source Code initPop.m
function currentPop = initPop(popSize, chromoLen);
% function: initPop
% This function creates a popSize X chromoLen matrix randomly filled with zeros and ones
% By: Nadeeka Yapa
% Date: June 12, 2003

currentPop = rand(popSize, chromoLen);


currentPop = round(currentPop);
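
For example (a sketch, not part of the thesis code), a population of six chromosomes, each encoding four parameters with ten bits apiece, would be created with:

popSize = 6; noParm = 4; parmLen = 10;
currentPop = initPop(popSize, noParm*parmLen);   % a 6-by-40 matrix of random bits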

Source Code mutation1000.m
function currentPop = mutation1000(currentPop);
% function: mutation1000
% This function mutates each bit of the matrix currentPop with a probability of 1/1000
% By: Nadeeka Yapa
% Date: June 12, 2003

% create matrix of random real numbers from 0 to 1000, same size as population matrix
dimensions = size(currentPop);
mutation = 1000*rand(dimensions);

% if random number is less than one, put a '1' there, otherwise put '0'
% (this creates a 1/1000 chance of a '1' being formed)
mutation = (mutation < 1);

% add together mutation matrix and population matrix and divide resulting matrix by 2,
% with remainder (toggles bits of any places in currentPop that correspond to a 1 in
% the mutation matrix)
currentPop = rem((currentPop + mutation),2);
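
The rem-by-2 step acts as an exclusive-OR: only the positions marked with a '1' in the mutation matrix are toggled. A tiny sketch with hand-picked values (illustration only):

currentPop = [1 0 1 1];
mutation   = [0 1 0 0];                       % a '1' marks a bit to be mutated
mutated    = rem(currentPop + mutation, 2)    % gives [1 1 1 1]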

Source Code mutationVary.m
function currentPop = mutationVary(currentPop, rate);
% function: mutationVary
% This function mutates each bit of the matrix currentPop with a
% probability of 1/rate
% By: Nadeeka Yapa
% Date: July 21, 2003

% create matrix of random real numbers from 0 to rate, same size as population matrix
dimensions = size(currentPop);
mutation = rate*rand(dimensions);

% if random number is less than one, put a '1' there, otherwise put '0'
% (this creates a 1/rate chance of a '1' being formed)
mutation = (mutation < 1);

% add together mutation matrix and population matrix and divide resulting matrix by 2,
% with remainder (toggles bits of any places in currentPop that correspond to a 1 in
% the mutation matrix)
currentPop = rem((currentPop + mutation),2);
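
mutationVary generalizes mutation1000: the rate argument sets the per-bit mutation probability to 1/rate. For instance (a sketch):

currentPop = initPop(30, 40);                  % 30 chromosomes of 40 bits
currentPop = mutationVary(currentPop, 50);     % on average 1 bit in 50 is flipped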

Source Code OnePtParmCross.m
function currentPop = onePtParmCross(currentPop, popSize, noParm, parmLen);
% function: onePtParmCross
% This function randomly crosses between members of the population currentPop.
% One single point crossover for each encoded parameter
% (There are noParm parameters with encoding length parmLen each)
% By: Nadeeka Yapa
% Date: June 12, 2003

% create a random crossover order by generating a popSize vector of random numbers
% and using the SORT function to convert to integers
crossOrder = rand(1, popSize);
[dummy, crossOrder] = sort(crossOrder);

% crossover between consecutive pairs (according to crossover order) for
% floor(popSize/2) times
for n = 1:floor(popSize/2)

% create a noParm length vector of random integers uniformly distributed
% between 2 and parmLen
crossPoints = rand(1, noParm);
crossPoints = ceil(1 + crossPoints*(parmLen - 1));

oneChromIndex = crossOrder(2*(n-1)+1);   % row number of chromosome one
twoChromIndex = crossOrder(2*n);         % row number of chromosome two

% crossover (parameter by parameter)
for k = 1:noParm
clear temp                                  % reset temp variable each time
startCopy = (k-1)*parmLen + crossPoints(k); % beginning of crossover segment
finCopy = k*parmLen;                        % end of crossover segment
temp = currentPop(oneChromIndex, startCopy:finCopy);
currentPop(oneChromIndex,startCopy:finCopy) = currentPop(twoChromIndex,startCopy:finCopy);
currentPop(twoChromIndex,startCopy:finCopy) = temp;
end
end
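
A sketch (not part of the original files) of the per-parameter crossover on a two-member population with two 4-bit parameters; the crossover points are drawn randomly inside the function, so the output changes from run to run:

currentPop = [1 1 1 1 0 0 0 0; 0 0 0 0 1 1 1 1];   % two chromosomes, two 4-bit parameters each
newPop = onePtParmCross(currentPop, 2, 2, 4)
% one possible result, if both crossover points fall at bit 3
% (so bits 3-4 of each parameter are exchanged):
%   1 1 0 0 0 0 1 1
%   0 0 1 1 1 1 0 0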

Source Code parmGuru.m
function parmVecEx = parmGuru(slip);
% function: parmGuru
% This function returns the parameters needed in vector parmVecEx for the function
% ffOneTwoThree (fitness function evaluation), IN THE NECESSARY ORDER.
% The parameters are calculated by exParameters.m from
% example 9.2 in:
% "Electric Machinery & Transformers", by Bhag S. Guru and Huseyin R.
% Hiziroglu, 2nd ed., Saunders College Publishing, 1995
%
% slip is an input that can have values of 0.01, 0.015, 0.02, or 0.025. The function
% will return a different parmVecEx for each of these values. If slip is not one of
% these values, the function will return an error message and break
%
% The following simplifications have been included (by choosing appropriate parameter values)
% in order to match the GA with the circuit in the book example
% rst set to zero
% no correction for temperature
% Pout has no meaning (value to be discarded -- do not use FF3)
%
% Notes: AB = arbitrary value
%
% By: Nadeeka Yapa
% Date: June 18, 2003

% motor parameters
Ifl = 1;            % AB (not equal I0): nameplate stator current at full load (A)
ka = 1;             % AB: correction factor for aluminum
kc = 1;             % AB: correction factor for copper
r1 = 0.5;           % stator phase resistance (ohms)
sfl = 1;            % (sets rst = 0): slip of motor under full load (unitless)
Tr = 1;             % AB: ref temp for insulation system of class A (Celsius)
Ts = Tr;            % (gets rid of temperature correction): ambient temperature (Celsius)
xRat = 0.75/0.5;    % value of fraction x1/x2
Pout = 1;           % AB: output power in watts
voltage = 230;      % voltage (V)

% slips: 0.01,0.015,0.02,0.025
I1 = [5.5920 8.0469 10.4653 12.8316]; % current, I1 (A)
Pinp = [2142.7 3123.3 4075.8 4998.5]; % input power, Pinp (W)
pf = [0.9618 0.9743 0.9776 0.9778]; % power factor, pf
s = [0.01 0.015 0.02 0.025]; % slip, s (unitless)
I0 = I1;                               % (eliminate temperature correction): no load current (A)

% MAINTAIN ORDER
if (slip == 0.01)
parmVecEx = [I0(1);Ifl;ka;kc;r1;sfl;Tr;Ts;xRat;Pout;I1(1);Pinp(1);pf(1);s(1);voltage]; % 0.01 slip
elseif (slip == 0.015)
parmVecEx = [I0(2);Ifl;ka;kc;r1;sfl;Tr;Ts;xRat;Pout;I1(2);Pinp(2);pf(2);s(2);voltage]; % 0.015 slip
elseif (slip == 0.02)
parmVecEx = [I0(3);Ifl;ka;kc;r1;sfl;Tr;Ts;xRat;Pout;I1(3);Pinp(3);pf(3);s(3);voltage]; % 0.02 slip
elseif (slip == 0.025)
parmVecEx = [I0(4);Ifl;ka;kc;r1;sfl;Tr;Ts;xRat;Pout;I1(4);Pinp(4);pf(4);s(4);voltage]; % 0.025 slip
else
fprintf('ERROR: specified slip is not 0.01, 0.015, 0.02, or 0.025 \n');
end

Source Code parmGuruShort.m
function [parm01, parm015, parm02, parm025] = parmGuruShort(dummy);
% function: parmGuruShort
% This function returns the parameters needed in vector parm for the function
% (IN THE NECESSARY ORDER)
% ffPara (fitness function evaluation). The parameters are calculated by
% exParameters.m from example 9.2 in:
% "Electric Machinery & Transformers", by Bhag S. Guru and Huseyin R.
% Hiziroglu, 2nd ed., Saunders College Publishing, 1995
%
% This function returns one parameter vector for each of the slips 0.01, 0.015,
% 0.02, and 0.025 (parm01, parm015, parm02, and parm025 respectively)
%
% The following simplifications have been included (by choosing appropriate parameter values)
% in order to match the GA with the circuit in the book example
% rst set to zero
% no correction for temperature
%
% By: Nadeeka Yapa
% Date: July 9, 2003
% Status: UNFINISHED

% motor parameters
r1 = 0.5; % stator phase resistance (ohms)
xRat = 0.75/0.5; % value of fraction x1/x2
voltage = 230; % voltage (V)
frictWindLoss = 150; % friction and windage loss (W)

% slips: 0.01,0.015,0.02,0.025
I1 = [5.592042819492 8.0469154013517 10.4652253271488 12.8316800402308];      % current (A)
Pinp = [2142.71847245779 3123.28405712423 4075.80174367022 4998.46333267479]; % input power (W)
pf = [0.9618473705707 0.9743019806470 0.9776333400298 0.9778326297326];       % power factor

% MAINTAIN ORDER
parm01 = [r1;xRat;I1(1);Pinp(1);pf(1);0.01 ;voltage;frictWindLoss];  % 0.01 slip
parm015 = [r1;xRat;I1(2);Pinp(2);pf(2);0.015;voltage;frictWindLoss]; % 0.015 slip
parm02 = [r1;xRat;I1(3);Pinp(3);pf(3);0.02 ;voltage;frictWindLoss];  % 0.02 slip
parm025 = [r1;xRat;I1(4);Pinp(4);pf(4);0.025;voltage;frictWindLoss]; % 0.025 slip

Source Code parmPaper.m
function parmVecEx = parmPaper(load);
% function: parmPaper
% This function returns the parameters needed in vector parmVecEx for the function
% ffOneTwoThree (fitness function evaluation), IN THE NECESSARY ORDER.
% The parameters are the same as in:
% P. Pillay, V. Levin, P. Otaduy, J. Kueck, "In-situ induction motor
% efficiency determination using the genetic algorithm," IEEE
% Transactions on Energy Conversion, vol. 13, pp. 326-333, Dec. 1998
%
% load is an input that can have values of 25, 50, 75, or 100 (%). The function
% will return a different parmVecEx for each of these values. If load is not one of
% these values, the function will return an error message and break
%
% Assumptions: I0 = 2.5 A
%
% By: Nadeeka Yapa
% Date: June 18, 2003

% hp to watt conversion constant


hpWattConv = 745.6999;

% motor parameters
I0 = 2.5;                   % no load current (A)
Ifl = 6.7;                  % nameplate stator current at full load (A)
ka = 225;                   % correction factor for aluminum
kc = 234.5;                 % correction factor for copper
r1 = 1.635;                 % stator phase resistance (ohms)
sfl = (1800 - 1719)/1800;   % slip of motor under full load (unitless)
Tr = 75;                    % ref temp for insulation system of class A (Celsius)
Ts = 25;                    % ambient temperature (Celsius)
xRat = 0.67;                % value of fraction x1/x2
Pout = hpWattConv*5;        % output power in watts (output power = 5 hp)
voltage = 460;              % voltage (V)

% parameter values at different loads


rpm = [1794 1764 1741 1719];   % rpm at 25, 50, 75, and 100% load

% 25, 50, 75, and 100% load


varyLoadParm = [2.7 4 5.3 6.7; % current, I1 (A)
381 2047 3272 4326; % input power, Pinp (W)
0.177 0.642 0.775 0.810; % power factor, pf
((1800-rpm)/1800)]; % slip, s (unitless)

% MAINTAIN ORDER
if (load == 25)
parmVecEx = [I0;Ifl;ka;kc;r1;sfl;Tr;Ts;xRat;Pout;varyLoadParm(:,1);voltage];
% 25% load
elseif (load == 50)

parmVecEx = [I0;Ifl;ka;kc;r1;sfl;Tr;Ts;xRat;Pout;varyLoadParm(:,2);voltage];
% 50% load
elseif (load == 75)
parmVecEx = [I0;Ifl;ka;kc;r1;sfl;Tr;Ts;xRat;Pout;varyLoadParm(:,3);voltage];
% 75% load
elseif (load == 100)
parmVecEx = [I0;Ifl;ka;kc;r1;sfl;Tr;Ts;xRat;Pout;varyLoadParm(:,4);voltage];
% 100% load
else
fprintf('ERROR: specified load is not 25, 50, 75, or 100%% \n');
return;
end

Source Code reproduce.m
function newPop = reproduce(currentPop, popSize, noCopies);
% function: reproduce
% This function takes the chromosomes in currentPop and copies the appropriate number
% of each chromosome according to the number of copies in vector noCopies (one value
% for each string). currentPop has popSize chromosomes in it.
% By: Nadeeka Yapa
% Date: June 13, 2003
% Status: FUNCTIONAL

newRow = 1; %row of new population being considered

for oldRow = 1:popSize                % row of old population being considered
loopControl = noCopies(oldRow);
while (loopControl > 0)               % how many more times current string has to be copied
newPop(newRow,:) = currentPop(oldRow,:);
newRow = newRow + 1;
loopControl = loopControl - 1;
end
end
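
Taken together, determSample and reproduce implement the selection step of the GA. A sketch of one selection/reproduction cycle on a toy population (illustrative values only):

currentPop = initPop(4, 8);                    % four chromosomes of eight bits
popFit = [0.1 0.4 0.3 0.2];                    % toy fitness values
noCopies = determSample(popFit, 4);            % gives [0 2 1 1]
newPop = reproduce(currentPop, 4, noCopies);   % fitter strings appear more often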

References

[1] J. M. Arroyo and A. J. Conejo, "A parallel repair genetic algorithm to solve the unit
commitment problem," IEEE Trans. on Power Syst., vol. 17, pp. 1216-1224, Nov. 2002

[2] J.-B. Park, Y.-M. Park, J.-R. Won, and K. Y. Lee, "An improved genetic algorithm for
generation expansion planning," IEEE Trans. on Power Syst., vol. 15, pp. 916-922, Aug. 2000

[3] E. K. Burke and A. J. Smith, "Hybrid evolutionary techniques for the maintenance
scheduling problem," IEEE Trans. on Power Syst., vol. 15, pp. 122-128, Feb. 2000

[4] I. G. Damousis, A. G. Bakirtzis, and P. S. Dokopoulos, "Network-constrained
economic dispatch using real-coded genetic algorithm," IEEE Trans. on Power Syst., vol. 18, pp. 198-205, Feb. 2003

[5] B. Milosevic and M. Begovic, "Nondominated sorting genetic algorithm for optimal
phasor measurement placement," IEEE Trans. on Power Syst., vol. 18, pp. 69-75, Feb. 2003

[6] A. G. Bakirtzis, P. N. Biskas, C. E. Zoumas, and V. Petridis, "Optimal power flow by
enhanced genetic algorithm," IEEE Trans. on Power Syst., vol. 17, pp. 229-236, May 2002

[7] J. Tippayachai, W. Ongsakul, and I. Ngamroo, "Parallel micro genetic algorithm for
constrained economic dispatch," IEEE Trans. on Power Syst., vol. 17, pp. 790-797

[8] Y.-G. Wu, C.-Y. Ho, and D.-Y. Wang, "A diploid genetic approach to short-term
scheduling of hydro-thermal system," IEEE Trans. on Power Syst., vol. 14, pp. 1268-1274

[9] B. S. Guru and H. R. Hiziroglu, Electric Machinery and Transformers, 2nd edition,
Harcourt Brace & Company, USA, 1988, pp. 501-503

[10] P. Pillay, V. Levin, P. Otaduy, and J. Kueck, "In-situ induction motor efficiency
determination using the genetic algorithm," IEEE Trans. on Energy Conv., vol. 13, no. 4, pp. 326-333
