
Parameter Control in Genetic Algorithm and Particle Swarm Optimization

K. Indira
Research Scholar
Department of Computer Science and Engineering
Pondicherry Engineering College
Puducherry, India
+91 9585287662
induharini@gmail.com

S. Kanmani
Professor
Department of Information Technology
Pondicherry Engineering College
Puducherry, India
kanmani@pec.edu

Abstract: Association rule mining is a data mining task on which a great deal of academic research has been done and for which many algorithms have been proposed. Most methods treat association rule mining as a twofold process, which increases the complexity of the system and takes up more time and space. Evolutionary Algorithms (EA) are a fast-growing family of search-based optimization methods for association rule mining. Among EAs, Genetic Algorithm (GA) and Particle Swarm Optimization (PSO) are the best suited for mining association rules. The bottleneck in both GA and PSO is setting precise values for their control parameters. Values are assigned to the control parameters either through parameter tuning or parameter control. This paper proposes an adaptive methodology for the parameter control of 1) the mutation rate in GA, based on the fitness values and the mutation rate history, and 2) the acceleration coefficients and inertia weight in PSO, based on Estimation of Evolution State (EES) and the fitness value respectively. When tested on five datasets from the University of California Irvine (UCI) repository, both proposed adaptive methods generated association rules with better accuracy and rule measures than simple GA and PSO.

Keywords: Association Rule Mining, Genetic Algorithm, Particle Swarm Optimization, Adaptive GA, Adaptive PSO, Estimation of Evolution State.


1. Introduction

Classification rule discovery and association rule mining are two important data mining tasks. Association rule mining discovers all rules from the training set that satisfy minimum support and confidence thresholds, while classification rule mining discovers a set of rules for predicting the class of unseen data. Mined rules can be effective in uncovering unknown relationships, providing results that can serve as the basis of forecasts and decisions [1]. They have proven to be very useful tools for an enterprise as it strives to improve its competitiveness and profitability. The application areas of association rule mining and classification rule discovery range from market analysis to business intelligence, and now extend to epidemiology, clinical medicine, fluid dynamics, astrophysics, and crime prevention. Hence the accuracy of the mined rules and the relationships between attributes have become important issues. Standard rule mining methods such as Apriori [2,19] and FP-growth tree [19,20] scan the whole dataset for each attribute match, increasing the input/output overhead of the system. The rules generated pursue a single objective, accuracy alone, and the number of rules generated is vast; pruning and summarization are needed to filter out the significant rules [6].

The efficiency of association rule mining can be enhanced by

- reducing the number of passes over the database,
- making the process multiobjective, and
- maintaining the search space efficiently.

Evolutionary Algorithms (EA) such as Genetic Algorithm (GA) and Particle Swarm Optimization (PSO) provide solutions for the above three requirements while mining association rules [13, 24, 30]. Both methods generate rules with better predictive accuracy and reduced complexity. However, one laborious aspect of all EAs, including PSO and GA, is setting appropriate values for the control parameters [46]. This paper proposes mining association rules for the classification task using Adaptive Genetic Algorithm (AGA) and Adaptive Particle Swarm Optimization (APSO).

When solving problems with GA and PSO, performance depends on the properties of the algorithms. The properties of both GA and PSO depend on the parameter settings, and hence users need to find apt values for the parameters to optimize performance. The interactions between the parameters exhibit complex behavior, so each parameter has a different effect depending on the values set for the others. Without prior knowledge of the problem, parameter setting is difficult and time consuming. The two major ways of parameter setting are parameter tuning and parameter control [16]. Parameter tuning, the commonly practiced approach, amounts to finding appropriate values for the parameters before running the algorithm. Parameter control steadily modifies the control parameter values during the run. This can be achieved through deterministic, adaptive, or self-adaptive techniques [15].

Deterministic parameter control modifies the strategy parameter using a deterministic rule, without any feedback. This method is unreliable for most problems because the parameter adjustments must rely on the current status of the problem. In the self-adaptive approach, the parameters to be controlled are encoded into the candidate solution, which may result in a deadlock: a good solution depends on finding a good setting of the parameters, and obtaining a good setting of the parameters depends on finding a good solution. Moreover, extra bits are required to store these strategy parameters, so the dimensionality of the search space is increased. The corresponding search space thus becomes larger, and the complexity of the problem increases.

The adaptive method is therefore the solution. There have been several attempts at adapting the parameters of GA and PSO. Both the crossover and mutation rates are adapted based on the reproduction rate by Srinivas and Patnaik [45], aiming at high convergence speed and precision. An adaptive GA that determines the mutation and crossover rates of an individual by its location in a two-dimensional lattice plane is proposed in [26]; the algorithm keeps these parameters diverse by limiting the number of individuals in each lattice. The role of the inertia weight is considered critical for the convergence of PSO. An adaptive inertia weight is proposed by Eberhart and Kennedy [14], and the inertia weight is linearly decreased over the generations in Arumugam et al. [4]. The acceleration coefficients are adapted dynamically in Ratnaweera et al. [39] and, in Arumugam et al. [4], by balancing the cognitive and social components. The acceleration coefficients are adjusted based on an evolutionary state estimator, and the inertia weight is adapted through an evolutionary factor, by Zhan et al. [52] to perform a global search over the entire search space with faster convergence. These adaptive methodologies adjust the parameters based on strategies independent of the data used in the applications.

The literature on adaptive strategies in GA and PSO focuses on mechanisms of adaptation over generations, either decreasing or increasing the parameter values independently of the data involved, to enhance performance. As association rule mining is mainly based on the relationships or correlations between items in a dataset, a data-dependent adaptation strategy could generate better association rules both qualitatively and quantitatively. The performance of an adaptive strategy for mining association rules could be enhanced if the parameter adjustments are made depending on i) the data involved and ii) the fitness values used for evolution through the generations. In this paper, a parameter control mechanism based on EES is proposed for adapting the acceleration coefficients in PSO, and the inertia weight is adapted during the evolutionary process based on the fitness values. In the adaptive GA, the mutation rate is made adaptive based on the fitness value and the mutation rate history.

The rest of the paper is organized as follows. Section 2 introduces the preliminaries on association rules, GA and PSO. Section 3 reviews the literature related to the proposed methodology. Section 4 presents the proposed methodology for mining association rules with Adaptive Genetic Algorithm and Adaptive Particle Swarm Optimization. Section 5 reports the experimental settings, experimental results and discussion. Finally, Section 6 draws the conclusion.

2. Preliminaries

This section briefly discusses association rules and their related factors. The basic features of multiobjective optimization, Genetic Algorithm and Particle Swarm Optimization are also discussed.


2.1 Association Rule

Association rules are a class of important regularities in data. Association rule mining is commonly stated as follows [1]: Let I = {i1, i2, ..., in} be a set of n binary attributes called items. Let D = {t1, t2, ..., tm} be a set of transactions called the database. Each transaction in D has a unique transaction ID and contains a subset of the items in I. A rule is defined as an implication of the form x → y, where x, y ⊆ I and x ∩ y = ∅. The itemsets x and y are called the antecedent (left-hand side or LHS) and the consequent (right-hand side or RHS) of the rule. Often rules are restricted to a single item in the consequent; the same restriction is adopted in this paper.

There are two basic measures for association rules: support and confidence. The support of an association rule is defined as the percentage/fraction of records that contain x ∪ y out of the total number of records in the database:

support(x → y) = |{t ∈ D : x ∪ y ⊆ t}| / |D|   (1)

The confidence of an association rule is defined as the percentage/fraction of transactions that contain x ∪ y out of the total number of records that contain x; if this percentage exceeds the confidence threshold, an interesting association rule can be generated:

confidence(x → y) = support(x ∪ y) / support(x)   (2)

Confidence is a measure of the strength of the association rule, and support(x ∪ y) is the support of the antecedent and consequent occurring together in the dataset.
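Equations (1) and (2) can be sketched directly over a toy transaction database; the item names and transactions below are purely illustrative.

```python
# Support and confidence of a rule x -> y over a transaction database D,
# following equations (1) and (2). The dataset is an illustrative example.
def support(itemset, transactions):
    """Fraction of transactions containing every item in `itemset`."""
    hits = sum(1 for t in transactions if itemset <= t)
    return hits / len(transactions)

def confidence(x, y, transactions):
    """support(x | y) / support(x); returns 0 if x never occurs."""
    sup_x = support(x, transactions)
    return support(x | y, transactions) / sup_x if sup_x else 0.0

D = [{"bread", "milk"},
     {"bread", "butter"},
     {"bread", "milk", "butter"},
     {"milk"}]
print(support({"bread", "milk"}, D))       # 2/4 = 0.5
print(confidence({"bread"}, {"milk"}, D))  # 0.5 / 0.75 ≈ 0.667
```

Representing transactions as Python sets makes the subset test `itemset <= t` a direct transcription of x ∪ y ⊆ t.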

2.2 Multiobjective Optimization

A general minimization problem of M objectives can be mathematically stated as: given x = [x1, x2, ..., xd], where d is the dimension of the decision variable space,

Minimize: F(x) = [f1(x), f2(x), ..., fM(x)], subject to:
gj(x) ≤ 0, j = 1, 2, ..., J, and
hk(x) = 0, k = 1, 2, ..., K,

where fi(x) is the i-th objective function, M is the number of objectives, gj(x) is the j-th inequality constraint, and hk(x) is the k-th equality constraint.

A solution is said to dominate another solution if it is not worse than that solution

in all the objectives and is strictly better than that in at least one objective. The

solutions over the entire solution space which are not dominated by any other

solution are called Pareto-optimal solutions.
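The dominance relation above translates into a short predicate; the sketch below assumes maximization of every objective, with the objective vectors given as plain tuples.

```python
# Pareto dominance for a maximization problem: a dominates b if a is no
# worse in every objective and strictly better in at least one.
def dominates(a, b):
    return (all(ai >= bi for ai, bi in zip(a, b))
            and any(ai > bi for ai, bi in zip(a, b)))

def pareto_front(solutions):
    """Solutions not dominated by any other solution."""
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o != s)]

points = [(1, 5), (2, 4), (3, 3), (2, 2), (4, 1)]
print(pareto_front(points))   # (2, 2) is dominated by (2, 4) and (3, 3)
```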

Association rules generated are usually large in number and carry redundant information. Much work on association rule mining has focused on efficiency, effectiveness and redundancy; focus is also needed on the quality of the rules mined. In this paper, association rule mining using GA and PSO is treated as a multiobjective problem where rules are evaluated quantitatively and qualitatively based on the following five measures.

- Predictive Accuracy
- Laplace
- Conviction
- Leverage
- Lift

i. Predictive Accuracy (also Confidence) measures the effectiveness of the rules mined. The mined rules must have high predictive accuracy.

PredictiveAccuracy(x → y) = support(x ∪ y) / support(x)   (3)

where x is the antecedent and y the consequent.

ii. Laplace is a confidence estimator that takes support into account, becoming more pessimistic as the support of the rule decreases [12,18]. It ranges within [0, 1] and, with supports taken as absolute counts, is defined as

Laplace(x → y) = (support(x ∪ y) + 1) / (support(x) + 2)   (4)

iii. Conviction is sensitive to rule direction and attempts to measure the degree of implication of a rule [8]. It ranges within [0.5, +∞); values far from 1 indicate interesting rules.

Conviction(x → y) = (1 − support(y)) / (1 − confidence(x → y))   (5)

iv. Leverage, also known as the Piatetsky-Shapiro measure [36], measures how much more often the antecedent and consequent co-occur than would be expected if they were independent. It ranges within [−0.25, 0.25] and is defined as

Leverage(x → y) = support(x ∪ y) − support(x) · support(y)   (6)

v. Lift (also called interest by Brin et al. [8]) measures how far from independence the antecedent and the consequent are. It ranges within [0, +∞). A lift value of 1 indicates that the items co-occur in the database as expected under independence; values greater than 1 indicate that the items are associated.

Lift(x → y) = support(x ∪ y) / (support(x) · support(y))   (7)

The objective function, or fitness function, is derived to meet the above objectives based on the confidence and support values of the mined rules. Maximization of the fitness function is the goal.

F(x) = confidence(x) × log(support(x) × length(x) + 1)   (8)

where length(x) is the length of the rule x.
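As a sketch, the fitness combining confidence, support and rule length can be computed as below. The exact form, confidence(x) × log(support(x) × length(x) + 1), is the reading assumed here (it matches the fitness line in the APSO pseudocode of Section 4.2); the +1 inside the logarithm keeps the fitness at zero, rather than undefined, for zero-support rules.

```python
import math

# Fitness of a candidate rule per equation (8), assuming support and
# confidence are fractions in [0, 1] and length(x) is the number of
# attributes in the rule (both conventions are assumptions).
def fitness(confidence, support, length):
    return confidence * math.log(support * length + 1)

print(fitness(0.9, 0.4, 3))   # 0.9 * ln(2.2) ≈ 0.71
print(fitness(0.8, 0.0, 4))   # zero-support rule scores 0.0
```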


2.3 Genetic Algorithm

Genetic algorithm (GA) is a search method for solving optimization problems and

modeling evolutionary systems. A typical GA works with a population of

individuals, each being a solution for the optimization problem with an associated

fitness value. Successive generations of the population are produced based on the

principle of survival of the fittest. The evolution process is achieved through

genetic operations, such as crossover and mutation, on the population of

individuals.

The steps in Genetic Algorithm are given below.

Genetic algorithm ( )

{

Initialize population randomly;

Evaluate fitness of each individual in the population;

While stopping condition not achieved

{

Perform selection;

Perform crossover and mutation;

Evaluate fitness of each individual in the population;

}

}

In simple GA, the three basic operators, namely the selection, crossover and mutation operators, are fixed a priori. The speed and success of the GA depend greatly on the correct mix of GA operators and on the probabilities of the crossover and mutation operations. The optimum parameters for these operators depend on the problem to which the GA is applied and on the fitness of the current population.
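The steps above can be sketched as a minimal runnable GA. The OneMax objective (count of 1-bits), the population size and the fixed rates are illustrative stand-ins for the rule-mining fitness and parameters discussed later.

```python
import random

# Minimal GA matching the pseudocode above: roulette-wheel selection,
# one-point crossover, bit-flip mutation. OneMax is a toy objective.
random.seed(1)
POP, BITS, PC, PM, GENS = 20, 16, 0.8, 0.05, 60

def fitness(ind):
    return sum(ind)                      # OneMax: number of 1-bits

def roulette(pop):
    total = sum(fitness(i) for i in pop)
    pick = random.uniform(0, total)
    acc = 0.0
    for ind in pop:
        acc += fitness(ind)
        if acc >= pick:
            return ind
    return pop[-1]

pop = [[random.randint(0, 1) for _ in range(BITS)] for _ in range(POP)]
for _ in range(GENS):
    nxt = []
    while len(nxt) < POP:
        a, b = roulette(pop)[:], roulette(pop)[:]   # selection
        if random.random() < PC:                    # one-point crossover
            cut = random.randrange(1, BITS)
            a, b = a[:cut] + b[cut:], b[:cut] + a[cut:]
        for child in (a, b):                        # bit-flip mutation
            for k in range(BITS):
                if random.random() < PM:
                    child[k] ^= 1
            nxt.append(child)
    pop = nxt[:POP]

best = max(pop, key=fitness)
print(fitness(best))   # typically close to BITS after 60 generations
```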

2.4 Particle Swarm Optimization

Swarm Intelligence (SI) [28] is an innovative distributed intelligent paradigm for solving optimization problems that originally took its inspiration from biological examples of swarming, flocking and herding phenomena in vertebrates. Particle Swarm Optimization (PSO) [27] incorporates swarming behaviors observed in flocks of birds, schools of fish, swarms of bees, and even human social behavior.

PSO is initialized with a group of random particles (solutions) and then searches for optima by updating generations. In every iteration, each particle is updated by following two best values. The first is the best solution (fitness) the particle has achieved so far, called pBest. The other best value tracked by the particle swarm optimizer is the best value obtained so far by any particle in the population; this global best is called gBest. After finding the two best values, each particle updates its velocity and position accordingly.

Each particle p, at some iteration t, has a position x(t) and a displacement velocity v(t). The personal best (pBest) and global best (gBest) positions are stored in the associated memory. The velocity and position are updated using equations 9 and 10 respectively:

vi(t+1) = ω · vi(t) + c1 · rand() · (pBesti − xi(t)) + c2 · rand() · (gBest − xi(t))   (9)

xi(t+1) = xi(t) + vi(t+1)   (10)

where

ω is the inertia weight,
vi(t) is the velocity of the i-th particle before the update,
vi(t+1) is the velocity of the i-th particle after the update,
xi(t) is the position of the i-th (current) particle,
i is the particle index,
d is the dimension of the search space,
rand() is a random number in the range (0, 1),
c1 is the individual (cognitive) factor and c2 the social factor,
pBesti is the particle best, and
gBest is the global best.

Particle velocities on each dimension are clamped to a maximum velocity Vmax. If the sum of accelerations causes the velocity on a dimension to exceed Vmax, a parameter specified by the user, then the velocity on that dimension is limited to Vmax. This method is called the Vmax method [42].

The inertia weight controls the impact of the velocity history on the new velocity and balances global and local search. Suitable fine-tuning of the cognitive and social parameters c1 and c2 results in faster convergence of the algorithm and alleviates the risk of settling in a local minimum. The pseudocode for PSO is given below.

For each particle
{
    Initialize particle
}
End

Do
{
    For each particle
    {
        Calculate fitness value
        If the fitness value is better than the best fitness value (pBest) in history
            set current value as the new pBest
    }
    End

    Choose the particle with the best fitness value of all the particles as the gBest

    For each particle
    {
        Calculate particle velocity according to equation (9)
        Update particle position according to equation (10)
    }
    End
}
While maximum iterations or minimum error criterion is not attained
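The pseudocode above can be sketched as a runnable minimal PSO implementing equations (9) and (10) with Vmax clamping. The sphere function and all parameter values are illustrative; in rule mining the fitness of equation (8) would take the objective's place.

```python
import random

# Minimal PSO: velocity update (eq. 9), position update (eq. 10),
# V_max clamping, minimizing the sphere function as a toy objective.
random.seed(7)
N, D, ITERS = 15, 3, 100
W, C1, C2, VMAX = 0.7, 1.5, 1.5, 0.5

def f(x):                                 # sphere: minimum 0 at the origin
    return sum(xi * xi for xi in x)

pos = [[random.uniform(-5, 5) for _ in range(D)] for _ in range(N)]
vel = [[0.0] * D for _ in range(N)]
pbest = [p[:] for p in pos]
gbest = min(pbest, key=f)[:]

for _ in range(ITERS):
    for i in range(N):
        for d in range(D):                # velocity update, equation (9)
            vel[i][d] = (W * vel[i][d]
                         + C1 * random.random() * (pbest[i][d] - pos[i][d])
                         + C2 * random.random() * (gbest[d] - pos[i][d]))
            vel[i][d] = max(-VMAX, min(VMAX, vel[i][d]))   # V_max method
            pos[i][d] += vel[i][d]        # position update, equation (10)
        if f(pos[i]) < f(pbest[i]):       # update personal best
            pbest[i] = pos[i][:]
            if f(pbest[i]) < f(gbest):    # update global best
                gbest = pbest[i][:]

print(round(f(gbest), 6))                 # converges close to 0
```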

3. Literature review

The issue of parameter setting has been relevant since the beginning of EA research and practice. Eiben et al. [15,16] emphasize that the parameter values greatly determine the success of an EA in finding an optimal or near-optimal solution to a given problem. Choosing appropriate parameter values is a demanding task. There are many methods to control the parameter settings during an EA run. The parameters are not independent, but trying all combinations is impossible, and the parameter values found are appropriate only for the tested problem instances. Nannen et al. [33] state that a major problem of parameter tuning is the weak understanding of the effects of EA parameters on algorithm performance.

In view of the potentially important applications of GAs, many improvement ideas have been proposed. Marseguerra et al. [63] proposed a drop-by-drop idea for progressive Monte Carlo simulation to reduce the total running time of the GA process. Kwon et al. [64] suggested a successive zooming genetic algorithm to identify global solutions using continuous zooming factors. Paszkowicz [65] equipped the GA with a dynamic penalty function. Ilgi and Tunali [66] carried out a factorial experiment and regression analysis to identify the best GA parameter settings. Back [5] embedded the mutation rate into the chromosomes of a GA to observe the performance of the self-adaptive approach on different functions. Spears [43] added an extra bit to the schematic of the chromosome to investigate the relative probability of applying two-point crossover versus uniform crossover in GA.

Besides these methods, an abundance of literature has shown that adaptive GAs (AGAs) are able to improve GA performance. San Jose-Revuelta [67] showed that an adaptive GA requires a much smaller population size, in the range of 10 to 20, yet can obtain a satisfactory solution. Many approaches have been proposed for adjusting GA parameters such as the mutation probability, crossover probability [68,69] and population size [70] in the course of a genetic algorithm. Hop and Tabucanon [21] presented a new and original approach to solve the lot size problem using an adaptive genetic algorithm with automatic self-adjustment of three operator rates, namely the rates of the crossover, mutation and reproduction operations. An adaptive directed mutation operator in a real-coded genetic algorithm was applied to solve complex function optimization problems [37]; it enhances the ability of GAs to search for global optima and speeds convergence by integrating a local directional search strategy with adaptive random search strategies. A hybrid and adaptive fitness function was proposed in [10], in which both filter and wrapper approaches were applied for feature selection via genetic algorithm to generate different subsets for the individual classifiers.

In the recent past, many approaches have been applied to investigate how to adapt suitable parameters. The impact of the inertia weight and maximum velocity in PSO was analyzed in [40]: different combinations of inertia weight and maximum velocity resulted in different performance on the Schaffer function, and a time-varying inertia weight led to better performance. Parsopoulos et al. [35] focused on the adaptation of the most important parameter of unified particle swarm optimization, which was previously proposed in [34]. According to the results of the experiments, the self-adaptive scheme proposed in [35] can balance the capabilities of exploration and exploitation well. Yau-Tarng et al. [49] proposed an adaptive fuzzy particle swarm optimization algorithm utilizing fuzzy set theory to adjust the PSO acceleration coefficients adaptively, thereby improving the accuracy and efficiency of searches. The adaptive parameter control system of [41] automatically adjusts the parameters of an evolutionary algorithm according to the entire search history, in a parameter-less manner; for GA, the methodology automatically controlled the crossover operator, crossover values and mutation operator, and for PSO, the two learning factors and the inertia weight were adjusted automatically. A novel self-adaptive inertia weight particle swarm optimization approach to determine the optimal generation schedule of a cascaded hydroelectric system was proposed in [3]. To overcome the premature convergence of particle swarm optimization, the evolution direction of each particle is redirected dynamically by adjusting the two sensitive parameters, i.e. the acceleration coefficients of PSO, during the evolution process [50]. Several strategies [9,25,38] have been proposed to tune the contributions of various sub-algorithms adaptively according to their previous performance, and have shown good results.

From the methodologies in the literature, it is seen that significant improvement in the performance of GA and PSO can be achieved by setting appropriate values for the control parameters. In particular, the adaptive method of parameter control has received wide coverage. So far, however, adaptation has mostly been based on parameter analysis, independent of the application being executed. Though this approach enhances performance, association rule mining is strongly based on the items in the dataset and the relationships among them. Hence the methods are expected to perform well when the adaptation is made data dependent. This paper proposes a methodology for association rule mining based on GA and PSO where the parameters are set depending on the dataset and the fitness values generated.

4. Methodology

Any heuristic search is characterized by two basic behaviors: exploration and exploitation. These concepts are often mutually conflicting, in the sense that if exploitation of a search is increased, then exploration decreases, and vice versa. Manipulating the control parameters of GA and PSO based on the application, and hence on the data, balances these two behaviors.

The roles of the parameters are significant in the performance of the methodologies. This paper proposes adaptive parameter control for the parameters of both Genetic Algorithm and Particle Swarm Optimization.


The parameters involved in GA and PSO are listed in Table 1.

Table 1. Parameters of GA and PSO

Parameter                     GA/PSO   Parameter Role
Mutation rate (pm)            GA       Maintains the genetic diversity of the population from one generation to the next
Crossover rate (pc)           GA       Shares information between chromosomes
Inertia weight (ω)            PSO      Controls the impact of the velocity history on the new velocity
Acceleration coefficient c1   PSO      Maintains the diversity of the swarm
Acceleration coefficient c2   PSO      Drives convergence towards the global optimum

4.1 Adaptive Genetic Algorithm (AGA)

Genetic algorithms, when applied to mining association rules, perform a global search and cope better with attribute interaction. Roulette wheel selection is adopted, where the parents for crossover and mutation are selected based on their fitness values; if a candidate has a high fitness value, its chance of selection is high. The crossover and mutation operations are governed by the crossover and mutation rates, usually defined by the user.

The efficiency of the rules mined by a genetic algorithm mainly depends on the mutation rate [7] and the crossover rate, which affect the convergence rate. In the 1960s, L. Fogel et al. [59] illustrated how mutation and selection can be used to evolve finite state automata for a variety of tasks. Sophisticated versions of evolution strategies, with adaptive mutation rates, proved quite useful for function optimization tasks [58,61]. Recent studies confirm this view, illustrating the power of mutation [62]. D. Fogel has continued the earlier work of L. Fogel and makes an even stronger claim: that crossover has no general advantage over mutation [60]. It is possible to implement mutation with a parameter that is adapted during the genetic algorithm search.

Tuning the value of the mutation rate thus becomes an important criterion for mining association rules with GA. A higher mutation rate generates chromosomes that deviate greatly from the original values, resulting in longer exploration time. A lower mutation rate results in crowding of chromosomes towards the global optimum, thereby limiting the search space. The dataset from which the association rules are mined is considered when modifying the mutation rate during the evolution process, so the mutation rate is made adaptive based on the history of mutation rates and the fitness value.

The algorithm for the adaptive genetic algorithm for mining association rules is given below.

Initialize population randomly;
Evaluate fitness of each individual in the population;
While the stopping condition is not met
{
    Perform selection;
    Perform crossover and mutation;
    Evaluate fitness of each individual;
    Adapt the mutation rate;
}

The Michigan encoding method is applied for the chromosome representation. The fitness function used for evaluating the chromosomes is based on equation 8. The mutation operator is made adaptive to maintain and introduce diversity in the population throughout the evolution process. The adaptation of the mutation rate is made dependent on the fitness value, to introduce diversity into the population in future generations. The initial mutation rate, set along with an adjustment factor, maintains the diversity of the population. The proposed adaptive mutation rate derived from the objective function is given in equation 11.

pm(n+1) = pm0 + α · (fmax(n+1) − fi(n)) / (fmax(n+1) − fmean)   (11)

pm(n+1) is the mutation rate of the (n+1)-th generation. The first generation mutation rate is pm0. fmean is the mean fitness of the itemset, fmax(n+1) is the highest fitness among the individuals of the (n+1)-th generation, and fi(n) is the fitness of individual i in the n-th generation. α is the adjustment factor, which is set within the range [0, 1].

The selection of individuals for reproduction (crossover and mutation) is based on the fitness value. Adapting the mutation rate based on the fitness values of the itemset results in the generation of offspring with higher fitness values for the new population [45]. This leads towards the global optimum more easily, thereby maintaining the search space effectively. The main drawback of GA is its lack of memory; adapting the mutation rate based on the history of mutation rates solves this problem, thus enhancing the performance of GA.
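As a sketch, one plausible reading of equation (11) raises the mutation rate for individuals far below the generation's best fitness, scaled by the adjustment factor α; the exact arithmetic form and the clamping to [0, 1] are assumptions of this illustration, not the definitive implementation.

```python
# Adaptive mutation rate in the spirit of equation (11): the rate grows
# with an individual's distance from the generation's best fitness,
# scaled by alpha. The exact form here is an illustrative assumption.
def adaptive_mutation_rate(pm0, f_i, f_max, f_mean, alpha):
    if f_max == f_mean:              # degenerate population: keep base rate
        return pm0
    rate = pm0 + alpha * (f_max - f_i) / (f_max - f_mean)
    return min(max(rate, 0.0), 1.0)  # keep the probability within [0, 1]

# A weak individual mutates more than one near the best fitness.
print(adaptive_mutation_rate(0.05, f_i=0.2, f_max=0.9, f_mean=0.5, alpha=0.1))
print(adaptive_mutation_rate(0.05, f_i=0.88, f_max=0.9, f_mean=0.5, alpha=0.1))
```

This preserves the qualitative behavior described above: below-average individuals receive extra diversity-introducing mutation, while near-best individuals are disturbed less.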

4.2 Adaptive Particle Swarm Optimization (APSO)

A swarm [17] consists of a set of particles {x1, ..., xm} of size m, moving within the search space S ⊆ Rd, each representing a potential solution to the problem:

Find min F(x), x ∈ S

where x is a particle, S is the search space, d is the dimension of the search space, Rd is the set of potential solutions, and F is the fitness function associated with the problem, which we consider to be a minimization problem without loss of generality.

PSO is mainly governed by three key parameters important for the speed, convergence and efficiency of the algorithm [48]: the inertia weight (ω) and two positive acceleration coefficients (c1 and c2). The inertia weight controls the impact of the velocity history on the new velocity. The acceleration parameters are typically two positive constants, called the cognitive parameter c1 and the social parameter c2.

The inertia weight plays a key role in providing a balance between the exploration and exploitation processes. A linearly decreasing inertia weight over time enhances the efficiency and performance of PSO [47]. Suitable fine-tuning of the cognitive and social parameters c1 and c2 may result in faster convergence of the algorithm and alleviate the risk of settling in a local minimum. To adapt the parameters of PSO in the proposed adaptive PSO methodology for association rule mining, parameter tuning is performed based on the evolutionary state to which the data fit. The acceleration coefficients are adapted based on the dataset from which the association rules are mined, and the inertia weight is adjusted based on the fitness values of the individuals. The pseudocode for the Adaptive PSO is given below.

/* Ns: size of the swarm, C: maximum number of iterations, Of: the final output */

i.   t = 0, randomly initialize S0
     Initialize xi, ∀i, i ∈ {1, ..., Ns}      /* xi: the i-th particle */
     Initialize vi, ∀i, i ∈ {1, ..., Ns}      /* vi: the velocity of the i-th particle */
     Initialize ω1, c1(1), c2(1)              /* ω: inertia weight, c1, c2: acceleration coefficients */
     Pbi ← xi, ∀i, i ∈ {1, ..., Ns}           /* Pbi: the personal best of the i-th particle */
     Gb ← xi                                  /* Gb: the global best particle */

ii.  for t = 1 to C                           /* C: total number of iterations */
         for i = 1 to Ns
             F(xi) = confidence(xi) × log(support(xi) × length(xi) + 1)
                                              /* F(xi): fitness of xi */
             If (F(xi) < F(Pbi))
                 Pbi ← xi                     /* Update personal best */
         Gb ← min(Pb1, Pb2, ..., PbNs)        /* Update global best */
         adjust parameters(ωt, c1(t), c2(t))  /* Adaptive adjustment */
         vi(t) = ωt · vi(t−1) + c1(t) · r1 · (Pbi − xi) + c2(t) · r2 · (Gb − xi)
                                              /* Velocity update; r1, r2: random values */
         xi(t) = xi(t−1) + vi(t)              /* Position update */
         At ← non dominated(St ∪ At)          /* Update the archive */

iii. Of ← At and stop                         /* Of: output */

The adjust parameters(ωt, c1(t), c2(t)) step in the above pseudocode is achieved through the newly proposed adaptive mechanism. The proposed approach divides the evolution of the data into four states based on the estimation of evolutionary states: Convergence, Exploration, Exploitation and Jumping out.


4.2.1 Estimation of Evolutionary State (EES)

During the PSO process, the population distribution characteristics vary not only with the generation number but also with the evolutionary state. For example, at an early stage the particles may be scattered across various areas, and hence the population distribution is dispersive. As the evolutionary process goes on, however, particles cluster together and converge to a locally or globally optimal area, so the population distribution information differs from that of the early stage. This changing population distribution information is therefore detected and used to estimate the evolutionary state.

Based on the search behaviors and the population distribution characteristics of
the PSO, the Estimation of Evolutionary State is done as follows.

a. The distance between particles is calculated using the Euclidean distance
measure for each particle i using the following equation

   d_i = (1/(N-1)) Σ_{j=1, j≠i}^{N} sqrt( Σ_{k=1}^{D} (x_i^k - x_j^k)^2 )    (12)

where N is the population size, D the dimensionality, and x_i and x_j are the
i-th and j-th particles in the population respectively.

b. Calculate the evolutionary state estimator e, defined as

   e = (d_g - d_min) / (d_max - d_min)    (13)

where d_g is the distance measure of the globally best particle, and d_max and
d_min are the maximum and minimum distance measures respectively from step a.

c. Record the evolutionary factor e over 100 generations for each dataset
individually.

d. Classify the estimator e into the state it belongs to: Exploration,
Exploitation, Convergence or Jumping out, for each dataset based on the
evolutionary states.
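Steps a and b can be sketched as follows; the swarm layout (a list of coordinate tuples) and the choice of the globally best particle's index are illustrative:

```python
import math

def mean_distances(swarm):
    """d_i: mean Euclidean distance of particle i to all other particles (Eq. 12)."""
    n = len(swarm)
    dists = []
    for i, xi in enumerate(swarm):
        total = 0.0
        for j, xj in enumerate(swarm):
            if i != j:
                total += math.sqrt(sum((a - b) ** 2 for a, b in zip(xi, xj)))
        dists.append(total / (n - 1))
    return dists

def evolutionary_factor(swarm, g_index):
    """e = (d_g - d_min) / (d_max - d_min), in [0, 1] (Eq. 13)."""
    d = mean_distances(swarm)
    d_min, d_max = min(d), max(d)
    if d_max == d_min:
        return 0.0  # degenerate swarm: every particle equally spread
    return (d[g_index] - d_min) / (d_max - d_min)

# Three clustered particles plus one outlier; particle 0 taken as global best.
swarm = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (5.0, 5.0)]
e = evolutionary_factor(swarm, g_index=0)
```

A best particle sitting inside the cluster yields a small e (convergence-like), while a best particle far from the cluster drives e towards 1 (jumping out).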

The evolutionary state estimator e over one hundred generations is plotted for
the Zoo dataset in Figure 1 below. The evolutionary states are estimated based
on the e values. The state transitions differ across the five datasets and are
nondeterministic and fuzzy. Based on the evolutionary state estimator values, a
classification is made to determine the intervals at which state transitions
occur.


Figure 1. Evolutionary state information robustly revealed by e at run time for Zoo dataset

The intervals arrived at for the five datasets based on EES are shown in Table 2.

Table 2. Classification into States by Evolutionary Factor

State        | Lenses  | Car Evaluation | Habermans Survival | Postoperative Patient | Zoo
Convergence  | 0.0-0.3 | 0.0-0.15       | 0.0-0.4            | 0.0-0.5               | 0.0-0.15
Exploitation | 0.1-0.4 | 0.15-0.25      | 0.3-0.7            | 0.2-0.6               | 0.1-0.35
Exploration  | 0.2-0.7 | 0.1-0.3        | 0.6-0.9            | 0.4-0.8               | 0.2-0.4
Jumping out  | 0.6-1   | 0.3-1          | 0.8-1              | 0.7-1                 | 0.3-1

The change of state as reflected in the PSO sequence is Convergence ->
Exploitation -> Jumping out -> Convergence.

The formulation for the numerical implementation of the classification for the
Habermans Survival dataset is as follows.

Case (a) Exploration: A medium to large value of e represents exploration, whose
membership function is defined as

   μ_Exploration(e) = 0,               0 <= e <= 0.6
                      (e - 0.6)/0.1,   0.6 < e <= 0.7
                      1,               0.7 < e <= 0.8
                      (0.9 - e)/0.1,   0.8 < e <= 0.9
                      0,               0.9 < e <= 1        (14a)

Case (b) Exploitation: A shrunk value of e represents exploitation, whose
membership function is defined as

   μ_Exploitation(e) = 0,              0 <= e <= 0.3
                       (e - 0.3)/0.1,  0.3 < e <= 0.4
                       1,              0.4 < e <= 0.6
                       (0.7 - e)/0.1,  0.6 < e <= 0.7
                       0,              0.7 < e <= 1        (14b)


Case (c) Convergence: A minimal value of e represents convergence, whose
membership function is defined as

   μ_Convergence(e) = 1,               0 <= e <= 0.3
                      (0.4 - e)/0.1,   0.3 < e <= 0.4
                      0,               0.4 < e <= 1        (14c)

Case (d) Jumping out: When PSO is jumping out of a local optimum, the globally
best particle is distinctively away from the swarming cluster. Hence the largest
values of e reflect this state, whose membership function is thus defined as

   μ_JumpingOut(e) = 0,                0 <= e <= 0.8
                     (e - 0.8)/0.1,    0.8 < e <= 0.9
                     1,                0.9 < e <= 1        (14d)

Therefore, at a transitional period, two memberships will be activated, and e can

be classified to either state. The singleton method of defuzzification is adopted.
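A sketch of the state classification with singleton defuzzification, using the Habermans Survival intervals from Table 2; the piecewise-linear (trapezoidal) membership shapes are a plausible reading of equations (14a)-(14d), not a verbatim reproduction:

```python
def trapezoid(e, a, b, c, d):
    # Membership rising on [a, b], flat at 1 on [b, c], falling on [c, d].
    if e <= a or e >= d:
        return 0.0
    if b <= e <= c:
        return 1.0
    if e < b:
        return (e - a) / (b - a)
    return (d - e) / (d - c)

def memberships(e):
    """Membership of e in each state, Habermans Survival intervals (Table 2)."""
    return {
        "Convergence": 1.0 if e <= 0.3 else max(0.0, (0.4 - e) / 0.1),
        "Exploitation": trapezoid(e, 0.3, 0.4, 0.6, 0.7),
        "Exploration": trapezoid(e, 0.6, 0.7, 0.8, 0.9),
        "Jumping out": 1.0 if e >= 0.9 else max(0.0, (e - 0.8) / 0.1),
    }

def classify(e):
    # Singleton defuzzification: pick the state with the largest membership.
    m = memberships(e)
    return max(m, key=m.get)
```

At a transitional value such as e = 0.35, both the convergence and exploitation memberships are active (0.5 each), which is exactly the fuzzy overlap the text describes.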

The distribution of all particles in PSO based on Euclidean distance calculation is

illustrated in Figure 2.

(The figure legend distinguishes particles in the exploration, exploitation,
convergence and jumping-out states, and the best particle of the swarm.)

Figure 2. Population Distribution in PSO based on Evolutionary factor e

Based on the classification into states using the evolutionary estimation factor
e and the fitness values, the adaptation of the acceleration coefficients and
the inertia weight is performed.

4.2.2 Adaptive Control of Acceleration Coefficients

The acceleration coefficients are made adaptive through the classification of
evolutionary states. Parameter c1 represents the self-cognition that pulls the
particle to its own historical best position, helping in the exploration of
local niches and maintaining the diversity of the swarm. Parameter c2 represents
the social influence that pushes the swarm to converge to the current globally
best region, helping with fast convergence. Both c1 and c2 are initially set to
2 [29, 31, 42] based on the literature. The acceleration coefficients are
adaptively altered during evolution according to the evolutionary state, with
the strategies developed for the four states given in Table 3. The step sizes
for the increases and decreases are arrived at empirically for the datasets
taken up for study.


Table 3. Control Strategies for c1 and c2

State        | c1                | c2
Exploration  | Increase          | Decrease
Exploitation | Increase slightly | Decrease slightly
Convergence  | Increase slightly | Increase slightly
Jumping out  | Decrease          | Increase

The increases and decreases in the values of c1 and c2 given in Table 3 are
discussed below.

Exploration: During exploration, particles should be allowed to explore as many
optimal regions as possible. This avoids crowding over a single optimum,
probably a local optimum, and explores the target thoroughly. An increase in the
value of c1 and a decrease in c2 facilitate this process.

Exploitation: In this state, particles group towards the historical best
positions of each particle. The local information of the particle aids this
process. A slight increase in c1 advances the search around the particle best
(pBest) positions. At the same time, a slight decrease in c2 avoids the
deception of local optima, as the final global position has yet to be explored.

Convergence: In this state the swarm identifies the global optimum. All the
other particles in the swarm should be led towards the globally optimal region.
A slight increase in the value of c2 helps this process. To speed up
convergence, a slight increase in the value of c1 is adopted.

Jumping out: The global best (gBest) particle moves away from the local optimum
towards the global optimum, taking it away from the crowding cluster. Once any
particle in the swarm reaches this region, all particles should follow the same
pattern rapidly. A large c2 along with a relatively small c1 value helps to
achieve this goal.
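The control strategy of Table 3 can be sketched as a lookup of change directions per state; the concrete step sizes below are placeholders (the paper fixes them empirically in Section 4.2.3):

```python
# Direction of change for (c1, c2) in each state, following Table 3:
# +1 = increase, -1 = decrease; "slight" changes use the smaller step.
STRATEGY = {
    "Exploration":  (+1, -1),
    "Exploitation": (+1, -1),   # slight
    "Convergence":  (+1, +1),   # slight
    "Jumping out":  (-1, +1),
}

def adjust(c1, c2, state, step=0.06, slight_step=0.02):
    # Step sizes are illustrative assumptions, not values fixed by Table 3.
    s = slight_step if state in ("Exploitation", "Convergence") else step
    d1, d2 = STRATEGY[state]
    return c1 + d1 * s, c2 + d2 * s

c1_new, c2_new = adjust(2.0, 2.0, "Exploration")
```

The mapping of which states use the "slight" step mirrors the qualitative discussion above (exploitation and convergence adjust gently, exploration and jumping out more strongly).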

4.2.3 Acceleration Coefficient Bounds

The adjustments to the acceleration coefficients should be minimal, so as to
maintain the balance between the values of c1 and c2. Hence the maximum increase
or decrease between two generations is bounded by the range [0.02, 0.1]; based
on trials, the two step sizes are set to 0.02 and 0.06 respectively. The values
of c1 and c2 are clamped to the interval [1.5, 2.5] in order to maintain the
diversity of the swarm and fast convergence [11]. The sum of the acceleration
coefficients is limited to 4.0 [51]; when the sum exceeds this limit, both c1
and c2 are normalized based on equation 15:

   ci = 4.0 * ci / (c1 + c2),  i = 1, 2    (15)
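The clamping and normalization described above can be sketched as follows; the interval [1.5, 2.5] and the 4.0 cap come from the text, and equation 15 is taken as a proportional rescaling of both coefficients:

```python
def bound(c1, c2, lo=1.5, hi=2.5, cap=4.0):
    """Clamp c1, c2 to [lo, hi]; if c1 + c2 exceeds cap, rescale both
    proportionally so their sum equals cap (Eq. 15)."""
    c1 = min(max(c1, lo), hi)
    c2 = min(max(c2, lo), hi)
    if c1 + c2 > cap:
        s = c1 + c2
        c1, c2 = cap * c1 / s, cap * c2 / s
    return c1, c2
```

For example, coefficients that drift to (3.0, 2.6) are first clamped to (2.5, 2.5) and then rescaled to (2.0, 2.0), keeping their ratio while restoring the sum limit.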

The adaptation behavior of the acceleration coefficients c1 and c2 through the
change of states in the estimation of the evolutionary state for the Zoo dataset
is shown in Figure 3.


Figure 3. Adaptation of Acceleration Coefficients through EES for Zoo Dataset

4.2.4 Inertia Weight Adaptation

The inertia weight controls the impact of previous flying experience and is
utilized to keep the balance between exploration and exploitation. The particle
adjusts its trajectory according to its best experience and uses the information
of its neighbors. In addition, the inertia weight is an important convergence
factor: the smaller the inertia weight, the faster the convergence of PSO. A
gradual linear decrease in inertia weight may swerve the particles from their
global optimum [53]. Hence a nonlinear adaptation of the inertia weight, as
given in equation 16, is proposed. The global best particle is derived based on
the fitness values of the particles in the swarm, and the proposed methodology
adapts the inertia weight based on the fitness values exhibited by the
particles:

   ω = { ω_min + (ω_max - ω_min)(f - f_min)/(f_avg - f_min),   f <= f_avg
       { ω_max,                                                 f > f_avg    (16)

where f, f_min and f_avg are fitness values determined by the current particles
in the swarm: f is the fitness of the particle, f_min the minimum fitness value
of all particles in the swarm, and f_avg the mean fitness value. As a result,
all the information searched, whether current or past, is utilized to update the
future velocity. This also helps to deal with the problem of falling into a
local optimum. The adaptation of the inertia weight for the Zoo dataset over
generations is given in Figure 4.
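A sketch of the fitness-based inertia weight of equation (16), assuming fitness is minimized (as in the pseudocode, where the global best is the minimum) and that the 0.9 and 0.4 listed in Table 5 are the maximum and minimum inertia weights:

```python
def inertia_weight(f, f_min, f_avg, w_min=0.4, w_max=0.9):
    """Nonlinear, fitness-based inertia weight (Eq. 16, as reconstructed).
    Better-than-average particles (f <= f_avg under minimization) get a
    weight scaled between w_min and w_max; worse particles keep w_max."""
    if f <= f_avg and f_avg != f_min:
        return w_min + (w_max - w_min) * (f - f_min) / (f_avg - f_min)
    return w_max

w = inertia_weight(2.0, f_min=1.0, f_avg=3.0)  # mid-quality particle
```

The best particle in the swarm thus receives the smallest weight (fast local convergence) while poor particles keep a large weight and continue to explore.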


Figure 4. Inertia Weight Adaptation for Zoo dataset

From Figure 4 it can be seen that, in general, the inertia weight decreases
roughly linearly, while in detail it is adapted dynamically based on the fitness
information. The value of the inertia weight swings around 0.461 frequently at
later generations. In this region the algorithm can balance searching and
convergence ability in the search space, reducing the risk of being trapped in
local minima and improving the diversity of PSO.

5. Experimental Results and Discussions

The objective of the adaptive GA and adaptive PSO is to enhance the performance
of association rule mining by making the adaptation dependent on the dataset
used. To validate the performance of the proposed methodology, nine datasets
from the University of California Irvine (UCI) repository [32] have been used
for the study: Lenses, Habermans Survival, Car Evaluation, Postoperative
Patient, Zoo, Iris, Nursery, Tic Tac Toe and Wisconsin Breast Cancer.

The experiments were developed in Java and run in a Windows environment. Each
methodology was run 10 times on the chosen dataset, and the number of
generations was fixed at 100. The datasets considered for the experiments are
listed in Table 4.

Table 4. Datasets Description

Dataset                     | Attributes | Instances | Attribute characteristics
Lenses                      | 4          | 24        | Categorical
Car Evaluation              | 6          | 1728      | Categorical, Integer
Habermans Survival          | 3          | 310       | Integer
Post-operative Patient Care | 8          | 87        | Categorical, Integer
Zoo                         | 16         | 101       | Categorical, Binary, Integer
Iris                        | 4          | 150       | Real
Nursery                     | 8          | 12960     | Categorical
Tic Tac Toe                 | 9          | 958       | Categorical
Wisconsin Breast Cancer     | 32         | 569       | Real


The first five datasets (Lenses, Car Evaluation, Habermans Survival,
Postoperative Patient and Zoo) were used for analyzing the performance of AGA
and APSO in comparison with the simple GA and simple PSO.

In the adaptive GA, the mutation rate was made self-adaptive based on the
analysis of control parameters [22], where the mutation operation was found to
influence the accuracy of the system while the crossover operation affects the
convergence rate alone. Hence the crossover rate is kept at a fixed value. For
the mutation rate, in addition to the feedback from the earlier mutation rate,
the fitness value is also considered during adaptation. This enhances the
accuracy of the association rules mined and makes the adaptation data dependent.

The parameter settings for the AGA and APSO methodologies are given in Table 5.

Table 5. Parameter Setting for AGA and APSO

Parameter Name        | Parameter Value
Population Size       | Lenses: 20; Habermans Survival: 300; Car Evaluation: 1000; Postoperative Care: 50; Zoo: 100
Number of Generations | 100
                      | 0.3
pc                    | 0.3
                      | 0.25
c1, c2                | 2
ω_max                 | 0.9
ω_min                 | 0.4

The scope of this study on association rule mining using adaptive GA and
adaptive PSO is to

- mine association rules with adaptive GA and PSO with multiple objectives,
  namely accuracy and significance of the rules generated
- compare the performance with simple GA and simple PSO (the non-adaptive
  methodologies)
- analyze the performance over generations
- analyze the convergence of AGA and APSO in terms of execution time

The AGA and APSO methodologies are applied on the five datasets to mine
association rules, and the predictive accuracy is recorded. The results are
compared with the simple GA and PSO performance [23] implemented on the same
platform with the same datasets.

5.1 Performance of AGA

The predictive accuracy of the association rules mined by this method is
compared with the results of the simple GA, as plotted in Figure 5. The
predictive accuracy of the rules mined by the AGA improves by 8% to 12.5% over
the simple GA across the datasets. The iteration at which the maximum predictive
accuracy is achieved has also been reduced, owing to the effectiveness of the
adaptation methodology in fixing the search space. To analyze the effect of AGA
on accuracy, the mutation rate of the adaptive GA at the final iteration is
noted. This mutation rate is applied to the simple GA (GA with AGA mutation) and
the results are compared with those of AGA.

Figure 5. Comparison of AGA with GA and GA with AGA mutation Rate

The adaptive GA methodology gives better performance than GA. The performance of
GA with the mutation rate obtained at the final iteration of AGA is inferior to
the simple GA's performance.

Accuracy alone is not the objective of mining association rules. The
interestingness of the association rules mined is also measured, through the
Laplace, Conviction and Leverage measures. Table 6 presents the Laplace,
Conviction, Leverage and Lift measures of the rules mined from the datasets
taken up for analysis.

Table 6. Multiobjective Measures of AGA

Measure    | Lenses   | Haberman's Survival | Car Evaluation | Postoperative Patient | Zoo
Laplace    | 0.65201  | 0.64782             | 0.652488       | 0.6528                | 0.6524
Conviction | Infinity | Infinity            | Infinity       | Infinity              | Infinity
Leverage   | 0.0324   | 0.0569              | 0.04356        | 0.0674                | 0.0765
Lift       | 1.6778   | 2.12                | 2.5345         | 2.3455                | 1.9877

The Laplace and Leverage values are away from 1 for all datasets, which
indicates that the rules generated are of interest. The Conviction value of
infinity also signifies the importance of the rules generated, and the Lift
values greater than 1 indicate the dependency of the antecedent and consequent
on one another, again signifying the importance of the rules generated.
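The four measures reported here follow standard definitions in the rule-mining literature; a sketch computing them from raw transaction counts (note that conviction is infinite exactly when confidence is 1, which matches the tables):

```python
def measures(n_a, n_b, n_ab, n):
    """Standard measures for a rule A -> B from raw counts:
    n_a = |A|, n_b = |B|, n_ab = |A and B|, n = total transactions."""
    supp_a, supp_b, supp_ab = n_a / n, n_b / n, n_ab / n
    conf = n_ab / n_a
    laplace = (n_ab + 1) / (n_a + 2)          # Laplace-corrected confidence
    lift = conf / supp_b                      # > 1 means A and B are dependent
    leverage = supp_ab - supp_a * supp_b      # difference from independence
    conviction = float("inf") if conf == 1 else (1 - supp_b) / (1 - conf)
    return {"laplace": laplace, "conviction": conviction,
            "leverage": leverage, "lift": lift}

m = measures(n_a=40, n_b=50, n_ab=40, n=100)  # confidence = 1 here
```

With confidence 1, conviction is infinite, exactly the pattern the AGA and APSO tables report for all five datasets.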

5.2 Performance of APSO

The APSO methodology for mining association rules adapts the acceleration
coefficients based on the estimation of the evolutionary state. The estimation
factor (e) determines to which of the four states (Exploration, Convergence,
Exploitation and Jumping out) the particle belongs, and adjusts the acceleration


coefficients accordingly. Based on the fitness values, the inertia weight is
adapted. The velocity modification is then based on the state in which the
particle lies. This balances exploration and exploitation, thus escaping
premature convergence. The predictive accuracy of the association rules mined
improves through the evolution process.

The predictive accuracy of the adaptive PSO methodology over 100 generations is
shown in Figure 6.

Figure 6. Predictive Accuracy of Adaptive PSO over hundred Generations

The adaptive PSO's performance on the Car Evaluation and Zoo datasets is
consistent, as seen from the figure above. The performance on the Habermans
Survival and Postoperative Patient datasets peaks at the end of evolution,
avoiding premature convergence at the initial stages. For the Lenses dataset,
where the dataset size is small, the global search space is maintained
effectively, making convergence possible even at earlier iterations.

The Laplace, Conviction, Leverage and Lift measures of the five datasets for the
adaptive PSO are recorded in Table 7.

Table 7. Multiobjective Measures of APSO

Measure    | Lenses   | Haberman's Survival | Car Evaluation | Postoperative Patient | Zoo
Laplace    | 0.52941  | 0.501608            | 0.502488       | 0.5028                | 0.5024
Conviction | Infinity | Infinity            | Infinity       | Infinity              | Infinity
Leverage   | 0.026    | 0.003039            | 0.002394       | 0.0301                | 0.0249
Lift       | 1.548    | 2.0458              | 1.9875         | 2.2478                | 2.4558

The Laplace measure being away from 1 indicates that the antecedent values are
dependent on the consequent values, and hence that the rules generated are
significant. The Conviction measure, which is infinity for all datasets, shows
that the rules generated are interesting. The Leverage measure being far from 1
again attests to the interestingness of the rules generated. The Lift measure,
greater than 1 for all datasets, signifies the dependency of the consequent on
the antecedent; the rules generated by the adaptive PSO are thus of importance
based on the Lift measure.


The predictive accuracy achieved by the adaptive PSO methodology is compared
with the simple PSO in Figure 7. The adaptive PSO performs better than the
simple PSO for all five datasets when applied to association rule mining.

Figure 7. Predictive Accuracy comparison of Adaptive PSO with PSO

The number of rules generated by APSO with predictive accuracy within 0.05
percent of the maximum predictive accuracy achieved is plotted against the
simple PSO methodology for the datasets in Figure 8. In terms of the number of
rules generated, the adaptive PSO performs better for the Habermans Survival,
Zoo and Car Evaluation datasets. The number of rules generated for the Lenses
and Postoperative Patient datasets is marginally better than with the simple
PSO; as the number of instances in both datasets is small, the increase in the
number of rules is minimal.

Figure 8. Comparison of No. of Rules Generated by APSO and PSO

The execution time of APSO for mining association rules from the datasets, in
comparison with the PSO methodology, is given in Figure 9. This is the time
taken by the system to generate association rules (CPU time), recorded at the
iteration at which the predictive accuracy is at its maximum. The execution time
of APSO for mining association rules is higher than that of PSO. The adaptation
mechanism of


the acceleration coefficients based on the evolutionary factor, together with
the inertia weight adaptation, increases the complexity of the algorithm,
resulting in a higher execution time compared to PSO.

Figure 9. Comparison of Execution Time of APSO and PSO

When the number of instances in the dataset is small, the increase in execution
time is marginal; thus the execution time differences for mining rules from the
Lenses, Postoperative Patient and Zoo datasets are minimal for APSO compared to
PSO. For the Habermans Survival dataset, with a moderate number of instances,
there is a noticeable increase in execution time, whereas for the Car Evaluation
dataset the difference in execution time is large. The consistent performance of
APSO in terms of predictive accuracy throughout evolution, and the predictive
accuracy achieved by APSO for mining association rules, compensate for the
difference in execution time.

The major drawback of the simple PSO is its premature convergence, where the
particles fix some local optimum as the target (global search area) and all
particles converge locally. One of the objectives of the adaptive PSO is to
avoid this premature convergence, thereby balancing exploration and
exploitation. The iteration number at which the predictive accuracy is highest
is plotted for both the APSO and PSO methodologies for the datasets used in
Figure 10.

Figure 10. Comparison of Convergence of APSO and PSO

(Data labels from Figure 9, execution time in ms. PSO: Lens 84, Postop 1243,
Haberman 2552, Zoo 3425, Car 6123. APSO: Lens 156, Postop 1841, Haberman 6532,
Zoo 4586, Car 9634.)

The performance of the adaptive GA and PSO methods on the five datasets has been
compared with the simple GA and simple PSO. The adaptive GA and adaptive PSO
generated association rules with better predictive accuracy and qualitative
measures. To test the significance of the proposed adaptive techniques, AGA and
APSO are also compared with Ant Colony Optimization (ACO) methods for mining
rules. Four more datasets from the UCI repository, for which association rule
mining using ACO has been reported in the literature, were chosen, and the
predictive accuracy of the rules mined using AGA and APSO for these datasets was
recorded. The comparative results obtained by AGA, APSO and ACO are shown in
Table 8.

Table 8. Predictive Accuracy Comparison of AGA and APSO

Dataset                 | Adaptive GA | Adaptive PSO | Taco-Miner Özbakır et al [55] | Ant Colony Parpinelli et al [57] | Ant Colony Liu et al [54] | Ant Colony Özbakır et al [56]
Lenses                  | 87.5        | 98.1         | -     | -     | -     | -
Habermans Survival      | 68          | 99.6         | -     | -     | -     | -
Car Evaluation          | 96          | 99.95        | -     | -     | -     | -
Postoperative Care      | 92          | 99.54        | -     | -     | -     | -
Zoo                     | 91          | 99.72        | -     | -     | -     | -
Iris                    | 96          | 98.86        | 98    | -     | -     | 98.67
Nursery                 | 97.30       | 99.12        | 97.24 | -     | -     | 97.16
Tic Tac Toe             | 88.36       | 98.76        | -     | 75.57 | 76.58 | 97.59
Wisconsin Breast Cancer | 92.54       | 98.75        | -     | 96.97 | 94.32 | -

The adaptive PSO methodology generates association rules with better predictive
accuracy than the other methodologies analyzed. The adaptive GA generates
association rules with predictive accuracies close to those of the compared
methodologies.

When applied to mining association rules, the adaptive Particle Swarm
Optimization methodology generates rules with increased predictive accuracy, an
increased number of rules and better rule set measures. The increase in the
execution time of APSO compared to the simple PSO methodology is also minimal,
taking into account the complexity of the methodology and the increase in
accuracy achieved over PSO.

The balance of the global search space between exploration and exploitation is
also attained while mining association rules with APSO for all the datasets
used. This is noted from the shift in the iteration number at which the highest
predictive accuracy is observed for APSO compared to the simple PSO. Adapting
the acceleration coefficients c1 (cognitive factor) and c2 (social factor)
through the EES methodology fixes the global search space effectively with the
help of the local optima. The fitness values of the particles are used in
adapting the inertia weight, which helps in a better balance between exploration
and exploitation as the particles fly through the search space.

6. Conclusion

Genetic Algorithm and Particle Swarm Optimization are fast and efficient search
methods. Both the simple GA and the simple PSO are problem dependent, and
setting their control parameters is a non-trivial task. This paper proposes
adaptive methods of parameter control for association rule mining using GA and
PSO. The fitness value plays a vital role in passing data over generations, and
the adaptation mainly depends on the fitness values of the chromosomes or
particles. In the adaptive GA, the mutation operator was made self-adaptive
based on feedback from previous generations and the fitness function, while the
crossover rate was kept constant. In the adaptive PSO, the acceleration
coefficients were adapted through the estimation of the evolutionary state: in
each generation, particles were assigned to one of the four evolutionary states,
and the acceleration coefficients were altered based on the state. The inertia
weight adaptation relied on the fitness values and on the maximum and minimum
values set for the inertia weight.

When tested on five datasets from the UCI repository, both AGA and APSO
generated association rules with better predictive accuracy and good rule set
measures. The execution time of both methods for mining association rules is
found to be reasonable, taking into account the accuracy achieved.

References

1. Agrawal R, Imielinski T, Swami A (1993) Mining association rules between sets of items in large databases. In: Buneman P, Jajodia S (eds) Proceedings of the 1993 ACM SIGMOD International Conference on Management of Data. ACM Press, New York, pp 207-216
2. Agrawal R, Srikant R (1994) Fast algorithms for mining association rules in large databases. In: Proceedings of the 20th International Conference on Very Large Data Bases, Morgan Kaufmann, pp 478-499
3. Amita Mahor, Saroj Rangnekar (2012) Short term generation scheduling of cascaded hydro electric system using novel self adaptive inertia weight PSO. International Journal of Electrical Power & Energy Systems 34(1): 1-9
4. Arumugam MS, Rao MVC (2008) On the improved performances of the particle swarm optimization algorithms with adaptive parameters, crossover operators and root mean square (RMS) variants for computing optimal control of a class of hybrid systems. Applied Soft Computing 8(1): 324-336
5. Back T (1992) Self-adaptation in genetic algorithms. In: Proceedings of the 1st European Conference on Artificial Life, pp 263-271
6. Bing Liu, Wynne Hsu, Yiming Ma (1999) Pruning and summarizing the discovered associations. In: Proceedings of the fifth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp 125-134
7. Boyabalti O, Sabuncuoglu I (2007) Parameter selection in genetic algorithms. Systemics, Cybernetics & Informatics 2(4): 78-83
8. Brin S, Motwani R, Ullman JD, Tsur S (1997) Dynamic itemset counting and implication rules for market basket data. In: Peckham J (ed) Proceedings of the 1997 ACM SIGMOD International Conference on Management of Data, pp 255-264


9. Burke EK, Hyde M, Kendall G et al (2009) A survey of hyper-heuristics. Technical Report NOTTCS-TR-SUB-0906241418-2747, School of Computer Science and Information Technology, University of Nottingham
10. Canuto AMP (2012) A genetic-based approach to feature selection for ensembles using a hybrid and adaptive fitness function. In: The 2012 International Joint Conference on Neural Networks (IJCNN), pp 1-8
11. Carlisle A, Dozier G (2001) An off-the-shelf PSO. In: Proceedings of the Workshop on Particle Swarm Optimization, Purdue School of Engineering and Technology, Indianapolis, pp 1-6
12. Clark P, Boswell R (1991) Rule induction with CN2: some recent improvements. In: Proceedings of the European Working Session on Learning, pp 151-163
13. Deepa Shenoy P, Srinivasa KG, Venugopal KR, Lalit M Patnaik (2003) Evolutionary approach for mining association rules on dynamic databases. In: Proceedings of the Pacific-Asia Conference on Knowledge Discovery and Data Mining, LNAI 2637, Springer-Verlag, pp 325-336
14. Eberhart R, Kennedy J (1995) A new optimizer using particle swarm theory. In: Proceedings of the Sixth International Symposium on Micro Machine and Human Science, IEEE Service Center, Piscataway, NJ, pp 39-43
15. Eiben AE, Michalewicz Z, Schoenauer M, Smith JE (2007) Parameter control in evolutionary algorithms. In: Lobo FG, Lima CF, Michalewicz Z (eds) Parameter Setting in Evolutionary Algorithms. Springer, pp 19-46
16. Eiben AE, Smith JE (2003) Introduction to Evolutionary Computing. Springer-Verlag, Berlin Heidelberg New York
17. Engelbrecht AP (2006) Fundamentals of Computational Swarm Intelligence. John Wiley & Sons, New York
18. Good IJ (1965) The Estimation of Probabilities: An Essay on Modern Bayesian Methods. MIT Press, Cambridge
19. Han J, Kamber M (2001) Data Mining: Concepts and Techniques. Morgan Kaufmann, Burlington, Massachusetts
20. Han J, Pei J, Yin Y (2000) Mining frequent patterns without candidate generation. In: Proceedings of SIGMOD'00, Dallas, TX, pp 1-12
21. Hop NV, Tabucanon MT (2005) Adaptive genetic algorithm for lot-sizing problem with self-adjustment operation rate. International Journal of Production Economics 98(2): 129-135
22. Indira K, Kanmani S (2011) Association rule mining using genetic algorithm: the role of estimation parameters. In: International Conference on Advances in Computing and Communications, Communications in Computer and Information Science 190(8), pp 639-648
23. Indira K, Kanmani S et al (2012) Population based search methods in mining association rules. In: Third International Conference on Advances in Communication, Network, and Computing, CNC 2012, LNCS, pp 255-261
24. Jacinto Mata Vázquez, José Luis Álvarez Macías, José Cristóbal Riquelme Santos (2002) Discovering numeric association rules via evolutionary algorithm. In: Pacific-Asia Conference on Knowledge Discovery and Data Mining, pp 40-51
25. Jasper A Vrugt, Bruce A Robinson, James M Hyman (2009) Self-adaptive multimethod search for global optimization in real-parameter spaces. IEEE Transactions on Evolutionary Computation 13(2): 243-259
26. Kee E, Aiery S, Cye W (2001) An adaptive genetic algorithm. In: Proceedings of the Genetic and Evolutionary Computation Conference, pp 391-397
27. Kennedy J, Eberhart RC (1995) Particle swarm optimization. In: Proceedings of the IEEE International Conference on Neural Networks, pp 1942-1948


28. Kennedy J, Eberhart RC (2001) Swarm Intelligence. Morgan Kaufmann, Burlington, Massachusetts
29. Kennedy J, Mendes R (2002) Population structure and particle swarm performance. In: Proceedings of the 2002 Congress on Evolutionary Computation, pp 1671-1676
30. Kuo RJ, Chao CM, Chiu YT (2011) Application of particle swarm optimization to association rule mining. Applied Soft Computing 11(1): 326-336
31. Mendes R, Kennedy J, Neves J (2004) The fully informed particle swarm: simpler, maybe better. IEEE Transactions on Evolutionary Computation 8: 204-210
32. Merz CJ, Murphy P (1996) UCI Repository of Machine Learning Databases, http://www.cs.uci.edu/~mlearn/MLRepository.html
33. Nannen V, Smit SK, Eiben AE (2008) Costs and benefits of tuning parameters of evolutionary algorithms. In: Proceedings of the 10th International Conference on Parallel Problem Solving from Nature, PPSN X, Dortmund, Germany, pp 528-538
34. Parsopoulos KE, Vrahatis MN (2004) Unified particle swarm optimization scheme. In: Proceedings of the Lecture Series on Computational Sciences, pp 868-873
35. Parsopoulos KE, Vrahatis MN (2007) Parameter selection and adaptation in unified particle swarm optimization. Mathematical and Computer Modelling 46: 198-213
36. Piatetsky-Shapiro G (1991) Discovery, analysis, and presentation of strong rules. In: Knowledge Discovery in Databases, AAAI/MIT Press, pp 229-248
37. Ping-Hung Tang, Ming-Hseng Tseng (2013) Adaptive directed mutation for real-coded genetic algorithms. Applied Soft Computing 13(1): 600-614
38. Qin AK, Huang VL, Suganthan PN (2009) Differential evolution algorithm with strategy adaptation for global numerical optimization. IEEE Transactions on Evolutionary Computation 13(2): 398-417
39. Ratnaweera A, Halgamuge SK, Watson HC (2004) Self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients. IEEE Transactions on Evolutionary Computation 8(3): 240-255
40. Shi Y, Eberhart RC (1998) Parameter selection in particle swarm optimization. In: Proceedings of the 7th Conference on Evolutionary Programming, New York, pp 591-600
41. Shing Wa Leung, Shiu Yin Yuen, Chi Kin Chow (2012) Parameter control system of evolutionary algorithm that is aided by the entire search history. Applied Soft Computing 12: 3063-3078
42. Song MP, Gu GC (2004) Research on particle swarm optimization: a review. In: Proceedings of the IEEE International Conference on Machine Learning and Cybernetics, pp 2236-2241
43. Spears WM (1995) Adapting crossover in evolutionary algorithms. In: McDonnel J, Reynolds R, Fogel D (eds) Evolutionary Programming IV: Proceedings of the Fourth Annual Conference on Evolutionary Programming, MIT Press, Cambridge, MA, pp 367-384
44. Srinivas M, Patnaik LM (1994) Adaptive probabilities of crossover and mutation in genetic algorithms. IEEE Transactions on Systems, Man and Cybernetics 24(4): 656-667
45. Srinivasa KG, Venugopal KR, Patnaik LM (2007) A self-adaptive migration model genetic algorithm for data mining applications. Information Sciences 177: 4295-4313
46. Ueno G, Yasuda K, Iwasaki N (2005) Robust adaptive particle swarm optimization. In: IEEE International Conference on Systems, Man and Cybernetics, vol 4, pp 3915-3920
47. Xin J, Chen G, Hai Y (2009) A particle swarm optimizer with multistage linearly-decreasing inertia weight. In: International Joint Conference on Computational Sciences and Optimization, IEEE, pp 505-508
48. Yao X, Liu Y, Lin G (1999) Evolutionary programming made faster. IEEE Transactions on Evolutionary Computation 3(2): 82-102

28

49. Yau-Tarng Juang, Shen-Lung Tung, Hung-Chih Chiu (2011) Adaptive fuzzy particle swarm

optimization for global optimization of multimodal functions. Information Sciences 181:

45394549

50. Ying Wang, Jianzhong Zhou, Chao Zhou, Yongqiang Wang, Hui Qin, Youlin Lu (2012) An

improved self-adaptive PSO technique for short-term hydrothermal scheduling. Expert

Systems with Applications 39(3):2288-2295

51. Zhan ZH, Xiao J, Zhang J, Chen WN (2007) Adaptive control of acceleration coefficients for

particle swarm optimization based on clustering analysis. In: Proceedings of IEEE Congress

on Evolutionary Computation, Singapore, pp 32763282

52. Zhan Z-H, Zhang J, Li Y et al. (2009) Adaptive particle swarm optimization. IEEE

Transactions on Systems Man, and Cybernetics Part B: Cybernetics, 39 (6) :1362-1381

53. Chatterjee, A., Siarry, P. (2006) Nonlinear inertia weight variation for dynamic adaptation

in particle swarm optimization. Computers and Operations Research, 33: 859871.

54. Bo Liu, Abbas, H.A., McKay, B. (2003), Classification rule discovery with ant colony

optimization.In : IEEE/WIC International Conference on Intelligent Agent Technology, 13-

17 October, Halifax, Canada,pp.83-88.

55. Lale Ozbakir, Adil Baykasoglu, Sinem Kulluk, Huseyin Yapc. (2010) TACO-miner: An ant

colony based algorithm for rule extraction from trained neural networks. Expert Systems

with Applications, 36 (10) : 1229512305.

56. Lale Ozbakir, Adil Baykasoglu, Sinem Kulluk. (2008) Rule Extraction from Neural Networks

Via Ant Colony Algorithm for Data Mining Applications, Lecture Notes in Computer

Science, 5313, pp. 177-191.

57. Parpinelli, R. S., Lopes, H. S., & Freitas, A. A. (2002). Data mining with an ant colony

optimization algorithm. IEEE Transactions on Evolutionary Computation, 6(4): 321332.

Special issue on Ant Colony Algorithms.

58. Baeck, T., Hoffmeister, F., & Schwefel, H.-P. (1991) A survey of evolution strategies.

Proceedings of the Fourth International Conference on Genetic Algorithms, 2-9. La Jolla, CA:

Morgan Kaufmann.

59. Fogel, L. J., Owens, A. J., & Walsh, M. J. (1966) Artificial Intelligence Through Simulated

Evolution. New York: Wiley Publishing.

60. Fogel, D. B. & Atmar, J. W. (1990) Comparing genetic operators with Gaussian mutations in

simulated evolutionary processes using linear systems. Biological Cybernetics, 63, 111-

114.

61. Schwefel, H.-P. (1977) Numerical Optimization of Computer Models. New York: John Wiley

& Sons.

62. Schaffer, J. D., Caruana, R. A., Eshelman, L. J., & Das, R. (1989) A study of control

parameters affecting online performance of genetic algorithms for function optimization.

In: Proceedings of the Third International Conference on Genetic Algorithms, 51-60. La

Jolla, CA: Morgan Kaufmann.

63. Marseguerra M, Zio E, Podollini L. (2005) Multiobjective spare part allocation by means of

genetic algorithms and Monte Carlo simulation. Reliability Engineering and System Safety,

87(3):32535

64. Kwon YD, Kwon SB, Jin SB, Kim JY. (2003) Convergence enhanced genetic algorithm with

successive zooming method for solving continuous optimization problems. Computers and

Structures,81(17):171525.

65. Paszkowicz W. (2009) Properties of a genetic algorithm equipped with a dynamic penalty

function. Computational Materials Science, 45(1):7783.

66. Ilgi MA, Tunali S. (2007) Joint optimization of spare parts inventory and maintenance

policies using genetic algorithms. International Journal of Advanced Manufacturing

Technology, 34(56):594604

29

67. San Jose-Revuelta LM. A new adaptive genetic algorithm for xed channel assignment.

Information Sciences 2007;177(13):265578.

68. Bingul Z. Adaptive genetic algorithms applied to dynamic multiobjective problems.

Applied Soft Computing 2007;7(3):7919.

69. Ghosh SC, Sinha BP, Das N. Channel assignment using genetic algorithm based on

geometric symmetry. IEEE Transactions on Vehicular Technology 2003; 52(4):86075.

70. Eiben AE, Hinterding R, Michalewicz Z. Parameter control in evolutionary algorithms. IEEE

Transactions on Evolutionary Computation 1999;3(2): 124141

Annexure 1

UCI Datasets

The datasets used in this research work were taken from the UCI Machine
Learning Repository, a collection of databases, domain theories, and data
generators used by the machine learning community for the empirical analysis
of machine learning algorithms. The archive was created as an FTP archive in
1987 by David Aha and fellow graduate students at UC Irvine. The details of
the datasets used in this study, taken from
http://archive.ics.uci.edu/ml/about.html, are presented below.
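The repository data files are plain comma-separated text with one instance per line, so they can be parsed with standard tools. The sketch below is illustrative only: the three records and the attribute names are made up in the style of the Lenses dataset described next, not copied from the repository file.

```python
import csv
import io

# Illustrative lenses-style records (age, prescription, astigmatic,
# tear_rate, class); a real run would read the downloaded UCI file instead.
sample = """\
1,1,1,1,3
1,1,1,2,2
2,1,2,2,1
"""

# Attribute names are an assumption for readability; UCI data files
# themselves carry no header row.
attributes = ["age", "prescription", "astigmatic", "tear_rate", "class"]

instances = []
for row in csv.reader(io.StringIO(sample)):
    # Pair each nominal code with its attribute name.
    instances.append(dict(zip(attributes, map(int, row))))

print(len(instances))          # number of parsed instances
print(instances[0]["class"])   # class label of the first instance
```

Reading a real dataset would only require replacing `io.StringIO(sample)` with an open file handle for the downloaded `.data` file.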

Lenses Dataset

Number of Instances : 24

Number of Attributes : 4 (all nominal)

Attribute Information : 3 Classes

1 : the patient should be fitted with hard contact lenses,

2 : the patient should be fitted with soft contact lenses,

3 : the patient should not be fitted with contact lenses.

age of the patient: (1) young, (2) pre-presbyopic, (3) presbyopic

spectacle prescription: (1) myope, (2) hypermetrope

astigmatic: (1) no, (2) yes

tear production rate: (1) reduced, (2) normal

Number of Missing Attribute Values: 0

Haberman's Survival Dataset

Number of Instances : 306

Number of Attributes : 4 (including the class attribute)

Attribute Information :

1. Age of patient at time of operation (numerical)

2. Patient's year of operation (year - 1900, numerical)

3. Number of positive axillary nodes detected (numerical)

4. Survival status (class attribute)


1 = the patient survived 5 years or longer

2 = the patient died within 5 years

Missing Attribute Values : None

Car Evaluation Dataset

Number of Instances : 1728

(instances completely cover the attribute space)

Number of Attributes : 6

Attribute Values :

buying v-high, high, med, low

maint v-high, high, med, low

doors 2, 3, 4, 5-more

persons 2, 4, more

lug_boot small, med, big

safety low, med, high

Missing Attribute Values : none
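Nominal datasets such as Car Evaluation must be mapped to numbers before a GA chromosome or a PSO particle can represent an instance. The sketch below shows one common integer-index encoding; it is an illustrative pre-processing choice, not necessarily the encoding used in this study.

```python
# Value domains copied from the Car Evaluation description above;
# each value is encoded by its position in the list.
domains = {
    "buying":   ["v-high", "high", "med", "low"],
    "maint":    ["v-high", "high", "med", "low"],
    "doors":    ["2", "3", "4", "5-more"],
    "persons":  ["2", "4", "more"],
    "lug_boot": ["small", "med", "big"],
    "safety":   ["low", "med", "high"],
}

def encode(instance):
    """Translate one instance (attribute -> nominal value) to integer codes."""
    return [domains[attr].index(instance[attr]) for attr in domains]

example = {"buying": "med", "maint": "low", "doors": "4",
           "persons": "more", "lug_boot": "big", "safety": "high"}
print(encode(example))  # [2, 3, 2, 2, 2, 2]
```

The same mapping applied in reverse turns an evolved numeric vector back into a readable rule term.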

Post Operative Patient Dataset

Number of Instances : 90

Number of Attributes : 9 including the decision (class attribute)

Attribute Information :

1. L-CORE (patient's internal temperature in C):

high (> 37), mid (>= 36 and <= 37), low (< 36)

2. L-SURF (patient's surface temperature in C):

high (> 36.5), mid (>= 35 and <= 36.5), low (< 35)

3. L-O2 (oxygen saturation in %):

excellent (>= 98), good (>= 90 and < 98),

fair (>= 80 and < 90), poor (< 80)

4. L-BP (last measurement of blood pressure):

high (> 130/90), mid (<= 130/90 and >= 90/70), low (< 90/70)

5. SURF-STBL (stability of patient's surface temperature):

stable, mod-stable, unstable

6. CORE-STBL (stability of patient's core temperature)

stable, mod-stable, unstable

7. BP-STBL (stability of patient's blood pressure)

stable, mod-stable, unstable

8. COMFORT (patient's perceived comfort at discharge, measured as

an integer between 0 and 20)

9. decision ADM-DECS (discharge decision):

I (patient sent to Intensive Care Unit),

S (patient prepared to go home),

A (patient sent to general hospital floor)


Missing Attribute Values : Attribute 8 has 3 missing values

Zoo Dataset

Number of Instances : 101

Number of Attributes : 18 (animal name, 15 Boolean attributes, 2 numerics)

Attribute Information : (name of attribute and type of value domain)

1. animal name Unique for each instance

2. hair Boolean

3. feathers Boolean

4. eggs Boolean

5. milk Boolean

6. airborne Boolean

7. aquatic Boolean

8. predator Boolean

9. toothed Boolean

10. backbone Boolean

11. breathes Boolean

12. venomous Boolean

13. fins Boolean

14. legs Numeric (set of values: {0,2,4,5,6,8})

15. tail Boolean

16. domestic Boolean

17. catsize Boolean

18. type Numeric (integer values in range [1,7])

Missing Attribute Values : None

Iris Dataset

Number of Instances : 150 (50 in each of three classes)

Number of Attributes : 4 numeric, predictive attributes and the class

Attribute Information :

Sepal length, Sepal width, Petal length and Petal width in cm

Class Types : Iris Setosa, Iris Versicolour and Iris Virginica.

Missing Attribute Values : None

Nursery Dataset

Number of Instances : 12960 (instances completely cover the attribute space)

Number of Attributes : 8

Attribute Values :


parents usual, pretentious, great_pret

has_nurs proper, less_proper, improper, critical, very_crit

form complete, completed, incomplete, foster

children 1, 2, 3, more

housing convenient, less_conv, critical

finance convenient, inconv

social non-prob, slightly_prob, problematic

health recommended, priority, not_recom

Missing Attribute Values : none

Tic Tac Toe Dataset

Number of Instances : 958 (legal tic-tac-toe endgame boards)

Number of Attributes : 9, each corresponding to one tic-tac-toe square, plus the class attribute

Attribute Information : (x=player x has taken, o=player o has taken,

b=blank)

1. top-left-square: {x,o,b}

2. top-middle-square: {x,o,b}

3. top-right-square: {x,o,b}

4. middle-left-square: {x,o,b}

5. middle-middle-square: {x,o,b}

6. middle-right-square: {x,o,b}

7. bottom-left-square: {x,o,b}

8. bottom-middle-square: {x,o,b}

9. bottom-right-square: {x,o,b}

10. Class: {positive,negative}

Missing Attribute Values : None

Wisconsin Breast Cancer Dataset

Number of Instances : 699

Number of Attributes : 10 plus the class attribute

Attribute Information :

Sample code number, Clump Thickness, Uniformity of Cell Size,

Uniformity of Cell Shape, Marginal Adhesion, Single Epithelial Cell

Size, Bare Nuclei, Bland Chromatin, Normal Nucleoli, Mitoses and

Class (2 for benign, 4 for malignant).

Missing attribute values : 16
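Each of the datasets above is mined for association rules in the main text, where rule quality rests on the standard support and confidence measures. The sketch below computes both; the transactions and the rule are invented for the example (in the style of the Lenses attributes) and are not drawn from any dataset file.

```python
# Illustrative transactions: each is the set of attribute=value items
# describing one instance.  Made up for the example.
transactions = [
    {"astigmatic=no", "tear_rate=normal", "class=soft"},
    {"astigmatic=no", "tear_rate=reduced", "class=none"},
    {"astigmatic=no", "tear_rate=normal", "class=soft"},
    {"astigmatic=yes", "tear_rate=normal", "class=hard"},
]

def support(itemset):
    """Fraction of transactions containing every item in the itemset."""
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(antecedent, consequent):
    """support(A union C) / support(A): how often rule A -> C holds."""
    return support(antecedent | consequent) / support(antecedent)

rule_a = {"astigmatic=no", "tear_rate=normal"}
rule_c = {"class=soft"}
print(support(rule_a | rule_c))    # 0.5
print(confidence(rule_a, rule_c))  # 1.0
```

A GA or PSO rule miner typically combines these two measures (and sometimes rule length) into the fitness value of a candidate rule.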
