
Due to rapid advances in VLSI design technology, the complexity and size of circuits have been increasing rapidly, placing demands on industry for faster and more efficient CAD tools. Physical design is the process of converting a circuit description into a geometric description. The physical design process is subdivided into four problems: a) partitioning, b) floorplanning, c) placement, and d) routing. Within this process, the placement phase determines the positions of cells on the silicon wafer. Generally, the placement phase involves two types of problems: standard cell and macro cell placement. This project aims to develop an efficient, low-time-complexity algorithm for placement. This is achieved by use of the genetic algorithm (GA). The GA is a popular optimization algorithm that supports multi-objective optimization and is flexible for hybrid applications. In this project, the main objective is wirelength minimization in macro cell placement.

CHAPTER 2


INTRODUCTION

The size of present-day computing systems demands the elimination of repetitive manual operations and computations in their design. This motivates the development of automatic design systems. To accomplish this task, a fundamental understanding of the design problem and full knowledge of the design process are essential. Only then could one hope to efficiently and automatically fill the gap between system specification and manufacturing. Automation of a given (design) process requires an algorithmic analysis of it. The availability of fast and easily implementable algorithms is essential to the discipline. In order to take full advantage of the resources in the very-large-scale integration (VLSI) environment, new procedures must be developed. The efficiency of these techniques must be evaluated against the inherent limitations of VLSI. Previous contributions are a valuable starting point for future improvements in design performance and evaluation. Physical design (or the layout phase) is the process of determining the physical location of active devices and interconnecting them inside the boundary of a VLSI chip (i.e., an integrated circuit). The measure of the quality of a given solution to the circuit layout problem is the efficiency with which the circuit (corresponding to a given problem) can be laid out according to the formal (design) rules dictated by the VLSI technology. Since the cost of fabricating a circuit is a function of the circuit area, circuit layout techniques aim to produce layouts with a small area. Also, a smaller area implies fewer defects, hence a higher yield.
These layouts must have a special structure to guarantee their wirability (using a small number of planes in the third dimension). Other criteria of optimality, for example wire length minimization, delay minimization, power minimization, and via minimization, also have to be taken into consideration. In present-day systems, delay minimization is becoming more crucial. The aim is to design circuits that are fast while having a small area. Indeed, in "aggressive" designs, used for example in the medical electronics industry, speed and reliability are the main objectives. In the past two decades, research has been directed toward automation of the layout process. Many invaluable techniques have been proposed. The early layout techniques, designed for printed circuit boards containing a small number of components, assumed a fixed position for the components. The earliest work along this line, proposed in [200], is a wiring algorithm based on the propagation of a distance wave. Its simplicity and effectiveness have been the reasons for its success. As the number of components increased and with the


advent of VLSI technology, efficient algorithms for placing the components and effective techniques for component (or cell) generation and wiring have been proposed. Because of the inherent complexity of the layout problem, it is generally partitioned into simpler subproblems, with the analysis of each of these parts providing new insights into the original problem as a whole. In this framework, the objective is to view the layout problem as a collection of subproblems; each subproblem should be efficiently solved, and the solutions of the subproblems should be effectively combined. Indeed, a circuit is typically designed in a hierarchical fashion. At the highest level of the hierarchy, a set of ICs are interconnected. Within each IC, a set of modules, for example memory units, ALUs, input/output ports, and random logic, are arranged. Each module consists of a set of gates, where each gate is formed by interconnecting a collection of transistors. Transistors and their interconnections are defined by the corresponding masks.

CHAPTER 3

VLSI PHYSICAL DESIGN


VLSI physical design consists of the following steps: 1. Partitioning 2. Floorplanning 3. Placement 4. Routing

3.1 PARTITIONING

Partitioning is the task of dividing a circuit into smaller parts. The objective is to partition the circuit into parts so that the size of each component is within prescribed ranges and the number of connections between the components is minimized. Different ways to partition correspond to different circuit implementations. Therefore, a good partitioning can significantly improve circuit performance and reduce layout costs. A hypergraph and a partition of it are shown in Figure 3.1a. The cut (or general cuts) defines the partition.

3.2 FLOORPLANNING

Floorplanning is the determination of the approximate location of each module in a rectangular chip area, given a circuit represented by a hypergraph. The shape of each module and the location of the pins on the boundary of each module may also be determined in this phase. The floorplanning problem in chip layout is analogous to floorplanning in building design, where we have a set of rooms (modules) and wish to decide the approximate location of each room based on some proximity criteria. An important step in floorplanning is to decide the relative location of each module. A good floorplanning algorithm should achieve many goals, such as making the subsequent routing phase easy, minimizing the total chip area, and reducing signal delays. The floorplan corresponding to the partitioning is shown in Figure 3.1b. Typically, each module has a set of implementations, each of which has a different area, aspect ratio, delay, and power consumption, and the best implementation for each module should be obtained.
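The partitioning objective of Section 3.1 (minimize the number of connections cut by the partition) can be made concrete with a short sketch. The hypergraph representation used here (a dict mapping net names to the cells they connect) and all names are illustrative assumptions, not taken from this report:

```python
# Sketch: counting the cut size of a two-way partition of a hypergraph.
# The netlist format (net -> list of cell names) is an illustrative assumption.

def cut_size(nets, partition):
    """Count nets whose cells are split across the two blocks.

    nets      -- dict mapping a net name to the list of cells it connects
    partition -- dict mapping each cell to block 0 or block 1
    """
    cut = 0
    for cells in nets.values():
        blocks = {partition[c] for c in cells}
        if len(blocks) > 1:   # net spans both blocks, so it is cut
            cut += 1
    return cut

nets = {"n1": ["a", "b"], "n2": ["b", "c", "d"], "n3": ["c", "d"]}
partition = {"a": 0, "b": 0, "c": 1, "d": 1}
print(cut_size(nets, partition))  # only n2 crosses the cut -> 1
```

A move-based partitioner would repeatedly re-evaluate this cut size while moving or swapping cells between the two blocks.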


3.3 PLACEMENT

Placement is the determination of the best position for each module, where each module has a fixed shape and fixed terminals. Usually, some modules have fixed positions (e.g., I/O pads). Although area is the major concern, it is hard to control when each module is fixed; thus, alternative cost functions are employed. There are two prevalent cost functions: wire-length-based and cut-based. The placement corresponding to the partitioning is shown in Figure 3.1c.

Fig 3.1: Hierarchical steps in VLSI physical design

3.4 ROUTING

Routing may be of two types: 1. Global routing 2. Detailed routing

Global routing decomposes a large routing problem into small, manageable problems for detailed routing. The method first partitions the routing region into a collection of disjoint rectilinear subregions. This decomposition is carried out by finding a "rough" path (the sequence of subregions it passes through) for each net in order to reduce the chip size, shorten the wire length, and evenly distribute the congestion over the routing area. A global routing based on the placement shown in Figure 3.1c is shown in Figure 3.1d.

Detailed routing follows the global routing to effectively realize the interconnections in VLSI circuits. A detailed routing corresponding to the global routing shown in Figure 3.1d is shown in Figure 3.1e. The traditional model of detailed routing is the two-layer Manhattan model with reserved layers, where horizontal wires are routed on one layer and vertical wires are routed on the other layer. For integrated circuits, the horizontal segments are typically realized in metal while the vertical segments are realized in polysilicon. In order to interconnect a horizontal and a vertical segment, a contact (via) must be placed at the intersection point. More recently, the unreserved layer model, where vertical and horizontal wires can run in both layers, has also been discussed.

CHAPTER 4

PLACEMENT PROBLEM

In the literature the definition of the macro cell placement problem varies slightly. The following definition is used in this paper. As input we have: i) a set of cells, each with a number of terminals at fixed positions along the edges of the cell; ii) a netlist specifying the interconnections of all terminals; and iii) an approximate horizontal length W of the cell to be constructed. The macro cell placement problem is to find absolute coordinates for the lower left corner of each cell, such that the wire length is minimized. The placement is to be determined so that the area is minimized subject to the following constraints: i) no pair of cells overlap each other, and ii) the space inside R not occupied by cells is sufficiently large to contain all routing necessary to realize all specified interconnections. To meet the second constraint, the area needed for routing is estimated during the placement.

4.1 PLACEMENT TYPES

Placement algorithms can be divided into two major classes: constructive placement and iterative improvement. In constructive placement, a method is used to build up a placement from scratch; in iterative improvement, algorithms start with an initial placement and repeatedly modify it in search of a cost reduction. If a modification results in a reduction in cost, the modification is accepted; otherwise it is rejected.

Early constructive placement algorithms were generally based on primitive connectivity rules: after a first module is placed, other modules are selected one at a time in order of their connectivity to the placed modules (most densely connected first) and are placed at a vacant location close to the placed modules. Such algorithms are generally very fast, but typically result in poor layouts. These algorithms are now used for generating an initial placement for iterative improvement algorithms. The main reason for their use is their speed: they take a negligible amount of computation time compared to iterative improvement algorithms and provide a good starting point for them.

Iterative improvement algorithms typically produce good placements but require enormous amounts of computation time. The simplest iterative improvement strategy interchanges randomly selected pairs of modules and accepts the interchange if it results in a reduction in cost. The algorithm is terminated when there is no further improvement during a given large number of trials. An improvement over this algorithm is repeated iterative improvement, in which the iterative improvement process is repeated several times with different initial configurations in the hope of obtaining a good configuration in one of the trials.

Other possible classifications for placement algorithms are deterministic algorithms and probabilistic algorithms. Algorithms that function on the basis of fixed connectivity rules or formulas, or that determine the placement by solving simultaneous equations, are deterministic and will always produce the same result. Probabilistic algorithms, on the other hand, work by randomly examining configurations and may produce a different result each time they are run. Constructive algorithms are usually deterministic, whereas iterative improvement algorithms are usually probabilistic.

CHAPTER 5

SOLUTIONS TO PLACEMENT PROBLEM

As a solution to the placement problem, many suboptimal algorithms, those that are fast and produce good-quality solutions, have been designed, and a number of efficient algorithmic techniques have been proposed. The following algorithmic paradigms have been employed in most subproblems arising in VLSI layout.

5.1 EXHAUSTIVE SEARCH

The most naive paradigm is exhaustive search. The idea is to search the entire solution space by considering every possible solution and evaluating the cost of each solution. Based upon this evaluation, one of the best solutions is selected. Certainly, exhaustive search produces optimal solutions. The main problem is that such algorithms are very slow; "very slow" does not mean that they take a few hours or a few days to run, it means they take a lifetime to run.

5.2 GREEDY APPROACH

Algorithms for optimization problems go through a number of steps. At each step, a choice, or a set of choices, is made. In greedy algorithms, the choice that results in a locally optimal solution is made. Typically, greedy algorithms are simpler than other classes of algorithms. However, they do not always produce globally optimal solutions.

5.3 DYNAMIC PROGRAMMING

A problem is partitioned into a collection of subproblems, the subproblems are solved, and then the original problem is solved by combining the solutions. Dynamic programming is applied when the subproblems are not independent: each subproblem is solved once, and the solution is saved for other subproblems. Such algorithms are of practical importance. First the structure of an optimal solution is characterized and the value of an optimal solution is recursively defined; finally, the value of an optimal solution is computed in a bottom-up fashion.
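As a minimal illustration of the dynamic-programming paradigm just described (subproblems solved once, saved, and combined bottom-up), here is a sketch on a small 0/1 knapsack instance; the instance is illustrative only and unrelated to placement:

```python
# Sketch of dynamic programming on a tiny 0/1 knapsack instance.
# Each subproblem best[i][c] -- the best value using the first i items
# within capacity c -- is solved once, saved in a table, and reused.

def knapsack(values, weights, capacity):
    n = len(values)
    best = [[0] * (capacity + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for c in range(capacity + 1):
            best[i][c] = best[i - 1][c]                # option 1: skip item i
            if weights[i - 1] <= c:                    # option 2: take item i
                take = best[i - 1][c - weights[i - 1]] + values[i - 1]
                best[i][c] = max(best[i][c], take)
    return best[n][capacity]

print(knapsack([60, 100, 120], [1, 2, 3], 5))  # -> 220 (take the last two items)
```

The bottom-up table fill mirrors the description above: the recursively defined optimum is evaluated from the smallest subproblems upward.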

It is generally hard to claim anything about the running time of such algorithms.

5.4 HIERARCHICAL APPROACH

In this approach, a problem is also partitioned into a set of subproblems. The subproblems are independent and the partitions are recursive. The sizes of the subproblems are typically balanced. The partition is usually done in a top-down fashion and the solution is constructed in a bottom-up fashion.

5.5 MATHEMATICAL PROGRAMMING

In the mathematical programming approach, there is a set of constraints expressed as a collection of inequalities. The objective function is a minimization (or maximization) problem subject to this set of constraints. When the objective function and all inequalities are expressed as a linear combination of the involved variables, the system is called a linear system.

5.6 SIMULATED ANNEALING

Simulated annealing is a technique used to solve general optimization problems. The idea originated from observing the crystal formation of physical materials. A simulated annealing algorithm examines the configurations (i.e., the set of feasible solutions) of the problem in sequence. The algorithm evaluates the feasible solutions it encounters and moves from one solution to another. This technique is especially useful when the solution space of the problem is not well understood.

5.7 BRANCH AND BOUND

This is a general and usually inefficient method for solving optimization problems. There is a tree-structured configuration space. The aim is to avoid searching the entire space by stopping at nodes from which an optimal solution cannot be obtained.

5.8 GENETIC ALGORITHMS

At each stage of a genetic algorithm, a population of solutions (i.e., a solution subspace) to the problem is stored and allowed to evolve (i.e., be modified) through successive generations. To create a new generation, new solutions are formed by merging previous solutions (called crossover) or by modifying previous solutions (called mutation). The solutions to be carried into the next generation are probabilistically selected based on a fitness value. The fitness value is a measure of the competence (i.e., quality) of the solution.

CHAPTER 6

GENETIC ALGORITHM

6.1 INTRODUCTION TO GENETIC ALGORITHM

Charles Darwin stated the theory of natural evolution in The Origin of Species. Over several generations, biological organisms evolve based on the principle of natural selection, "survival of the fittest", to achieve certain remarkable tasks. The perfect shape of the albatross wing, the efficiency of, and the similarity between, sharks and dolphins, and so on, are the best examples of the achievement of random evolution over intelligence. Thus, evolution works so well in nature; as a result, it should be interesting to simulate natural evolution and to develop a method which solves concrete search and optimization problems. In 1975, Holland developed this idea in his book "Adaptation in Natural and Artificial Systems". He described how to apply the principles of natural evolution to optimization problems and built the first Genetic Algorithms. Holland's theory has been further developed, and now Genetic Algorithms (GAs) stand as a powerful tool for solving search and optimization problems. Genetic algorithms are based on the principles of genetics and evolution.

The power of mathematics lies in technology transfer: there exist certain models and methods which describe many different phenomena and solve a wide variety of problems. GAs are an example of mathematical technology transfer: by simulating evolution one can solve optimization problems from a variety of sources, such as timetabling, job-shop scheduling, and game playing. Today, the genetic algorithm is the most popular technique in evolutionary computation research, and GAs are used to resolve complicated optimization problems.

In nature, individuals in a population compete with each other for virtual resources like food, shelter, and so on. Also, within a species, individuals compete to attract mates for reproduction. Due to this selection, poorly performing individuals have less chance to survive, and the most adapted or "fit" individuals produce a relatively large number of offspring. It can also be noted that during reproduction, a recombination of the good characteristics of each ancestor can produce "best fit" offspring whose fitness is greater than that of a parent. After a few generations, species evolve spontaneously to become more and more adapted to their environment.

In the traditional genetic algorithm, the representation used is a fixed-length bit string. Each position in the string is assumed to represent a particular feature of an individual, and the value stored in that position represents how that feature is expressed in the solution.
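The fixed-length bit-string representation can be sketched as follows. The choice of 8 bits per feature and the decoding to a real interval are illustrative assumptions, not the encoding used in this project:

```python
# Sketch: a fixed-length bit-string chromosome in which each 8-bit
# field (gene) encodes one feature. Decoding each gene to a real value
# in [lo, hi] is an illustrative assumption.

BITS = 8

def decode(chromosome, lo=0.0, hi=1.0):
    """Split the bit string into 8-bit genes and map each to [lo, hi]."""
    genes = [chromosome[i:i + BITS] for i in range(0, len(chromosome), BITS)]
    scale = (hi - lo) / (2 ** BITS - 1)
    return [lo + int(g, 2) * scale for g in genes]

print(decode("00000000" + "11111111"))  # first gene decodes to lo, second to hi
```

The GA itself manipulates only the raw bit string; decoding is needed only when the fitness of a candidate solution is evaluated.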

Traditionally, the string is "evaluated as a collection of structural features of a solution that have little or no interactions". The analogy may be drawn directly to genes in biological organisms: each gene represents an entity that is structurally independent of other genes. The main reproduction operator used is bit-string crossover, in which two strings are used as parents and new individuals are formed by swapping a sub-sequence between the two strings (see Fig 6.1).

Fig. 6.1 Bit-string crossover of parents a & b to form offspring c & d

Another popular operator is bit-flipping mutation, in which a single bit in the string is flipped to form a new offspring string (see Fig 6.2).

Fig. 6.2 Bit-flipping mutation of parent a to form offspring b

A variety of other operators have also been developed, but they are used less frequently (e.g., inversion, in which a subsequence in the bit string is reversed). A primary distinction that may be made between the various operators is whether or not they introduce any new information into the population: crossover does not, while mutation does. All operators are also constrained to manipulate the string in a manner consistent with the structural interpretation of genes. For example, two genes at the same location on two strings may be swapped between parents, but not combined based on their values. Usually, individuals are selected to be parents probabilistically based upon their fitness values, and the offspring that are created replace the parents. For example, if N parents are selected, then N offspring are generated which replace the parents in the next generation.
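The two operators of Figs 6.1 and 6.2 can be sketched in a few lines; the function names and parameter defaults are illustrative assumptions:

```python
import random

# Sketch of one-point bit-string crossover (Fig 6.1) and bit-flipping
# mutation (Fig 6.2). The per-bit mutation rate default is an
# illustrative assumption.

def crossover(a, b, point=None):
    """Swap the tails of two parent strings at a crossover point."""
    if point is None:
        point = random.randrange(1, len(a))
    return a[:point] + b[point:], b[:point] + a[point:]

def mutate(s, rate=0.01):
    """Flip each bit independently with probability `rate`."""
    return "".join(bit if random.random() >= rate else "10"[int(bit)]
                   for bit in s)

c, d = crossover("11111111", "00000000", point=4)
print(c, d)  # -> 11110000 00001111
```

Note that crossover only recombines material already present in the two parents, while mutation can introduce bit values that neither parent carries, matching the distinction drawn above.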

6.2 COMPARISON OF GENETIC ALGORITHM WITH OTHER OPTIMIZATION TECHNIQUES

The principle of GAs is simple: imitate genetics and natural selection by a computer program. The parameters of the problem are coded most naturally as a DNA-like linear data structure, a vector or a string. Sometimes, when the problem is naturally two- or three-dimensional, corresponding array structures are used. A set, called a population, of these problem-dependent parameter value vectors is processed by the GA. To start, there is usually a totally random population, the values of the different parameters generated by a random number generator. A typical population size ranges from a few dozen to thousands. To do optimization we need a cost function, or fitness function as it is usually called when genetic algorithms are used. By a fitness function we can select the best solution candidates from the population and delete the not-so-good specimens.

The nice thing when comparing GAs to other optimization methods is that the fitness function can be nearly anything that can be evaluated by a computer, or even something that cannot! In the latter case it might be a human judgement that cannot be stated as a crisp program, as in the case of an eyewitness, where a human being selects among the alternatives generated by the GA. So, there are no definite mathematical restrictions on the properties of the fitness function: it may be discrete, multimodal, etc.

The main criteria used to classify optimization algorithms are as follows: continuous / discrete, constrained / unconstrained, and sequential / parallel. There is a clear difference between discrete and continuous problems, so it is instructive to notice that continuous methods are sometimes used to solve inherently discrete problems and vice versa. Parallel algorithms are usually used to speed up processing. There are, however, some cases in which it is more efficient to run several processors in parallel rather than sequentially; these include, among others, cases in which there is a high probability of each individual search run getting stuck in a local extreme. In addition, optimization algorithms can be classified as local or global. In terms of energy and entropy, local search corresponds to entropy while global optimization depends essentially on the fitness, i.e., the energy landscape. Irrespective of the above classification, optimization methods can be further classified into deterministic and non-deterministic methods.

Genetic algorithms differ from conventional optimization techniques in the following ways:

1. GAs operate with coded versions of the problem parameters rather than the parameters themselves, i.e., the GA works with the coding of the solution set and not with the solution itself. The key point here is to identify and specify a meaningful decoding function.
2. Almost all conventional optimization techniques search from a single point, but GAs always operate on a whole population of points (strings), i.e., a GA uses a population of solutions rather than a single solution for searching. This improves the chance of reaching the global optimum, helps in avoiding local stationary points, and plays a major role in the robustness of genetic algorithms.
3. GAs use a fitness function for evaluation rather than derivatives; they only use function evaluations.
4. GAs use probabilistic transition operators while conventional methods for continuous optimization apply deterministic transition operators, i.e., GAs do not use deterministic rules.

6.3 ADVANTAGES OF GENETIC ALGORITHM

The advantages of genetic algorithms include:

1. Parallelism
2. Liability
3. Solution space is wider
4. The fitness landscape is complex
5. Easy to discover the global optimum
6. The problem has a multi-objective function
7. Easily modified for different problems
8. Handles noisy functions well
9. Handles large, poorly understood search spaces easily
10. Good for multi-modal problems
11. Returns a suite of solutions
12. Very robust to difficulties in the evaluation of the objective function
13. They require no knowledge or gradient information about the response surface

14. Discontinuities present on the response surface have little effect on overall optimization performance
15. They are resistant to becoming trapped in local optima
16. They perform very well for large-scale optimization problems
17. Can be employed for a wide variety of optimization problems

6.4 THE LIMITATIONS OF GENETIC ALGORITHM

1. The problem of identifying the fitness function
2. Definition of the representation for the problem
3. Premature convergence occurs
4. The problem of choosing the various parameters, like the size of the population, mutation rate, crossover rate, and the selection method and its strength
5. Cannot use gradients
6. Cannot easily incorporate problem-specific information
7. Not good at identifying local optima
8. No effective terminator
9. Not effective for smooth unimodal functions
10. Needs to be coupled with a local search technique
11. Have trouble finding the exact global optimum
12. Require a large number of response (fitness) function evaluations
13. Configuration is not straightforward

6.5 APPLICATIONS OF GENETIC ALGORITHM

Genetic algorithms have been used for difficult problems (such as NP-hard problems), for machine learning, and also for evolving simple programs. They have also been used for some art, for evolving pictures and music. A few applications of GAs are as follows:

1. Nonlinear dynamical systems: prediction, data analysis
2. Robot trajectory planning
3. Evolving LISP programs (genetic programming)
4. Strategy planning
5. Finding the shape of protein molecules

6. TSP and sequence scheduling
7. Functions for creating images
8. Control: gas pipeline, pole balancing, missile evasion, pursuit
9. Design: semiconductor layout, communication networks, keyboard configuration, aircraft design
10. Scheduling: manufacturing, facility scheduling, resource allocation
11. Machine learning: designing neural networks (both architecture and weights), improving classification algorithms, classifier systems
12. Signal processing: filter design
13. Combinatorial optimization: set covering, traveling salesman (TSP), routing, bin packing, graph coloring and partitioning

6.6 ALGORITHM

An algorithm is a series of steps for solving a problem. A genetic algorithm is a problem-solving method that uses genetics as its model of problem solving. It is a search technique used to find approximate solutions to optimization and search problems. Basically, an optimization problem looks really simple: one knows the form of all possible solutions corresponding to a specific question, and the set of all the solutions that meet this form constitutes the search space. The problem consists in finding, from all the possible solutions, the solution that fits best, i.e., the one with the most payoff. If it is possible to quickly enumerate all the solutions, the problem does not raise much difficulty. But when the search space becomes large, enumeration is soon no longer feasible, simply because it would take far too much time. In this case a specific technique is needed to find the optimal solution, and Genetic Algorithms provide one such method. Practically, they all work in a similar way, adapting simple genetics to algorithmic mechanisms.

A GA handles a population of possible solutions. Each solution is represented through a chromosome, which is just an abstract representation. Coding all the possible solutions into a chromosome is the first part, but certainly not the most straightforward one, of a Genetic Algorithm. A set of reproduction operators has to be determined, too. Reproduction operators are applied directly on the chromosomes, and are used to perform mutations and recombinations over solutions of the problem. Appropriate representation and reproduction operators are really something determinant, as the behavior of the GA is extremely dependent on them. Frequently, it can be extremely difficult to find a representation which respects the

structure of the search space, and reproduction operators which are coherent and relevant according to the properties of the problem.

Generally, a Genetic Algorithm evolves according to the same basic structure. It starts by generating an initial population of chromosomes. This first population must offer a wide diversity of genetic material: the gene pool should be as large as possible so that any solution of the search space can be engendered. Generally, the initial population is generated randomly. Then, the genetic algorithm loops over an iteration process to make the population evolve. Each iteration consists of the following steps:

1. SELECTION: The first step consists in selecting individuals for reproduction. This selection is done randomly with a probability depending on the relative fitness of the individuals, so that the best ones are chosen for reproduction more often than the poor ones. Selection is supposed to be able to compare each individual in the population.

2. REPRODUCTION: In the second step, offspring are bred by the selected individuals. For generating new chromosomes, the algorithm can use both recombination and mutation.

3. EVALUATION: Then the fitness of the new chromosomes is evaluated. Each chromosome has an associated value corresponding to the fitness of the solution it represents. The fitness should correspond to an evaluation of how good the candidate solution is. The optimal solution is the one which maximizes the fitness function. Genetic Algorithms deal with problems that maximize the fitness function; if the problem instead consists in minimizing a cost function, the adaptation is quite easy: either the cost function can be transformed into a fitness function, for example by inverting it, or the selection can be adapted in such a way that individuals with low evaluation functions are considered better.

4. REPLACEMENT: During the last step, individuals from the old population are killed and replaced by the new ones.

The algorithm is stopped when the population converges toward the optimal solution.

The basic genetic algorithm is as follows:

1. Generate a random population of n chromosomes (suitable solutions for the problem).
2. Evaluate the fitness f(x) of each chromosome x in the population.
3. Create a new population by repeating the following steps until the new population is complete:
4. Select two parent chromosomes from the population according to their fitness (the better the fitness, the bigger the chance to be selected).
5. With a crossover probability, cross over the parents to form new offspring (children). If no crossover was performed, the offspring is an exact copy of the parents.
6. With a mutation probability, mutate the new offspring at each locus (position in the chromosome).
7. If there is any improvement in the fitness function, go to step 8; else go to step 1.
8. If there is any improvement in the fitness function, stop running the program; else go to step 1.

6.7 FLOWCHART

START

CREATE INITIAL RANDOM POPULATION
  |
EVALUATE FITNESS FOR EACH INDIVIDUAL
  |
STORE BEST INDIVIDUAL
  |
CREATE MATING POOL
  |
CREATE NEXT GENERATION BY APPLYING CROSSOVER
  |
OPTIMAL OR GOOD SOLUTION FOUND? -- yes --> STOP
  | no
PERFORM MUTATION
  |
OPTIMAL OR GOOD SOLUTION FOUND? -- yes --> STOP; otherwise return to fitness evaluation

CHAPTER 7

STEPS IN GENETIC ALGORITHM

7.1 BREEDING

The breeding process is the heart of the genetic algorithm. It is in this process that the search creates new and hopefully fitter individuals. The breeding cycle consists of three steps:

1. Selecting parents.
2. Crossing the parents to create new individuals (offspring or children).
3. Replacing old individuals in the population with the new ones.

7.1.1 SELECTION

Selection is the process of choosing two parents from the population for crossing. After deciding on an encoding, the next step is to decide how to perform selection, i.e., how to choose the individuals in the population that will create offspring for the next generation, and how many offspring each will create. According to Darwin's theory of evolution, the best ones survive to create new offspring. The problem is how to select these chromosomes. Chromosomes are selected from the initial population to be parents for reproduction. Selection is a method that randomly picks chromosomes out of the population according to their evaluation function: the higher the fitness, the more chance an individual has to be selected. The purpose of selection is to emphasize fitter individuals in the population in the hope that their offspring have higher fitness. Fig 7.1 shows the basic selection process.

The selection pressure is defined as the degree to which the better individuals are favored: the higher the selection pressure, the more the better individuals are favored. This selection pressure drives the GA to improve the population fitness over successive generations. The convergence rate of a GA is largely determined by the magnitude of the selection pressure, with higher selection pressures resulting in higher convergence rates.
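The three-step breeding cycle above (select parents, cross them, replace the old individuals) can be sketched as a minimal loop. The toy fitness function (the count of ones in a bit string) and all parameters are illustrative assumptions, not taken from this report:

```python
import random

# Sketch of the breeding cycle: selecting parents with
# fitness-proportional probability, crossing them, and replacing the
# old generation. The toy fitness (count of ones) is an illustrative
# assumption.

def fitness(s):
    return s.count("1") + 1        # +1 keeps every selection weight positive

def mutate(s, rate=0.05):
    return "".join(b if random.random() >= rate else "10"[int(b)] for b in s)

def breed(population):
    next_gen = []
    while len(next_gen) < len(population):
        # 1. Selecting parents: fitter strings are picked more often.
        a, b = random.choices(population,
                              weights=[fitness(p) for p in population], k=2)
        # 2. Crossing the parents to create a new individual.
        point = random.randrange(1, len(a))
        next_gen.append(mutate(a[:point] + b[point:]))
    # 3. Replacing old individuals with the new ones.
    return next_gen

random.seed(1)
pop = ["0000", "0011", "1100", "1111"]
for _ in range(20):
    pop = breed(pop)
print(max(pop, key=fitness))
```

Over the generations the selection pressure exerted in step 1 tends to drive the population toward strings with many ones, illustrating the convergence behavior discussed above.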

Genetic algorithms should be able to identify optimal or nearly optimal solutions under a wide range of selection scheme pressures. If the selection pressure is too high, there is an increased chance of the GA prematurely converging to an incorrect (sub-optimal) solution; if the selection pressure is too low, the convergence rate will be slow, and the GA will take unnecessarily long to find the optimal solution.

In addition to providing selection pressure, selection schemes should also preserve population diversity, as this helps to avoid premature convergence. Selection therefore has to be balanced with the variation from crossover and mutation: too strong a selection means that sub-optimal, highly fit individuals will take over the population, reducing the diversity needed for change and progress; too weak a selection will result in too slow an evolution.

Typically we can distinguish two types of selection scheme: proportionate-based selection and ordinal-based selection. Proportionate-based selection picks out individuals based upon their fitness values relative to the fitness of the other individuals in the population. Ordinal-based selection schemes select individuals not upon their raw fitness, but upon their rank within the population, and are solely based upon the relative ordering (ranking) of the population. This requires that the selection pressure be independent of the fitness distribution of the population. For example, if all the solutions have their fitness in the range [999, 1000], the probability of selecting a better individual over any other using a proportionate-based method will be insignificant; if the fitness of every individual is instead brought to the range [0, 1] equitably, the probability of selecting a good individual instead of a bad one becomes significant. It is also possible to use a scaling function to redistribute the fitness range of the population in order to adapt the selection pressure.

Fig 7.1 Selection

The various selection methods are discussed as follows:

7.1.1.1 ROULETTE WHEEL SELECTION

Roulette selection is one of the traditional GA selection techniques. Its principle is a linear search through a roulette wheel with the slots in the wheel weighted in proportion to the individuals' fitness values: each individual is assigned a slice of the roulette wheel, the size of the slice being proportional to the individual's fitness. The wheel is spun N times, where N is the number of individuals in the population; on each spin, the individual under the wheel's marker is selected into the pool of parents for the next generation. A target value is set, which is a random proportion of the sum of the fitnesses in the population, and the population is stepped through until the target value is reached. This method is implemented as follows:

1. Sum the total expected value of the individuals in the population. Let it be T.
2. Repeat N times:
   i. Choose a random number 'r' between 0 and T.
   ii. Loop through the individuals in the population, summing the expected values, until the sum is greater than or equal to 'r'. The individual whose expected value puts the sum over this limit is the one selected.

The roulette process described above can also be explained as follows: the expected value of an individual is its fitness divided by the average fitness of the population. A fit individual contributes more to the target value, but if it does not exceed it, the next chromosome in line has a chance. It is essential that the population not be sorted by fitness, since this would dramatically bias the selection. Roulette wheel selection is easy to implement but is noisy, and the rate of evolution depends on the variance of fitnesses in the population. It is only a moderately strong selection technique, since fit individuals are not guaranteed to be selected, but merely have a greater chance of it. The commonly used reproduction operator is the proportionate reproduction operator, where a string is selected for the mating pool with a probability proportional to its fitness.
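The two-step procedure above (sum the fitnesses, then walk the population until a random target is reached) can be sketched in Python. The function name and the non-negative-fitness assumption are illustrative.

```python
import random

def roulette_select(population, fitnesses):
    """Roulette-wheel selection: sum the fitnesses to get T, draw a
    random target r in [0, T), then step through the population until
    the running sum reaches r. Assumes non-negative fitness values."""
    total = sum(fitnesses)
    r = random.uniform(0, total)
    running = 0.0
    for individual, fit in zip(population, fitnesses):
        running += fit
        if running >= r:
            return individual
    return population[-1]  # guard against floating-point round-off
```

Calling this N times fills the mating pool; an individual with 70% of the total fitness is picked roughly 70% of the time.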

7.1.1.2 RANDOM SELECTION

This technique randomly selects a parent from the population. In terms of disruption of genetic codes, random selection is a little more disruptive, on average, than roulette wheel selection.

7.1.1.3 RANK SELECTION

The roulette wheel has a problem when the fitness values differ very much: if the best chromosome's fitness is 90% of the total, its slice occupies 90% of the roulette wheel, and the other chromosomes have too few chances to be selected. Rank selection first ranks the population, and every chromosome then receives its fitness from the ranking: the worst has fitness 1 and the best has fitness N. This results in slow convergence, but it prevents too-quick convergence and keeps up the selection pressure when the fitness variance is low. It preserves diversity and hence leads to a successful search.

7.1.1.4 TOURNAMENT SELECTION

An ideal selection strategy should be able to adjust its selective pressure and population diversity so as to fine-tune GA search performance. Unlike the roulette wheel selection, the tournament selection strategy provides selective pressure by holding a tournament competition among Nu individuals: in effect, potential parents are selected and a tournament is held to decide which of the individuals will be the parent. The best individual from the tournament, i.e., the winner among the Nu competitors, is the one with the highest fitness, and the winner is inserted into the mating pool. The tournament competition is repeated until the mating pool for generating new offspring is filled. The mating pool, comprising tournament winners, has a higher average fitness than the population average; this fitness difference provides the selection pressure, which drives the GA to improve the fitness of succeeding generations. There are many ways this can be achieved; two suggestions are:

1. Select a pair of individuals at random and generate a random number R between 0 and 1. If R < r, use the first individual as a parent; if R >= r, use the second. Repeat to find a second parent. The value of r is a parameter to this method.
2. Select two individuals at random; the individual with the highest evaluation becomes the parent. Repeat to find a second parent.
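A minimal sketch of the tournament scheme, assuming a fitness function that maps an individual to a number (the names and the default tournament size are illustrative):

```python
import random

def tournament_select(population, fitness, tournament_size=2):
    """Pick `tournament_size` individuals at random and return the
    fittest; calling this repeatedly fills the mating pool."""
    competitors = random.sample(population, tournament_size)
    return max(competitors, key=fitness)
```

Raising `tournament_size` raises the selection pressure: with a tournament over the whole population, the global best always wins; with size 2, weaker individuals still get through occasionally.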

7.1.1.5 BOLTZMANN SELECTION

Simulated annealing is a method of function minimization or maximization. This method simulates the process of slow cooling of molten metal to achieve the minimum function value in a minimization problem. Controlling a temperature-like parameter, introduced with the concept of the Boltzmann probability distribution, simulates the cooling phenomenon. In Boltzmann selection, a continuously varying temperature controls the rate of selection according to a preset schedule. The temperature starts out high, which means the selection pressure is low; it is then gradually lowered, which gradually increases the selection pressure, thereby allowing the GA to narrow in more closely on the best part of the search space while maintaining an appropriate degree of diversity. A logarithmically decreasing temperature is found useful for convergence without getting stuck in a local minimum state. This method is more efficient and can lead to an optimal solution, but cooling the system down to the equilibrium state takes time.

7.1.1.6 STOCHASTIC UNIVERSAL SAMPLING

Stochastic universal sampling provides zero bias and minimum spread. The individuals are mapped to contiguous segments of a line, such that each individual's segment is equal in size to its fitness, exactly as in roulette-wheel selection. Equally spaced pointers are then placed over the line, as many as there are individuals to be selected. Consider NPointer, the number of individuals to be selected: the distance between the pointers is 1/NPointer, and the position of the first pointer is given by a single random number sampled in the range [0, 1/NPointer]. For 6 individuals to be selected, the distance between the pointers is 1/6 = 0.167, and a sample of one random number in the range [0, 0.167] fixes all six pointers. After selection, the mating population consists of the individuals marked by the pointers. Stochastic universal sampling thus ensures a selection of offspring that is closer to what is deserved than roulette wheel selection. Figure 7.2 shows the selection for the above example.

Fig 7.2 Stochastic universal sampling
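The pointer construction above can be sketched in Python (illustrative names; fitnesses are assumed non-negative, and the line is measured in raw fitness rather than normalized to 1, which is equivalent):

```python
import random

def sus_select(population, fitnesses, n):
    """Stochastic universal sampling: individuals occupy contiguous
    segments of a line sized by fitness; n equally spaced pointers,
    fixed by one random offset in [0, total/n), pick the parents."""
    total = sum(fitnesses)
    step = total / n          # distance between pointers
    start = random.uniform(0, step)  # single random sample fixes all pointers
    selected, running, idx = [], fitnesses[0], 0
    for k in range(n):
        pointer = start + k * step
        while running < pointer and idx < len(population) - 1:
            idx += 1
            running += fitnesses[idx]
        selected.append(population[idx])
    return selected
```

Because all pointers come from one random draw, the number of copies each individual receives can deviate from its expected value by less than one (minimum spread), unlike repeated independent roulette spins.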

7.1.2 CROSSOVER (RECOMBINATION)

Crossover is the process of taking two parent solutions and producing a child from them. After the selection (reproduction) process, the population is enriched with better individuals; reproduction makes clones of good strings but does not create new ones. The crossover operator is applied to the mating pool with the hope that it creates better offspring. Crossover is a recombination operator that proceeds in three steps:

i. The reproduction operator selects at random a pair of individual strings for mating.
ii. A cross site is selected at random along the string length.
iii. Finally, the position values are swapped between the two strings following the cross site.

If an appropriate site is chosen, better children can be obtained by combining good parents; otherwise it can severely hamper string quality. The various crossover techniques are discussed as follows:

7.1.2.1 SINGLE POINT CROSSOVER

The traditional genetic algorithm uses single point crossover, where the two mating chromosomes are cut once at corresponding points and the sections after the cuts are exchanged. Here, a cross site or crossover point is selected randomly along the length of the mated strings, and the bits next to the cross site are exchanged. The simplest way to do this is to choose a crossover point randomly, copy everything before this point from the first parent, and then copy everything after the crossover point from the other parent. Fig 7.3 illustrates single point crossover; it can be observed that the bits next to the crossover point are exchanged to produce children.
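A sketch of this operator in Python (illustrative; plain lists stand in for the chromosomes):

```python
import random

def single_point_crossover(parent1, parent2):
    """Cut both parents at one random site and swap the tails."""
    point = random.randint(1, len(parent1) - 1)  # never cut at the ends
    child1 = parent1[:point] + parent2[point:]
    child2 = parent2[:point] + parent1[point:]
    return child1, child2
```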

Fig 7.3 Single point crossover

7.1.2.2 TWO POINT CROSSOVER

Apart from single point crossover, many different crossover algorithms have been devised, often involving more than one cut point. In two-point crossover, two crossover points are chosen, and the contents between these points are exchanged between the two mated parents to produce new children for mating in the next generation. The crossover points can be chosen randomly. In Fig 7.4, the dotted lines indicate the crossover points, and the contents between these points are exchanged between the parents.

Fig 7.4 Two-point crossover

It should be noted that adding further crossover points can reduce the performance of the GA: the problem with additional crossover points is that building blocks are more likely to be disrupted. However, an advantage of having more crossover points is that the problem space may be searched more thoroughly.
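The two-point variant differs only in swapping the middle segment; a sketch (again with illustrative names and list chromosomes):

```python
import random

def two_point_crossover(parent1, parent2):
    """Choose two distinct cut points and swap the middle segments."""
    a, b = sorted(random.sample(range(1, len(parent1)), 2))
    child1 = parent1[:a] + parent2[a:b] + parent1[b:]
    child2 = parent2[:a] + parent1[a:b] + parent2[b:]
    return child1, child2
```

Note that the head and tail of each child always come from the same parent, which is exactly the property discussed next for multi-point crossover.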

7.1.2.3 MULTI-POINT CROSSOVER (N-POINT CROSSOVER)

There are two variants of this crossover: an even number of cross sites and an odd number of cross sites. With an even number of cross sites, the cross sites are selected randomly around a circle and information is exchanged. With an odd number of cross sites, a different cross point is always assumed at the string beginning.

Originally, GAs used one-point crossover, which cuts two chromosomes at one point and splices the two halves to create new ones. But with one-point crossover, the head and the tail of one chromosome cannot be passed together to the offspring: if both the head and the tail of a chromosome contain good genetic information, none of the offspring obtained directly with one-point crossover will share the two good features. Using a two-point crossover avoids this drawback, and it is then generally considered better than one-point crossover. In fact this problem can be generalized to each gene position in a chromosome: genes that are close on a chromosome have more chance of being passed together to the offspring obtained through an N-point crossover. This leads to an unwanted correlation between genes next to each other; consequently, the efficiency of an N-point crossover depends on the position of the genes within the chromosome. In a genetic representation, genes that encode dependent characteristics of the solution should be close together. To avoid all these problems of gene locus, a good choice is to use uniform crossover as the recombination operator.

7.1.2.4 UNIFORM CROSSOVER

Uniform crossover is quite different from N-point crossover. Each gene in the offspring is created by copying the corresponding gene from one or the other parent, chosen according to a randomly generated binary crossover mask of the same length as the chromosomes. Where there is a 1 in the crossover mask, the gene is copied from the first parent; where there is a 0 in the mask, the gene is copied from the second parent. A new crossover mask is randomly generated for each pair of parents, so the offspring contain a mixture of genes from each parent. The number of effective crossing points is not fixed, but will average L/2 (where L is the chromosome length). In Fig 7.5, new children are produced using the uniform crossover approach. It can be noticed that, while producing child 1, when there is a 1 in the mask the gene is copied from parent 1, and when there is a 0 it is copied from parent 2; while producing child 2, when there is a 1 in the mask the gene is copied from parent 2, and when there is a 0 it is copied from parent 1.
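The mask convention just described can be sketched directly (illustrative names; the two children use opposite readings of the same mask, so they are gene-wise complementary):

```python
import random

def uniform_crossover(parent1, parent2):
    """A fresh random binary mask picks, gene by gene, which parent
    each child copies: mask 1 -> parent1 for child1, parent2 for child2;
    mask 0 -> the other way around."""
    mask = [random.randint(0, 1) for _ in parent1]
    child1 = [g1 if m else g2 for m, g1, g2 in zip(mask, parent1, parent2)]
    child2 = [g2 if m else g1 for m, g1, g2 in zip(mask, parent1, parent2)]
    return child1, child2
```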

Fig 7.5 Uniform crossover

7.1.2.5 THREE PARENT CROSSOVER

In this crossover technique, three parents are randomly chosen. Each bit of the first parent is compared with the corresponding bit of the second parent. If both are the same, the bit is taken for the offspring; otherwise, the bit from the third parent is taken. This concept is illustrated in Fig 7.6.

Fig 7.6 Three parent crossover

7.1.2.6 CROSSOVER WITH REDUCED SURROGATE

The reduced surrogate operator constrains crossover to always produce new individuals wherever possible. This is implemented by restricting the location of crossover points such that crossover points only occur where gene values differ.

7.1.2.7 SHUFFLE CROSSOVER

Shuffle crossover is related to uniform crossover. A single crossover position (as in single-point crossover) is selected, but before the variables are exchanged, they are randomly shuffled in both parents. After recombination, the variables in the offspring are unshuffled. This removes positional bias, as the variables are randomly reassigned each time crossover is performed.

7.1.2.8 PRECEDENCE PRESERVATIVE CROSSOVER (PPX)

PPX was independently developed for vehicle routing problems by Blanton and Wainwright (1993) and for scheduling problems by Bierwirth et al. (1996). The operator passes on the precedence relations of operations given in two parental permutations to one offspring at the same rate, while no new precedence relations are introduced. We can consider the parent and offspring permutations as lists, for which the operations 'append' and 'delete' are defined. The operator works as follows:

1. Start by initializing an empty offspring.
2. A vector of length equal to the number of operations involved in the problem is randomly filled with elements of the set {1, 2}. This vector defines the order in which operations are successively drawn from parent 1 and parent 2.
3. The leftmost operation in one of the two parents is selected in accordance with the order of parents given in the vector.
4. After an operation is selected, it is deleted in both parents.
5. Finally, the selected operation is appended to the offspring.
6. Steps 3 to 5 are repeated until both parents are empty and the offspring contains all operations involved.

Note that PPX does not work in a uniform-crossover manner, due to the 'deletion-append' scheme used. An example for a problem consisting of six operations A–F is shown in Fig 7.7.

Fig 7.7 Precedence preservative crossover (PPX)
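The deletion-append scheme can be sketched as follows (illustrative Python; operations are list elements, and both parents are assumed to be permutations of the same operation set):

```python
import random

def ppx(parent1, parent2):
    """Precedence preservative crossover: a random vector of 1s and 2s
    says which parent donates its current leftmost remaining operation;
    the chosen operation is appended to the child and deleted from
    both parents."""
    p1, p2 = list(parent1), list(parent2)
    order = [random.choice((1, 2)) for _ in parent1]
    child = []
    for which in order:
        op = p1[0] if which == 1 else p2[0]  # leftmost remaining operation
        child.append(op)
        p1.remove(op)  # delete from both parents
        p2.remove(op)
    return child
```

Any operation pair ordered the same way in both parents keeps that order in the child, which is exactly the precedence-preservation property the operator is named for.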

7.1.2.9 ORDERED CROSSOVER

Ordered two-point crossover is used when the problem is order-based, for example in U-shaped assembly line balancing. Given two parent chromosomes, two random crossover points are selected, partitioning them into left, middle and right portions. The ordered two-point crossover behaves in the following way: child 1 inherits its left and right sections from parent 1, and its middle section is determined by the genes in the middle section of parent 1 in the order in which those values appear in parent 2. A similar process is applied to determine child 2. This is shown in Fig 7.8.

Fig 7.8 Ordered crossover

7.1.2.10 PARTIALLY MATCHED CROSSOVER (PMX)

PMX can be applied usefully in the TSP. Indeed, TSP chromosomes are simply sequences of integers, where each integer represents a different city and the order represents the time at which the city is visited. Under this representation, known as permutation encoding, we are only interested in labels and not alleles. PMX may be viewed as a crossover of permutations that guarantees that all positions are found exactly once in each offspring, i.e., both offspring receive a full complement of genes, followed by the corresponding filling in of alleles from their parents. PMX proceeds as follows:

1. The two chromosomes are aligned.
2. Two crossing sites are selected uniformly at random along the strings, defining a matching section.
3. The matching section is used to effect a cross through position-by-position exchange operations.
4. Alleles are moved to their new positions in the offspring.

The following illustrates how PMX works. Consider the two strings shown in Fig 7.9, where the dots mark the selected cross points. The matching section defines the position-wise exchanges that must take place in both parents to produce the offspring; the exchanges are read from the matching section of one chromosome to that of the other.
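One common way to implement the position-wise exchange is sketched below (illustrative Python; this follows the usual textbook PMX formulation, which resolves conflicts outside the matching section through the position mapping, so both children remain valid permutations):

```python
import random

def pmx(parent1, parent2):
    """Partially matched crossover for permutation chromosomes
    (e.g. TSP tours)."""
    size = len(parent1)
    a, b = sorted(random.sample(range(size + 1), 2))  # matching section [a, b)

    def make_child(outer, inner):
        child = list(outer)
        child[a:b] = inner[a:b]  # copy the matching section
        for i in list(range(0, a)) + list(range(b, size)):
            gene = outer[i]
            # Resolve duplicates by following the position-wise mapping.
            while gene in child[a:b]:
                gene = outer[inner.index(gene)]
            child[i] = gene
        return child

    return make_child(parent1, parent2), make_child(parent2, parent1)
```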

In the example, the numbers that exchange places are 5 and 2, 6 and 3, and 7 and 10. The resulting offspring are as shown in Fig 7.10.

Fig 7.9 Strings given

Fig 7.10 Partially matched crossover

7.1.2.11 CROSSOVER PROBABILITY

The basic parameter in the crossover technique is the crossover probability (Pc). Crossover probability describes how often crossover will be performed. If there is no crossover, offspring are exact copies of the parents. If there is crossover, offspring are made from parts of both parents' chromosomes. If the crossover probability is 100%, then all offspring are made by crossover; if it is 0%, the whole new generation is made from exact copies of chromosomes from the old population (but this does not mean that the new generation is the same!). Crossover is made in the hope that new chromosomes will contain good parts of old chromosomes, and therefore that the new chromosomes will be better. However, it is good to let some part of the old population survive to the next generation.

7.1.3 MUTATION

After crossover, the strings are subjected to mutation. Mutation prevents the algorithm from being trapped in a local minimum. It plays the role of recovering lost genetic material as well as randomly disturbing genetic information; it is an insurance policy against the irreversible loss of genetic material. Mutation has traditionally been considered a simple search operator: if crossover is supposed to exploit the current solution to find better ones, mutation is supposed to help the exploration of the whole search space. Mutation is viewed as a background operator that maintains genetic diversity in the population; it introduces new genetic structures by randomly modifying some of the population's building blocks.

Mutation helps escape from the trap of local minima and maintains diversity in the population. It also keeps the gene pool well stocked, thus ensuring ergodicity: a search space is said to be ergodic if there is a non-zero probability of generating any solution from any population state. There are many different forms of mutation for the different kinds of representation. For binary representation, a simple mutation consists in inverting the value of each gene with a small probability, changing 0 to 1 and vice versa; the probability is usually taken to be about 1/L, where L is the length of the chromosome. It is also possible to implement hill-climbing mutation operators that mutate only if the mutation improves the quality of the solution. Such an operator can accelerate the search, but care should be taken, because it might also reduce the diversity in the population and make the algorithm converge toward some local optimum.

7.1.3.1 FLIPPING

Flipping of a bit involves changing 0 to 1 and 1 to 0, based on a randomly generated mutation chromosome. A parent is considered and a mutation chromosome is randomly generated; for each 1 in the mutation chromosome, the corresponding bit in the parent chromosome is flipped (0 to 1 and 1 to 0), and the child chromosome is produced. Fig 7.11 explains the mutation-flipping concept: here, 1 occurs at three places in the mutation chromosome, so the corresponding bits in the parent chromosome are flipped and the child is generated.

Fig 7.11 Flipping
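The per-gene flip with default probability 1/L can be sketched as (illustrative; bits are ints in a list):

```python
import random

def flip_mutation(chromosome, pm=None):
    """Flip each bit with probability pm; defaults to 1/L as noted above."""
    if pm is None:
        pm = 1.0 / len(chromosome)
    return [1 - g if random.random() < pm else g for g in chromosome]
```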

7.1.3.2 INTERCHANGING

Two random positions of the string are chosen, and the bits corresponding to those positions are interchanged. This is shown in Fig 7.12.

Fig 7.12 Interchanging

7.1.3.3 REVERSING

A random position is chosen, and the bits next to that position are reversed to produce the child chromosome. This is shown in Fig 7.13.

Fig 7.13 Reversing

7.1.3.4 MUTATION PROBABILITY

The important parameter in the mutation technique is the mutation probability (Pm), which decides how often parts of a chromosome will be mutated. If there is no mutation, offspring are generated immediately after crossover (or directly copied) without any change. If mutation is performed, one or more parts of a chromosome are changed. If the mutation probability is 100%, the whole chromosome is changed; if it is 0%, nothing is changed. Mutation generally prevents the GA from falling into local extremes, but it should not occur very often, because then the GA would in fact turn into a random search.
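The interchanging and reversing operators above can be sketched as follows (illustrative Python; chromosomes are plain lists, and the parents are left unmodified):

```python
import random

def interchange_mutation(chromosome):
    """Swap the genes at two randomly chosen positions."""
    c = list(chromosome)
    i, j = random.sample(range(len(c)), 2)
    c[i], c[j] = c[j], c[i]
    return c

def reverse_mutation(chromosome):
    """Reverse the segment between two randomly chosen positions."""
    c = list(chromosome)
    i, j = sorted(random.sample(range(len(c)), 2))
    c[i:j + 1] = reversed(c[i:j + 1])
    return c
```

Both operators only rearrange genes, so they are also safe for permutation encodings (such as the placement orderings used later in this project), where flipping would break the chromosome.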

7.1.4 REPLACEMENT

Replacement is the last stage of any breeding cycle. Two parents are drawn from a fixed-size population and breed two children, but not all four can return to the population, so two must be replaced: once offspring are produced, a method must determine which of the current members of the population, if any, should be replaced by the new solutions. The technique used to decide which individuals stay in a population and which are replaced is on a par with selection in influencing convergence. Basically, there are two kinds of methods for maintaining the population: generational updates and steady-state updates.

7.1.4.1 RANDOM REPLACEMENT

The children replace two randomly chosen individuals in the population; the parents are also candidates for selection. This can be useful for continuing the search in small populations, since weak individuals can be introduced into the population.

7.1.4.2 WEAK PARENT REPLACEMENT

In weak parent replacement, a weaker parent is replaced by a strong child: of the four individuals (two parents and two children), only the fittest two, parent or child, return to the population. This process improves the overall fitness of the population when paired with a selection technique that selects both fit and weak parents for crossing; but if weak individuals are discriminated against in selection, the opportunity to replace them will never arise.

7.1.4.3 BOTH PARENTS

Both-parents replacement is simple: the child replaces the parent. In this case, each individual only gets to breed once. As a result, the population and genetic material move around, but this leads to a problem when combined with a selection technique that strongly favors fit parents: the fit breed and are then disposed of.
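Weak parent replacement, for instance, can be sketched as below (illustrative; individuals are assumed distinguishable by value, so removing a parent by equality is an assumption that holds for distinct individuals):

```python
def weak_parent_replacement(population, fitness, parents, children):
    """Steady-state update: among the two parents and two children,
    keep the two fittest and return them to the population."""
    family = list(parents) + list(children)
    family.sort(key=fitness, reverse=True)
    survivors = family[:2]  # fittest two of the four, parent or child
    # Remove the parents, then reinsert the survivors.
    remaining = [ind for ind in population if ind not in parents]
    return remaining + survivors
```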

7.2 SEARCH TERMINATION (CONVERGENCE CRITERIA)

In short, the various stopping conditions are listed as follows:

1. Maximum generations – The genetic algorithm stops when the specified number of generations has evolved.
2. Elapsed time – The genetic process ends when a specified time has elapsed. If the maximum number of generations has been reached before the specified time has elapsed, the process will end.
3. No change in fitness – The genetic process ends if there is no change in the population's best fitness for a specified number of generations. If the maximum number of generations has been reached before the specified number of no-change generations has been reached, the process will end.
4. Stall generations – The algorithm stops if there is no improvement in the objective function for a sequence of consecutive generations of length 'stall generations'.
5. Stall time limit – The algorithm stops if there is no improvement in the objective function during an interval of time in seconds equal to the stall time limit.

The termination or convergence criterion finally brings the search to a halt. The following are a few termination techniques:

7.2.1 BEST INDIVIDUAL

A best-individual convergence criterion stops the search once the minimum fitness in the population drops below the convergence value. This brings the search to a faster conclusion, guaranteeing at least one good solution.

7.2.2 WORST INDIVIDUAL

The worst-individual criterion terminates the search when the least fit individuals in the population have fitness less than the convergence criterion. This guarantees the entire population to be of a minimum standard, although the best individual may not be significantly better than the worst. In this case, a stringent convergence value may never be met, in which case the search will terminate after the maximum number of generations has been exceeded.

7.2.3 SUM OF FITNESS

In this termination scheme, the search is considered to have satisfactorily converged when the sum of the fitness over the entire population is less than or equal to the convergence value in the population record. This guarantees that virtually all individuals in the population will be within a particular fitness range. The population size has to be considered while setting the convergence value; otherwise a few unfit individuals in the population will blow out the fitness sum.

7.2.4 MEDIAN FITNESS

Here at least half of the individuals will be better than or equal to the convergence value, which should give a good range of solutions to choose from.
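The generation-count and stall criteria above can be combined in a small helper. This is a sketch: representing the run as a list of best-fitness values, one per generation, is an assumption for illustration.

```python
def should_stop(history, max_generations, stall_generations):
    """Stop when the generation cap is hit, or when the best fitness has
    not improved over the last `stall_generations` generations.
    `history` holds the best fitness of each generation so far."""
    if len(history) >= max_generations:
        return True
    if len(history) > stall_generations:
        # No improvement if the recent window never beats the earlier best.
        if max(history[-stall_generations:]) <= max(history[:-stall_generations]):
            return True
    return False
```

The main GA loop would call this once per generation and exit as soon as it returns True; elapsed-time and stall-time criteria can be added the same way with timestamps.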

CHAPTER 8

COST FUNCTION

The cost function is the sum of three components: the wire length cost C1, the module overlap penalty C2, and the row length control penalty C3.

The wire length cost C1 is estimated using the semiperimeter method, as shown in Figure 1a, with weighting of critical nets and independent weighting of the horizontal and vertical wiring spans for each net:

C1 = Σ [x(i)·WH(i) + y(i)·WV(i)]

where x(i) and y(i) are the horizontal and vertical spans of the net's bounding rectangle, and WH(i) and WV(i) are the weights of the horizontal and vertical wiring spans. Critical nets are those that need to be optimized more than the rest, or that need to be limited to a certain maximum length due to propagation delay.

The module overlap penalty C2 is parabolic in the amount of overlap:

C2 = W2 · Σ_{i≠j} [O(i, j)]²

where O(i, j) is the overlap between the ith and jth cells, and W2 is the weight for this penalty. Although cell overlap is not allowed in the final placement and has to be removed by shifting the cells slightly, it takes a large amount of computation time to remove overlap for every proposed move; and if too many cells are shifted in an attempt to remove overlap, it would take too much computation to determine the change in wire length. The parabolic function causes large overlaps to be penalized, and hence discouraged, more than small ones. It was observed that C2 converges to 0 for W2 = 1.

The row length control penalty C3 is a function of the difference between the actual row length and the desired row length:

C3 = W3 · Σ │LR − LˆR│

where LR is the actual row length, LˆR is the desired row length, and W3 is the weight for this penalty, for which a default value of 5 is used. Unequal row lengths result in wasted space; this penalty tends to equalize row lengths by increasing the cost if the rows are of unequal length. Experiments show that the function used provides good control, with final row lengths within 3-5% of the desired value.
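Assuming the net spans, pairwise overlaps, and row lengths are already available as plain numbers (the dictionary keys and parameter names below are illustrative, not from the original formulation), the three-component cost can be sketched as:

```python
def placement_cost(nets, overlaps, row_lengths, desired_row_length,
                   w2=1.0, w3=5.0):
    """Cost = C1 + C2 + C3 per the chapter: semiperimeter wire length,
    parabolic overlap penalty, and row-length control penalty."""
    # C1: weighted horizontal + vertical spans of each net's bounding box.
    c1 = sum(net['x_span'] * net['wh'] + net['y_span'] * net['wv']
             for net in nets)
    # C2: parabolic penalty on each pairwise cell overlap O(i, j).
    c2 = w2 * sum(o ** 2 for o in overlaps)
    # C3: penalty on each row's deviation from the desired length
    # (default weight W3 = 5, as stated above).
    c3 = w3 * sum(abs(lr - desired_row_length) for lr in row_lengths)
    return c1 + c2 + c3
```

The GA would use this value (or a fitness derived from it, e.g. its reciprocal) when evaluating each candidate placement.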

CHAPTER 9

CONCLUSION

In this paper a genetic algorithm for the macro cell placement problem has been presented. The layout quality obtained by the algorithm is comparable to the best published results. Since this work is a novel approach to macro cell placement, further improvements are most likely, e.g., by trying different crossover operators, trying alternative estimations of the routing area, etc. The current runtime is too long, but here too improvements can be made. In conclusion, genetic algorithms are a promising approach to the macro cell placement problem.

