Design Methodologies
Approach followed to solve the VLSI design problem
Designing a VLSI circuit in one go, while simultaneously optimizing all cost functions, is infeasible: the complexity is simply too high.
Two main concepts help to deal with this complexity:
1. Hierarchy and
2. Abstraction
(Figure: hierarchy in a design. Block A at level 1 decomposes into blocks B, C and D at level 2, and so on down to level 4; (a) shows the nested blocks, (b) the corresponding decomposition tree.)
The behavioral domain: part of the design (or the whole) is seen as a set of black boxes.
Black box: the relations between outputs and inputs are given without any reference to the implementation of these relations.
Full-custom design:
• maximal freedom
• ability to determine the shape of every mask layer for the production of the chip
Semi-custom design:
• smaller search space
• limits the freedom of the designer
• shorter design time
• semi-custom design implies the use of gate arrays, standard cells, or parameterizable modules
Standard Cells
• simple logic gates, flip-flops, etc.
• predesigned and made available to the designer in a library
• characterization of the cells (determination of their timing behavior) is done once by the library developer
Module Generators
• generators exist for those designs that have a regular structure such as adders,
multipliers, and memories.
• Due to the regularity of the structure, the module can be described by one or two
parameters.
VLSI design automation tools can be categorized into:
1. Algorithmic and system design
2. Structural and logic design
3. Transistor-level design
4. Layout design (the physical domain)
Algorithmic and System Design:
• A formal description is mainly concerned with the initial algorithm to be implemented in hardware and works with a purely behavioral description.
• Hardware description languages (HDLs) are used for this purpose.
• A second application of a formal description is the possibility of automatic synthesis:
• a synthesizer reads the description and generates an equivalent description of the design at a much lower level.
• High-level synthesis: the synthesis from the algorithmic behavioral level to structural descriptions is called high-level synthesis.
Hardware-software co-simulation: verification of the correctness of the result of co-design by means of simulation.
Structural and Logic Design
• Sometimes the tools cannot cope with the desired behaviour, leading to inefficient synthesis.
• In that case the designer provides a lower-level description: structural and logic design.
• The designer can use a schematic editor program (a CAD tool).
• It allows the interactive specification of the blocks composing a circuit and their interconnections by means of a graphical interface.
• Schematics constructed in this way are hierarchical.
• Role of simulation: once the circuit schematics have been captured by an editor, it is common practice to verify the circuit by means of simulation.
• Fault simulation checks whether a set of test vectors or test patterns (input signals used for testing) will be able to detect faults caused by imperfections of the fabrication process.
At the switch level, transistors are modeled as ideal bidirectional switches and the signals are essentially digital.
At the timing level, analog signals are considered, but the transistors have simple models (e.g. piecewise linear functions).
At the circuit level, more accurate models of the transistors are used, which often involve nonlinear differential equations for the currents and voltages.
The more accurate the model, the more computer time is necessary for simulation.
Process (full-custom transistor-level design):
1. It is customary to extract the circuit from the layout data.
2. Construct the network of transistors, resistors and capacitances.
3. The extracted circuit can then be simulated at the circuit or switch level.
Layout Design
Design actions related to layout are very diverse; therefore, there are many different layout tools.
If one has the layout of the subblocks of a design available, together with the list of interconnections, then:
1. First, a position in the plane is assigned to each subblock, trying to minimize the area to be occupied by interconnections (the placement problem).
2. The next step is to generate the wiring patterns that realize the correct interconnections (the routing problem).
Timing constraints: as the length of a wire affects the propagation time of a signal along the wire, it is important to keep specific wires short in order to guarantee the overall execution speed of the circuit (timing-driven layout).
Partitioning problem: grouping of the subblocks
• Subblocks that are tightly connected are put in the same group while the number
of connections from one group to the other is kept low
• Partitioning helps to solve the placement problem
Floorplanning: determines the relative positions and approximate shapes of the subblocks at an early stage of the layout design.
• Layout editor (in full-custom design): provides the possibility to modify the layout at the level of mask patterns.
• In a correct design, the mask patterns should obey certain rules called design rules.
• Tools that analyze a layout to detect violations of these rules are called design-rule checkers.
Circuit extractor: takes the mask patterns as its input and constructs a circuit of
transistors, resistors and capacitances that can be simulated
A disadvantage of full-custom design is that the layout has to be redesigned when the technology changes; symbolic layout has been proposed as a solution.
Compactor: takes the symbolic description, assigns widths to all patterns and spaces
the patterns such that all design rules are satisfied.
Verification Methods: There are three ways of checking the correctness of an
integrated circuit without actually fabricating it
1. Prototyping
2. Simulation
3. Formal verification
It is impossible to have an exhaustive test of a circuit of reasonable size, as the set of all possible input signals and internal states grows too large.
Graph: A graph is a mathematical structure that describes a set of objects and the
connections between them.
Graphs are used in the field of design automation for integrated circuits
1. when dealing with entities that naturally look like a network (e.g. a circuit of transistors), and
2. in more abstract cases (e.g. precedence relations in the computations of some algorithm).
The two vertices joined by an edge are called the edge's endpoints; the notation (u, v) is used.
Vertices u and v such that (u, v) ∈ E are called adjacent vertices.
(Figure: an example graph with vertices v1–v5 and edges e1–e5.)
Subgraph: When one removes vertices and edges from a given graph G, one gets a
subgraph of G.
Rule: removing a vertex implies the removal of all edges connected to it.
Complete graph : a complete graph is a simple undirected graph in which every pair
of distinct vertices is connected by a unique edge.
Clique: a clique in an undirected graph is a subset of its vertices such that every two
vertices in the subset are connected by an edge.
A clique is a subgraph that is complete.
In the example, three cliques are identified by the vertex sets {v1, v2, v3}, {v3, v4} and {v5, v6}.
Maximal clique: a maximal clique is a clique that cannot be extended by including one more adjacent vertex, i.e. it is not a subset of a larger clique.
A maximum clique (i.e., clique of largest size in a given graph) is therefore always maximal
Degree of a vertex: the degree of a vertex is equal to the number of edges incident with it.
Self-loop: an edge (u, u), i.e. one starting and finishing at the same vertex, is called a self-loop.
Parallel edges: two edges of the form e1 = (v1, v2) and e2 = (v1, v2), i.e. having the same endpoints, are called parallel edges.
Simple graph: a graph without self-loops or parallel edges is called a simple graph.
(Figure: example graphs in which every vertex has degree r = 2 or r = 3.)
A graph without self-loops but with parallel edges is called a multigraph.
Bigraph: a bipartite graph (or bigraph) is a graph whose vertices can be divided into two disjoint sets U and V (that is, U and V are each independent sets) such that every edge connects a vertex in U to one in V.
Planar graph: a graph that can be drawn on a two-dimensional plane without any of its edges intersecting is called planar.
Path: a sequence of alternating vertices and edges, starting and finishing with a vertex, such that an edge e = (u, v) is preceded by u and followed by v in the sequence (or vice versa), is called a path. Example: v1, e1, v2, e2, v3.
A path, of which the first and last vertices are the same and the length is larger than
zero, is called a cycle (sometimes also: loop or circuit).
A path or a cycle not containing two or more occurrences of the same vertex is a
simple path or cycle.
Connected graph: If all pairs of vertices in a graph are connected, the graph is called a
connected graph
Outdegree: the outdegree of a vertex is equal to the number of edges incident from it.
Strongly connected vertices: two vertices u and v in a directed graph are called strongly connected if there is both a directed path from u to v and a directed path from v to u.
A weighted graph is a graph in which each edge is given a numerical weight.
It is a special type of labeled graph in which the labels are numbers (usually taken to be positive).
To implement graph algorithms, suitable data structures are required; different algorithms require different data structures.
Adjacency matrix: if the graph G(V, E) has n vertices, an n×n matrix A is used, with A[i][j] = 1 if there is an edge from vi to vj and A[i][j] = 0 otherwise.
The adjacency matrices of undirected graphs are symmetric.
Finding all vertices connected to a given vertex requires inspection of a complete row
and a complete column of the matrix (not very efficient)
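The representation and the row/column scan can be sketched in Python; the example graph below is an assumption for illustration, not the one from the figures.

```python
# Sketch: adjacency-matrix representation of a small undirected graph.

def make_adjacency_matrix(n, edges):
    """Build an n x n 0/1 matrix; symmetric because the graph is undirected."""
    a = [[0] * n for _ in range(n)]
    for u, v in edges:
        a[u][v] = 1
        a[v][u] = 1
    return a

def neighbors(a, u):
    """Finding all vertices connected to u means scanning a full row: O(n)."""
    return [v for v, bit in enumerate(a[u]) if bit]

edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4)]
a = make_adjacency_matrix(5, edges)
```

Note that `neighbors` always takes O(n) time, even for a vertex with few edges, which is exactly the inefficiency the text points out.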
Adjacency lists: each vertex record points to a linked list of its outgoing edges.

struct vertex {
  int vertex_index;
  struct edge *outgoing_edges;
};

struct edge {
  int edge_index;
  struct vertex *from, *to;
  struct edge *next;
};
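The adjacency-list idea can be sketched in Python by mapping each vertex index to a list of (target, edge_index) pairs instead of using linked records; the example graph is an assumption.

```python
# Sketch: adjacency-list representation of a directed graph.

def make_adjacency_list(n, edges):
    """edges: list of (u, v) pairs; returns {vertex: [(target, edge_index), ...]}."""
    adj = {v: [] for v in range(n)}
    for idx, (u, v) in enumerate(edges):
        adj[u].append((v, idx))  # record an outgoing edge of u
    return adj

edges = [(0, 1), (0, 2), (1, 3), (3, 2)]
adj = make_adjacency_list(4, edges)
```

Enumerating the outgoing edges of a vertex now costs time proportional to its outdegree, rather than the full row-and-column scan required by an adjacency matrix.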
The complexity or behaviour of an algorithm is characterized by mathematical functions of the algorithm's "input size".
The input size or problem size of an algorithm is related to the number of symbols necessary to describe the input.
Example: a sorting algorithm has to sort a list of n words, each consisting of at most 10 letters:
input size = 10n;
using the ASCII code, with 8 bits per letter:
input size = 80n.
n² = O(n²)
0.02n² + 127n + 1923 = O(n²)
3n log n + n = O(n log n)

O(1): constant
O(n log n) = O(log n!): linearithmic, loglinear, or quasilinear
O(n²): quadratic
O(2ⁿ): exponential
O(n!): factorial
O(n · n!): n times n factorial
Examples of Graph Algorithms:

Depth-first Search:
• traverses the graph, visiting all vertices exactly once
• a new member 'mark' is added to the vertex structure
• it is initialized with the value 1 and given the value 0 when the vertex is visited

struct vertex {
  int mark;
  ...
};

dfs(struct vertex v)
{
  v.mark ← 0;
  "process v";
  for each (v, u) ∈ E {
    "process (v, u)";
    if (u.mark)
      dfs(u);
  }
}

main()
{
  for each v ∈ V
    v.mark ← 1;
  for each v ∈ V
    if (v.mark)
      dfs(v);
}
(Figure: (a) an example graph; (b) the order of visits starting from dfs(v1): e1 = (v1, v2), dfs(v2); e3 = (v2, v4), dfs(v4); e4 = (v4, v3), dfs(v3); e5 = (v2, v5), dfs(v5); finally e2 = (v1, v3) finds v3 already marked.)
Each vertex is visited exactly once, and therefore all edges are also visited exactly once.
Assuming that the generic vertex and edge actions have a constant time complexity, this leads to a time complexity of O(n + |E|), where n = |V|.
Depth-first search can be used to find all vertices connected to a specific vertex v.
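The pseudocode above translates almost directly into Python; the 'mark' attribute becomes a visited set, and the example graph is an assumption for illustration.

```python
# Minimal depth-first search over an adjacency-list graph.

def dfs(adj, v, visited, order):
    visited.add(v)           # mark v as visited ("process v")
    order.append(v)
    for u in adj[v]:         # edge (v, u) would be processed here
        if u not in visited:
            dfs(adj, u, visited, order)

def dfs_all(adj):
    """Restart dfs as in main() so that every vertex is visited exactly once."""
    visited, order = set(), []
    for v in adj:
        if v not in visited:
            dfs(adj, v, visited, order)
    return order

adj = {1: [2, 3], 2: [4, 5], 3: [], 4: [3], 5: []}
```

Each vertex and each edge is handled once, matching the O(n + |E|) bound stated above.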
Breadth-first Search:
• directed graphs represented by an adjacency list
• the central element is the FIFO queue
• the call shift_in(q, o) adds an object o to the queue q; shift_out(q) removes the oldest object from the queue q
• adding and removing objects from a FIFO queue can be done in constant time
Breadth-first search can likewise be used to find all vertices connected to a specific vertex u; it visits the vertices in order of increasing distance (in number of edges) from u.
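A minimal Python sketch of breadth-first search, where collections.deque plays the role of the FIFO queue with constant-time shift_in/shift_out; the example graph is an assumption.

```python
# Breadth-first search over an adjacency-list graph.
from collections import deque

def bfs(adj, start):
    """Return the vertices reachable from start, in order of increasing
    distance (number of edges)."""
    q = deque([start])            # shift_in(q, start)
    visited = {start}
    order = []
    while q:
        v = q.popleft()           # shift_out(q): removes the oldest object
        order.append(v)
        for u in adj[v]:
            if u not in visited:
                visited.add(u)
                q.append(u)       # shift_in(q, u)
    return order

adj = {1: [2, 3], 2: [4], 3: [4, 5], 4: [], 5: []}
```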
The shortest-path problem becomes more complex if the length of the path between two vertices is not simply the number of edges in the path (weighted graphs).
Dijkstra's Shortest-path Algorithm:
• a weighted directed graph G(V, E) is given
• edge weights w(e), w(e) ≥ 0
• visited vertices are transferred one by one from the set V to a set T
• ordering of the vertices is done using the vertex attribute 'distance'
• initially, the distance attribute of a vertex v is equal to the edge weight w((vs, v)), or +∞ if there is no edge from vs to v

struct vertex {
  int distance;
  ...
};

dijkstra(set of struct vertex V, struct vertex vs, struct vertex vt)
{
  set of struct vertex T;
  struct vertex u, v;
  V ← V \ {vs};
  T ← {vs};
  vs.distance ← 0;
  for each v ∈ V
    if ((vs, v) ∈ E)
      v.distance ← w((vs, v));
    else
      v.distance ← +∞;
  while (vt ∉ T) {
    u ← "the u ∈ V such that ∀v ∈ V : u.distance ≤ v.distance";
    T ← T ∪ {u};
    V ← V \ {u};
    for each v such that (u, v) ∈ E and v ∈ V
      if (v.distance > w((u, v)) + u.distance)
        v.distance ← w((u, v)) + u.distance;
  }
}
(Table: the evolution of the set T and of the distance attributes over the iterations for an example graph.)
vt = v3 is reached after 5 iterations; continuing for one more iteration computes the lengths of the shortest paths for all vertices in the graph.
Each iteration of the while loop takes O(n) time to select the minimum, where n = |V|, and there are O(n) iterations, so the loop contributes O(n²) to the overall time complexity.
All edges are visited exactly once, viz. after the vertex from which they are incident is added to the set T; this gives a contribution of O(|E|).
Worst-case time complexity: O(n² + |E|).
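A direct Python transcription of the algorithm above, keeping the O(n²) minimum selection for fidelity; the weighted graph is an assumed example.

```python
# Dijkstra's algorithm: vertices move one by one from V to T in order of
# their 'distance' attribute.
import math

def dijkstra(n, w, vs):
    """w: dict mapping edge (u, v) -> weight. Returns the list of shortest
    distances from vs to every vertex."""
    dist = [math.inf] * n
    dist[vs] = 0
    for (u, v), weight in w.items():      # initialize from vs's outgoing edges
        if u == vs:
            dist[v] = min(dist[v], weight)
    V = set(range(n)) - {vs}
    while V:
        u = min(V, key=lambda x: dist[x])  # closest unvisited vertex: O(n)
        V.remove(u)                        # transfer u from V to T
        for (a, v), weight in w.items():
            if a == u and v in V and dist[v] > dist[u] + weight:
                dist[v] = dist[u] + weight
    return dist

w = {(0, 1): 2, (0, 2): 5, (1, 2): 1, (1, 3): 4, (2, 3): 1}
```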
Prim's Algorithm for Minimum Spanning Trees
In the mathematical field of graph theory, a spanning tree T of an undirected graph G is a subgraph that includes all of the vertices of G and is a tree.
One gets a spanning tree by removing edges from E until all cycles in the graph have disappeared while all vertices remain connected.
A graph has several spanning trees, all of which have the same number of edges (the number of vertices minus one).
In the case of edge-weighted undirected graphs, the spanning tree with the least total edge weight, also called the tree length, is to be found (the minimum spanning tree problem).
The algorithm starts with an arbitrary vertex, which is considered the initial tree.
struct vertex {
  int distance;
  struct edge *via_edge;
};

prim(set of struct vertex V)
{
  set of struct edge F;
  set of struct vertex W;
  struct vertex u, v;
  u ← "any vertex from V";
  V ← V \ {u};
  W ← {u};
  F ← ∅;
  for each v ∈ V
    if ((u, v) ∈ E) {
      v.distance ← w((u, v));
      v.via_edge ← (u, v);
    }
    else
      v.distance ← +∞;
  while (V ≠ ∅) {
    u ← "the u ∈ V such that ∀v ∈ V : u.distance ≤ v.distance";
    W ← W ∪ {u};
    V ← V \ {u};
    F ← F ∪ {u.via_edge};
    for each v such that (u, v) ∈ E and v ∈ V
      if (v.distance > w((u, v))) {
        v.distance ← w((u, v));
        v.via_edge ← (u, v);
      }
  }
}

(Figure: (a) an example weighted graph; (b) the minimum spanning tree found by the algorithm.)
(Table: for each iteration, the selected vertex u and the distance/via_edge attributes of the remaining vertices, e.g. iteration 0 selects v1 with candidate edges 2, (v1, v2) and 4, (v1, v3).)
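A compact Python sketch of Prim's algorithm following the pseudocode above: each vertex outside the tree keeps a distance (the cheapest connecting edge weight) and a via_edge. The weighted undirected example graph is an assumption.

```python
# Prim's algorithm for a minimum spanning tree.
import math

def prim(n, edges):
    """edges: dict {(u, v): weight} with u < v. Returns the set of tree edges."""
    def wt(u, v):
        return edges.get((min(u, v), max(u, v)), math.inf)
    start = 0
    dist = {v: wt(start, v) for v in range(1, n)}   # cheapest connection so far
    via = {v: (start, v) for v in range(1, n)}      # via_edge attribute
    F = set()
    while dist:
        u = min(dist, key=dist.get)   # cheapest vertex to attach to the tree
        F.add(via[u])
        del dist[u]
        for v in dist:                # update distances of remaining vertices
            if wt(u, v) < dist[v]:
                dist[v] = wt(u, v)
                via[v] = (u, v)
    return F

edges = {(0, 1): 1, (0, 2): 4, (1, 2): 2, (1, 3): 6, (2, 3): 3}
```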
Tractable and Intractable Problems:
If the variables range over real numbers, the problem is called a continuous optimization problem.
If the variables are discrete, i.e. they can only assume a finite number of distinct values, the problem is called a combinatorial optimization problem.
One could associate a Boolean variable bi with each edge ei: bi = 1 means that the edge is "selected" and bi = 0 means that it is not.
Solving the shortest-path problem for this graph can then be seen as assigning Boolean values to the variables bi, making the problem combinatorial.
A combinatorial optimization problem is defined as the set of all the instances of the problem, each instance I being defined as a pair (F, c).
F is called the set of feasible solutions (or the search space), and c is a function assigning a cost to each element of F.
Solving a particular instance of a problem consists of finding a feasible solution f ∈ F with minimal cost.
The traveling salesman problem (TSP):
• Given a list of cities and the distances between each pair of cities, what is the shortest possible route that visits each city exactly once and returns to the origin city?
• TSP can be modelled as an undirected weighted graph, such that cities are the graph's vertices, paths are the graph's edges, and a path's distance is the edge's length.
• It is a minimization problem starting and finishing at a specified vertex after having visited each other vertex exactly once.
• Often, the model is a complete graph.
• Any permutation of the cities defines a feasible solution, and the cost of a feasible solution is the length of the cycle represented by the solution.
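Since the search space is the set of permutations of the cities, a brute-force solver simply scores every permutation. The symmetric 4-city distance matrix below is an assumed example.

```python
# Exhaustive TSP: enumerate all permutations and keep the cheapest tour.
from itertools import permutations

def tour_length(d, tour):
    """Length of the cycle that visits each city once and returns to the start."""
    return sum(d[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def tsp_exhaustive(d):
    n = len(d)
    # Fix city 0 as the start so rotations of the same cycle are not recounted.
    best = min(permutations(range(1, n)),
               key=lambda p: tour_length(d, (0,) + p))
    return (0,) + best, tour_length(d, (0,) + best)

d = [[0, 2, 9, 10],
     [2, 0, 6, 4],
     [9, 6, 0, 3],
     [10, 4, 3, 0]]
```

With (n − 1)! permutations to inspect, this illustrates why TSP is the standard example of an intractable combinatorial problem.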
(Figure: example feasible solutions of a TSP instance on cities c1–c9.)
The evaluation version merely asks for the length of the shortest path.
Decision problems: these are problems that only have two possible answers: "yes" or "no".
If the optimization version can be solved in polynomial time, then the decision version can also be solved in polynomial time.
The converse does not obviously hold: if there is an algorithm that is able to decide in polynomial time whether there is a solution with cost less than or equal to k, it is not always obvious how to obtain the solution itself in polynomial time.
An interesting subset of instances is formed by those instances for which the answer to
the question is "yes".
The class of decision problems for which an algorithm is known that operates in polynomial
time is called P (which is an abbreviation of "polynomial").
The machine splits itself into as many copies as there are choices, evaluates all choices in
parallel, and then merges back to one machine.
Complexity class NP: The complexity class NP (an abbreviation of "nondeterministic
polynomial") consists of those problems that can be solved in polynomial time on a
nondeterministic computer.
Any decision problem for which solution checking can be done in polynomial time is in NP.
Examples:
HAMILTONIAN CYCLE: does a given undirected graph G(V, E) contain a so-called Hamiltonian cycle, i.e. a simple cycle that goes through all vertices of V?
TRAVELING SALESMAN: the decision version of TSP amounts to answering the question of whether there is a tour (simple cycle) through all vertices, the length of which is less than or equal to k.
(Figure: a polynomial transformation: an instance I1 of a decision problem is transformed in polynomial time into an instance I2 of a second problem; a polynomial-time algorithm for the second problem then yields the yes/no answer for I1.)
Nondeterministic computer: the Turing machine (a mathematical model) is a computer with a sequentially accessible memory (a "tape") and a very simple instruction set.
The set only includes instructions for writing a symbol (from a finite set) to the memory location pointed at by the memory pointer and for moving the pointer one position up or down.
A finite number of "internal states" should also be provided for a specific Turing machine.
The input to the algorithm to be executed on a Turing machine is the initial state of the memory.
The machine stops when it enters one of the special internal states labeled "yes" and "no" (corresponding to the answers to a decision problem).
General-purpose Methods for Combinatorial Optimization:
The algorithm designer has three possibilities when confronted with an intractable problem:
1. Try to solve the problem exactly, if the problem size is sufficiently small, using an algorithm that has an exponential (or even higher-order) time complexity in the worst case.
   - The simplest way to look for an exact solution is exhaustive search: it simply visits all points in the search space in some order and retains the best solution visited.
   - Other methods only visit part of the search space, although the number of points visited may still grow exponentially (or worse) with the problem size.
2. Approximation algorithms.
3. Heuristic algorithms.
The Unit-size Placement Problem:
The interconnections to be made are specified by nets; a net can be seen as a set of cells that share the same electrical signal.
Placement is the problem of assigning a location to each cell such that the total chip area occupied is minimized.
As the number of cells is not modified by placement, minimizing the area amounts to avoiding empty space and keeping the wires that will realize the interconnections as short as possible.
The cells in the circuit are supposed to have a layout with dimensions 1×1 (measured in some abstract length unit), and it can be assumed that the only positions at which a cell can be put on the chip are the grid points of a grid created by horizontal and vertical lines with unit-length separation.
(Figure: the unit cell dimensions follow from the metal-1 and metal-2 wire pitches.)
A nice property of unit-size placement is that the assignment of distinct coordinate pairs to each cell guarantees that the layouts of the cells will not overlap.
If the range of coordinates available in the two dimensions is fixed in advance, the only contribution to the cost function will come from the area occupied by the wiring.
One possible way to evaluate the quality of a solution for unit-size placement is to route all nets and measure the extra area necessary for wiring.
(Example: a circuit with cells A–G and nets n1: A, B, F, G; n2: B, E; n3: D, E; n4: A, C, D; n5: C, D, F; n6: C, E, F, G; n7: D, F; n8: F, G. The figures show (a) the netlist, (b) a placement on the grid, and (c), (d) the routed interconnections.)
A bad placement will have longer connections which normally will lead to more routing
tracks between the cells and therefore to a larger chip area.
Solving the routing problem is an expensive way to evaluate the quality of a
placement.
This is especially the case if many tentative placements have to be evaluated in an
algorithm that tries to find a good one.
An alternative used in most placement algorithms is to only estimate the wiring area.
Backtracking and Branch-and-bound:
An instance I of a combinatorial optimization problem was defined by a pair (F, c), with F the set of feasible solutions (also called the search space or solution space) and c a cost function assigning a real number to each element of F.
For the shortest-path example, the explicit constraints state that fi ∈ {0, 1} for all i; the implicit constraints say that the edges selected by the variables should form a path.
Backtracking:
The principle of using backtracking for an exhaustive search of the solution space is to start with an initial partial solution in which as many variables as possible are left unspecified, and then to systematically assign values to the unspecified variables until either a single point in the search space is identified or an implicit constraint makes it impossible to process more unspecified variables.
The cost of the feasible solution found can be computed once all variables are specified.
The algorithm continues by going back to a partial solution generated earlier and then assigning a next value to an unspecified variable (hence the name "backtracking").
It is assumed that all variables fi have type solution_element.
The partial solutions are generated in such a way that the variables fi are specified for 1 ≤ i ≤ k and are unspecified for i > k; partial solutions having this structure are denoted by f⁽ᵏ⁾.
f⁽ⁿ⁾ corresponds to a fully-specified solution (a member of the set of feasible solutions).
The global array val corresponds to the vector f⁽ᵏ⁾; the value of fk is stored in val[k − 1], so the values of array elements with index greater than or equal to k are meaningless and should not be inspected.
The procedure cost(val) is supposed to compute the cost of a feasible solution using the cost function c; it is only called when k = n.
float best_cost;
solution_element val[n], best_solution[n];

backtrack(int k)
{
  float new_cost;
  if (k = n) {
    new_cost ← cost(val);
    if (new_cost < best_cost) {
      best_cost ← new_cost;
      best_solution ← copy(val);
    }
  }
  else
    for each (el ∈ allowed(val, k)) {
      val[k] ← el;
      backtrack(k + 1);
    }
}

main()
{
  best_cost ← ∞;
  backtrack(0);
  report(best_solution);
}

The procedure allowed(val, k) returns the set of values allowed by the explicit and implicit constraints for the variable fk+1, given f⁽ᵏ⁾.
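The skeleton above can be sketched in Python; the toy instance (pick distinct values from {0, 1, 2} and minimize a weighted sum) and its allowed/cost functions are assumptions for illustration.

```python
# Backtracking skeleton: enumerate all feasible solutions, keep the cheapest.
import math

def backtrack_min(n, allowed, cost):
    best = {"cost": math.inf, "solution": None}
    val = [None] * n

    def backtrack(k):
        if k == n:                       # fully specified solution f^(n)
            c = cost(val)
            if c < best["cost"]:
                best["cost"], best["solution"] = c, val[:]
        else:
            for el in allowed(val, k):   # explicit + implicit constraints
                val[k] = el
                backtrack(k + 1)
            val[k] = None                # back to the partial solution f^(k)
    backtrack(0)
    return best["solution"], best["cost"]

# Toy instance: distinct values from {0, 1, 2}; cost is a weighted sum.
allowed = lambda val, k: [x for x in (0, 1, 2) if x not in val[:k]]
cost = lambda val: val[0] * 1 + val[1] * 2 + val[2] * 3
```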
(Figure: a TSP instance on cities A–F with edge weights, and the backtracking tree that enumerates all tours; the numbers below the leaves, e.g. 27, 31, 33, 20, are the tour costs.)
Branch-and-bound:
• Information about a certain partial solution f⁽ᵏ⁾, 1 ≤ k < n, can indicate that any fully-specified solution f⁽ⁿ⁾ ∈ D(f⁽ᵏ⁾) derived from it can never be the optimal solution.
• If inspection of f⁽ᵏ⁾ can guarantee that all of the solutions belonging to D(f⁽ᵏ⁾) have a higher cost than some solution already found earlier during the backtracking, none of the children of f⁽ᵏ⁾ need any further investigation.
• One says that the node in the tree corresponding to f⁽ᵏ⁾ can be killed.
The procedure lower_bound_cost is called to get a lower bound on the cost of the solutions derivable from a partial solution, based on the cost function c.
(Figure: the same TSP search tree explored with branch-and-bound; nodes are annotated with "cost so far + lower bound", e.g. 22+9, and a cross means that the node is killed.)
An essential point is that the function computing the lower bound of the solutions in D(f⁽ᵏ⁾) should, in general, be easier to compute than the mere traversal of the subtree below f⁽ᵏ⁾, in order to have some computational gain.
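A branch-and-bound sketch in Python: the same recursive skeleton as backtracking, but a node is killed when "cost so far + lower bound" cannot beat the best solution found. The instance (pick one cost per variable, minimize the sum) and the bound (sum of the cheapest remaining options) are assumptions for illustration.

```python
# Branch-and-bound: kill nodes whose optimistic completion cannot improve
# on the best solution found so far.
import math

def branch_and_bound(options):
    """options[k]: list of candidate costs for variable k; minimize the sum."""
    n = len(options)
    # Lower bound for completing from level k: cheapest option of every
    # remaining variable (an optimistic, hence valid, bound).
    tail_bound = [0] * (n + 1)
    for k in range(n - 1, -1, -1):
        tail_bound[k] = tail_bound[k + 1] + min(options[k])
    best = [math.inf]

    def bb(k, cost_so_far):
        if cost_so_far + tail_bound[k] >= best[0]:
            return                       # node killed by the lower bound
        if k == n:
            best[0] = cost_so_far        # feasible solution reached
            return
        for c in options[k]:
            bb(k + 1, cost_so_far + c)
    bb(0, 0)
    return best[0]

options = [[3, 1], [5, 2], [4, 4]]
```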
Dynamic Programming:
Dynamic programming can be applied to a problem if there is a rule to construct the optimal solution for p = k (the complete solution) from the optimal solutions of instances for which p < k (a set of partial solutions).
The fact that an optimal solution for a specific complexity can be constructed from the optimal lower-complexity solutions only is essential for dynamic programming.
For p = k, the optimization goal becomes: find the shortest path from vs to all other vertices in the graph, considering paths that only pass through the k closest vertices to vs.
The optimal solution for the instance with p = 0 is found in a trivial way by assigning the
edge weight w((vs, u)) to the distance attribute of all vertices u.
Suppose that the optimal solution for p = k is known and that the k closest vertices to vs
have been identified and transferred from V to T.
Then, solving the problem for p = k+1 is simple: transfer the vertex u in V having
the lowest value for its distance attribute from V to T and update the value of the distance
attributes for those vertices remaining in V.
A variable xi that may assume negative values can be replaced by the difference xi′ − xi″ of two new variables that are both restricted to be positive.
Standard form of a linear program:
  minimize cᵀx subject to Ax = b, x ≥ 0
For TSP with edges e1, ..., ek:
  cost function = Σᵢ₌₁ᵏ w(eᵢ)·xᵢ, with xᵢ ∈ {0, 1}, i = 1, 2, ..., k.
In the optimal solution, only those xi that correspond to edges in the optimal tour have the value 1.
local_search()
{
  struct feasible_solution f;
  set of struct feasible_solution G;
  f ← initial_solution();
  do {
    G ← {g | g ∈ N(f), c(g) < c(f)};
    if (G ≠ ∅)
      f ← "any element of G";
  } while (G ≠ ∅);
  report f;
}

• The principle of local search is to subsequently visit a number of feasible solutions in the search space.
• N(f) denotes the neighbourhood of f: the feasible solutions reachable from f in one step.
• The transition from one solution to the next in the neighbourhood is called a move or a local transformation.
Simulated annealing (fragment): moves are accepted or rejected depending on a temperature T; an inner loop runs until thermal_equilibrium(), then T ← new_temperature(T), and the outer loop repeats until stop(), after which f is reported.
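The basic local-search loop can be sketched in Python for a toy TSP instance, using the classic pairwise-exchange move (swap two cities). The distance matrix and the "take the first improving neighbour" strategy are assumptions for illustration.

```python
# Local search on a small TSP: keep applying improving swaps until none exist.

def tour_length(d, tour):
    return sum(d[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def local_search(d, tour):
    tour = list(tour)
    improved = True
    while improved:                      # stop when f is a local minimum
        improved = False
        for i in range(1, len(tour)):
            for j in range(i + 1, len(tour)):
                cand = tour[:]
                cand[i], cand[j] = cand[j], cand[i]   # the move (swap)
                if tour_length(d, cand) < tour_length(d, tour):
                    tour, improved = cand, True       # accept cheaper neighbour
    return tour, tour_length(d, tour)

d = [[0, 2, 9, 10],
     [2, 0, 6, 4],
     [9, 6, 0, 3],
     [10, 4, 3, 0]]
```

Local search only guarantees a local minimum; simulated annealing and tabu search exist precisely to escape such minima.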
Tabu Search

tabu_search()
{
  struct feasible_solution f, g, b;
  set of struct feasible_solution G;
  queue Q;
  Q ← "empty";
  f ← initial_solution();
  b ← f;
  do {
    G ← "some subset of N(f) not in Q";
    if (G ≠ ∅) {
      g ← "cheapest element of G";
      "shift g into Q";
      f ← g;
      if (c(f) < c(b))
        b ← f;
    }
  } while (G ≠ ∅ and !stop());
  report b;
}
Genetic Algorithms:
Instead of repetitively transforming a single current solution into a next one by the application of a move, the algorithm simultaneously keeps track of a set P of feasible solutions, called the population.
First of all, this approach assumes that all feasible solutions f ∈ F can be encoded by a fixed-length vector f = [f1, f2, ..., fn]ᵀ, as was the case for the backtracking algorithm.
Not only is the number of vector elements n fixed, but the number of bits to represent the value of each element fi (1 ≤ i ≤ n) is also fixed.
The string of bits that specifies a feasible solution in this way is called a chromosome.
Consider an instance of the unit-size placement problem with 100 cells and a 10×10 grid.
As 4 bits are necessary to represent one coordinate value (each value is an integer between 1 and 10), and 200 coordinate values (100 coordinate pairs) specify a feasible solution, the chromosomes of this problem instance have a length of 800 bits.
Given two chromosomes, a crossover operator will use some of the bits of the first parent and some of the second parent to create a new bit string representing the child:
Generate a random number r between 1 and the length l of the bit strings for the problem instance.
Copy the bits 1 through r − 1 from the first parent and the bits r through l from the second parent into the bit string for the child. Sometimes, a second child is generated using the same r, now reversing the roles of both parents when copying the bits.
Suppose that the bit strings of the example represent the coordinates of the placement problem on a 10×10 grid, now with only a single cell to place (an artificial problem).
The bit string for a feasible solution is then obtained by concatenating the two 4-bit values of the coordinates of the cell.
So, f⁽ᵏ⁾ is a placement at position (5, 9) and g⁽ᵏ⁾ one at position (8, 6).
The children generated by crossover represent placements at (5, 14) and (8, 1), respectively.
Clearly, a placement at (5, 14) is illegal: it does not represent a feasible solution, as coordinate values cannot exceed 10.
First parent:  0101 1001 (f⁽ᵏ⁾, i.e. (5, 9))   →   First child:  0101 1110 (f⁽ᵏ⁺¹⁾, i.e. (5, 14))
Second parent: 1000 0110 (g⁽ᵏ⁾, i.e. (8, 6))   →   Second child: 1000 0001 (i.e. (8, 1))
(the cut point falls after the fifth bit)
The combination of the chromosome representation and the crossover operator for generating new feasible solutions may lead to more complications.
Consider e.g. the traveling salesman problem, for which each of the feasible solutions can be represented by a permutation of the cities.
Two example chromosomes for a six-city problem instance with cities c1 through c6 could then look like "c1c3c6c5c2c4" and "c4c2c1c5c3c6".
In such a situation, the application of the crossover operator as described for binary strings is very likely to produce solutions that are not feasible.
Consider again the chromosomes "c1c3c6c5c2c4" and "c4c2c1c5c3c6", cut after the second city.
Then the application of order crossover would lead to the child "c1c3c4c2c5c6".
genetic()
{
  int popsize;
  set of struct chromosome pop, newpop;
  struct chromosome parent1, parent2, child;
  pop ← ∅;
  for (i ← 1; i ≤ popsize; i ← i + 1)
    pop ← pop ∪ {"chromosome of random feasible solution"};
  do {
    newpop ← ∅;
    for (i ← 1; i ≤ popsize; i ← i + 1) {
      parent1 ← select(pop);
      parent2 ← select(pop);
      child ← crossover(parent1, parent2);
      newpop ← newpop ∪ {child};
    }
    pop ← newpop;
  } while (!stop());
  "report best solution";
}

The function select is responsible for the selection of feasible solutions from the current population, favouring those that have a better cost.
The function stop decides when to terminate the search, e.g. when there has been no improvement on the best solution in the population during the last m iterations, where m is a parameter of the algorithm.
The description in the figure deals with the chromosomes that are manipulated, not the feasible solutions themselves.
Possible refinements include:
• giving a stronger preference to parents with a lower cost when selecting pairs of parents to be submitted to the crossover operator;
• working with more sophisticated crossover operators, e.g. operators that make multiple cuts in a chromosome;
• copying some members of the population entirely to the new generation instead of generating new children from them.
Longest Path in a Directed Acyclic Graph:
• A variable pi is associated with each vertex vi to keep count of the incoming edges of vi that have already been processed.
• Because the graph is acyclic, once all incoming edges of vi have been processed, the longest path to vi is known.
• Any data structure that is able to implement the semantics of a "set" can be used for Q.
• All edges in the graph are visited exactly once during the execution of the inner for loop.
longest_path(G)
{
  for (i ← 1; i ≤ n; i ← i + 1)
    pi ← "in-degree of vi";
  Q ← {v0};
  while (Q ≠ ∅) {
    vi ← "any element from Q";
    Q ← Q \ {vi};
    for each vj such that (vi, vj) ∈ E {
      xj ← max(xj, xi + dij);
      pj ← pj − 1;
      if (pj = 0)
        Q ← Q ∪ {vj};
    }
  }
}

main()
{
  for (i ← 0; i ≤ n; i ← i + 1)
    xi ← 0;
  longest_path(G);
}
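A Python version of the algorithm above: a vertex enters the work set only when all of its incoming edges have been processed (the p-counters), so every edge is handled exactly once. The DAG and its edge lengths are an assumed example, and the source is assumed to have in-degree 0.

```python
# Longest paths from a source in a DAG, via in-degree counting.
from collections import deque

def longest_paths(n, edges, source):
    """edges: dict {(u, v): length}. Returns the longest-path length x[v]
    from source to each vertex v (0 for unreachable vertices)."""
    x = [0] * n
    p = [0] * n                      # unprocessed incoming edges per vertex
    for (u, v) in edges:
        p[v] += 1
    q = deque([source])
    while q:
        u = q.popleft()
        for (a, v), d in edges.items():
            if a == u:               # process edge (u, v)
                x[v] = max(x[v], x[u] + d)
                p[v] -= 1
                if p[v] == 0:        # all incoming edges done: x[v] is final
                    q.append(v)
    return x

edges = {(0, 1): 3, (0, 2): 5, (1, 3): 4, (2, 3): 1, (2, 4): 2}
```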
(Figure and table: an example DAG with edge lengths, and the evolution of Q, the counters p1–p5 and the longest-path lengths x1–x5 during execution.)
Layout Compaction:
At the lowest level, the level of the mask patterns for the fabrication of the circuit, a final optimization can be applied to remove redundant space; this optimization is called layout compaction.
Rigid rectangles correspond to transistors and contact cuts, whose length and width are fixed: when they are moved during a compaction process, their lengths and widths do not change.
When one-dimensional compaction tools are used, the layout elements are only moved along one direction (either vertically or horizontally); this means that the tool has to be applied at least twice, once for horizontal and once for vertical compaction.
A minimum-distance design rule between two rectangle edges can be expressed as an inequality xj − xi ≥ dij.
For example, if the minimum width for the layer concerned is a and the minimum separation is b, one obtains constraints such as:
x2 − x1 ≥ a
x3 − x2 ≥ b
A graph representing all these inequalities is called a constraint graph: each variable xi becomes a vertex vi, and each inequality xj − xi ≥ dij becomes an edge (vi, vj) with weight dij.
There is a source vertex v0 located at x = 0.
A constraint graph derived from only minimum-distance constraints has no cycles: it is a directed acyclic graph.
The length of the longest path from the source vertex v0 to a specific vertex vi in the constraint graph G(V, E) gives the minimal x-coordinate xi associated with that vertex.
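This reduction can be sketched in Python: each rule xj − xi ≥ dij becomes an edge, and the minimal legal coordinates are longest-path lengths from the source. The constraint set (width a = 2, separation b = 1) is an assumed example, and simple repeated relaxation stands in for the longest-path algorithm above.

```python
# One-dimensional compaction as a longest-path computation.

def compact(n, constraints):
    """constraints: list of (i, j, d) meaning x_j - x_i >= d; vertex 0 is the
    source at x = 0. Assumes the constraint graph is acyclic."""
    x = [0] * n
    # n relaxation passes suffice for a sketch (Bellman-Ford style).
    for _ in range(n):
        for i, j, d in constraints:
            if x[j] < x[i] + d:
                x[j] = x[i] + d       # push x_j to the minimal legal position
    return x

a, b = 2, 1
constraints = [(0, 1, 0),   # x1 may start at the source position
               (1, 2, a),   # minimum width: x2 - x1 >= a
               (2, 3, b),   # minimum separation: x3 - x2 >= b
               (3, 4, a)]   # width of the second pattern
```

Every coordinate ends up as small as the design rules allow, which is exactly the compaction objective.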