
Question 1: What are the goals of synthesis?

There are mainly three goals of synthesis, all achieved without changing the functionality:

Reduce the area (reduces chip cost)

Increase the performance

Reduce the power

Question 2: What are the technology-dependent inputs in PNR

There are three main technology-dependent inputs:

Physical libraries --> format is .lef ---> given by vendors

Technology file --> format is .tf ---> given by the foundry

TLU+ file --> format is .tluplus --> given by the foundry

Question 3: What are the design-dependent inputs in PNR

There are six main design-dependent inputs:

Logical libraries --> format is .lib ---> given by vendors

Netlist ---> format is .v --> given by the synthesis team

Synopsys Design Constraints --> format is .sdc --> given by the synthesis team

MMMC --> format is .tcl ---> given by the top level

UPF --> format is .upf ---> given by the top level

SCAN DEF --> format is .def ---> given by the synthesis team

Question 4: What are the types of cells in PNR

There are four main types of PNR cells:

Std cells

Hard macro

IO pads

Physical cells (end cap, welltap, tie cells, Decap cells, Filler cells)

Question 5: What are the types of IO pads

Signal pad
Power / ground pads

Filler pads

Corner pads

Bond pads

Question 6: What is the functionality of IO pads

Answer: The purpose of an IO pad is electrostatic discharge (ESD) protection and level shifting.

Electrostatic discharge (ESD) is a sudden flow of static electricity between two electrically charged
objects for a very short duration of time.

A level shifter is an interfacing circuit which can interface the low core voltage to the high input-output
voltage.

🔍 Question 7: What is the use of a bond pad

Answer: A bond pad is used to connect the circuit on a die to a pin on the packaged chip

🔍 Question 8: How does the tool differentiate std cells, IO pads and macros

Answer: The tool identifies std cells, IO pads and macros by their class:

Std cells → class CORE

IO pads → class PAD

Hard macros → class BLOCK

🔍 Question 9: What is the difference between a soft and a hard macro

Answer:

Hard macro:

Examples are SRAM memories and analog macros like PLLs, digital-to-analog and analog-to-digital converters,
etc.

Fixed height and width, no optimization possible; we cannot even see the internal logic

Physical info available

Soft macro:
A soft macro is available as either a netlist or RTL

We can change the height and width

No physical info

Flexible, dimensions can be changed

🔍 Question 10: How does the tool calculate the area of rectilinear blocks

Answer: The tool calculates a rectangle's area from its lower-left and upper-right coordinates.

For a rectilinear block, the tool first splits the shape into several smaller rectangles and then applies the same
lower-left and upper-right coordinate strategy to each rectangle and sums the areas.
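
A minimal sketch of that decomposition idea in Python, using hypothetical coordinates (an L-shaped block split into two rectangles):

```python
# Area of a rectilinear (L-shaped) block computed as a sum of rectangles.
# Each rectangle is given by its lower-left (llx, lly) and upper-right (urx, ury) corners.
# Coordinates below are hypothetical, in microns.

def rect_area(llx, lly, urx, ury):
    """Area of one rectangle from its lower-left and upper-right coordinates."""
    return (urx - llx) * (ury - lly)

# An L-shaped macro decomposed into two non-overlapping rectangles.
rectangles = [
    (0.0, 0.0, 100.0, 40.0),   # bottom bar of the L
    (0.0, 40.0, 40.0, 100.0),  # vertical bar of the L
]

total_area = sum(rect_area(*r) for r in rectangles)
print(f"rectilinear block area: {total_area:.1f} um^2")
```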

Question 11: Can we rotate a macro by 90 or 270 degrees

Above 90 nm we can rotate the macro, but at lower nodes we can't because of the poly orientation
restriction.

🤯 Question 12: Assume you have three types of blocks with 7, 9 and 12 metal layers in 28 nm technology.
Which has more performance and why

The 12-metal-layer design has more metal resources and wider upper metals, so the power plan and clock tree have less IR drop
compared to the 7- and 9-metal-layer designs; hence the 12-layer design has more performance.

💡 Question 13: Which input files contain resistance and capacitance values

The interconnect technology format (ITF) file or TLU+ file, and the technology file.

The technology file also contains resistance and capacitance values, but those are not accurate
compared to the ITF file.

📏 Question 14: We have different RC corners, am I right? Why do we have different RC corners

Metal etching introduces small variations in interconnect dimensions. At older (higher) technology nodes the
impact is small, but at lower technology nodes it is significant. That is why the ITF data accounts for etching
variation through the Cbest, Cworst, RCbest, RCworst and Typical RC corners.

Question 15: How does a multi-cut via increase performance and yield
A multi-cut via has less IR drop because the parallel resistance decreases, so performance increases.

If any one of the vias fails, the connection is still present because of the other via; but if a single-cut
via fails, the net will be disconnected and it will cause chip failure.
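
A minimal sketch of the yield argument, assuming each via cut fails independently with a hypothetical probability p:

```python
# Yield illustration for single-cut vs. double-cut vias.
# Assumption: each via cut fails independently with probability p (hypothetical value).
p = 1e-4                      # assumed failure probability of one via cut

single_cut_open = p           # net opens if the only cut fails
double_cut_open = p * p       # net opens only if both cuts fail

print(f"open probability, single cut: {single_cut_open:.1e}")
print(f"open probability, double cut: {double_cut_open:.1e}")

# Parallel resistance: two identical cuts of resistance R give R/2, lowering IR drop.
R_via = 10.0                  # assumed resistance of one via cut (ohms)
print(f"effective resistance with 2 cuts: {R_via / 2:.1f} ohms")
```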

Question 16: In which stage is a normal flop converted into a scan flop

🔗 In the synthesis stage we convert the normal flops to scan flops when we are setting up DFT

Command: compile -scan [in the Design Compiler tool]💥

🌐 Question 17: What is the difference between a normal flop and a scan flop🌐

🔗 A scan flop has two extra inputs, scan input [SI] and scan enable [SE]; only by using these can we apply
the DFT test vectors and test the design [a scan flop is a normal flop + a mux]

Note: refer to the image below for more understanding🌈

⚡ Question 18: What is a scan chain and where is it used⚡

🔗 In a scan chain, a flop's Q pin is connected directly to the next flop's SI pin; this path is enabled by the scan
enable (SE) pin and is used for DFT testing

Note: refer to the image below for more understanding🌌

🔍 Question 19: What is the formula for core, die and std cell utilization 🔍

🔗Core utilization = {std cell area + Macro area } / Total Core area

Die utilization = {std cell area + Macro area + IO area } / Total Die area

Std cell utilization = Area of std cell / {Core area - Macro area}🎉
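
A minimal sketch of these three formulas in Python, using hypothetical areas:

```python
# Minimal sketch of the utilization formulas above, using hypothetical areas (um^2).
std_cell_area = 400_000.0
macro_area    = 250_000.0
io_area       = 50_000.0
core_area     = 1_000_000.0
die_area      = 1_300_000.0

core_util     = (std_cell_area + macro_area) / core_area
die_util      = (std_cell_area + macro_area + io_area) / die_area
std_cell_util = std_cell_area / (core_area - macro_area)

print(f"core utilization:     {core_util:.2%}")
print(f"die utilization:      {die_util:.2%}")
print(f"std cell utilization: {std_cell_util:.2%}")
```
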
📊 Question 20: What is the formula for channel spacing 📊

🔗 Channel spacing = {no. of pins X pitch of the higher metal layer} / {available metal layers / 2}🔮
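
A minimal sketch of that channel-spacing estimate, with hypothetical values:

```python
# Channel-spacing estimate per the formula above, with hypothetical numbers.
num_pins         = 200      # macro pins that must route through the channel (assumed)
pitch_um         = 0.20     # pitch of the higher routing metal layer, in um (assumed)
available_layers = 6        # metal layers usable for channel routing (assumed)

# Roughly half the layers route in each direction, hence the divide-by-2 in the formula.
channel_spacing_um = (num_pins * pitch_um) / (available_layers / 2)
print(f"estimated channel spacing: {channel_spacing_um:.2f} um")
```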

Question 21: How is the driving strength of a cell increased

The driving strength of a cell is increased by fingering.

Let us understand fingering by comparing it with a real-life scenario. Let us say you have a tap which
draws out water. How can you make the tap pull out more water? By increasing the diameter of
the tap: the larger the diameter, the more water flows out.

Increasing the diameter of the tap is analogous to increasing the height or length of the transistor
region so as to deliver more current.

Now I will give a constraint that you cannot increase the diameter of the pipe to draw out
more water. So what do you do to get more water? Simple: put more pipes in parallel so as to
increase the water flow.

This is exactly what we are doing in fingering. We keep the transistor of fixed size so that the height
remains constant but then we put more transistors in parallel so as to deliver more current to the
load.

In this case we join the sources and drains in a chained fashion so that they act like taps in
parallel. In general, when we say a 2x, 4x or 8x drive-strength buffer, it means the cell can deliver
roughly 2, 4 or 8 times the current of the normal 1x buffer by employing that many fingers or even more.

Another advantage of fingering is that resistance reduces drastically. Let us say you have
a resistance of R as in the given figure. When fingering is done, all these resistances come in
parallel; hence the resistance reduces by a factor of N. This is another remarkable advantage of
fingering.

Note: For more information, refer to the image below.
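
A minimal sketch of how fingering scales current and resistance, assuming N identical fingers in parallel (all values hypothetical):

```python
# How fingering scales drive current and resistance.
# Assumption: N identical fingers act as N transistors (taps) in parallel.
N          = 8        # number of fingers (hypothetical)
I_unit_mA  = 0.5      # drive current of a single 1x finger (assumed)
R_unit_ohm = 2000.0   # output resistance of a single finger (assumed)

total_drive_mA = N * I_unit_mA     # fingers in parallel add their currents
effective_R    = R_unit_ohm / N    # parallel resistances divide by N

print(f"drive current with {N} fingers: {total_drive_mA:.1f} mA")
print(f"effective resistance:           {effective_R:.0f} ohms")
```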

Question 22: How do fabrication people create different types of Vt cells like HVT, LVT, SVT etc.

There are two techniques to create the different Vt cells:

If the gate oxide thickness increases, the cell acts like HVT; if the gate oxide thickness decreases, the cell acts
like LVT. Based on the gate oxide thickness they create different Vt cells.

Based on doping concentration they also create different Vt cells: if the doping concentration is high,
the cell's Vt decreases (acts like LVT); if the doping concentration is low, the cell's Vt increases (acts like HVT).

Question 23: What is clock gating, why are we using clock gating and what are the types of clock gating ✨

🔗 Clock gating is a power-saving feature in semiconductor microelectronics that enables switching off
circuits. Many electronic devices use clock gating to turn off buses, controllers, bridges and parts of
processors, to reduce dynamic power consumption.

There are two types of clock gating styles available. They are:

1) Latch-based clock gating

2) Latch-free clock gating.

Latch-free clock gating (AND/OR-based clock gating): The latch-free clock gating style uses a simple
AND or OR gate (depending on the edge on which the flip-flops are triggered). Here, if the enable signal
goes inactive in the middle of the clock pulse, or toggles multiple times, the gated clock output can either
terminate prematurely or generate multiple clock pulses. This restriction makes the latch-free clock
gating style inappropriate for a single-clock flip-flop based design.

Latch based clock gating : The latch-based clock gating style adds a level-sensitive latch to the design
to hold the enable signal from the active edge of the clock until the inactive edge of the clock. Since
the latch captures the state of the enable signal and holds it until the complete clock pulse has been
generated, the enable signal need only be stable around the rising edge of the clock, just as in the
traditional ungated design style

Specific clock gating cells are required in the library to be utilized by the synthesis tools. The availability
of clock gating cells and their automatic insertion by the EDA tools make this a simple low-power
technique. An advantage of this method is that clock gating does not require modifications to the RTL
description.

🔍 Question 24: What is the difference between AND/OR-based and ICG-based clock gating 🔍
🔗 AND/OR-based clock gating can produce glitches, but ICG-based clock gating does not produce glitches; for
more information check the answer above

🌟 Question 25: What is a pad-limited and a core-limited design, and how do we overcome it 🌟

Pad-limited design: a pad-limited design is one in which the die size is determined by the number of pads
rather than by the size of the core. It occurs when the number of pads is relatively high and
therefore requires more silicon space.

To improve core utilization in a pad-limited die, we use IO staggering.

Core-limited design: the area of the die is decided based on the core logic.

Note: the pad-limited and core-limited terms relate to chip level, not block level

Question 26: What is the difference between IO pads and IO pins 🚀

🔗 IO pads are a chip-level concept; they provide both electrostatic discharge (ESD) protection and level shifting. IO pins
are a block-level concept and don't have these features; that's why we use level shifter cells at block level

🌐 Question 27: What is the difference between an IO pin and a terminal 🌐

🔗 There is no difference; the physical representation of the pin is called the terminal.

⚡ Question 28: What are the ways to place IO pins in a design ⚡

🔗 In the ICC2 tool we can place IO pins in three ways

Using pin guides → create_pin_guide and place_pin -self are the commands

Using block pin constraints → set_block_pin_constraints, set_individual_pin_constraints and
place_pin -self are the commands

Using a DEF file from the full chip → just source or read the DEF file

🔍 Question 29: What are the guidelines for pin placement 🔍

🔗 The pins planned to be placed on the top and bottom boundaries need to be on vertical metal layers

The pins planned to be placed on the left and right boundaries need to be on horizontal layers

The terminal has to be aligned to the track

Use the pin depth as the minimum length to avoid min area violations

The terminal should not cross the offset region

If you have sufficient track area, try to place pins with interleaving to overcome congestion

Question 30: Why do we need the MMMC file🌟

🔗 MMMC analysis is very important to perform so that the IC can work in different modes and PVT
(process, voltage and temperature) corners. The variations in PVT can insert extra delay into the circuits, and
due to this delay the timing constraints may not be met. Thus the IC must be robustly checked for every
process corner

Question 31: Which input file contains the high fanout information🔥

🔗 No input file has high fanout information; that's why we need to constrain max_fanout

🌐 Question 32: Which file contains the noise margin information 🌐

🔗 The timing library contains the noise margin information

✨ Question 33: Which file contains the crosstalk information ✨

🔗 The SPEF file has the crosstalk information


🔍 Question 34: What are the guidelines for placing macros in the floor plan 🔍

🔗 Macros should not be placed at the center of the core region; they should be placed at the boundary of the core region

Macro pins need to face towards the std cell region

Macro channels have to be created when placing multiple macros next to each other

Avoid criss-cross connections

Place the macros that interact with IO pins near those IO pins

Macro placement should not block the accessibility of IO pins

Try to place macros belonging to the same hierarchy as a group; do not split the group, as it may lead to
timing issues due to std cell separation

The std cell region needs to be continuous / avoid trapped pockets

🌟 Question 35: What are the types of blockages we use and why 🌟

🔗 Blockages are used in physical design mainly for two purposes, placement and routing.

Placement Blockages: There are mainly three types.

Hard Blockage- No cell is allowed to be placed in a specified region.

Soft Blockage- Only buffers/inverters needed for optimization are allowed to be placed in specified
regions.

Partial Blockage- Any type of cell is allowed, but only up to a defined percentage of the specified region
(like 60% of the specified area).

Routing Blockages: There are mainly two types.

Hard Blockage- No net of any type is allowed to route through the specified region.

Signal Blockage- Data and clock signal nets are not allowed to route through the specified
region, but power nets are allowed to route.

Question 36: What is a keep-out margin or halo 🔥


🔗Keep-out margin: it is a region around the boundary of fixed cells or macro in a block in which no
other cells are placed. The width of the keep-out margin on each side of the fixed cell or macro can
be the same or different.

🌌 Question 37: What is the difference between a keep-out margin and a blockage🌌

🔗 There is not much difference between a keep-out margin and a blockage; the only difference is that a
keep-out margin moves along with the std cell or macro, but a blockage does not move along with the std cell or
macro

✨ Question 38: After loading the design, what sanity checks do we have to do and what do you observe

from them✨

🔗 check_design -all → gives all design-related information

check_timing → reports any constraint-related and timing-related warnings and errors

A few more commands like check_netlist, check_lib.

Note: all the above commands relate to the ICC2 tool

🔍 Question 39: Why are we using boundary or end cap cells? If we place these cells after placement, what

happens🔍

🔗 The end cap cells are physical-only cells placed in the design for the following
reasons:

To reduce the well proximity effect (WPE means Vt variation of a cell because the N-well is not present on
either side)

To protect the gate of a standard cell placed near the boundary from damage during manufacturing.

To avoid base layer DRCs and terminate the N-well continuity (N-well and implant layers) at the
boundary.

To make the proper alignment with the other blocks.

Some standard cell libraries have end cap cells which serve as decap cells also.

If we place end cap cells after std cell placement there is no use, because the std cells will already be sitting
near the boundary

Question 40: Why do we use well tap and tie cells? What happens if we don't use them🔥

🔗 Well tap cells (or tap cells) are used to prevent the latch-up issue in CMOS designs. Well tap cells
connect the N-well to VDD and the p-substrate to VSS in order to prevent latch-up. There is no
logical function in a well tap cell other than providing a tap to the N-well and p-substrate; therefore the well
tap cell is called a physical-only cell

The tie cell is a standard cell, designed specially to provide a high or low signal to the input (gate
terminal) of a logic gate. The high/low signal cannot be applied directly to the gate of a
transistor because of some limitations of transistors, especially at lower nodes. At lower
technology nodes, the gate oxide under the poly gate is very thin and is the most sensitive part of the
transistor. We need to take special care of this thin gate oxide during fabrication (the associated issue is
the antenna effect) as well as in operation. It has been observed that if the polysilicon gate connects
directly to VDD or VSS for a constant high/low input signal, then any surge/glitch that arises in the
supply voltage can damage the sensitive gate oxide. To avoid the damage mentioned above,
we avoid the direct connection from VDD or VSS to the input of any logic gates. A tie cell is used to
connect the input of the logic to VDD or VSS.

🌐 Question 41: How can you estimate the power for your design

Power calculations🌐

🔗 Number of core power pads required for each side of the chip = (Total core power) / {(Number of
sides) * (Core voltage) * (Maximum allowable current for an I/O pad)}

Core current (mA) = (Core power) / (Core voltage)

Core P/G ring width = (Total core current) / {(No. of sides) * (Maximum current density of the metal
layer used for the PG ring)}

Total current = Total power consumption of the chip (P) / Voltage (V)

No. of power pads (Npads) = Itotal / Ip

No. of power pins = Itotal / Ip

Where,
Itotal = Total current

Ip is obtained from the IO library specification.

Total power = Static power + Dynamic power

= Leakage power + [Internal power + Ext switching power]

= Leakage power + [{Short ckt power + Int power} + Ext switching power]

= Leakage power + [(Vdd*Isc) + (C*V*V*F) + (1/2*C*V*V*F)]

Note: in some cases we neglect the internal power
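
A minimal sketch of the power-pad estimate above in Python, using hypothetical numbers:

```python
# Power-pad estimate per the formula above, with hypothetical values.
total_core_power_W = 2.0      # assumed total core power (W)
core_voltage_V     = 0.9      # assumed core supply voltage (V)
num_sides          = 4        # pads distributed on all four sides
I_max_per_pad_A    = 0.05     # assumed maximum allowable current per power pad (A)

total_current_A = total_core_power_W / core_voltage_V              # I = P / V
pads_per_side   = total_current_A / (num_sides * I_max_per_pad_A)  # pads per side of the chip

print(f"total core current: {total_current_A:.2f} A")
print(f"power pads needed per side: {pads_per_side:.1f} (round up)")
```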

Question 42: What are the goals of the power plan🌐

🔗Power planning is typically part of the floor planning stage, in which a power grid network is created
to distribute the power uniformly to each part of the chip.

• Power planning means providing power to every macro, standard cell and all other cells
present in the design.

• Power planning is also called Pre-routing as the Power Network Synthesis (PNS) is done before
actual signal routing and clock routing.

Power ring is designed around the core.

• Power rings contain both VDD and VSS rings. After the ring is placed, a power mesh is designed such that
power reaches all the cells easily; the power mesh is nothing but horizontal and vertical lines on the chip.

During power planning, the VDD and VSS rails also have to be defined.

Objective of power planning is to meet the IR drop budget.

• Power planning involves- calculating number of power pins required, number of rings and straps,
width of rings and straps and IR drop.

⚡ Question 43: What are the techniques used to reduce IR drop⚡

🔗 Methods to reduce the voltage IR drop:

Reducing the wire resistance.

Increasing the number of Vdd and Vss pads in the chip to reduce the current drawn through each pair
of Vdd and Vss pads.

Reducing the current consumption (Iavg) of logic gates


🔍 Question 44: What is the formula for dynamic and static power🔍

🔗 Total Power=Static Power+Dynamic Power

Static Power = Vleakage * Ileakage

Dynamic Power = Short Ckt + Ext Switching Power

Short Ckt = Vdd*Isc

Ext Switching Power = 1/2*C*V*V*F
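
A minimal sketch of the dynamic-power formula above in Python, using hypothetical values (a switching activity factor, often included in practice, is omitted here to match the formula as written):

```python
# Dynamic power per the formula above, with hypothetical values.
Vdd_V = 0.9        # supply voltage (assumed)
C_F   = 1.0e-12    # switched capacitance, 1 pF (assumed)
f_Hz  = 500e6      # clock frequency (assumed)
Isc_A = 1.0e-4     # average short-circuit current (assumed)

short_circuit_W = Vdd_V * Isc_A               # Short Ckt = Vdd * Isc
switching_W     = 0.5 * C_F * Vdd_V**2 * f_Hz # Ext Switching Power = 1/2 * C * V^2 * F

dynamic_W = short_circuit_W + switching_W
print(f"dynamic power: {dynamic_W * 1e3:.3f} mW")
```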

Question 45: What is the difference between flip chip and wire bond design🌌

🔗 Flip chip means the die is "flipped over the circuit board", facing downward instead of facing upward. So
flip chips allow for a large number of interconnects with shorter distances than wires, which greatly
reduces the distance and area. The process of attaching a semiconductor die to a substrate or carrier
with the bond pads facing down is referred to as flip chip.

On the die bond pad, there is a conductive bump that is used to make the electrical connection. The
stand-off space between the die and substrate is normally filled with a non-conductive adhesive
known as underfill once the die is connected. Between the die and carrier, the underfill relieves
tension, increases robustness, and shields the component from any moisture infiltration.

Comparing flip-chip bonding to other connectivity techniques can provide a variety of benefits. Since
the entire region of the die can be used for connections, flip chip bonding can increase the number
of I/O's. The speed of a device can be increased since the connectivity paths are shorter than they
would be with wire bonds. Additionally, the removal of wire bond loops results in a reduced form
factor.

🔮 Question 46: Why are we using decap cells and filler cells 🔮

🔗 Decap cells are basically charge-storing devices made of capacitors and used to support the
instant current requirement in the power delivery network. There are various reasons for the instant
large current requirement in the circuit and if there are no adequate measures taken to handle this
requirement, power droop or ground bounce may occur. These power droop or ground bounce will
affect the constant power supply and ultimately the delay of standard cells may get affected. To
support the power delivery network from such sudden power requirements, Decap cells are inserted
throughout the design

Filler cells primarily are non-functional cells used to continue the VDD and VSS rails. Filler cells are
used to establish the continuity of the N- well and the implant layers on the standard cell rows. The
use of Filler Cells is to reduce the DRC Violations created by the base(N-Well, PPlus & NPlus) layers.

Question 47: What are the floor plan sanity checks 🌌

🔗 check_pin_placement → checks whether ports are aligned with the track, and also reports any missing pins, pin
shorts and technology spacing problems

check_boundary_cells → reports any boundary cell related violations

check_pg_drc → checks for spacing, width and via enclosure related violations on power and ground
nets

check_pg_missing_vias → reports missing vias at insertion points

check_pg_connectivity → reports shorts and opens related to PG nets

Note: above commands related to ICC2 tool

✨ Question 48: Can we do macro placement after the power plan ✨

🔗 No we can't, because it would create DRC violations related to the power plan

🔍 Question 49: Can we do the power plan after routing🔍

🔗 No, because if you preserve the top metal layers for the power plan and the design is too congested, then
there is a chance that proper vias cannot be dropped, which causes more IR drop; and if you do not
preserve the top metals, then you may see even more congestion

🌟 Question 50: What are spare cells and why are we using spare cells 🌟

🔗 Spare cells generally consist of a group of standard cells mainly inverter, buffer, nand, nor, and, or,
exor, mux, flip flops and maybe some specially designed configurable spare cells. Ideally, spare cells
do not perform any logical operation in the design and act as a filler cell only.
The inputs of spare cells are tied to either VDD or VSS through tie cells, and the output is left floating.
The input cannot be left floating, as a floating input is prone to being affected by noise, and this could
result in unnecessary switching in the spare cells, which leads to extra power dissipation.

Spare cells help us fix violations or enable us to modify/improve the functionality of a chip with
minimal changes in the mask. We can use already placed spare cells from the nearby location and
just need to modify the metal interconnect. There is no need to make any changes in the base layers.
Using metal ECO we can modify the interconnect metal connection and make use of spare cells. We
only need to change some metal masks, not the base layer masks.

Question 51: In which stage do we define spare cells🔥

🔗 Mostly spare cells are defined in the floor plan stage; only in some cases do we define them in the
routing stage.

🌐 Question 52: What is the difference between defining spare cells in the floor plan and routing stages 🌐

🔗 If you define spare cells in the floor plan, they will be in a clustered/group format, and if a flop is present in
the spare cells, its clock pin will be balanced in the CTS stage

If you define spare cells during routing, they will not be in a clustered format and the flops' clock pins will also not be
balanced; that is the reason we prefer to place spare cells in the floor plan stage

Question 53: What are the ways to place std cells in the core region

🔗 In ICC2 there are two ways to place std cells in the core region: one is the create_floorplan command, which just
places the std cells in the core region with no legalization and no optimization; the other is the place_opt command, which
optimizes as well as legalizes

🔮 Question 54: What are the five stages of the place_opt command? Explain them 🔮

🔗 The five stages of the place_opt command are:

initial_place

initial_drc
initial_opto

final_place

final_opto

initial_place does coarse placement of the cells; the cells are not legally placed, there can be overlaps, and it
doesn't do any optimization of cells

initial_drc does high fanout net synthesis (HFNS) → the tool addresses max tran and max cap on high fanout
nets; all high fanout nets are buffered

initial_opto is the first stage of optimization to improve timing QoR: it fixes setup, max tran and max
cap on all nets

final_place legalizes the placement of all cells (no overlaps)

final_opto does incremental optimization to fix the leftover and newly created violations

🌟 Question 55: What is global routing and why are we doing global routing 🌟

🔗 The whole core region is divided into an array of rectangular subregions called GRC cells or bins, each of which
may accommodate tens of routing tracks

Global routing is preferred for estimating the interconnect parasitics and the routing congestion map

The height of a GRC cell is 2 times the std cell row height

Each GRC has horizontal and vertical tracks

If the demand is more than the supply or capacity, it is called overflow

If the demand is less than the supply or capacity, it is called underflow

Overflow causes congestion
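
A minimal sketch of per-GRC overflow bookkeeping, with hypothetical demand and capacity values:

```python
# Per-GRC overflow check: each GRC (global routing cell) has a track capacity and a routing demand.
# All names and numbers below are hypothetical.
grcs = [
    {"name": "grc_0_0", "capacity": 20, "demand": 18},
    {"name": "grc_0_1", "capacity": 20, "demand": 25},   # overflow -> congestion hotspot candidate
    {"name": "grc_1_0", "capacity": 20, "demand": 12},
]

for grc in grcs:
    overflow = grc["demand"] - grc["capacity"]
    status = "OVERFLOW" if overflow > 0 else "ok"
    print(f'{grc["name"]}: demand={grc["demand"]}, capacity={grc["capacity"]}, '
          f'overflow={max(overflow, 0)} ({status})')
```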

Question 56: What are the goals of placement 🔥

🔗 There are mainly three goals of placement

Timing aware

Routing aware

Power aware
🌐 Question 57: What are the goals of the floor plan 🌐

🔗 Objectives of the floor plan:

Minimize the area.

Minimize the timing.

Reduce the wire length.

Making routing easy.

Reduce IR drop.

✨ Question 58: What are the reasons for congestion and how do we fix congestion✨

🔗 Pin density: having many pins in a small area is called high pin density; this creates
congestion. To fix congestion due to pin density, apply cell padding or a keep-out margin

Insufficient macro channel spacing: this causes congestion; to fix it, increase the channel
spacing

Rectilinear block corners: these also create congestion; to fix it, create placement blockages

At macro corners: these also create congestion; to overcome this, create placement blockages

🔍 Question 59: How can you control the std cell placement 🔍

🔗 Magnetic placement: pulling the std cell logic towards fixed objects (e.g. IO ports and macro pins
can be assumed to be fixed objects); command: magnetic_placement

Bounds or place bounds (regions in Innovus): create a bound for the related logic with or without
coordinates; command: create_bound

Placement blockages

🌟 Question 60: What are the types of bounds 🌟

🔗 There are two types of bounds, move bounds and group bounds

Move bounds are bounds created with definite coordinates; they are of three
types
Soft move bound: during optimization some cells may go out of the bound to meet timing QoR, and other
logic cells may come and sit in that bound

Hard move bound: the cells should not move out, but other cells are allowed into that bound

Exclusive move bound: the cells do not move out and other cells are not allowed into that bound

Group bounds are created without any location or coordinates; group bounds are of two types, soft and
hard group bounds

Question 61: If timing is bad in your design after the placement stage, then what kind of techniques do you

use to overcome it🌌

🔗 Timing will be bad in the placement stage because of two reasons

Bad placement: if the timing violations come because of bad macro placement, change the macro
placement

Too many buffers added: then check for any constraint-related problem

To fix timing violations in the placement stage

Change the placement timing effort to high

Create group paths and give weightage to the high-frequency clock

Create bounds if the violations are not fixed by the above solutions

⚡ Question 62: Can we do optimization in the placement stage without cell swapping, upsizing and

adding buffers ⚡

🔗 No, without swapping, upsizing and adding buffers we can't do optimization; we can reduce net
length, but that doesn't have much impact

🔮 Question 63: Why are we checking only setup in the placement stage and not hold🔮

🔗 Before CTS we don't know the skew; the tool works on local skew, which impacts timing. The local
skew depends on the launch and capture clock delays, and these delays depend on the placement of the flops. Most
skew will be positive, which is good for setup but bad for hold timing. With an ideal clock the skew is zero, which is
pessimistic for setup timing analysis but not for hold analysis; that's why we check only setup in the
placement stage.
🔍 Question 64: Why are we doing IO buffering, and what happens if we don't do IO buffering🔍

🔗 To maintain good transitions in our block we do IO buffering; if we do not do IO buffering, then in the
case of a bad transition it impacts the cell delays and causes setup timing violations

🌟 Question 65: What is the pipeline concept and why are we using it🌟

🔗 If there is a lot of logic between an input-to-reg or reg-to-output path and it is difficult to meet timing, then we add
registers to meet timing

Question 66: What is a congestion hotspot 🔥

🔗 Any region with too many GRC overflows is called a congestion hotspot.

🌐 Question 67: What are the sanity checks you did in the placement stage and why🌐

🔗 report_timing → check setup timing

report_qor → check setup, max tran, max cap in different scenarios

analyze_design_violation → to clearly check max tran and max cap

report_congestion → to check congestion in the design

report_utilization → to observe how much of the core is utilized after placement

report_design → reports complete design-related information

check_legality → to check whether the placed cells are legally placed or not

✨ Question 68: Why do we have to do scan chain reordering in placement, and what happens if we don't✨

🔗 Scan stitching happens during the synthesis stage based on the connectivity of the flops,
but during actual placement the flops may not be placed close to each other. Due to this, there are
chances of an increase in the overall routing length of the scan chain related nets.

This may lead to congestion in routing-critical designs

To improve routability, we can enable scan reordering, which will try to reduce the route length, as the
reordering of the scan chain will be based on the physical location of the flops (NOTE: to enable scan reordering,
a SCAN DEF is a must)

🔍 Question 69: You don't have any pin or cell density issues and the macro placement is also good, but you are still

getting congestion. What could be the reason🔍

🔗 If all of the above are clean and you are still getting congestion, it means the maximum signal routing layer has been
set too low (for example, your design has 9 metal layers, but if the maximum routing layer is set to 3 then
you will definitely see congestion)

To overcome this, change the maximum routing layer (ICC2 command: set_ignored_layers)

🌟 Question 70: What are the types of CTS🌟

🔗 There are two types of CTS, balanced CTS and unbalanced CTS; balanced CTS is again of two
types, OCV-aware CTS and non-OCV CTS, and OCV-aware CTS is again of two types, H-tree and clock spine

Question 71: What is the difference between CPPR and CRPR

🔗 CPPR is caused mainly by OCV fluctuations, whereas CRPR is an architectural artifact. Many times
your chip is overdesigned due to undue pessimism in timing calculations. Pessimism in timing
analysis makes it difficult for designs to close timing, and it is imperative that the analysis is not overly
pessimistic. There is clock path related pessimism observed in timing calculated in on-chip-
variation mode, and EDA tools have the capability to automatically remove this pessimism during
analysis.

Common Path Pessimism Removal (CPPR): A timing path consists of launch and capture paths. The
launch path has further components – the launch clock path and the data path. In the circuit
snippet below, the launch path is c1->c2->c3 -> CP-to-Q of FF1 -> c5 -> FF2/D. The capture path is c1->c2->c4-
>FF2/CP. Late and early derates are set for cells and nets while doing timing analysis in on-chip-
variation mode. For setup analysis, the STA tool does a late check for the launch clock path and the data path,
and an early check for the capture clock path. However, part of both the capture and launch clock paths is
the same, up to node n1. In the image given, numbers in red denote the max delays (late delay
numbers) and numbers in green are min delays (early delay numbers). Let us assume the net delays
are included in the numbers.

For setup analysis, the launch clock path delay is:

`c1->c2->c3 ->FF1/CP`

`1ns + 1ns + 1ns = 3ns`

The capture clock path delay is

`c1->c2->c4->FF2/CP`

`0.8ns+0.8ns+0.8ns = 2.4ns`

However, part of the clock paths is common, till node n1. It is not realistic that these have two
different delays for the same analysis. Using the late and early timing numbers for the common path
creates unwanted pessimism in timing analysis leading to difficulties in timing closure or overdesign.
Hence removal of this pessimism is necessary.

For the example noted above we can see a “CPPR adjustment” of 0.4, i.e. the skew between the
clock paths will be 0.2ns, instead of 0.6ns.

`+ CPPR Adjustment 0.4`

Clock Reconvergence Pessimism: in some cases clocks reconverge after taking different paths. In the
circuit given below, the clock splits into two different combinational paths before converging through
mux m1. The worst case analysis will have the launch clock path through c3->c4->m1, whereas the
capture clock path goes through c1->m1->c5. However, if this is not a possibility by design, the reconvergence
pessimism should also be removed so as to avoid over-design. In a hold check, since the timing
check is at the same clock edge, this pessimism should always be removed in the analysis. The clock
convergence point is m1/Y.

Launch clock till convergence:

`c3->c4->m1 `

`1ns+1ns+1ns = 3ns`

Capture clock till m1/Y:

`c1->m1`

`0.8ns+0.8ns = 1.6ns`

Clock reconvergence pessimism: 1.4ns
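
A quick arithmetic check of the two pessimism examples above (values taken from the text):

```python
# CPPR: the common launch/capture segment is c1->c2, seen with late (max) and early (min) delays.
late_common  = 1.0 + 1.0      # c1 + c2, late delays (ns)
early_common = 0.8 + 0.8      # c1 + c2, early delays (ns)
cppr_adjustment = late_common - early_common
print(f"CPPR adjustment: {cppr_adjustment:.1f} ns")            # 0.4 ns

launch_late   = 1.0 + 1.0 + 1.0    # launch clock path c1->c2->c3, late delays (ns)
capture_early = 0.8 + 0.8 + 0.8    # capture clock path c1->c2->c4, early delays (ns)
skew_without_cppr = launch_late - capture_early
skew_with_cppr    = skew_without_cppr - cppr_adjustment
print(f"skew without CPPR: {skew_without_cppr:.1f} ns, with CPPR: {skew_with_cppr:.1f} ns")

# Clock reconvergence pessimism: launch via c3->c4->m1 vs. capture via c1->m1 (up to m1/Y).
launch_to_m1  = 1.0 + 1.0 + 1.0    # late delays (ns)
capture_to_m1 = 0.8 + 0.8          # early delays (ns)
print(f"clock reconvergence pessimism: {launch_to_m1 - capture_to_m1:.1f} ns")
```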


Question 72: Why are we building CTS only after placement?

🔗 To build CTS we need fixed flop locations, and we get fixed flop locations only after placement; during
placement the flops keep moving, and only after final placement are they fixed. That is the main reason we are
building CTS only after placement.

🌌 Question 73: On what basis will you say your skew is good🌌

🔗 Whether my skew is good or bad I decide based on my clock period (mostly we consider a value of less
than 10% of the clock period to be a good skew)

✨ Question 74: Consider you have two designs, one with more skew and less latency and one with less

skew and more latency. Which one do you consider and why✨

🔗 To answer this kind of question we need to consider design characteristics, like whether the design is timing
critical or power critical

If the design is timing critical and we take bad skew with good latency, we can't meet the timing;
in this condition I will choose the design with good skew and bad latency

If the design is power critical and we take good skew with bad latency, then the power
consumption is too high and I can't meet the power requirement; in this condition I will choose the design
with bad skew but good latency

Generally this kind of question is asked to check your critical thinking, i.e. whether you are thinking about all
scenarios or not

🔍 Question 75: My skew is zero; is that good or bad 🔍

🔗 If my skew is zero it is good from the timing point of view but bad from the power
point of view, because all the flops switch at the same time, so the dynamic power consumption is too high and it
may damage the power routing also; that is the main reason we are not going with zero skew, we
have to maintain some target

Question 76: What are the types of skew, and which skew does the tool work on🔥

🔗 Skew means the difference between the clock arrival latencies of two flops. There are two
types of skew, local and global skew
Local skew: the difference between the clock arrival latencies of the capture and launch flops is
called local skew; the tool works on local skew
Global skew: the difference between the clock arrival latencies of the maximum path in the design and
the minimum path in the design is called global skew
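
A minimal sketch of the two definitions, using hypothetical clock arrival latencies:

```python
# Local vs. global skew, using hypothetical clock arrival latencies (ns).
latencies = {"ff1": 1.20, "ff2": 1.35, "ff3": 1.05, "ff4": 1.50}

# Local skew: difference between the capture and launch flops of one timing path
# (assume here the path is ff1 -> ff2).
local_skew = latencies["ff2"] - latencies["ff1"]

# Global skew: difference between the maximum and minimum latencies in the design,
# regardless of whether a timing path exists between those flops.
global_skew = max(latencies.values()) - min(latencies.values())

print(f"local skew (ff1 -> ff2): {local_skew:.2f} ns")
print(f"global skew:             {global_skew:.2f} ns")
```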

🌐 Question 77: Consider I have two designs, one with more latency and one with less latency. Which one

do you choose and why🌐

🔗 I will choose the less-latency design, because the buffer levels are added based on the latency; if the
latency is high, it means too many buffers are added in my design, and for those buffers the power consumption
and area are also high, so we have to consider the less-latency design only.

⚡ Question 78: On which reference is CTS built⚡

🔗 The clock tree is built based on sink pins; there are three types of pins
Sink pin: all flop or macro CP pins are sink pins; here the clock propagation stops and they are considered for
balancing
Through pin: it is a flop CP pin, but its Q pin is connected to another flop's CP pin, so it is not considered for
balancing; a combinational cell output connected to a CP pin also comes under through pins
Ignore pin: D pins and reset and set pins come under ignore pins, and they are not considered for balancing

🔮 Question 79: If I don't define a clock, will my CTS build or not🔮

🔗 Yes, CTS will build; the tool considers a default frequency and builds the tree, but it may not meet timing
with the actual frequency
We define the clock in the design so that the tree is built based on it; building with the default is more
pessimistic. Even without a defined clock the tree gets built, but we can't guarantee it will work

🌟 Question 80: What is inter-clock balancing🌟

🔗 If one clock group is balanced with another clock group when a timing path is present between them,
then we call it inter-clock balancing

Question 81: What is useful skew and how does it help timing🔥

🔗 Using skew to address a setup or hold violation is called useful skew; it means we
are playing with the launch or capture clock path to meet timing
We introduce useful skew only where there is enough positive slack with respect to the start point;
this whole process is done automatically by the tool using CCD/CCOpt (concurrent clock and data
optimization), which is an automatic algorithm

🌐 Question 82: A flop has setup and hold values, am I right? Are those values

constant or can they change🌐

🔗 No, those values are not constant; they depend on the data and clock transitions. For
better understanding refer to the image below
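
A minimal sketch of how a setup value could vary with data and clock transition, modeled as a .lib-style 2-D lookup table with bilinear interpolation; the index points and table values below are hypothetical, not from any real library:

```python
clk_trans  = [0.05, 0.10, 0.20]         # clock transition index points (ns), assumed
data_trans = [0.05, 0.10, 0.20]         # data transition index points (ns), assumed
setup_table = [                         # setup time (ns); rows = clk_trans, cols = data_trans
    [0.040, 0.045, 0.055],
    [0.045, 0.050, 0.062],
    [0.055, 0.062, 0.078],
]

def interp1d(x, xs, ys):
    """Linear interpolation of y(x) between the two nearest index points."""
    for i in range(len(xs) - 1):
        if xs[i] <= x <= xs[i + 1]:
            t = (x - xs[i]) / (xs[i + 1] - xs[i])
            return ys[i] + t * (ys[i + 1] - ys[i])
    raise ValueError("x outside table range")

def setup_time(clk_t, data_t):
    # Interpolate along the data axis for each clock row, then along the clock axis.
    rows = [interp1d(data_t, data_trans, row) for row in setup_table]
    return interp1d(clk_t, clk_trans, rows)

print(f"setup @ clk=0.08ns, data=0.15ns: {setup_time(0.08, 0.15):.4f} ns")
```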

⚡ Question 83: Can the flop setup and hold values be negative⚡

🔗 Setup and Hold times in digital circuit design are typically specified as non-negative
values. The setup time refers to the minimum amount of time before the clock edge
at which the input data must be stable and available for the correct operation of the
circuit. The hold time refers to the minimum amount of time after the clock edge
during which the input data must remain stable.
However, in some cases, you may encounter negative setup or hold times. This can
happen due to a variety of reasons, such as:
Clock Skew: Clock skew refers to the variation in arrival times of the clock signal at
different parts of the circuit. If the clock signal arrives earlier at a particular flip-flop
compared to the arrival time at the data input, it can result in a negative setup time.
Similarly, if the clock signal arrives later at the flip-flop compared to the arrival time
at the data input, it can result in a negative hold time.
Process Variations: Process variations in semiconductor manufacturing can cause
variations in the electrical properties of transistors and interconnects. These
variations can lead to negative setup and hold times in some cases.
Timing Analysis Methodology: Some timing analysis methodologies or tools may
allow for negative setup and hold times to be specified. This can be useful in certain
situations where specific timing constraints need to be met, such as in certain high-
speed designs.
Negative setup and hold times are not common in most digital circuit designs and
are typically avoided. They can introduce additional challenges in timing analysis and
make the design more sensitive to variations. Designers strive to ensure positive
setup and hold times to ensure reliable and robust circuit operation.

Question 84: What is the difference between a normal buffer and a clock buffer🔮

🔗In digital circuit design, both normal buffers and clock buffers are used to amplify and shape
signals. However, they serve different purposes and have specific characteristics related to their
usage.
Normal Buffer: A normal buffer, also known as a data buffer or signal buffer, is used to amplify
and buffer a general data signal. It is typically used to drive signals over longer distances, improve
signal integrity, and isolate the source from the load. Normal buffers are used for non-clock
signals in the circuit, such as data inputs, control signals, or any other signals that need to be
propagated through the circuit.
Clock Buffer: A clock buffer, as the name suggests, is specifically designed to handle clock signals.
Clocks are critical for synchronization in digital systems, and clock buffers are used to distribute
clock signals throughout the circuit. They ensure that the clock signal maintains its integrity and
has consistent characteristics across different parts of the design. Clock buffers are optimized for
low skew, low jitter, and fast rise/fall times to ensure accurate clock propagation and
synchronization.
The key differences between normal buffers and clock buffers can be summarized as follows:
Purpose: Normal buffers are used for general data signals, while clock buffers are specifically
designed for clock signals.
Signal Characteristics: Clock buffers are designed to minimize skew, jitter, and provide fast edges
to maintain the integrity and synchronization of clock signals. Normal buffers do not have the
same stringent requirements.
Timing Considerations: Clock buffers play a crucial role in meeting setup and hold time
requirements in synchronous digital systems. They are carefully placed and sized to ensure proper
clock distribution and synchronization. Normal buffers do not have the same timing
considerations.
Design Optimization: Clock buffers are often optimized for low power consumption, low noise,
and high performance to meet the stringent requirements of clock distribution. Normal buffers
may prioritize other design considerations.
It's important to note that while normal buffers and clock buffers have different design
considerations, they can both be implemented using similar circuit topologies (e.g. CMOS
inverters) with appropriate sizing (a clock buffer's PMOS width is about 2.5 times larger compared to a normal
buffer's) and optimization for their specific purposes.

Question 85: For CTS building, which one do you choose, clock buffer or clock inverter 🌟

🔗 In clock tree synthesis (CTS) in VLSI design, the choice between using clock buffers or clock
inverters depends on several factors, including the design specifications, timing requirements,
power considerations, and the specific CTS methodology being employed. Both clock buffers and
clock inverters can be used in CTS, and the selection depends on the design goals and
constraints.
Clock Buffers: Clock buffers are commonly used in CTS for clock distribution. They are designed
to provide low skew, low jitter, and fast rise/fall times, which are essential for maintaining the
integrity and synchronization of clock signals. Clock buffers help amplify the clock signal and
drive it to multiple clock sinks (flip-flops) in the design. They are typically used in clock trees
where the goal is to minimize clock skew and ensure reliable clock distribution.
Clock Inverters: Clock inverters, as the name suggests, invert the clock signal. In certain CTS
methodologies, clock inverters can be used for balancing the clock tree and achieving better
clock skew control. By strategically placing clock inverters, the path lengths of different branches
of the clock tree can be adjusted to reduce clock skew. Clock inverters are used to equalize the
delay of different branches and ensure that the clock signal arrives at the flip-flops with minimal
skew.
The choice between clock buffers and clock inverters in CTS depends on various considerations,
such as:
Clock Skew Control: If minimizing clock skew is a critical objective, clock inverters may be used
strategically to balance the clock tree and equalize path lengths.
Design Constraints: The specific design constraints, such as power consumption, area, and timing
requirements, may influence the choice of clock buffers or clock inverters.
CTS Methodology: Different CTS methodologies may have different recommendations or
preferences for using clock buffers or clock inverters. The chosen methodology and associated
tools may guide the decision-making process.
It's worth noting that in many CTS implementations, a combination of clock buffers and clock
inverters may be used to optimize the clock tree and achieve the desired objectives. The selection
of clock buffers, clock inverters, or a combination thereof should be made based on careful
analysis of the design requirements, timing constraints, and the specific CTS methodology being
employed

Question 86: What is a CTS SPEC file and what does it contain🌟

🔗 In VLSI design, a CTS (Clock Tree Synthesis) SPEC file, also known as a CTS constraints file, is a
file that contains the specifications and constraints for performing clock tree synthesis. It provides
important information to the CTS tool regarding the desired characteristics and requirements of
the clock tree.
The CTS SPEC file typically includes the following information:
Clock Netlist: The CTS SPEC file includes the netlist or connectivity information related to the
clock network. It specifies the clock source(s), clock sinks (typically flip-flops), and the
interconnections between them.
Clock Constraints: The file contains clock-related constraints, such as the desired clock frequency,
clock waveform specifications (rise/fall times, duty cycle), and any other timing requirements
specific to the clock network.
Clock Tree Topology: The CTS SPEC file provides information about the desired clock tree
topology, including the number of levels, the types of buffers or inverters to be used, and their
placement locations.
Buffer Sizing and Placement Constraints: It includes constraints related to buffer sizing, placement
locations, and any specific rules or guidelines for buffer insertion in the clock tree.
Clock Skew and Jitter Constraints: The file may specify constraints related to allowable clock skew
(the variation in arrival times of the clock signal at different points in the clock tree) and clock
jitter (the variation in the timing of clock edges).
Power and Area Constraints: The CTS SPEC file may include constraints related to power
consumption and area utilization of the clock tree. This information helps guide the optimization
process during clock tree synthesis.
Miscellaneous Constraints: Any additional constraints or guidelines specific to the clock tree
synthesis process may be included in the CTS SPEC file.
The CTS SPEC file serves as input to the CTS tool, which uses this information to perform the
clock tree synthesis process. The tool analyzes the input specifications, optimizes the clock tree
topology, inserts buffers or inverters, and performs various optimizations to meet the specified
constraints and requirements.
The specific format and syntax of the CTS SPEC file may vary depending on the CTS tool and
methodology used in the design flow. The tool's documentation or user guide typically provides
information on the required format and the available options for creating the CTS SPEC file.

Question 87: If skew is bad, how can you overcome it🌟

🔗When the arrival times of the clock signal vary at different points in a circuit, it is called clock
skew, which is generally undesirable in VLSI design. To overcome or minimize clock skew, there
are a few approaches:
Balancing the Clock Paths: By adjusting the delays in different branches of the clock network,
such as using buffers or inverters strategically, the path lengths can be equalized. This helps
ensure that the clock signal reaches various parts of the circuit at similar times, reducing clock
skew.
Adding Buffers: Placing buffers at specific locations along the clock paths can amplify the clock
signal and control its arrival time. This helps in reducing clock skew by making the clock signal
more consistent across the circuit.
Considering Skew in Placement: During the physical design phase, careful placement of circuit
elements, such as flip-flops or clock sinks, can help minimize clock skew. By considering the clock
network structure and arranging these elements close to each other, the impact of delays and
variations can be reduced.
Skew-Aware Routing: Similar to placement, routing techniques can be employed that take clock
skew constraints into account. By optimizing the paths that the clock signals take, the effects of
delays and variations can be minimized, resulting in lower clock skew.
Compensation Techniques: In some cases, additional circuitry, like delay elements or phase-
locked loops (PLLs), can be used to actively adjust the clock arrival times and compensate for
skew. These techniques can be effective but may introduce complexity and increased power
consumption.
Optimizing Clock Distribution: Improving the clock distribution network itself, such as using
efficient metal layers, reducing parasitic capacitance, and carefully routing the clock signals, can
help reduce clock skew. These optimizations aim to minimize delays and variations in the clock
paths.

Question 88: If latency is bad, how can you overcome it


Answer 88:When latency, which is the delay in signal propagation, is considered undesirable in
VLSI design, there are several ways to overcome or reduce it:
Pipeline Design: Breaking down complex operations into smaller stages or steps helps reduce
latency. Each step can be executed concurrently, processing multiple operations simultaneously.
Parallel Processing: Using multiple processing units or functional units in parallel can reduce
latency. This means that different parts of the data are processed simultaneously, speeding up
computations.
Optimized Circuit Design: Careful design techniques, such as optimizing gate-level
implementations and minimizing long interconnects, can help reduce latency. These
optimizations focus on improving the speed and efficiency of individual logic gates and
connections.
Increasing Clock Frequency: Raising the clock frequency can potentially reduce latency by
allowing circuits to operate at higher speeds. However, there are limitations due to power
consumption and timing constraints that need to be considered.
Memory Hierarchy and Caching: Implementing memory hierarchy and caching techniques can
improve latency in accessing data. Frequently used data is stored in faster and closer memory
levels, reducing the time required for data retrieval.
Algorithmic Optimization: Analyzing and optimizing algorithms used in the design can reduce
latency. More efficient algorithms or algorithmic improvements can decrease the number of
computational steps or simplify operations, leading to lower latency.
Balancing Trade-offs: Consider the trade-offs between latency, area utilization, and power
consumption. Reducing latency may require additional hardware, which can increase area usage
and power consumption. Finding the right balance is important.
It's important to note that reducing latency often involves a combination of these approaches.
The specific techniques used depend on the design requirements, constraints, and the trade-offs
between performance, power, area, and timing considerations.

Question 89: How does skew affect setup and hold time


Answer 89: Positive skew means the capture clock delay is more than the launch clock delay; it helps setup timing
but can violate hold timing
Negative skew means the launch clock delay is more than the capture clock delay; it helps hold timing but
can violate setup timing
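
A minimal sketch of how skew enters the setup and hold checks, using hypothetical numbers (ns):

```python
T_clk   = 2.0     # clock period (assumed)
t_clk2q = 0.30    # launch flop clock-to-Q delay (assumed)
t_comb  = 1.40    # combinational data path delay (assumed)
t_setup = 0.10    # capture flop setup requirement (assumed)
t_hold  = 0.05    # capture flop hold requirement (assumed)
skew    = 0.20    # capture latency minus launch latency (positive skew)

setup_slack = (T_clk + skew) - (t_clk2q + t_comb + t_setup)   # positive skew relaxes setup
hold_slack  = (t_clk2q + t_comb) - (t_hold + skew)            # positive skew tightens hold

print(f"setup slack: {setup_slack:+.2f} ns")
print(f"hold slack:  {hold_slack:+.2f} ns")
```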

Question 90: Why are we using NDRs in CTS


Answer 90: NDR means non-default rule; here we use double width and double spacing
Double spacing is used to reduce the coupling capacitance, because of which crosstalk is reduced
Double width is used to overcome electromigration

Question 91: You have nine metal layers in your design; which metal layers do you prefer for
CTS and why
Answer 91: The top two metal layers are preferred for power; after that, the 6th and 7th metal layers I
prefer for CTS routing, because the clock is very important in timing analysis and I have to ensure that
my clock net delay is small, only then will I meet the timing

Question 92: What are the default clock skew groups in the design
Answer 92: The tool creates a default clock skew group for every master clock; if you want to overwrite
that, create_clock_skew_group is the ICC2 command

Question 93: Explain the clock_opt command


Answer 93: The clock_opt command has three stages
build_clock → adds the clock buffers based on flop locations
route_clock → routes the clock nets
final_opto → optimizes the clock tree

Question 94: What happens in CTS if the clock is not propagated

📈 Answer 94: If the clock is not propagated then the IO latencies are considered zero, so whatever
optimization is done is not effective optimization

🔍 Question 95: What are the sanity checks you did after CTS
📈 Answer 95: report_timing → check setup timing
report_qor → check setup, max tran, max cap in different scenarios
analyze_design_violation → to clearly check max tran and max cap
report_congestion → to check congestion in the design
report_utilization → to observe how much of the core is utilized
report_design → reports complete design-related information
check_legality → to check whether the placed cells are legally placed or not
report_clock_qor → to know the clock tree related violations
report_clock_timing → to report either skew or latency; there is an option like -type, by using which
we select what to report

🔍 Question 96: What is a clock mesh and where is it used

📈 Answer 96: Just as we build a power mesh, we create a mesh for the clock; it is used to minimize the
skew for high-frequency designs

🔍 Question 97: What is multi-source CTS and why do we use it

📈 Answer 97: We create multiple source points in the design for CTS to minimize the clock latency

Question 99: What is the antenna effect and how do we reduce it

🧠 Answer 99: The charge accumulated on a metal shape during the plasma etching steps of fabrication
can damage the gate oxide, which will lead to chip failure
Fix 1: layer hopping or layer jumping
Fix 2: antenna diode

🎯 Question 100: What is the antenna ratio and where is this information present

🧠 Answer 100: The tool knows about the antenna effect based on the antenna ratio; this information is present in the LEF
file
Antenna ratio = metal shape area / gate area
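
A minimal sketch of an antenna-ratio check, with hypothetical areas and limit:

```python
# Antenna-ratio check per the formula above; all values are hypothetical.
metal_area_um2 = 48.0      # area of the metal shape connected to the gate (assumed)
gate_area_um2  = 0.30      # gate area it drives (assumed)
max_ratio      = 100.0     # allowed antenna ratio from the LEF/rule deck (assumed)

antenna_ratio = metal_area_um2 / gate_area_um2
violates = antenna_ratio > max_ratio

print(f"antenna ratio: {antenna_ratio:.1f} (limit {max_ratio})")
print("violation -> fix by layer hopping or adding an antenna diode" if violates else "ok")
```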

🎯 Question 101: What are the sanity checks you did in the routing stage

🧠 Answer 101: report_timing → check setup timing


report_qor → check setup, max tran, max cap in different scenarios
analyze_design_violation → to clearly check max tran and max cap
report_congestion → to check congestion in the design
report_utilization → to observe how much of the core is utilized
report_design → reports complete design-related information
check_legality → to check whether the placed cells are legally placed or not
report_clock_qor → to know the clock tree related violations
report_clock_timing → to report either skew or latency; there is an option like -type, by using which
we select what to report
report_routes → to report routing-related warnings and errors
check_lvs → to report opens and shorts in the design

🎯 Question 102: What are the types of routing

🧠 Answer 102: There are three types of routing: power routing, clock tree routing and signal routing

🎯 Question 103: In how many phases is routing done

🧠 Answer 103: Routing is done in three phases


Global routing or trial route
Track assignment
Detail routing
