
LVS:

Layout Versus Schematic (LVS) is the class of electronic design automation (EDA)
verification software that determines whether a particular integrated circuit layout corresponds to
the original schematic or circuit diagram of the design.
LVS checking software recognizes the drawn shapes of the layout that represent the electrical
components of the circuit, as well as the connections between them. The netlist extracted from
the layout is then compared by the LVS software against the netlist of the corresponding
schematic or circuit diagram.
LVS checking involves the following three steps:
1. Extraction: The software takes a database file containing all the layers drawn to
represent the circuit during layout. It then runs the database through many area-based
logic operations to determine the semiconductor components represented in the
drawing by their layers of construction. Area-based logical operations take polygon areas
as inputs and generate output polygon areas. These operations are used to define the
device recognition layers, the terminals of these devices, the wiring conductors and via
structures, and the locations of pins (also known as hierarchical connection points).
Various measurements can be performed on the layers that form devices, and these
measurements can be attached to the devices. Layers that represent "good" wiring
(conductors) are usually made of, and called, metals. Vertical connections between these
layers are called vias.
2. Reduction: During reduction the software combines the extracted components into
series and parallel combinations if possible and generates a netlist representation of the
layout database. A similar reduction is performed on the "source" Schematic netlist.
3. Comparison: The extracted layout netlist is then compared to the netlist taken from the
circuit schematic. If the two netlists match, the circuit passes the LVS check and is said
to be "LVS clean." (Mathematically, the layout and schematic netlists are compared by
performing a graph isomorphism check to see whether they are equivalent.)
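The comparison step can be sketched in a few lines of Python. This is a toy illustration, not a real graph isomorphism check: it assumes the layout and schematic already use the same net names, whereas a production LVS tool matches the two netlist graphs without relying on names. Each device here is a hypothetical `(type, value, nets)` tuple.

```python
def canonical(netlist):
    # Sort each device's terminal nets so terminal order does not matter,
    # then sort the whole device list to get a canonical form.
    return sorted((dtype, value, tuple(sorted(nets)))
                  for dtype, value, nets in netlist)

def lvs_compare(schematic, layout):
    # Two netlists "match" here if their canonical device lists are equal.
    return canonical(schematic) == canonical(layout)

schematic = [("nmos", 1.0, ["out", "in", "gnd"]),
             ("pmos", 2.0, ["out", "in", "vdd"])]
layout    = [("pmos", 2.0, ["vdd", "in", "out"]),   # same devices, reordered
             ("nmos", 1.0, ["gnd", "in", "out"])]

print(lvs_compare(schematic, layout))  # True
```

Because both lists are reduced to the same canonical form, device order and terminal order in the drawn layout do not affect the result.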
In most cases the layout will not pass LVS the first time, requiring the layout engineer to examine
the LVS software's reports and make changes to the layout. Typical errors encountered during
LVS include:
1. Shorts: Two or more wires that should not be connected have been connected and must
be separated.
2. Opens: Wires or components that should be connected are left dangling or only partially
connected, and must be connected properly.
3. Component Mismatches: Components of an incorrect type have been used (e.g. a low-Vt
MOS device instead of a standard-Vt MOS device).
4. Missing Components: An expected component has been left out of the layout.

5. Parameter Mismatch: Components in the netlist can carry properties. The LVS tool
can be configured to compare these properties to a given tolerance; if the tolerance is
not met, the LVS run is deemed to have a property error. A checked parameter need not
match exactly and may still pass if it falls within the tool's tolerance. (Example: if a
resistor in the schematic has resistance = 1000 ohms, the extracted netlist has a
matched resistor with resistance = 997 ohms, and the tolerance is set to 2%, then this
device parameter passes: 997 is 99.7% of 1000, within the 98% to 102% range allowed
by the ±2% tolerance.)
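The tolerance check described above is simple enough to sketch directly. The function name and the 2% default are illustrative, not taken from any particular LVS tool:

```python
def param_matches(schematic_value, extracted_value, tol=0.02):
    # A parameter passes if the extracted value lies within +/- tol
    # (as a fraction) of the schematic value.
    low = schematic_value * (1 - tol)
    high = schematic_value * (1 + tol)
    return low <= extracted_value <= high

# The resistor example from the text: 997 ohms vs. 1000 ohms at 2% tolerance.
print(param_matches(1000, 997))   # True  (997 is within 980..1020)
print(param_matches(1000, 975))   # False (outside the 2% band)
```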

DRC:
Design Rule Checking (DRC) is the area of electronic design automation that
determines whether the physical layout of a particular chip satisfies a series of
recommended parameters called design rules. Design rule checking is a major step
during physical verification signoff on the design, which also involves LVS (Layout
Versus Schematic) checks, XOR checks, ERC (Electrical Rule Check), and antenna checks. For
advanced processes, some fabs also insist upon the use of more restrictive rules to improve
yield.
Some examples of DRCs in IC design include:

- Active to active spacing
- Well to well spacing
- Minimum channel length of the transistor
- Minimum metal width
- Metal to metal spacing
- Metal fill density (for processes using CMP)
- Poly density
- ESD and I/O rules
- Antenna effect
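A spacing rule like "metal to metal spacing" can be illustrated with a toy checker over axis-aligned rectangles. This is a simplified sketch: real DRC engines work on arbitrary polygons, handle connectivity (shapes on the same net may touch), and scale far beyond this quadratic pairwise loop.

```python
def spacing(a, b):
    # Euclidean gap between two rectangles (x1, y1, x2, y2); the per-axis
    # gap is 0 where the projections overlap.
    dx = max(a[0] - b[2], b[0] - a[2], 0)
    dy = max(a[1] - b[3], b[1] - a[3], 0)
    return (dx * dx + dy * dy) ** 0.5

def drc_spacing_ok(shapes, min_space):
    # Report every pair of shapes closer than the rule allows.
    violations = []
    for i in range(len(shapes)):
        for j in range(i + 1, len(shapes)):
            if spacing(shapes[i], shapes[j]) < min_space:
                violations.append((i, j))
    return violations

metal = [(0, 0, 10, 2), (0, 3, 10, 5), (0, 5.5, 10, 7)]
print(drc_spacing_ok(metal, min_space=1.0))  # [(1, 2)]: gap of 0.5 < 1.0
```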

What is ESD?
Electrostatic discharge (ESD) is the sudden flow of electricity between two electrically
charged objects caused by contact, an electrical short, or dielectric breakdown. A buildup of
static electricity can be caused by tribocharging or by electrostatic induction. ESD occurs
when differently charged objects are brought close together or when the dielectric between
them breaks down, often creating a visible spark.
ESD can create spectacular electric sparks (lightning, with the accompanying sound of thunder,
is a large-scale ESD event), but also less dramatic forms which may be neither seen nor heard,
yet still be large enough to cause damage to sensitive electronic devices. Electric sparks require
a field strength above approximately 40 kV/cm in air, as notably occurs in lightning strikes. Other
forms of ESD include corona discharge from sharp electrodes and brush discharge from blunt
electrodes.
ESD can cause a range of harmful effects of importance in industry, including gas, fuel vapour
and coal dust explosions, as well as failure of solid-state electronic components such as
integrated circuits, which can suffer permanent damage when subjected to high voltages.
Electronics manufacturers therefore establish electrostatic protective areas free of static, using
measures to prevent charging, such as avoiding highly charging materials, and measures to
remove static, such as grounding human workers, providing antistatic devices, and controlling
humidity.
Causes of ESD
One of the causes of ESD events is static electricity. Static electricity is often generated
through tribocharging, the separation of electric charges that occurs when two materials are
brought into contact and then separated. Examples of tribocharging include walking on a rug,
rubbing a plastic comb against dry hair, rubbing a balloon against a sweater, ascending from a
fabric car seat, or removing some types of plastic packaging. In all these cases, the breaking of
contact between two materials results in tribocharging, thus creating a difference of electrical
potential that can lead to an ESD event.

What is a pass transistor?


In electronics, pass transistor logic (PTL) describes several logic families used in the design
of integrated circuits. It reduces the count of transistors used to make different logic gates by
eliminating redundant transistors. Transistors are used as switches to pass logic levels between
nodes of a circuit, instead of as switches connected directly to supply voltages.[1] This reduces
the number of active devices, but has the disadvantage that the difference between the high
and low logic-level voltages decreases at each stage. Each transistor in series is less
saturated at its output than at its input.[2] If several devices are chained in series in a logic path,
a conventionally constructed gate may be required to restore the signal voltage to the full value.
By contrast, conventional CMOS logic switches transistors so that the output connects to one of
the power supply rails, so logic voltage levels in a sequential chain do not decrease. Simulation
of circuits may be required to ensure adequate performance.
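The level degradation described above can be illustrated with a first-order model: an NMOS pass transistor driving a logic '1' can pull its output no higher than its gate voltage minus one threshold voltage Vt, and if that degraded output drives the next stage's pass transistor, the loss compounds. The VDD and Vt values below are assumptions chosen for illustration; body effect and other second-order effects are ignored.

```python
def chain_high(stages, vdd=1.2, vt=0.4):
    # Track the logic-'1' level through a chain of NMOS pass stages where
    # each stage's output drives the next stage's gate.
    levels = []
    v_gate = vdd
    for _ in range(stages):
        v_out = round(v_gate - vt, 2)  # NMOS passing a '1' drops about one Vt
        levels.append(v_out)
        v_gate = v_out                 # degraded level drives the next stage
    return levels

print(chain_high(3))  # [0.8, 0.4, 0.0]: the '1' level erodes stage by stage
```

After only a few stages the '1' level falls below any usable threshold, which is why a conventional restoring gate is inserted in long PTL chains.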

Basic principles of pass transistor circuits

The pass transistor is driven by a periodic clock signal and acts as an access switch to either
charge up or charge down the parasitic capacitance Cx, depending on the input signal Vin. Thus,
the two possible operations when the clock signal is active (CK = 1) are the logic "1" transfer
(charging the capacitance Cx up to a logic-high level) and the logic "0" transfer (charging the
capacitance Cx down to a logic-low level). In either case, the output of the depletion-load nMOS
inverter assumes a logic-low or a logic-high level, depending on the voltage Vx.

Applications
Pass transistor logic often uses fewer transistors, runs faster, and requires less power than the
same function implemented with the same transistors in fully complementary CMOS logic.[3]
XOR has the worst-case Karnaugh map: if implemented from simple gates, it requires more
transistors than any other function. The designers of the Z80 and many other chips save a few
transistors by implementing XOR using pass-transistor logic rather than simple gates.[4]

Decoder
In digital electronics, a binary decoder is a combinational logic circuit that converts
a binary integer value to an associated pattern of output bits. They are used in a
wide variety of applications, including data demultiplexing, seven segment displays,
and memory address decoding.
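The decoder's behavior is easy to express as a one-hot mapping. This is a behavioral sketch, not a gate-level design:

```python
def decode(value, n):
    # n-bit binary decoder: the input value asserts exactly one of the
    # 2**n one-hot output lines.
    return [1 if i == value else 0 for i in range(2 ** n)]

print(decode(2, 2))  # [0, 0, 1, 0]: input 0b10 asserts output line 2
```

For memory address decoding, `value` would be the address and each output line would select one word line.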

Multiplexer
In electronics, a multiplexer (or mux) is a device that selects one of several analog or digital
input signals and forwards the selected input into a single line.[1] A multiplexer of 2^n inputs
has n select lines, which are used to select which input line to send to the output.[2]
Multiplexers are mainly used to increase the amount of data that can be sent over the network
within a certain amount of time and bandwidth.[1] A multiplexer is also called a data selector.
Multiplexers can also be used to implement Boolean functions of multiple variables.
An electronic multiplexer makes it possible for several signals to share one device or resource,
for example one A/D converter or one communication line, instead of having one device per
input signal.

Conversely, a demultiplexer (or demux) is a device taking a single input signal and selecting
one of many data-output-lines, which is connected to the single input. A multiplexer is often used
with a complementary demultiplexer on the receiving end.[1]
An electronic multiplexer can be considered as a multiple-input, single-output switch, and a
demultiplexer as a single-input, multiple-output switch.[3] The schematic symbol for a multiplexer
is an isosceles trapezoid with the longer parallel side containing the input pins and the shorter
parallel side containing the output pin.[4] A 2-to-1 multiplexer is equivalent to a switch whose
wire connects the desired input to the output.
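The complementary mux/demux pair above can be sketched behaviorally. These are illustrative functions, not hardware descriptions:

```python
def mux(inputs, select):
    # 2**n-to-1 multiplexer: n select lines choose one of 2**n inputs.
    return inputs[select]

def demux(value, select, n_outputs):
    # Demultiplexer: route the single input to the selected output line;
    # the unselected lines idle at 0.
    return [value if i == select else 0 for i in range(n_outputs)]

print(mux([10, 20, 30, 40], select=2))   # 30
print(demux(7, select=1, n_outputs=4))   # [0, 7, 0, 0]
```

Pairing `mux` at the sender with `demux` at the receiver models the shared-line arrangement described in the text.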

Domino CMOS logic


Domino logic is a CMOS-based evolution of the dynamic logic techniques based on either
PMOS or NMOS transistors. It allows a rail-to-rail logic swing. It was developed to speed up
circuits.

Logic features

- They have smaller areas than conventional CMOS logic (as does all dynamic logic).
- Parasitic capacitances are smaller, so higher operating speeds are possible.
- Operation is free of glitches, as each gate can make only one transition.
- Only non-inverting structures are possible, because of the presence of the inverting buffer.
- Charge distribution may be a problem.

SRAM Operation

An SRAM cell has three different states: standby (the circuit is idle), reading (the data has been
requested), and writing (updating the contents). SRAM operating in read and write modes
should have "readability" and "write stability", respectively. The three states work as follows:
Standby
If the word line is not asserted, the access transistors M5 and M6 disconnect the cell from
the bit lines. The two cross-coupled inverters formed by M1 through M4 will continue to
reinforce each other as long as they are connected to the supply.

Reading
In theory, reading only requires asserting the word line WL and reading the SRAM cell
state through a single access transistor and bit line, e.g. M6 and BL. Nevertheless, bit
lines are relatively long and have large parasitic capacitance. To speed up reading, a
more complex process is used in practice: the read cycle is started by precharging both
bit lines BL and its complement BL', i.e., driving the bit lines to a threshold voltage
(midrange between logical 1 and 0) by an external module (not shown in the figures).
Asserting the word line WL then enables both access transistors M5 and M6, which
causes the BL voltage to either slightly drop (bottom NMOS transistor M3 is on and top
PMOS transistor M4 is off) or rise (top PMOS transistor M4 is on). Note that if the BL
voltage rises, the BL' voltage drops, and vice versa. The BL and BL' lines then have a
small voltage difference between them. A sense amplifier senses which line has the
higher voltage and thus determines whether a 1 or a 0 was stored. The higher the
sensitivity of the sense amplifier, the faster the read operation.
Writing
The write cycle begins by applying the value to be written to the bit lines. If we wish to
write a 0, we apply a 0 to the bit lines, i.e. setting BL' to 1 and BL to 0. This is
similar to applying a reset pulse to an SR latch, which causes the flip-flop to change
state. A 1 is written by inverting the values of the bit lines. WL is then asserted and the
value to be stored is latched in. This works because the bit-line input drivers are
designed to be much stronger than the relatively weak transistors in the cell itself, so they
can easily override the previous state of the cross-coupled inverters. In practice, the
access NMOS transistors M5 and M6 have to be stronger than either the bottom NMOS
(M1, M3) or the top PMOS (M2, M4) transistors. This is easily obtained, as PMOS
transistors are much weaker than NMOS transistors of the same size. Consequently,
when one transistor pair (e.g. M3 and M4) is only slightly overridden by the write process,
the gate voltage of the opposite transistor pair (M1 and M2) is also changed. This means
that the M1 and M2 transistors can be overridden more easily, and so on. Thus, the
cross-coupled inverters magnify the writing process.
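The standby/read/write behavior above can be summarized in a small behavioral model. This is a logic-level sketch only (the class name and method signatures are illustrative); it deliberately ignores the electrical details of precharge, sensing, and drive strength:

```python
class SramCell:
    def __init__(self):
        self.q = 0  # state held by the cross-coupled inverters

    def read(self, word_line):
        if not word_line:
            return None      # standby: access transistors off, bit lines isolated
        return self.q        # sense amplifier resolves BL vs. its complement

    def write(self, word_line, bit):
        if word_line:
            self.q = bit     # strong bit-line drivers override the weak cell

cell = SramCell()
cell.write(word_line=True, bit=1)
print(cell.read(word_line=True))    # 1
cell.write(word_line=False, bit=0)  # word line low: write has no effect
print(cell.read(word_line=True))    # still 1
```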

Design
A typical SRAM cell is made up of six MOSFETs. Each bit in an SRAM is stored on
four transistors (M1, M2, M3, M4) that form two cross-coupled inverters. This storage cell has
two stable states, which are used to denote 0 and 1. Two additional access transistors serve to
control access to the storage cell during read and write operations. In addition to such
six-transistor (6T) SRAM, other kinds of SRAM chips use 4, 8, 10 (4T, 8T, 10T SRAM), or more
transistors per bit.[5][6][7] Four-transistor SRAM is quite common in stand-alone SRAM devices (as
opposed to SRAM used for CPU caches), implemented in special processes with an extra layer
of polysilicon, allowing for very high-resistance pull-up resistors.[8] The principal drawback of
using 4T SRAM is increased static power due to the constant current flow through one of the
pull-down transistors.

Four transistor SRAM provides advantages in density at the cost of manufacturing complexity. The resistors
must have small dimensions and large values.

This is sometimes used to implement more than one (read and/or write) port, which may be
useful in certain types of video memory and register files implemented with multi-ported SRAM
circuitry.
Generally, the fewer transistors needed per cell, the smaller each cell can be. Since the cost of
processing a silicon wafer is relatively fixed, using smaller cells and so packing more bits on one
wafer reduces the cost per bit of memory.
Memory cells that use fewer than four transistors are possible, but such 3T[9][10] or 1T cells
are DRAM, not SRAM (even the so-called 1T-SRAM).
Access to the cell is enabled by the word line (WL in the figure), which controls the
two access transistors M5 and M6, which in turn control whether the cell should be connected to
the bit lines BL and BL'. These are used to transfer data for both read and write operations.
Although it is not strictly necessary to have two bit lines, both the signal and its inverse are
typically provided in order to improve noise margins.
During read accesses, the bit lines are actively driven high and low by the inverters in the SRAM
cell. This improves SRAM bandwidth compared to DRAMs: in a DRAM, the bit line is
connected to storage capacitors, and charge sharing causes the bit line to swing upwards or
downwards. The symmetric structure of SRAMs also allows for differential signaling, which
makes small voltage swings more easily detectable. Another difference from DRAM that
contributes to making SRAM faster is that commercial chips accept all address bits at a time. By
comparison, commodity DRAMs have the address multiplexed in two halves, i.e. higher bits
followed by lower bits, over the same package pins in order to keep their size and cost down.
The size of an SRAM with m address lines and n data lines is 2^m words, or 2^m × n bits. The
most common word size is 8 bits, meaning that a single byte can be read or written to each of
the 2^m different words within the SRAM chip. Several common SRAM chips have 11 address
lines (thus a capacity of 2^11 = 2,048 = 2k words) and an 8-bit word, so they are referred to as
"2k × 8 SRAM".
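The capacity arithmetic from the previous paragraph is worth checking explicitly:

```python
def sram_capacity(address_lines, word_bits):
    # m address lines give 2**m addressable words of n bits each,
    # i.e. 2**m * n bits total.
    words = 2 ** address_lines
    return words, words * word_bits

words, bits = sram_capacity(11, 8)
print(words, bits)  # 2048 16384: the "2k x 8" SRAM mentioned above
```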

NOISE MARGIN
In electrical engineering, noise margin is the amount by which a signal exceeds the minimum
amount for proper operation. It is commonly used in at least two contexts:

- In communications system engineering, noise margin is the ratio by which the signal
exceeds the minimum acceptable amount. It is normally measured in decibels.
- In a digital circuit, the noise margin is the amount by which the signal exceeds the
threshold for a proper '0' or '1'. For example, a digital circuit might be designed to swing
between 0.0 and 1.2 volts, with anything below 0.2 volts considered a '0' and anything
above 1.0 volts considered a '1'. Then the noise margin for a '0' is the amount by which a
signal is below 0.2 volts, and the noise margin for a '1' is the amount by which a
signal exceeds 1.0 volts. In this case noise margins are measured as an absolute voltage,
not a ratio. Noise margins for CMOS chips are usually much greater than those for TTL
because V_OH(min) is closer to the power supply voltage and V_OL(max) is closer to zero.

In simple words, the noise margin of a circuit is the amount of noise that the circuit can
withstand. Noise margins are generally defined so that positive values ensure proper operation,
while negative margins result in compromised operation or outright failure.
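The digital example above maps directly onto the usual definitions NM_L = V_IL − V_OL and NM_H = V_OH − V_IH. A quick calculation with the voltages from the text (function and parameter names are illustrative):

```python
def noise_margins(v_ol, v_oh, v_il, v_ih):
    # NM_L: how far a driven '0' (v_ol) sits below the '0' threshold (v_il).
    # NM_H: how far a driven '1' (v_oh) sits above the '1' threshold (v_ih).
    nm_low = round(v_il - v_ol, 3)
    nm_high = round(v_oh - v_ih, 3)
    return nm_low, nm_high

# The 0.0-1.2 V swing from the text, with 0.2 V and 1.0 V thresholds.
print(noise_margins(v_ol=0.0, v_oh=1.2, v_il=0.2, v_ih=1.0))  # (0.2, 0.2)
```

Both margins are positive, so the circuit tolerates up to 0.2 V of noise on either logic level.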

Charge sharing

In digital electronics, charge sharing is an undesirable signal-integrity phenomenon observed
most commonly in the domino logic family of digital circuits. The charge sharing problem occurs
when the charge stored at the output node in the precharge phase is shared among the output
or junction capacitances of transistors which are in the evaluation phase. Charge sharing may
degrade the output voltage level or even cause an erroneous output value.[1]

Substrate coupling

In an integrated circuit, a signal can couple from one node to another via the substrate. This
phenomenon is referred to as substrate coupling or substrate noise coupling.
The push for reduced cost, more compact circuit boards, and added customer features has
provided incentives for the inclusion of analog functions on primarily digital MOS integrated
circuits (ICs), forming mixed-signal ICs. In these systems, the speed of digital circuits is
constantly increasing, chips are becoming more densely packed, interconnect layers are added,
and analog resolution is increased. In addition, the recent increase in wireless applications and
their growing market are introducing a new set of aggressive design goals for realizing
mixed-signal systems. Here, the designer integrates radio frequency (RF) analog and baseband
digital circuitry on a single chip. The goal is to make single-chip radio frequency integrated
circuits (RFICs) on silicon, where all the blocks are fabricated on the same chip. One of the
advantages of this integration is low power dissipation for portability, due to a reduction in the
number of package pins and associated bond-wire capacitance. Another reason that an
integrated solution offers lower power consumption is that routing high-frequency signals
off-chip often requires a 50 Ω impedance match, which can result in higher power dissipation.
Other advantages include improved high-frequency performance due to reduced package
interconnect parasitics, higher system reliability, smaller package count, and higher integration
of RF components with VLSI-compatible digital circuits. In fact, the single-chip transceiver is
now a reality.
The design of such systems, however, is a complicated task. There are two main challenges in
realizing mixed-signal ICs. The first challenging task, specific to RFICs, is to fabricate good
on-chip passive elements such as high-Q inductors. The second challenging task, applicable to
any mixed-signal IC, is to minimize noise coupling between various parts of the system to avoid
any malfunctioning. In other words, for successful system-on-chip integration of mixed-signal
systems, the noise coupling caused by nonideal isolation must be minimized so that sensitive
analog circuits and noisy digital circuits can effectively coexist and the system operates
correctly. To elaborate, note that in mixed-signal circuits, both sensitive analog circuits and
high-swing, high-frequency noise-injecting digital circuits may be present on the same chip,
leading to undesired signal coupling between these two types of circuit via the conductive
substrate. The reduced distance between these circuits, which is the result of constant
technology scaling (see Moore's law and the International Technology Roadmap for
Semiconductors), exacerbates the coupling. The problem is severe, since signals of different
nature and strength interfere, thus affecting the overall performance of a system that demands
higher clock rates and greater analog precision.
The primary mixed-signal noise coupling problem comes from fast-changing digital signals
coupling to sensitive analog nodes. Another significant cause of undesired signal coupling is
crosstalk between analog nodes themselves, owing to high-frequency/high-power analog
signals. One of the media through which mixed-signal noise coupling occurs is the substrate.
Digital operations cause fluctuations in the underlying substrate voltage, which spread through
the common substrate, causing variations in the substrate potential of sensitive devices in the
analog section. Similarly, in the case of crosstalk between analog nodes, a signal can couple
from one node to another via the substrate. This phenomenon is referred to as substrate
coupling or substrate noise coupling.

Crosstalk
In electronics, crosstalk is any phenomenon by which a signal transmitted on one circuit or
channel of a transmission system creates an undesired effect in another circuit or channel.
Crosstalk is usually caused by undesired capacitive, inductive, or conductive coupling from
one circuit, part of a circuit, or channel, to another.

What is EM?
EM stands for electromigration, which determines the maximum current that can be carried by
a metal line for a period of time. There are usually three important currents calculated for EM:
peak current, RMS current, and DC current. Almost all the relations between current and metal
width are given by the foundry, and you can find them in your Design Rule Manual. RMS current
normally decides the metal width that you require in your design. RMS currents are usually
applied to signals that are charging and discharging, DC currents to signals with steady-state
values, and peak current complements the other two. RMS current can be calculated by putting
a resistance at the output of the signal driver. With the RMS current value, apply it in the
equation provided by the foundry to calculate the required metal width as well as the number of
vias that you need for your signal. EM is directly responsible for the reliability of your design. A
chip is usually designed for a reliability of around 20 years, but improper EM design, or skipping
EM analysis altogether, can shorten the chip's life. To sum up, it is about the heating effect and
the current density in a metal line; another way is to find the average current by the CVF
formula.
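The width and via calculation described above can be sketched with a hypothetical foundry rule of the form I_max = J_max × width. Both `J_MAX` and `I_PER_VIA` below are made-up illustrative values; real limits come from your Design Rule Manual and the actual equations are usually more involved (temperature, metal layer, line length, etc.).

```python
import math

J_MAX = 1.0e-3       # assumed max RMS current per micron of width (A/um)
I_PER_VIA = 0.5e-3   # assumed max RMS current per via (A)

def min_metal_width(i_rms, j_max=J_MAX):
    # Required width (microns) so the line stays under the current limit.
    return i_rms / j_max

def vias_needed(i_rms, i_per_via=I_PER_VIA):
    # Round up: the via array must carry the full RMS current.
    return math.ceil(i_rms / i_per_via)

print(min_metal_width(2.0e-3))  # 2.0 um of metal for 2 mA RMS
print(vias_needed(2.0e-3))      # 4 vias at 0.5 mA each
```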

Electromigration is the transport of material caused by the gradual movement of the ions in
a conductor due to the momentum transfer between conducting electrons and diffusing
metal atoms. The effect is important in applications where high direct current densities are used,
such as in microelectronics and related structures. As the structure size in electronics such
as integrated circuits (ICs) decreases, the practical significance of this effect increases.

Thermal effects
In an ideal conductor, where atoms are arranged in a perfect lattice structure, the electrons
moving through it would experience no collisions and electromigration would not occur. In real
conductors, defects in the lattice structure and the random thermal vibration of the atoms about
their positions cause electrons to collide with the atoms and scatter, which is the source of
electrical resistance (at least in metals; see electrical conduction). Normally, the amount of
momentum imparted by the relatively low-mass electrons is not enough to permanently displace
the atoms. However, in high-power situations (such as with the increasing current draw and
decreasing wire sizes in modern VLSI microprocessors), if many electrons bombard the atoms
with enough force to become significant, the process of electromigration accelerates: the atoms
of the conductor vibrate further from their ideal lattice positions, increasing the amount of
electron scattering. High current density increases the number of electrons scattering against
the atoms of the conductor, and hence the speed at which those atoms are displaced.

In integrated circuits, electromigration does not occur in semiconductors directly, but in the
metal interconnects deposited onto them (see semiconductor device fabrication).
Electromigration is exacerbated by high current densities and the Joule heating of the conductor
(see electrical resistance), and can lead to eventual failure of electrical components. Localized
increase of current density is known as current crowding.
