
VLSI DESIGN By: Dr. SP Singh

CHAPTER 1
INTRODUCTION TO VLSI

1.1 A Historical Perspective


The idea of implementing computation on encoded data was initiated by Charles Babbage, who envisioned large-scale mechanical computing devices called Difference Engines. In 1834 he began the design of a more ambitious machine, the Analytical Engine, a general-purpose computing machine with features close to modern computers. This machine operated in a two-cycle sequence called "store" and "mill" (execute), similar to current computers, but it was extremely costly and complex.

Early digital electronic systems were based on magnetically controlled switches (relays), used mainly to implement very simple logic networks. The age of digital electronic computing started in full only with the introduction of the vacuum tube, which made digital computation practical; soon complete computers were realized. The era of the vacuum-tube computer culminated in machines such as the ENIAC (intended for computing artillery firing tables) and the UNIVAC I (the first successful commercial computer). Due to reliability problems and excessive power consumption, however, building larger vacuum-tube computers became uneconomical and practically infeasible.

In the next stage, the transistor was invented at Bell Telephone Laboratories in 1947, and in 1949 Shockley invented the bipolar (junction) transistor.

After that, integrated-circuit logic gates were introduced with the Fairchild Micrologic family. The first truly successful IC logic family, TTL (Transistor-Transistor Logic), was manufactured in 1962. Other logic families with higher performance were devised as well; an example of a current-switching family is ECL (Emitter-Coupled Logic). TTL offered a higher integration density and was the basis of the first integrated-circuit revolution. In fact, the manufacturing of TTL components launched some of the first large semiconductor companies, such as Fairchild, National, and Texas Instruments, and bipolar logic dominated the market until the 1980s. It eventually lost that position because the large power consumption per gate puts an upper limit on the number of gates that can be reliably integrated on a single die, package, housing, or box. Although attempts were made to develop high-density, low-power bipolar families (such as I2L, Integrated Injection Logic), no lasting success was achieved.

In the 1970s, MOS digital integrated circuits entered manufacturing. The basic principle behind the MOSFET transistor (originally called the IGFET) had been proposed in a patent by J. Lilienfeld (Canada) as early as 1925 and, independently, by O. Heil in England in 1935. The first MOS logic gates introduced were of the CMOS variety, and this trend continued until the late 1960s. The first practical MOS integrated circuits, however, were implemented in PMOS-only logic and were used in applications such as calculators.

The second age of the digital integrated circuit was inaugurated by Intel's introduction of the first microprocessors in 1972. These processors were implemented in NMOS-only logic, which has the advantage of higher speed over PMOS logic. MOS technology also made possible the first high-density semiconductor memories.

NMOS-only logic, too, could not meet the increasing demands for high density: its power consumption eventually made further scaling unattractive and infeasible. This finally tilted the balance toward CMOS technology, and this is where we still are today. Continued progress requires new inventions, but digital design is still carried out predominantly in CMOS.

Nowadays, the large majority of integrated circuits are implemented in MOS technology. BiCMOS is used in high-speed memories and gate arrays. Where even higher performance is needed, bipolar silicon ECL, Gallium-Arsenide, Silicon-Germanium, and superconducting technologies are used, but these play only a very small role. Our focus will be on CMOS only.

1.2 Issues in Digital Integrated Circuit Design


In the 1960s, Gordon Moore predicted that the number of transistors that can be integrated on a
single die would grow exponentially with time. This prediction, later called Moore’s law, has
proven to be amazingly visionary. This is shown below –

Fig. Evolution of integration complexity of logic ICs and memories as a function of time.

The figure shows the integration density of both logic ICs and memories as a function of time: integration complexity doubles approximately every one to two years. As a result, memory density has increased more than a thousandfold since 1970. The microprocessor has likewise grown in performance and complexity at a steady, predictable pace: clock frequencies have doubled roughly every three years and have reached into the GHz range.
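The doubling behaviour can be expressed as N(t) = N0 · 2^((t − t0)/T), where T is the doubling period. A small sketch of this growth; the periods are the approximate values quoted above, and the dates are illustrative assumptions:

```python
def growth_factor(years, doubling_period):
    """Factor by which complexity grows after `years`,
    assuming one doubling every `doubling_period` years."""
    return 2.0 ** (years / doubling_period)

# Memory density with a ~2-year doubling period, 1970 to 2000:
memory_growth = growth_factor(2000 - 1970, 2.0)   # 2^15 = 32768

# Clock frequency with a ~3-year doubling period over the same span:
clock_growth = growth_factor(2000 - 1970, 3.0)    # 2^10 = 1024

print(f"memory density grew ~{memory_growth:.0f}x (more than a thousandfold)")
print(f"clock frequency grew ~{clock_growth:.0f}x")
```

Even a modest doubling period compounds into enormous growth over three decades, which is why the exponential prediction was so visionary.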

Presently, designers increasingly adhere to rigid design methodologies and strategies that are more amenable to design automation. These approaches deal with the complexity issue through abstraction: a model contains virtually all the information needed to deal with the corresponding block at the next level of the hierarchy. For instance, once a designer has implemented a multiplier module, its performance can be characterized very accurately and captured in a model.

Design issues
1. Module libraries must be designed and implemented anew when moving cells (or modules) from one technology to the next. Technology changes approximately every two years, and each change requires a redesign of the library.
2. It is required to create an adequate model of a cell or module. This requires an in-depth
understanding of its internal operation. For example, to identify the dominant performance
parameters of a given design, one has to recognize the critical timing path.

3. The library-based approach works fine when the design constraints (speed, cost, or power) are not stringent. This is the case for a large number of application-specific designs, where the main goal is to provide a more integrated system solution and the performance requirements are easily within the capabilities of the technology. For products such as microprocessors, however, success hinges on high performance, and designers keep pushing the limits of the technology to achieve it.
4. The abstraction-based approach is only correct to a certain degree; in practice, a design may behave differently. For example, the performance of an adder can be substantially influenced by the way it is connected to its environment. The interconnection wires themselves contribute to the delay, as they introduce parasitic capacitances, resistances, and even inductances. The impact of these interconnect parasitics changes as the technology scales.
5. Scaling tends to expose other deficiencies of the abstraction-based model, especially for global signals such as clocks and supply lines. Issues such as clock distribution, circuit synchronization, and supply-voltage distribution are becoming more and more critical.
Coping with them requires a profound understanding of the intricacies of digital circuit
design.
6. New design issues and constraints tend to emerge over time, for example the reemergence of power dissipation as a constraining factor and the changing ratio between device and interconnect parasitics. To cope with such unforeseen factors, one must at least be able to model and analyze their impact, which requires insight into circuit topology and behaviour.
7. Finally, what happens when things go wrong? A fabricated circuit does not always exhibit exactly the predicted waveforms. Deviations can be caused by variations in the fabrication-process parameters, by the inductance of the package, or by a badly modeled clock signal. Troubleshooting such a design requires circuit expertise.

Due to these reasons, an in-depth knowledge of digital circuit design techniques and approaches is
necessary for a digital-system designer.

1.3 Quality Metrics of a Digital Design


In this section, we shall study a set of basic properties of a digital design that help us quantify its quality: cost, functionality, robustness, performance, and energy consumption. Which of these metrics is most important depends on the application. For example, pure speed is the prime property of a compute server, while energy consumption is the dominant metric for hand-held mobile applications.

1.3.1 Cost of an Integrated Circuit


The total cost comprises two components –
a. Non-recurring expenses or the fixed cost and
b. Recurring or variable cost

a. Fixed Cost
It is independent of the sales volume, i.e., the number of products sold. The fixed cost of an integrated circuit depends on the time and manpower it takes to produce the design; bringing the design cost down is one of the major challenges. One also has to account for indirect costs such as company overhead, R&D, equipment, marketing, building infrastructure, and sales. The design cost is strongly influenced by the complexity of the design, the aggressiveness of the specifications, and the productivity of the designer.
b. Variable Cost
It is directly related to the manufactured product and is proportional to the product volume. It includes the cost of parts, assembly, and testing. The impact of the fixed cost is more pronounced for small-volume products.

Therefore, total cost per IC = variable cost per IC + (fixed cost / volume)
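This relation can be sketched as follows; the dollar figures are illustrative assumptions, not values from the text:

```python
def cost_per_ic(variable_cost, fixed_cost, volume):
    """Total cost per IC = variable cost per IC + fixed cost / volume."""
    return variable_cost + fixed_cost / volume

# Hypothetical $5 variable cost and $1M fixed (design) cost:
print(cost_per_ic(5.0, 1e6, 1_000))      # small volume: fixed cost dominates
print(cost_per_ic(5.0, 1e6, 1_000_000))  # large volume: fixed cost amortized
```

At a volume of 1,000 units the fixed cost adds $1,000 per part, while at a million units it adds only $1, which is why the fixed cost matters most for small-volume products.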

The variable cost, the cost of a die, and the die yield are related as follows –

i. Variable cost = (cost of die + cost of die test + cost of packaging) / (final test yield)

ii. Cost of die = (cost of wafer) / (dies per wafer × die yield)

iii. Die yield = (1 + (defects per unit area × die area) / α)^(−α)

where α ≈ 3 for current CMOS processes.
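The die-cost relations above can be evaluated numerically. The wafer price, wafer diameter, die area, and defect density below are illustrative assumptions, and the dies-per-wafer count uses the common geometric approximation π(d/2)²/A − πd/√(2A):

```python
import math

def die_yield(defects_per_cm2, die_area_cm2, alpha=3.0):
    """Die yield = (1 + defect density * die area / alpha)^(-alpha)."""
    return (1.0 + defects_per_cm2 * die_area_cm2 / alpha) ** (-alpha)

def dies_per_wafer(wafer_diameter_cm, die_area_cm2):
    """Gross dies on a round wafer, with an edge-loss correction term."""
    r = wafer_diameter_cm / 2.0
    return int(math.pi * r**2 / die_area_cm2
               - math.pi * wafer_diameter_cm / math.sqrt(2.0 * die_area_cm2))

def cost_of_die(wafer_cost, wafer_diameter_cm, die_area_cm2, defects_per_cm2):
    """Cost of die = cost of wafer / (dies per wafer * die yield)."""
    n = dies_per_wafer(wafer_diameter_cm, die_area_cm2)
    y = die_yield(defects_per_cm2, die_area_cm2)
    return wafer_cost / (n * y)

# Hypothetical 20 cm wafer costing $1000, 1 cm^2 die, 1 defect/cm^2:
print(f"yield    = {die_yield(1.0, 1.0):.3f}")            # (4/3)^-3 ~ 0.422
print(f"die cost = ${cost_of_die(1000.0, 20.0, 1.0, 1.0):.2f}")
```

Note how the yield term punishes large dies: doubling the die area both reduces the number of dies per wafer and lowers the yield, so the die cost grows much faster than linearly with area.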

1.3.2 Functionality and Robustness


The prime requirement for a circuit is that it performs the function for which it is designed. In practice, however, it does not perform exactly as intended, due to variations in the voltage-transfer characteristic, noise, and other effects. The presence of disturbing noise sources ON or OFF the chip is also a source of deviations in circuit response. The main noise sources in digital circuits are –
a. Inductive coupling
b. Capacitive coupling
c. Power and ground noise

Different factors affecting function and robustness are –

a. The Voltage-Transfer Characteristic


It is a plot of the output voltage Vout of a gate as a function of its input voltage Vin.


Fig. Inverter voltage-transfer characteristic


The figure above shows the VTC (voltage-transfer characteristic) of a logic inverter. The high and low nominal voltages, VOH and VOL, can readily be identified: VOH = f(VOL) and VOL = f(VOH). Another point of interest on the VTC is the gate or switching threshold voltage VM, found graphically at the intersection of the VTC curve with the line Vout = Vin.

If an ideal nominal value is applied at the input of a gate, the output signal often deviates from the
expected nominal value. These deviations can be caused by noise or by the loading on the output of
the gate (i.e., by the number of gates connected to the output signal).
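VM can also be found numerically as the fixed point where f(v) = v. A sketch using a hypothetical sigmoid-shaped inverter VTC; the voltage levels and gain are assumptions for illustration, not a real device characteristic:

```python
import math

V_OL, V_OH, GAIN, V_MID = 0.0, 5.0, 4.0, 2.5

def vtc(v_in):
    """Hypothetical inverting VTC: high output for low input, and vice versa."""
    return V_OL + (V_OH - V_OL) / (1.0 + math.exp(GAIN * (v_in - V_MID)))

def switching_threshold(lo=V_OL, hi=V_OH, tol=1e-6):
    """Bisection on g(v) = vtc(v) - v, which changes sign across VM."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if vtc(mid) - mid > 0.0:   # still below VM: vtc(v) > v
            lo = mid
        else:                      # above VM: vtc(v) < v
            hi = mid
    return 0.5 * (lo + hi)

vm = switching_threshold()
print(f"VM ~ {vm:.3f} V")   # this VTC is symmetric, so VM lands at 2.5 V
```

Bisection works here because vtc(v) − v is positive below the intersection with Vout = Vin and negative above it.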

b. Noise Margins
The noise margin measures the robustness of a circuit against noise: it expresses the capability of a circuit to "overpower" a noise source and represents the level of noise that can be sustained when gates are cascaded. In terms of the input high and low levels VIH and VIL, the margins are NMH = VOH − VIH for the high level and NML = VIL − VOL for the low level; for a digital circuit to be functional, both must be greater than zero. A large noise margin is desirable, but it is not a sufficient requirement.
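Using the standard definitions NMH = VOH − VIH and NML = VIL − VOL, a quick numeric check; the voltage levels below are illustrative assumptions:

```python
def noise_margins(v_ol, v_oh, v_il, v_ih):
    """Return (NM_L, NM_H); both must be positive for a functional gate."""
    return v_il - v_ol, v_oh - v_ih

# Hypothetical 5 V gate levels:
nm_l, nm_h = noise_margins(v_ol=0.0, v_oh=5.0, v_il=2.0, v_ih=3.0)
print(f"NM_L = {nm_l} V, NM_H = {nm_h} V")
assert nm_l > 0 and nm_h > 0  # gate tolerates noise when cascaded
```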

c. Regenerative Property
The effects of different noise sources may accumulate and eventually force a signal level into the undefined region. This does not happen if the gate possesses the regenerative property.

The regenerative property ensures that a disturbed signal gradually converges back to one of the nominal voltage levels after passing through a number of logic gates.

Let an input voltage vin (≈ “0”) be applied to a chain of an even number N of inverters (as shown below). Then, the output voltage vout (N → ∞) will equal VOL if and only if the inverter possesses the regenerative property. Similarly, when an input voltage vin (≈ “1”) is applied to this setup, the output voltage will approach the nominal value VOH.

Fig. Regenerative property
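The convergence can be simulated by iterating a gate model whose gain exceeds unity in the transition region. The sigmoid VTC below is a hypothetical model, not a real device characteristic:

```python
import math

V_OL, V_OH, GAIN, V_M = 0.0, 5.0, 4.0, 2.5

def inverter(v_in):
    """Hypothetical inverter VTC with |gain| > 1 around V_M (regenerative)."""
    return V_OL + (V_OH - V_OL) / (1.0 + math.exp(GAIN * (v_in - V_M)))

def chain(v_in, n_stages):
    """Pass a signal through n_stages cascaded inverters."""
    v = v_in
    for _ in range(n_stages):
        v = inverter(v)
    return v

# A disturbed '0' (2.2 V, well inside the undefined region) recovers:
v_out = chain(2.2, 10)  # even number of stages, so logically still a '0'
print(f"after 10 stages: {v_out:.4f} V (converges toward V_OL = {V_OL} V)")
```

Each pass pushes the signal further from the unstable point at V_M and closer to a nominal rail, which is exactly the regenerative behaviour described above; a gate with gain below unity in the transition region would instead let the disturbance persist.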


d. Noise Immunity
Noise immunity expresses the ability of the system to process and transmit information correctly in the presence of noise. Circuits that lack it are easily corrupted by noise.

To be noise-immune, the signal swing (and hence the noise margin) has to be large enough to overpower the impact of fixed noise sources.

e. Directivity
Directivity requires that a gate be unidirectional: changes in an output level should not appear at any unchanging input of the same circuit. If this property is not met, an output-signal transition reflects back to the gate inputs as a noise signal and degrades signal integrity. In practice, full directivity can never be achieved, because some feedback of output-level changes to the inputs is unavoidable; capacitive coupling between inputs and outputs is one example of such feedback.

The fan-out denotes the number of load gates N connected to the output of the driving gate. A large fan-out affects the logic output levels of a gate (and hence its directivity), and the added load can deteriorate the dynamic performance of the driving gate. We therefore limit the fan-out to guarantee that the static and dynamic performance of the element meets specification. The fan-in of a gate is the number of its inputs. Gates with large fan-in tend to be more complex, which often results in inferior static and dynamic properties.

f. The Ideal Digital Gate

The ideal digital gate is defined from a static perspective. The ideal inverter model is important because it gives us a metric by which we can judge the quality of actual implementations.

Refer to the figure below –

Fig. Ideal voltage transfer characteristics

This characteristic has the following properties –

1. Infinite gain in the transition region, with the gate threshold located in the middle of the logic swing and high and low noise margins equal to half the swing.
2. The input and output impedances of the ideal gate are infinity and zero, respectively (i.e., the gate has unlimited fan-out).
3. This ideal VTC is not achievable in real designs, but CMOS comes close to it.
1.3.3 Performance
System designers measure the performance of a digital circuit by the computational load it can manage. For example, a microprocessor is characterized by the number of instructions it can execute per second. This performance metric depends both on the architecture of the processor and on the actual design of the logic circuitry. Architecture is beyond our scope; for the design of the logic circuit, performance is normally expressed by the duration of the clock period (clock cycle time) or by its rate (clock frequency). The minimum value of the clock period for a design is set by –
1. the time it takes for the signals to propagate through the logic,
2. the time it takes to get the data in and out of the registers, and
3. the uncertainty of the clock arrival times.

The whole performance analysis rests on the performance of the individual gate. When comparing the performance of gates implemented in different technologies or circuit styles, it is important not to confuse the picture with parameters such as load factors, fan-in, and fan-out.

The propagation delay tp of a gate defines how quickly a gate responds to a change at its input(s). It
expresses the delay experienced by a signal when passing through a gate. It is measured between the
50% transition points of the input and output waveforms shown below –

Figure Definition of propagation delays and rise and fall times.

Where,
Rise time = tr
Fall time = tf
Low-to-high (positive) propagation delay = tpLH
High-to-low (negative) propagation delay = tpHL

Two definitions of the propagation delay are important –


1. The tpLH defines the response time of the gate for a low to high (or positive) output transition
2. The tpHL refers to a high to low (or negative) transition.

The propagation delay tp is the average of tpLH and tpHL, and is given as –

tp = (tpLH + tpHL) / 2

The transient response of the first-order RC network shown below is given as –

vout(t) = (1 − e^(−t/τ)) V

Where,
τ = RC = time constant of the network.

Time to reach the 50% point is computed as –

t50% = ln(2) τ = 0.69 τ

Time to reach the 90% point is computed as –

t90% = ln(9) τ = 2.2 τ

The energy Ein delivered by the source is given as –

Ein = C V²


And the energy Ec stored on the capacitor at the end of the transition is given as –

Ec = C V² / 2

Example
For the first-order RC network shown above, with V = 10 V, R = 1 Ω, and C = 1 F, find –
a. the time constant,
b. the times to reach the 50% and 90% points,
c. Vout at the 50% and 90% points, and
d. the propagation delay tp for tpLH = 8 µs and tpHL = 2 µs.
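The example can be worked numerically; following the convention used for this RC network, the 50% and 90% times are taken as ln(2)·τ and ln(9)·τ ≈ 2.2τ:

```python
import math

V, R, C = 10.0, 1.0, 1.0           # volts, ohms, farads

tau = R * C                         # a. time constant = 1 s
t50 = math.log(2) * tau             # b. ~0.69 s to reach the 50% point
t90 = math.log(9) * tau             #    ~2.2 s to reach the 90% point
v50 = 0.5 * V                       # c. 5 V at the 50% point
v90 = 0.9 * V                       #    9 V at the 90% point

t_plh, t_phl = 8e-6, 2e-6           # given delays, in seconds
tp = (t_plh + t_phl) / 2            # d. propagation delay = 5 us

print(f"tau = {tau} s, t50 = {t50:.2f} s, t90 = {t90:.2f} s")
print(f"Vout(50%) = {v50} V, Vout(90%) = {v90} V, tp = {tp * 1e6:.0f} us")
```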

1.3.4 Power and Energy Consumption


The power consumption of a design determines how much energy is consumed per operation and how much heat the circuit dissipates. These factors influence a great number of critical design decisions, such as –
1. the power-supply capacity,
2. the battery lifetime,
3. supply-line sizing,
4. packaging, and
5. cooling requirements.
Power dissipation is therefore an important property of a design that affects feasibility, cost, and reliability.
Peak power dissipation Ppeak is important when studying supply lines, and average power dissipation Pav is important when studying cooling or battery requirements. These are given as –

Ppeak = ipeak Vsupply = max[p(t)]

Pav = (1/T) ∫ p(t) dt = (Vsupply / T) ∫ isupply(t) dt, integrated over the period T

Where,
p(t) = instantaneous power
isupply = current being drawn from supply voltage Vsupply
ipeak = maximum value of isupply
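The peak and average power can be evaluated numerically from a sampled supply-current waveform; the waveform and supply voltage below are illustrative assumptions:

```python
def power_metrics(i_supply, v_supply, dt):
    """Peak and average power from a uniformly sampled supply current."""
    p = [i * v_supply for i in i_supply]        # instantaneous power p(t)
    T = len(p) * dt                             # total observation period
    p_peak = max(p)                             # Ppeak = max[p(t)]
    p_avg = sum(pi * dt for pi in p) / T        # (1/T) * integral of p(t) dt
    return p_peak, p_avg

# Hypothetical triangular current burst at Vsupply = 2.5 V:
i = [0.0, 0.1, 0.2, 0.1, 0.0]                   # amperes, one sample per ns
p_peak, p_avg = power_metrics(i, v_supply=2.5, dt=1e-9)
print(f"Ppeak = {p_peak} W, Pav = {p_avg} W")
```

Ppeak sizes the supply network for the worst instant, while Pav (the time-averaged integral) is what determines heat removal and battery drain.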

The dissipation is of two types –


1. Static and
2. Dynamic

Static dissipation is caused by static conductive paths between the supply rails or by leakage currents; it is present even when no switching occurs.

Dynamic dissipation occurs during transients, when the gate is switching; it is due to the charging of capacitors and to temporary current paths between the supply rails.

An ideal gate is one that is fast and consumes little energy.

